WO2022111233A1 - Intra prediction mode coding method, and apparatus - Google Patents

Intra prediction mode coding method, and apparatus

Info

Publication number
WO2022111233A1
Authority
WO
WIPO (PCT)
Prior art keywords
data block
side length
block
size
prediction mode
Prior art date
Application number
PCT/CN2021/128000
Other languages
French (fr)
Chinese (zh)
Inventor
Yang Haitao (杨海涛)
Song Nan (宋楠)
Chen Xu (陈旭)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022111233A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Definitions

  • The present application relates to the technical field of video or image compression based on artificial intelligence (AI), and in particular, to a method and apparatus for coding and decoding an intra prediction mode.
  • AI: artificial intelligence.
  • Video coding (video encoding and decoding) is widely used in digital video applications such as broadcast digital TV, video transmission over the Internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVD and Blu-ray discs, video content capture and editing systems, and security applications such as camcorders.
  • Video compression devices typically use software and/or hardware on the source side to encode video data prior to transmission or storage, thereby reducing the amount of data required to represent digital video images. The compressed data is then received on the destination side by a video decompression device.
  • A neural-network-based method for coding the HEVC intra prediction mode introduces a neural network model into the intra prediction process. During encoding, the syntax elements related to the intra prediction mode of the current image block are obtained by means of the neural network model, and the obtained syntax elements are encoded. During decoding, the intra prediction mode of the current image block is determined according to the syntax elements decoded from the code stream and the neural network model.
  • The present application provides a neural-network-based method and device for coding and decoding an intra prediction mode, which can improve the accuracy of encoding and decoding the intra prediction mode of a current image block, save storage space, and reduce computational complexity, thereby improving encoding and decoding performance.
  • In a first aspect, the present application provides an intra prediction mode encoding method. The method includes: acquiring at least two of a reconstruction block, a prediction block, and a residual block of the surrounding image blocks of a current image block; obtaining a probability vector of the current image block according to the at least two of the reconstruction block, prediction block, and residual block and a prediction mode probability model, where the elements of the probability vector correspond one-to-one to a plurality of intra prediction modes and each element represents the probability of using the corresponding intra prediction mode when predicting the current image block; determining a target intra prediction mode of the current image block; and encoding the target intra prediction mode into the code stream according to the target intra prediction mode and the probability vector.
  • In this embodiment of the present application, at least two of the reconstruction blocks, prediction blocks, and residual blocks of the three surrounding image blocks of the current image block are obtained. Because, for each surrounding image block, any one of the reconstruction block, prediction block, and residual block can be calculated from the other two, the embodiment can make full use of the relevant information of the surrounding image blocks. The probability vector of the current image block generated according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks is therefore more accurate, which improves the accuracy of the data encoded into the code stream and improves encoding performance.
  • Obtaining the probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks, the current image block, and the prediction mode probability model includes: when the target intra prediction mode is a non matrix-weighted intra prediction (MIP) mode, obtaining the probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks, the current image block, and the prediction mode probability model.
  • In other words, when the method of this embodiment is used to encode the intra prediction mode of the current image block, the reconstruction blocks of the three surrounding image blocks of the current image block are obtained. The embodiment can therefore make full use of the relevant information of the surrounding image blocks, so that the generated probability vector of the current image block is more accurate, which improves the accuracy of the data encoded into the code stream and improves encoding performance when the target intra prediction mode is a non-MIP mode.
  • Obtaining the probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and the prediction mode probability model includes: splicing or concatenating at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks to obtain a first data block; and obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model.
  • The prediction mode probability model in this embodiment of the present application may be a neural network model, and second data blocks of different sizes correspond to different neural network models.
  • Since at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks can fully characterize the relevant information of the surrounding image blocks, and the first data block is obtained by splicing or concatenating at least two of them, the first data block can fully reflect the information relevant to the current image block. The probability vector of the current image block obtained according to the first data block is therefore more accurate, which improves the accuracy of the data encoded into the code stream and improves encoding performance, as sketched below.
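  • As a rough illustration of this step, the sketch below stacks the neighbouring reconstruction, prediction, and residual samples into a multi-channel first data block and normalises the output of a prediction mode probability model into a probability vector. The `prob_model` callable, the channel layout, and the softmax normalisation are assumptions made for illustration, not details taken from this summary.

```python
import numpy as np

def build_first_data_block(recon, pred, resid):
    """Stack the reconstruction, prediction and residual samples of the
    surrounding image blocks along a channel axis to form the first data
    block; each input is an (H, W) array of luma samples, and using all
    three planes is one choice of "at least two"."""
    return np.stack([recon, pred, resid], axis=0)        # shape (3, H, W)

def probability_vector(first_block, prob_model):
    """Run the prediction mode probability model (any callable returning one
    raw score per candidate intra prediction mode) and normalise the scores
    into probabilities that sum to one."""
    scores = np.asarray(prob_model(first_block), dtype=np.float64)
    scores -= scores.max()                               # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()
```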
  • Obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size, where the size transformation operation includes scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and then inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  • Performing a size transformation operation on the first data block to obtain the second data block may include four manners: (1) when the size of the first data block is not the target size and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the first side length to the second side length of the target size, proportionally scaling the first data block in the horizontal and vertical directions to obtain the second data block; (2) when the size of the first data block is not the target size and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the second side length to the first side length of the target size, transposing the first data block and proportionally scaling it in the horizontal and vertical directions to obtain the second data block; (3) when the size of the first data block is not the target size and the first side length and the second side length of the first data block are respectively equal to the second side length and the first side length of the target size, transposing the first data block to obtain the second data block.
  • The target sizes used in the above four manners in this embodiment of the present application may be different, and there may be one or more target sizes; the number of target sizes is smaller than the number of possible first-data-block sizes. The first side length and the second side length may be the width and the height, respectively, or the height and the width, respectively. In addition, the transposition and scaling performed on the first data block in the second manner have no fixed order: transposing first and then scaling yields a second data block of the same size as scaling first and then transposing, as illustrated in the sketch below.
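  • A minimal sketch of this dispatch is given below; the nearest-neighbour `resize` helper, the use of the first array axis as the first side length, and the final fallback branch are assumptions made for illustration, not the codec's actual resampling filter.

```python
import numpy as np

def resize(block, out_first, out_second):
    """Nearest-neighbour resampling, used only as a stand-in for whatever
    scaling filter the codec actually applies (an assumption)."""
    h, w = block.shape
    rows = np.arange(out_first) * h // out_first
    cols = np.arange(out_second) * w // out_second
    return block[rows][:, cols]

def to_target_size(block, t_first, t_second):
    """Dispatch between the transformation manners described above, with the
    first array axis playing the role of the first side length."""
    h, w = block.shape
    if (h, w) == (t_first, t_second):
        return block                              # already the target size
    if (h, w) == (t_second, t_first):
        return block.T                            # manner (3): transpose only
    if h * t_second == w * t_first:
        return resize(block, t_first, t_second)   # manner (1): scale only
    return resize(block.T, t_first, t_second)     # manner (2): transpose, then scale

# Transposition and scaling commute with respect to the resulting size:
# resize(block.T, a, b) and resize(block, b, a).T both have shape (a, b).
```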
  • Obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, performing a size transformation operation on the first data block to obtain a second data block whose size is the target size, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and then inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image block is fully utilized and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector obtained from the second data block and therefore the accuracy of the data in the code stream, improving encoding performance.
  • When the size transformation operation includes scaling, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; when M is less than N and the sum of the absolute values is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4 and 4N/M, respectively; and when M is less than N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8 and 8N/M, respectively, where M and N are positive integers. This rule is sketched in code below.
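  • The sketch below restates this scaling rule; the simple finite-difference gradient and the name `second_block_side_lengths` are assumptions, since the summary does not say how the horizontal and vertical gradients are computed.

```python
import numpy as np

def gradient_sum(samples):
    """Sum of the absolute horizontal and vertical sample differences of the
    current image block; a simple stand-in for the gradients named above."""
    g_h = np.abs(np.diff(samples, axis=1)).sum()    # horizontal differences
    g_v = np.abs(np.diff(samples, axis=0)).sum()    # vertical differences
    return g_h + g_v

def second_block_side_lengths(M, N, grad, threshold):
    """Side lengths of the second data block under the scaling-only rule,
    where M and N are the first and second side lengths of the first data
    block (assumed to divide evenly, as power-of-two block sizes do)."""
    if M >= N:
        return (4 * M // N, 4) if grad < threshold else (8 * M // N, 8)
    return (4, 4 * N // M) if grad < threshold else (8, 8 * N // M)
```

  • For example, a 32x8 first data block maps to 16x4 when the gradient sum is below the threshold and to 32x8 otherwise.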
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the code stream. At the same time, scaling reduces the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving coding performance.
  • When the size transformation operation includes at least one of scaling and transposition, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block, transforming the first data block (transposition together with scaling) to obtain a second data block whose first side length and second side length are 8N/M and 8, respectively, where M and N are positive integers. A sketch of this rule follows.
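  • The following sketch only covers the outputs recoverable from the passage above; the returned tuple layout and the helper name are assumptions.

```python
def second_block_layout_with_transpose(M, N, grad, threshold):
    """Target layout under the rule that combines scaling with transposition:
    returns (transpose_first, first_side, second_side). The M < N branch only
    reproduces the 8N/M x 8 output recoverable from the text above; any
    gradient condition attached to that branch is not reproduced here."""
    if M >= N:                                   # no transposition needed
        if grad < threshold:
            return (False, 4 * M // N, 4)
        return (False, 8 * M // N, 8)
    return (True, 8 * N // M, 8)                 # M < N: transpose, then scale
```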
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized to determine the size of the second data block, and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the code stream. At the same time, scaling and transposition reduce the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving coding performance.
  • Encoding the target intra prediction mode into the code stream according to the target intra prediction mode and the probability vector includes: when the size transformation operation does not include transposition, determining a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode; when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the code stream corresponding to the target intra prediction mode; and when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, determining the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode.
  • The preset constant is equal to the number of intra prediction modes in the Versatile Video Coding (VVC) technology, namely 67. The first identifier may be the mode value corresponding to the target intra prediction mode. The angular prediction modes are the 65 intra prediction modes with mode values 2 to 67 in VVC, and the non-angular prediction modes are the planar prediction mode with mode value 0 and the DC prediction mode with mode value 1.
  • That is, when the target intra prediction mode is an angular prediction mode, the first identifier of the target intra prediction mode is processed with the preset constant to obtain the second identifier, and the second reference value is then determined according to the second identifier. This ensures that, when the first data block has been transposed, the correct corresponding reference value is encoded into the code stream, thereby ensuring the correctness of the encoding process. A hedged sketch of this identifier handling follows.
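  • The sketch below illustrates one way the identifier handling could look. The mirror formula and the cumulative-probability reference value are assumptions: the summary only states that the second identifier is derived from the preset constant and the first identifier, and that the reference value is derived from an identifier and the probability vector.

```python
PRESET_CONSTANT = 67      # number of intra prediction modes cited for VVC

def identifier_to_encode(mode_id, transposed):
    """Angular modes (mode values 2..67 in the text) are remapped when the
    first data block was transposed; planar (0) and DC (1) are not. The
    mirror formula is an assumption, not a formula given in the summary."""
    if transposed and mode_id >= 2:
        return PRESET_CONSTANT + 2 - mode_id     # assumed mirror within 2..67
    return mode_id

def reference_value(mode_id, probs):
    """Stand-in for deriving the reference value fed to the entropy coder:
    here, the cumulative probability mass below the chosen mode (an
    assumption, since the exact mapping is not specified)."""
    return float(sum(probs[:mode_id]))
```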
  • In a second aspect, the present application provides an intra prediction mode encoding method. The method includes: acquiring the surrounding image blocks of a current image block; splicing or concatenating the surrounding image blocks to obtain a first data block; performing a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block, where the elements of the probability vector correspond one-to-one to a plurality of intra prediction modes and each element represents the probability of using the corresponding intra prediction mode when predicting the current image block; determining the target intra prediction mode of the current image block; and encoding the target intra prediction mode into the code stream according to the target intra prediction mode and the probability vector.
  • The prediction mode probability model in this embodiment of the present application may be a neural network model, and second data blocks of different sizes correspond to different neural network models. Through the size transformation operation, the computational complexity of subsequently using the second data block in the calculation of the probability vector can be effectively reduced, which improves coding efficiency; at the same time, the size transformation operation reduces the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving encoding performance.
  • Performing a size transformation operation on the first data block to obtain the second data block includes: when the target intra prediction mode is a non matrix-weighted intra prediction (MIP) mode and the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size. The target size may be set according to the actual scenario, and the number of target sizes is smaller than the number of possible first-data-block sizes.
  • In this way, the number of possible sizes of the second data block is effectively reduced, which reduces the number of neural network models corresponding to second data blocks of different sizes when the target intra prediction mode is a non-MIP mode, saving the storage space of the neural network models and improving coding performance. In addition, a second data block with a smaller size is obtained, so that the computational complexity of subsequently using the second data block in the calculation of the probability vector is effectively reduced when the target intra prediction mode is a non-MIP mode, thereby improving coding efficiency.
  • Performing a size transformation operation on the first data block to obtain the second data block includes: when the target intra prediction mode is a non matrix-weighted intra prediction (MIP) mode and the size of the first data block is not the target size, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block whose size is equal to the target size. The first side length and the second side length may be the width and the height, respectively, or the height and the width, respectively.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image block is fully utilized and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector obtained from the second data block and therefore the accuracy of the data in the code stream, improving encoding performance.
  • When the target intra prediction mode is a non matrix-weighted intra prediction (MIP) mode and the size of the first data block is equal to the target size, the first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block. In this case, the size transformation operation is not performed and the first data block is used directly to generate the probability vector, which simplifies the process of encoding the target intra prediction mode and improves coding efficiency.
  • When the size transformation operation includes scaling, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; when M is less than N and the sum of the absolute values is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4 and 4N/M, respectively; and when M is less than N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8 and 8N/M, respectively, where M and N are positive integers.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the code stream. At the same time, scaling reduces the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving coding performance.
  • When the size transformation operation includes at least one of scaling and transposition, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block, transforming the first data block (transposition together with scaling) to obtain a second data block whose first side length and second side length are 8N/M and 8, respectively, where M and N are positive integers.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized to determine the size of the second data block, and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the code stream. At the same time, scaling and transposition reduce the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving coding performance.
  • Encoding the target intra prediction mode into the code stream according to the target intra prediction mode and the probability vector includes: when the size transformation operation does not include transposition, determining a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode; when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the code stream corresponding to the target intra prediction mode; and when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, determining the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode.
  • The preset constant is equal to the number of intra prediction modes in the Versatile Video Coding (VVC) technology, namely 67. The first identifier may be the mode value corresponding to the target intra prediction mode. The angular prediction modes are the 65 intra prediction modes with mode values 2 to 67 in VVC, and the non-angular prediction modes are the planar prediction mode with mode value 0 and the DC prediction mode with mode value 1.
  • That is, when the target intra prediction mode is an angular prediction mode, the first identifier of the target intra prediction mode is processed with the preset constant to obtain the second identifier, and the second reference value is then determined according to the second identifier. This ensures that, when the first data block has been transposed, the correct corresponding reference value is encoded into the code stream, thereby ensuring the correctness of the encoding process.
  • In a third aspect, the present application provides an intra prediction mode decoding method. The method includes: acquiring the code stream corresponding to a current image block and at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks of the current image block; obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and the prediction mode probability model, where the elements of the probability vector correspond one-to-one to a plurality of intra prediction modes and each element represents the probability of using the corresponding intra prediction mode when predicting the current image block; determining the target intra prediction mode of the current image block according to the code stream and the probability vector; and determining the prediction block of the current image block according to the target intra prediction mode.
  • Obtaining the probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and the prediction mode probability model includes: splicing or concatenating at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks to obtain a first data block; and obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model.
  • The prediction mode probability model in this embodiment of the present application may be a neural network model, and second data blocks of different sizes correspond to different neural network models.
  • Since at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks can fully characterize the relevant information of the surrounding image blocks, and the first data block is obtained by splicing or cascading at least two of them, the first data block can fully reflect the information relevant to the current image block. The probability vector of the current image block obtained according to the first data block is therefore more accurate, and the target intra prediction mode of the current image block can subsequently be determined accurately according to the probability vector and the code stream, which improves decoding performance.
  • Obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and then inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  • In this way, the number of possible sizes of the second data block is effectively reduced, which reduces the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving decoding performance. In addition, the computational complexity of subsequently calculating the probability vector with the neural network model is effectively reduced, which improves decoding efficiency.
  • Obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, performing a size transformation operation on the first data block to obtain a second data block, where the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and then inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image block is fully utilized and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector obtained from the second data block, so that the target intra prediction mode of the current image block can subsequently be determined accurately according to the probability vector and the code stream, improving decoding performance.
  • When the size transformation operation includes scaling, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; when M is less than N and the sum of the absolute values is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4 and 4N/M, respectively; and when M is less than N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8 and 8N/M, respectively, where M and N are positive integers.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized, which improves the accuracy of the probability vector subsequently obtained from the second data block, so that the target intra prediction mode of the current image block subsequently determined according to the probability vector and the code stream is more accurate. At the same time, scaling reduces the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving decoding performance.
  • When the size transformation operation includes at least one of scaling and transposition, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block, transforming the first data block (transposition together with scaling) to obtain a second data block whose first side length and second side length are 8N/M and 8, respectively, where M and N are positive integers.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image is fully utilized to determine the size of the second data block, and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the code stream. At the same time, scaling and transposition reduce the number of possible sizes of the second data block, and thus the number of neural network models corresponding to second data blocks of different sizes, saving the storage space needed to store the neural network models and improving coding performance.
  • Determining the target intra prediction mode of the current image block according to the code stream and the probability vector includes: decoding the code stream to obtain a target reference value; when the size transformation operation does not include transposition, determining the target intra prediction mode according to the target reference value and the probability vector; when the size transformation operation includes transposition, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to the preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
  • That is, when the intra prediction mode corresponding to the first identifier is an angular prediction mode, the first identifier is transformed according to the preset constant to obtain a second identifier, and the intra prediction mode corresponding to the second identifier is the target intra prediction mode of the current image block. A corresponding decoder-side sketch follows.
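  • The sketch below mirrors the encoder-side sketch given earlier; the cumulative-probability mapping and the mirror formula are the same assumptions made there, not details taken from this summary.

```python
PRESET_CONSTANT = 67      # same constant as on the encoder side

def mode_from_reference(ref_value, probs):
    """Invert the assumed cumulative-probability mapping: return the mode
    whose cumulative interval contains the decoded target reference value."""
    acc = 0.0
    for mode_id, p in enumerate(probs):
        if acc <= ref_value < acc + p:
            return mode_id
        acc += p
    return len(probs) - 1                        # clamp against rounding error

def target_intra_mode(ref_value, probs, transposed):
    """Mirror of the encoder-side identifier handling sketched earlier."""
    first_id = mode_from_reference(ref_value, probs)
    if transposed and first_id >= 2:             # angular mode: undo the remap
        return PRESET_CONSTANT + 2 - first_id    # same assumed mirror formula
    return first_id                              # planar / DC, or no transposition
```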
  • In another aspect, the present application provides an intra prediction mode decoding method. The method includes: acquiring the code stream corresponding to a current image block and the surrounding image blocks of the current image block; splicing or concatenating the surrounding image blocks to obtain a first data block; performing a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block, where the elements of the probability vector correspond one-to-one to a plurality of intra prediction modes and each element represents the probability of using the corresponding intra prediction mode when predicting the current image block; determining the target intra prediction mode of the current image block according to the code stream and the probability vector; and determining the prediction block of the current image block according to the target intra prediction mode.
  • In this way, the computational complexity of subsequently using the neural network model to calculate the probability vector is effectively reduced, which improves decoding efficiency. In addition, since the first data block can have various sizes, obtaining the second data block through the size transformation operation effectively reduces the number of possible sizes of the second data block, which reduces the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving decoding performance.
  • Performing the size transformation operation on the first data block to obtain the second data block includes: when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size.
  • The target size can be set according to the actual application scenario, and changing the size of the first data block to the target size through the size transformation operation can effectively meet the requirements of the actual scene. At the same time, because the number of target sizes is smaller than the number of possible first-data-block sizes, the size transformation operation effectively reduces the number of possible sizes of the second data block and the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving decoding performance.
  • Performing the size transformation operation on the first data block to obtain the second data block includes: when the size of the first data block is not the target size, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block whose size is equal to the target size.
  • Because the second data block is determined according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, the self-information of the current image block is fully utilized and the correlation between the second data block and the current image block is enhanced, which improves the accuracy of the probability vector obtained from the second data block, so that the target intra prediction mode of the current image block can subsequently be determined accurately according to the probability vector and the code stream, improving decoding performance.
  • The above method further includes: when the size of the first data block is equal to the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block. In this case, the size transformation operation is not performed and the first data block is used directly to generate the probability vector, which simplifies the generation of the probability vector of the current image block and improves decoding efficiency.
  • When the size transformation operation includes scaling, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; when M is less than N and the sum of the absolute values is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4 and 4N/M, respectively; and when M is less than N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8 and 8N/M, respectively, where M and N are positive integers.
  • When the size transformation operation includes at least one of scaling and transposition, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 4M/N and 4, respectively; when M is greater than or equal to N and the sum of the absolute values is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first side length and second side length are 8M/N and 8, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block, transforming the first data block (transposition together with scaling) to obtain a second data block whose first side length and second side length are 8N/M and 8, respectively, where M and N are positive integers.
  • Determining the target intra prediction mode of the current image block according to the code stream and the probability vector includes: decoding the code stream to obtain a target reference value; when the size transformation operation does not include transposition, determining the target intra prediction mode according to the target reference value and the probability vector; when the size transformation operation includes transposition, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to the preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
  • The present application further provides an encoding device; for its beneficial effects, refer to the description of the first aspect, which is not repeated here.
  • The encoding device has the function of implementing the behavior in the method example of the first aspect above.
  • The functions can be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the above functions.
  • the encoding device includes: an obtaining unit, configured to obtain at least two of a reconstruction block, a prediction block and a residual block of surrounding image blocks of the current image block; At least two of the reconstructed block, prediction block and residual block of the block and the prediction mode probability model obtain the probability vector of the current image block, and multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one. Any element of is used to characterize the probability of adopting the intra prediction mode corresponding to any element when predicting the current image block; the determination unit is used to determine the target intra prediction mode of the current image block; the coding unit is used to determine the target intra prediction mode according to the target Intra prediction mode and probability vector, encoding the target intra prediction mode into the code stream.
  • These modules can perform the corresponding functions in the method examples of the first aspect. For details, please refer to the detailed descriptions in the method examples, which will not be repeated here.
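• As a minimal sketch of how these units could fit together, assuming the prediction mode probability model is a callable (e.g., a neural network) and that an entropy coder exposes a hypothetical encode_symbol helper (neither API is defined by this application):
```python
import numpy as np

def encode_intra_mode(recon, pred, resid, prob_model, target_mode, entropy_coder):
    # stack (at least two of) the surrounding blocks' reconstruction, prediction
    # and residual as the input of the prediction mode probability model
    model_input = np.stack([recon, pred, resid], axis=0)
    prob_vector = prob_model(model_input)      # one probability per intra prediction mode
    # the probability vector conditions the entropy coding of the chosen mode
    entropy_coder.encode_symbol(target_mode, probabilities=prob_vector)
```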
  • the present application provides an encoding apparatus, the beneficial effects of which can be referred to the description of the second aspect, and are not repeated here.
  • the encoding device has the function of implementing the behavior in the method example of the second aspect above.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
• the encoding device includes: an acquisition unit, configured to acquire surrounding image blocks of the current image block; a processing unit, configured to splice or concatenate the surrounding image blocks to obtain a first data block, and to perform a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; the processing unit is further configured to input the second data block into the prediction mode probability model for processing, to obtain a probability vector of the current image block, where multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to that element when predicting the current image block; a determining unit, configured to determine the target intra prediction mode of the current image block; and a coding unit, configured to encode the target intra prediction mode into the code stream according to the target intra prediction mode and the probability vector.
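• A minimal sketch of the splicing and size transformation steps just described, assuming an L-shaped layout of the surrounding blocks and nearest-neighbour scaling (both are assumptions; the application does not fix a layout or an interpolation method here):
```python
import numpy as np

def build_first_data_block(above_left, above, left):
    """Splice the surrounding reconstructions into one data block (illustrative layout).
    Assumes `left` is narrower than the spliced top row."""
    top = np.concatenate([above_left, above], axis=1)
    pad = np.zeros((left.shape[0], top.shape[1] - left.shape[1]), dtype=left.dtype)
    bottom = np.concatenate([left, pad], axis=1)
    return np.concatenate([top, bottom], axis=0)

def size_transform(block, out_h, out_w, transpose=False):
    """Optional transposition followed by nearest-neighbour scaling to (out_h, out_w)."""
    if transpose:
        block = block.T
    rows = np.arange(out_h) * block.shape[0] // out_h
    cols = np.arange(out_w) * block.shape[1] // out_w
    return block[rows][:, cols]
```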
  • the present application provides a decoding apparatus.
  • the decoding device has a function to implement the behavior in the method example of the third aspect above.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
• the decoding apparatus includes: an obtaining unit, configured to obtain a code stream corresponding to the current image block, and at least two of the reconstruction block, the prediction block and the residual block of surrounding image blocks of the current image block; a processing unit, configured to obtain a probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, where multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to that element when predicting the current image block; and a decoding unit, configured to determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and to determine the prediction block of the current image block according to the target intra prediction mode.
  • These modules can perform the corresponding functions in the method examples of the third aspect. For details, please refer to the detailed descriptions in the method examples, which will not be repeated here.
  • the present application provides a decoding apparatus.
  • the decoding device has a function to implement the behavior in the method example of the fourth aspect above.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
• the decoding device includes: an acquiring unit, configured to acquire a code stream corresponding to the current image block and surrounding image blocks of the current image block; a processing unit, configured to splice or concatenate the surrounding image blocks to obtain a first data block, and to perform a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; the processing unit is further configured to input the second data block into the prediction mode probability model for processing, to obtain a probability vector of the current image block, where multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to that element when predicting the current image block; and a decoding unit, configured to determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and to determine the prediction block of the current image block according to the target intra prediction mode.
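• The decoder-side counterpart can be sketched in the same hedged way; entropy_decoder.decode_symbol and predict_intra are assumed helpers, not APIs defined by this application:
```python
def decode_intra_block(bitstream, second_data_block, prob_model, entropy_decoder, ref_pixels):
    prob_vector = prob_model(second_data_block)                   # probability per intra mode
    target_mode = entropy_decoder.decode_symbol(bitstream, probabilities=prob_vector)
    return predict_intra(ref_pixels, target_mode)                 # prediction block of the current block
```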
  • the method described in the first aspect of the present application may be performed by the apparatus described in the fifth aspect of the present application.
  • Other features and implementations of the method described in the first aspect of the present application directly depend on the functionality and implementation of the device described in the fifth aspect of the present application.
  • the method described in the second aspect of the present application may be performed by the apparatus described in the sixth aspect of the present application.
  • Other features and implementations of the method described in the second aspect of the present application directly depend on the functionality and implementation of the device described in the sixth aspect of the present application.
  • the method described in the third aspect of the present application may be performed by the apparatus described in the seventh aspect of the present application.
  • Other features and implementations of the method described in the third aspect of the present application directly depend on the functionality and implementation of the device described in the seventh aspect of the present application.
  • the method described in the fourth aspect of the present application may be performed by the apparatus described in the eighth aspect of the present application.
  • Other features and implementations of the method described in the fourth aspect of the present application directly depend on the functionality and implementation of the apparatus described in the eighth aspect of the present application.
  • the present application provides an encoder (20) comprising a processing circuit, the encoder being operable to perform the method in all or any of the possible embodiments of the first aspect and the second aspect.
  • the present application provides a decoder (30) comprising a processing circuit, the decoder being operable to perform the method in all or any of the possible embodiments of the third aspect and the fourth aspect.
• the present application provides an encoder, comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the processor and storing a program executed by the processor, wherein the program, when executed by the processor, causes the encoder to perform the method in any possible embodiment of the first aspect or the second aspect above.
• the present application provides a decoder, comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the processor and storing a program executed by the processor, wherein the program, when executed by the processor, causes the decoder to perform the method in any possible embodiment of the third aspect or the fourth aspect above.
• the present application provides a non-transitory computer-readable storage medium comprising program code that, when executed by a computer device, performs the method in any possible embodiment of the above-mentioned first, second, third or fourth aspect.
  • the present application provides a non-transitory computer-readable storage medium, comprising a bitstream encoded according to the method in all or any of the possible embodiments of the first aspect or the second aspect.
• the present application provides a computer program product comprising program code that, when run on a computer or processor, performs the method in any possible embodiment of the above-mentioned first, second, third or fourth aspect.
  • FIG. 1A is a block diagram of an example video coding system for implementing embodiments of the present application, wherein the system utilizes a neural network to encode or decode video images;
  • FIG. 1B is a block diagram of another example of a video coding system for implementing embodiments of the present application, wherein the video encoder and/or video decoder use a neural network to encode or decode video images;
• FIG. 2 is a block diagram of an example of a video encoder for implementing embodiments of the present application, wherein the video encoder 20 uses a neural network to encode video images;
• FIG. 3 is a block diagram of an example of a video decoder for implementing embodiments of the present application, wherein the video decoder 30 uses a neural network to decode video images;
  • FIG. 4 is a schematic block diagram of a video decoding apparatus for implementing an embodiment of the present application.
  • FIG. 5 is a schematic block diagram of a video decoding apparatus for implementing an embodiment of the present application.
• FIG. 6 is a schematic diagram of a neural network structure for obtaining a probability vector of a current image block according to an embodiment of the application;
• FIG. 7 is a flowchart of a method for encoding a target intra prediction mode of a current image block in an embodiment of the application;
• FIG. 8 is a flowchart of another method for encoding a target intra prediction mode of a current image block in an embodiment of the application;
  • FIG. 9 is a schematic diagram of multiple intra prediction modes in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a splicing method of a current image block in an embodiment of the present application.
  • Figures 11-1 to 11-4 are schematic diagrams showing four kinds of positional relationships of the current image block in the current frame in the embodiment of the present application.
• FIG. 12 is a flowchart of an encoder encoding related syntax elements of an intra prediction mode of a current image block according to an embodiment of the present application;
• FIG. 13 is a flowchart of a method for decoding a target intra prediction mode of a current image block in an embodiment of the present application;
• FIG. 14 is a flowchart of another method for decoding a target intra prediction mode of a current image block in an embodiment of the present application;
• FIG. 15 is a schematic block diagram of an encoding apparatus in an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of a decoding apparatus in an embodiment of the present application.
  • Embodiments of the present application provide an AI-based video image compression technology, and in particular, provide a neural network-based intra-frame prediction mode encoding and decoding technology to improve traditional hybrid video encoding and decoding systems.
  • Video coding generally refers to the processing of sequences of images that form a video or video sequence. In the field of video coding, the terms “picture”, “frame” or “image” may be used as synonyms.
  • Video encoding (or commonly referred to as encoding) includes two parts: video encoding and video decoding. Video encoding is performed on the source side and typically involves processing (eg, compressing) the original video image to reduce the amount of data required to represent the video image (and thus store and/or transmit more efficiently). Video decoding is performed on the destination side and typically involves inverse processing relative to the encoder to reconstruct the video image.
  • the "encoding" of a video image in relation to the embodiments should be understood as the “encoding” or “decoding” of a video image or a video sequence.
  • the encoding part and the decoding part are also collectively referred to as codec (encoding and decoding, CODEC).
• In the case of lossless video coding, the original video image can be reconstructed, i.e. the reconstructed video image has the same quality as the original video image (assuming no transmission loss or other data loss during storage or transmission).
• In the case of lossy video coding, further compression is performed through quantization, etc., to reduce the amount of data required to represent the video image, and the decoder side cannot completely reconstruct the video image, i.e. the quality of the reconstructed video image is lower or worse than that of the original video image.
  • Video coding standards fall under the category of "lossy hybrid video codecs" (ie, combining spatial and temporal prediction in the pixel domain with 2D transform coding in the transform domain for applying quantization).
  • Each image in a video sequence is usually partitioned into sets of non-overlapping blocks, usually encoded at the block level.
• encoders typically process, i.e. encode, video at the block (video block) level, e.g. generate a prediction block by spatial (intra) prediction and temporal (inter) prediction; subtract the prediction block from the current block (the block currently being processed/to be processed) to obtain a residual block; transform the residual block in the transform domain and quantize it to reduce the amount of data to be transmitted (compressed); the decoder side applies the inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. Additionally, the encoder needs to repeat the decoder's processing steps so that the encoder and the decoder generate the same predictions (e.g., intra- and inter-prediction) and/or reconstructed pixels for processing, i.e. encoding, subsequent blocks.
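• As a compact, non-normative illustration of this block-level loop (scipy's floating-point DCT is used only as a stand-in for a codec's integer transforms, and quantization is reduced to a single scalar step size):
```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):  return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(x): return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def code_block(block, prediction, qstep):
    residual = block.astype(np.float64) - prediction     # residual block
    levels = np.round(dct2(residual) / qstep)            # transform + quantization
    recon_residual = idct2(levels * qstep)               # inverse path, mirrored by the decoder
    reconstruction = np.clip(prediction + recon_residual, 0, 255)
    return levels, reconstruction                        # levels go to entropy coding
```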
  • FIG. 1A is a schematic block diagram of an exemplary coding system 10, such as a video coding system 10 (or simply coding system 10) that may utilize the techniques of this application.
  • Video encoder 20 (or encoder 20 for short) and video decoder 30 (or decoder 30 for short) in video coding system 10 represent devices, etc. that may be used to perform techniques in accordance with the various examples described in this application .
  • the decoding system 10 includes a source device 12 for providing encoded image data 21 such as encoded images to a destination device 14 for decoding the encoded image data 21 .
• the source device 12 includes an encoder 20 and may additionally, i.e. optionally, include an image source 16, a preprocessor (or preprocessing unit) 18 such as an image preprocessor, and a communication interface (or communication unit) 22.
• Image source 16 may include or be any type of image capture device for capturing real-world images, etc., and/or any type of image generation device, such as a computer graphics processor for generating computer animation images, or any type of device for acquiring and/or providing real-world images or computer-generated images (e.g., screen content, virtual reality (VR) images, and/or any combination thereof, such as augmented reality (AR) images).
  • the image source may be any type of memory or storage that stores any of the above-mentioned images.
  • the image (or image data) 17 may also be referred to as the original image (or original image data) 17 .
  • the preprocessor 18 is used to receive the (raw) image data 17 and preprocess the image data 17 to obtain a preprocessed image (or preprocessed image data) 19 .
  • the preprocessing performed by the preprocessor 18 may include trimming, color format conversion (eg, from RGB to YCbCr), toning, or denoising. It is understood that the preprocessing unit 18 may be an optional component.
  • a video encoder (or encoder) 20 is used to receive preprocessed image data 19 and to provide encoded image data 21 (described further below with respect to FIG. 2 etc.).
• the communication interface 22 in the source device 12 can be used to receive the encoded image data 21 and send the encoded image data 21 (or any other processed version) over the communication channel 13 to another device, such as the destination device 14, or any other device, for storage or direct reconstruction.
• the destination device 14 includes a decoder 30 and may additionally, i.e. optionally, include a communication interface (or communication unit) 28, a post-processor (or post-processing unit) 32 and a display device 34.
• the communication interface 28 in the destination device 14 is used to receive the encoded image data 21 (or any other processed version) directly from the source device 12 or from any other source device such as a storage device (for example, an encoded image data storage device), and to supply the encoded image data 21 to the decoder 30.
• Communication interface 22 and communication interface 28 may be used to send or receive the encoded image data (or encoded data) 21 through a direct communication link between source device 12 and destination device 14, such as a direct wired or wireless connection, or through any type of network, such as a wired network, a wireless network, or any combination thereof, any type of private network and public network, or any type of combination thereof.
• the communication interface 22 may be used to encapsulate the encoded image data 21 into a suitable format such as a message, and/or to process the encoded image data using any type of transmission encoding or processing for transmission over a communication link or communication network.
  • the communication interface 28 corresponds to the communication interface 22 and may be used, for example, to receive transmission data and process the transmission data using any type of corresponding transmission decoding or processing and/or decapsulation to obtain encoded image data 21 .
• Both the communication interface 22 and the communication interface 28 can be configured as a one-way communication interface, as indicated by the arrow from the source device 12 to the corresponding communication channel 13 of the destination device 14 in FIG. 1A, or as a two-way communication interface, and can be used to send and receive messages, etc., to establish a connection, and to acknowledge and exchange any other information related to a communication link and/or data transmission such as the transmission of encoded image data.
  • a video decoder (or decoder) 30 is used to receive encoded image data 21 and to provide decoded image data (or decoded image data) 31 (described further below with reference to FIG. 3 etc.).
  • the post-processor 32 is configured to perform post-processing on the decoded image data 31 (also referred to as reconstructed image data) such as a decoded image to obtain post-processed image data 33 such as a post-processed image.
  • Post-processing performed by post-processing unit 32 may include, for example, color format conversion (eg, from YCbCr to RGB), toning, trimming, or resampling, or any other processing used to generate decoded image data 31 for display by display device 34, etc. .
  • a display device 34 is used to receive post-processed image data 33 to display the image to a user or viewer or the like.
  • Display device 34 may be or include any type of display for representing the reconstructed image, eg, an integrated or external display screen or display.
• the display screen may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other type of display.
• the coding system 10 also includes a training engine 25 for training the encoder 20 (in particular the intra prediction unit 254 in the encoder 20) or the decoder 30 (in particular the intra prediction unit 344 in the decoder 30) to process the first data block or the second data block obtained by splicing or concatenating the surrounding image blocks of the current image block, to generate a probability vector of the current image block.
• Although FIG. 1A shows source device 12 and destination device 14 as separate devices, device embodiments may also include both source device 12 and destination device 14 or the functions of both, i.e. source device 12 or the corresponding functionality and destination device 14 or the corresponding functionality. In these embodiments, source device 12 or the corresponding functionality and destination device 14 or the corresponding functionality may be implemented using the same hardware and/or software, or by separate hardware and/or software, or any combination thereof.
• Encoder 20 (e.g., video encoder 20) or decoder 30 (e.g., video decoder 30), or both, may be implemented by processing circuitry as shown in FIG. 1B, e.g., one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, special-purpose processors for video encoding, or any combination thereof.
  • Encoder 20 may be implemented by processing circuitry 46 to include the various modules discussed with reference to encoder 20 of FIG. 2 and/or any other encoder system or subsystem described herein.
• Decoder 30 may be implemented by processing circuitry 46 to include the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
• the processing circuitry 46 may be used to perform various operations discussed below. As shown in FIG. 5, if parts of the techniques are implemented in software, a device may store the instructions of the software in a suitable non-transitory computer-readable storage medium and execute the instructions in hardware using one or more processors, thereby implementing the techniques of the present invention.
  • One of the video encoder 20 and the video decoder 30 may be integrated in a single device as part of a combined codec (encoder/decoder, CODEC), as shown in FIG. 1B .
• Source device 12 and destination device 14 may include any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, mobile phone, smartphone, tablet or tablet computer, camera, desktop computer, set-top box, television, display device, digital media player, video game console, video streaming device (e.g., a content service server or a content distribution server), broadcast receiving device, broadcast transmitting device, etc., and may use no operating system or any type of operating system.
  • source device 12 and destination device 14 may be equipped with components for wireless communication.
  • source device 12 and destination device 14 may be wireless communication devices.
• the video coding system 10 shown in FIG. 1A is merely exemplary, and the techniques provided herein may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device.
  • data is retrieved from local storage, sent over a network, and so on.
  • the video encoding device may encode and store the data in memory, and/or the video decoding device may retrieve and decode the data from the memory.
  • encoding and decoding are performed by devices that do not communicate with each other but merely encode data to and/or retrieve and decode data from memory.
• Video coding system 40 may include imaging device 41, video encoder 20, video decoder 30 (and/or a video encoder/decoder implemented by processing circuit 46), antenna 42, one or more processors 43, one or more memory stores 44 and/or display devices 45.
  • imaging device 41, antenna 42, processing circuit 46, video encoder 20, video decoder 30, processor 43, memory storage 44, and/or display device 45 can communicate with each other.
  • video coding system 40 may include only video encoder 20 or only video decoder 30 .
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • display device 45 may be used to present video data.
  • Processing circuitry 46 may include application-specific integrated circuit (ASIC) logic, graphics processors, general purpose processors, and the like.
  • Video coding system 40 may also include an optional processor 43, which may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
• the memory storage 44 may be any type of memory, such as volatile memory (e.g., static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.).
  • memory storage 44 may be implemented by cache memory.
  • processing circuitry 46 may include memory (eg, cache memory, etc.) for implementing image buffers, and the like.
• video encoder 20 implemented by logic circuitry may include an image buffer (e.g., implemented by processing circuitry 46 or memory storage 44) and a graphics processing unit (e.g., implemented by processing circuitry 46).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video encoder 20 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein.
  • Logic circuits may be used to perform the various operations discussed herein.
• video decoder 30 may be implemented by processing circuitry 46 in a similar manner to implement the various modules discussed with reference to video decoder 30 of FIG. 3 and/or any other decoder systems or subsystems described herein.
• logic circuit-implemented video decoder 30 may include an image buffer (implemented by processing circuit 46 or memory storage 44) and a graphics processing unit (e.g., implemented by processing circuit 46).
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include video decoder 30 implemented by processing circuitry 46 to implement the various modules discussed with reference to Figure 3 and/or any other decoder systems or subsystems described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
• the encoded bitstream may include data related to the encoded video frames as discussed herein, such as data related to coding partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the coding partitions), indicators, index values, mode selection data, and the like.
  • Video coding system 40 may also include video decoder 30 coupled to antenna 42 for decoding the encoded bitstream.
  • Display device 45 is used to present video frames.
  • video decoder 30 may be used to perform the opposite process.
  • video decoder 30 may be operable to receive and parse such syntax elements, decoding the associated video data accordingly.
  • video encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, video decoder 30 may parse such syntax elements and decode related video data accordingly.
  • VVC Versatile Video Coding
  • VCEG ITU-T Video Coding Experts Group
  • MPEG ISO/IEC Motion Picture Experts Group
  • HEVC High-Efficiency Video Coding
  • JCT-VC Joint Collaboration Team on Video Coding
  • FIG. 2 is a schematic block diagram of an example of a video encoder 20 for implementing the techniques of this application.
  • the video encoder 20 includes an input terminal (or input interface) 201 , a residual calculation unit 204 , a transform processing unit 206 , a quantization unit 208 , an inverse quantization unit 210 , an inverse transform processing unit 212 , and a reconstruction unit 214 , a loop filter 220 , a decoded picture buffer (DPB) 230 , a mode selection unit 260 , an entropy encoding unit 270 and an output terminal (or output interface) 272 .
  • Mode selection unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and partition unit 262 .
  • Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
  • the video encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a hybrid video codec-based video encoder.
• the intra-frame prediction module includes a trained target model (also called a neural network), which is used to process the first data block or the second data block obtained by splicing or concatenating the surrounding image blocks of the current image block, to generate a probability vector for the current image block.
• the residual calculation unit 204, the transform processing unit 206, the quantization unit 208 and the mode selection unit 260 constitute the forward signal path of the encoder 20, while the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, the inter-frame prediction unit 244 and the intra-frame prediction unit 254 constitute the backward signal path of the encoder, wherein the backward signal path of the encoder 20 corresponds to the decoding signal path of the decoder (see decoder 30 in FIG. 3).
• The inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, loop filter 220, decoded image buffer 230, inter prediction unit 244, and intra prediction unit 254 also make up the "built-in decoder" of video encoder 20.
  • the encoder 20 may be operable to receive images (or image data) 17, eg, images in a sequence of images forming a video or video sequence, via an input 201 or the like.
  • the received image or image data may also be a preprocessed image (or preprocessed image data) 19 .
• image 17 may also be referred to as the current image or the image to be encoded (especially when, in video encoding, distinguishing the current image from other images, such as previously encoded and/or decoded images of the same video sequence, i.e. the video sequence that also includes the current image).
  • a (digital) image is or can be viewed as a two-dimensional array or matrix of pixel points with intensity values.
  • the pixels in the array may also be called pixels or pels (short for picture elements).
  • the number of pixels in the array or image in the horizontal and vertical directions (or axes) determines the size and/or resolution of the image.
  • three color components are usually used, that is, an image can be represented as or include an array of three pixel points.
  • an image includes an array of corresponding red, green and blue pixel points.
  • each pixel is usually represented in a luma/chroma format or color space, such as YCbCr, including a luma component denoted by Y (and sometimes L) and two chroma components denoted by Cb and Cr.
  • the luminance (luma) component Y represents the luminance or gray level intensity (eg, both are the same in a grayscale image), while the two chrominance (chroma) components Cb and Cr represent the chrominance or color information components .
  • an image in YCbCr format includes a luminance pixel array of luminance pixel value (Y) and two chrominance pixel arrays of chrominance values (Cb and Cr).
  • Images in RGB format can be converted or transformed to YCbCr format and vice versa, the process is also known as color transformation or conversion. If the image is black and white, the image may only include an array of luminance pixels. Correspondingly, the image may be, for example, a luminance pixel array in monochrome format or a luminance pixel array and two corresponding chrominance pixel arrays in 4:2:0, 4:2:2 and 4:4:4 color formats .
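• To make the color formats above concrete, the sketch below shows the plane shapes implied by the 4:2:0, 4:2:2 and 4:4:4 sampling ratios for an assumed 8-bit 1920x1080 image (the resolution and bit depth are only illustrative):
```python
import numpy as np

H, W = 1080, 1920
y      = np.zeros((H, W), dtype=np.uint8)        # luminance (Y) plane
cb_420 = np.zeros((H // 2, W // 2), np.uint8)    # 4:2:0 chroma: half resolution in both directions
cb_422 = np.zeros((H, W // 2), np.uint8)         # 4:2:2 chroma: half horizontal resolution
cb_444 = np.zeros((H, W), np.uint8)              # 4:4:4 chroma: full resolution
```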
  • an embodiment of the video encoder 20 may include an image segmentation unit (not shown in FIG. 2 ) for segmenting the image 17 into a plurality of (generally non-overlapping) image blocks 203 .
  • These blocks may also be referred to as root blocks, macroblocks (H.264/AVC) or Coding Tree Blocks (CTBs), or Coding Tree Units (CTUs) in the H.265/HEVC and VVC standards ).
• the segmentation unit can be used to use the same block size for all images in a video sequence and a corresponding grid that defines the block size, or to vary the block size between images or subsets or groups of images, and to segment each image into corresponding blocks.
  • the video encoder may be used to directly receive blocks 203 of the image 17 , eg, one, several or all of the blocks that make up the image 17 .
  • the image block 203 may also be referred to as a current image block or an image block to be encoded.
  • image block 203 is also or can be considered as a two-dimensional array or matrix of pixels with intensity values (pixel values), but image block 203 is smaller than image 17 .
• block 203 may include one array of pixels (e.g., a luminance array in the case of a monochrome image 17, or a luminance array or a chrominance array in the case of a color image), or three arrays of pixels (e.g., one luminance array and two chrominance arrays in the case of a color image 17), or any other number and/or type of arrays depending on the color format employed.
  • the number of pixels in the horizontal and vertical directions (or axes) of the block 203 defines the size of the block 203 .
• the block may be an array of M×N (M columns × N rows) pixel points, or an array of M×N transform coefficients, or the like.
  • the video encoder 20 shown in FIG. 2 is used to encode the image 17 block by block, eg, performing encoding and prediction on each block 203 .
• the video encoder 20 shown in FIG. 2 may also be used to segment and/or encode an image using slices (also referred to as video slices), where an image may be segmented or encoded using one or more slices (typically non-overlapping slices).
• Each slice may include one or more blocks (e.g., coding tree units, CTUs) or one or more groups of blocks (e.g., coding tiles (tiles) in the H.265/HEVC/VVC standard and bricks (brick) in the VVC standard).
• the video encoder 20 shown in FIG. 2 may also be used to segment and/or encode an image using slices/coding block groups (also referred to as video coding block groups) and/or coding blocks (also referred to as video coding blocks), where the image may be segmented or encoded using one or more slices/coding block groups (usually non-overlapping), and each slice/coding block group may include one or more blocks (e.g., CTUs) or one or more coding blocks, etc., wherein each coding block may be rectangular or the like and may include one or more full or partial blocks (e.g., CTUs).
• the residual calculation unit 204 is configured to calculate the residual block 205 from the image block 203 and the prediction block 265 (the prediction block 265 will be described in detail later) in the following manner: the pixel values of the prediction block 265 are subtracted from the pixel values of the image block 203 to obtain the residual block 205 in the pixel domain.
  • the transform processing unit 206 is configured to perform discrete cosine transform (discrete cosine transform, DCT) or discrete sine transform (discrete sine transform, DST) etc. on the pixel point values of the residual block 205 to obtain transform coefficients 207 in the transform domain.
  • Transform coefficients 207 which may also be referred to as transform residual coefficients, represent the residual block 205 in the transform domain.
  • Transform processing unit 206 may be used to apply integer approximations of DCT/DST, such as transforms specified for H.265/HEVC. Compared to the orthogonal DCT transform, this integer approximation is usually scaled by some factor. In order to maintain the norm of the forward and inversely transformed residual blocks, other scaling factors are used as part of the transformation process.
  • the scaling factor is usually chosen according to certain constraints, such as the scaling factor being a power of 2 for the shift operation, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, etc.
• specific scaling factors are specified for the inverse transform by the inverse transform processing unit 212 at the encoder 20 side (and for the corresponding inverse transform at the decoder 30 side by, for example, the inverse transform processing unit 312), and, accordingly, the corresponding scaling factor for the forward transform can be specified at the encoder 20 side through the transform processing unit 206.
  • the video encoder 20 (correspondingly, the transform processing unit 206 ) may be configured to output transform parameters such as the type of one or more transforms, eg, directly or after being encoded or compressed by the entropy encoding unit 270 , eg, so that video decoder 30 can receive and decode using transform parameters.
  • the quantization unit 208 is configured to quantize the transform coefficients 207 by, for example, scalar quantization or vector quantization, to obtain quantized transform coefficients 209 .
  • the quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209 .
  • the quantization process may reduce the bit depth associated with some or all of the transform coefficients 207 .
  • n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting the quantization parameter (QP).
  • the quantization parameter may be an index into a predefined set of suitable quantization step sizes.
  • Quantization may include dividing by the quantization step size, and corresponding or inverse dequantization performed by the inverse quantization unit 210 or the like may include multiplying by the quantization step size.
  • Embodiments according to some standards such as HEVC may be used to use quantization parameters to determine the quantization step size.
  • the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation involving division.
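• As a hedged numerical illustration of the relationship described above (the exact fixed-point formulas are codec-specific and not given here): in HEVC-style codecs the quantization step size roughly doubles for every increase of 6 in QP, commonly approximated as Qstep = 2^((QP-4)/6).
```python
def quant_step(qp: int) -> float:
    """Textbook approximation of the HEVC-style QP-to-step-size mapping (illustrative only)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff: float, qp: int) -> int:
    return int(round(coeff / quant_step(qp)))      # quantization: divide by the step size

def dequantize(level: int, qp: int) -> float:
    return level * quant_step(qp)                  # inverse quantization: multiply by the step size
```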
  • the video encoder 20 may be used to output a quantization parameter (QP), eg, directly or after being encoded or compressed by the entropy encoding unit 270, eg, such that the video Decoder 30 may receive and decode using the quantization parameters.
• the inverse quantization unit 210 is used to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain the dequantized coefficients 211, e.g., to apply, according to or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208.
  • Dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211, corresponding to transform coefficients 207, but due to losses caused by quantization, inverse quantized coefficients 211 are usually not identical to transform coefficients.
  • the inverse transform processing unit 212 is used to perform the inverse transform of the transform performed by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to A reconstructed residual block 213 (or corresponding dequantized coefficients 213) is obtained.
  • the reconstructed residual block 213 may also be referred to as a transform block 213 .
  • the reconstruction unit 214 (eg, summer 214 ) is used to add the transform block 213 (ie, the reconstructed residual block 213 ) to the prediction block 265 to obtain the reconstructed block 215 in the pixel domain, eg, the The pixel value and the pixel value of the prediction block 265 are added.
  • the loop filter unit 220 (or “loop filter” 220 for short) is used to filter the reconstruction block 215 to obtain the filter block 221, or generally to filter the reconstructed pixels to obtain filtered pixel values.
  • loop filter units are used to smooth pixel transitions or improve video quality.
  • the loop filter unit 220 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as self- Adaptive loop filter (ALF), noise suppression filter (NSF), or any combination.
  • the loop filter unit 220 may include a deblocking filter, a SAO filter, and an ALF filter. The order of the filtering process can be deblocking filter, SAO filter and ALF filter.
• Although loop filter unit 220 is shown in FIG. 2 as an in-loop filter, in other configurations loop filter unit 220 may be implemented as a post-loop filter.
  • Filter block 221 may also be referred to as filter reconstruction block 221 .
• video encoder 20 may be used to output loop filter parameters (e.g., SAO filter parameters, ALF filter parameters, or LMCS parameters), e.g., directly or after entropy encoding by the entropy encoding unit 270, e.g., so that the decoder 30 can receive and apply the same or different loop filter parameters for decoding.
  • a decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by the video encoder 20 in encoding the video data.
  • DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), Resistive RAM (RRAM) or other types of storage devices.
  • a decoded image buffer 230 may be used to store one or more filter blocks 221.
  • the decoded image buffer 230 may also be used to store other previously filtered blocks of the same current image or a different image, such as a previous reconstructed image, such as the previously reconstructed and filtered block 221, and may provide a complete previously reconstructed or decoded image (and corresponding reference blocks and pixels) and/or a partially reconstructed current image (and corresponding reference blocks and pixels), eg for inter prediction.
  • the decoded image buffer 230 may also be used to store one or more unfiltered reconstructed blocks 215, or generally unfiltered reconstructed pixels, eg, reconstructed blocks 215 not filtered by the in-loop filtering unit 220, or unfiltered Any other processed reconstructed blocks or reconstructed pixels.
• Mode selection unit 260 includes partition unit 262, inter prediction unit 244, and intra prediction unit 254, and is configured to receive or obtain, from decoded image buffer 230 or other buffers (e.g., a line buffer, not shown), original image data such as the original block 203 (the current block 203 of the current image 17) and reconstructed image data, e.g., filtered and/or unfiltered reconstructed pixels or reconstructed blocks of the same (current) image and/or of one or more previously decoded images.
  • the reconstructed image data is used as reference image data required for prediction such as inter prediction or intra prediction to obtain the prediction block 265 or the prediction value 265 .
• the mode selection unit 260 may be used to determine or select a partitioning for the current block (including no partitioning) and a prediction mode (e.g., an intra or inter prediction mode) to generate a corresponding prediction block 265, which is used for the calculation of the residual block 205 and for the reconstruction of the reconstruction block 215.
  • mode selection unit 260 may be used to select a partitioning and prediction mode (eg, from among those supported or available by mode selection unit 260) that provides the best match or the smallest residual (minimum Residual refers to better compression in transmission or storage), or provides minimal signaling overhead (minimum signaling overhead refers to better compression in transmission or storage), or considers or balances both.
  • the mode selection unit 260 may be configured to determine the segmentation and prediction mode according to rate distortion optimization (RDO), ie select the prediction mode that provides the least rate distortion optimization.
• partition unit 262 may be used to partition pictures in a video sequence into a sequence of coding tree units (CTUs), and CTU 203 may be further partitioned into smaller block parts or sub-blocks (which again form blocks), e.g., by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction for each of the block parts or sub-blocks, for example, wherein the mode selection includes selecting the tree structure of the partitioned block 203 and selecting the prediction mode applied to each of the block parts or sub-blocks.
• The segmentation (e.g., performed by segmentation unit 262) and the prediction processing (e.g., performed by inter-prediction unit 244 and intra-prediction unit 254) are described in detail below.
  • Partition unit 262 may partition (or divide) one coding tree unit 203 into smaller parts, such as square or rectangular shaped pieces.
• For an image with three pixel arrays, a CTU consists of an N×N block of luminance pixels and two corresponding blocks of chrominance pixels.
  • the maximum allowable size of a luma block in a CTU is specified as 128x128 in the under-development Versatile Video Coding (VVC) standard, but may be specified to a value other than 128x128 in the future, such as 256x256.
  • the CTUs of a picture can be aggregated/grouped into slices/coded block groups, coded blocks or bricks.
  • a coding block covers a rectangular area of an image, and a coding block can be divided into one or more tiles.
  • a brick consists of multiple CTU lines within an encoded block.
  • An encoded block that is not divided into multiple bricks can be called a brick.
• However, a brick that is a true subset of a coding block is not referred to as a coding block.
  • VVC supports two encoding block group modes, namely raster scan slice/encoded block group mode and rectangular slice mode.
• In raster scan slice/coded block group mode, a slice/coded block group contains a sequence of coded blocks in the coded-block raster scan of an image.
• In rectangular slice mode, a slice contains multiple tiles of an image that together make up a rectangular area of the image.
• The tiles within the rectangular slice are arranged in the order of the tile raster scan of the image.
  • These smaller blocks may be further divided into smaller parts.
  • This is also known as tree splitting or hierarchical tree splitting, where a root block at root tree level 0 (hierarchy level 0, depth 0) etc. can be recursively split into two or more blocks of the next lower tree level, For example, a node at tree level 1 (hierarchy level 1, depth 1).
  • These blocks can in turn be split into two or more blocks of the next lower level, e.g. tree level 2 (hierarchy level 2, depth 2), etc., until the split ends (since ending criteria are met, such as reaching a maximum tree depth or minimum block size).
  • Blocks that are not further divided are also called leaf blocks or leaf nodes of the tree.
  • a tree divided into two parts is called a binary-tree (BT)
  • a tree divided into three parts is called a ternary-tree (TT)
  • a tree divided into four parts is called a quadtree ( quad-tree, QT).
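• A compact, non-normative sketch of this hierarchical splitting follows; the split-decision callback is an assumption (a real encoder would choose splits by rate-distortion cost), and the TT split uses the usual 1:2:1 ratio:
```python
def partition(x, y, w, h, depth, choose_split, max_depth=4, min_size=4):
    """Recursively split a block; returns the list of leaf blocks (x, y, w, h)."""
    mode = choose_split(x, y, w, h, depth)   # 'QT', 'BT_H', 'BT_V', 'TT_H', 'TT_V' or None
    if mode is None or depth >= max_depth or min(w, h) <= min_size:
        return [(x, y, w, h)]                # leaf node: no further division
    if mode == 'QT':
        parts = [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                 (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]
    elif mode == 'BT_V':
        parts = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    elif mode == 'BT_H':
        parts = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    elif mode == 'TT_V':
        parts = [(x, y, w // 4, h), (x + w // 4, y, w // 2, h), (x + 3 * w // 4, y, w // 4, h)]
    else:  # 'TT_H'
        parts = [(x, y, w, h // 4), (x, y + h // 4, w, h // 2), (x, y + 3 * h // 4, w, h // 4)]
    leaves = []
    for px, py, pw, ph in parts:
        leaves.extend(partition(px, py, pw, ph, depth + 1, choose_split, max_depth, min_size))
    return leaves
```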
• a coding tree unit may be or include a CTB of luminance pixels and two corresponding CTBs of chrominance pixels for an image having three pixel arrays, or a CTB of pixels of a monochrome image, or a CTB of pixels of an image encoded using three separate color planes and syntax structures (used to encode the pixels).
• a coding tree block (CTB) can be a block of N×N pixel points, where N can be set to a certain value so that a component is divided into CTBs, which is the partitioning.
• a coding unit (CU) may be or include a coding block of luminance pixels and two corresponding coding blocks of chrominance pixels of an image having three pixel arrays, or a coding block of pixels of a monochrome image, or a coding block of pixels of an image encoded using three independent color planes and syntax structures (used to encode the pixels).
• a coding block can be a block of M×N pixel points, where M and N can be set to a certain value so that the CTB is divided into coding blocks, which is the partitioning.
  • a coding tree unit may be divided into multiple CUs according to HEVC by using a quad-tree structure represented as a coding tree.
  • the decision whether to use inter (temporal) prediction or intra (spatial) prediction to encode image regions is made at the leaf-CU level.
  • Each leaf-CU may be further divided into one, two, or four PUs according to the PU partition type.
  • the same prediction process is used within a PU, and relevant information is transmitted to the decoder on a PU basis.
  • the leaf CU may be partitioned into transform units (TUs) according to other quad-tree structures similar to the coding tree used for the CU.
• For example, according to the Versatile Video Coding (VVC) standard, a combined quadtree of nested multi-type trees (e.g., binary and ternary trees) is used to partition the coding tree unit, and a CU can be square or rectangular.
  • the coding tree unit (CTU) is first divided by the quad-tree structure.
• the quad-tree leaf nodes are further divided by a multi-type tree structure.
• The leaf nodes of the multi-type tree are called coding units (CUs); unless the CU is too large for the maximum transform length, such a partition is used for prediction and transform processing without any further partitioning. In most cases, this means that the CU, PU and TU have the same block size in the quadtree-with-nested-multi-type-tree coding block structure. The exception occurs when the maximum supported transform length is smaller than the width or height of a color component of the CU.
• VVC has formulated a unique signaling mechanism for the partitioning information in the quadtree-with-nested-multi-type-tree coding structure. In this signaling mechanism, the coding tree unit (CTU), as the root of the quad-tree, is first divided by the quad-tree structure. Each quad-tree leaf node (when sufficiently large) can then be further divided by the multi-type tree structure.
  • the decoder can derive the multi-type tree division mode (MttSplitMode) of the CU based on a predefined rule or table.
• When the width or height of the luma coding block is greater than 64, TT division is not allowed.
• When the width or height of the chroma coding block is greater than 32, TT division is also not allowed.
• the pipeline design divides the image into multiple virtual pipeline data units (VPDUs), and each VPDU is defined as a mutually non-overlapping unit in the image.
  • successive VPDUs are processed simultaneously in multiple pipeline stages.
• the VPDU size is roughly proportional to the buffer size, so it is necessary to keep the VPDU small.
  • the VPDU size can be set to the maximum transform block (TB) size.
  • the tree node block is forced to be divided until all the pixels of each coded CU are located within the image border.
  • the intra sub-partitions (ISP) tool may divide the luma intra prediction block vertically or horizontally into two or four sub-parts depending on the block size.
  • mode selection unit 260 of video encoder 20 may be used to perform any combination of the partitioning techniques described above.
  • video encoder 20 is used to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
  • the set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined by HEVC, or may include 67 different Intra prediction modes, for example, non-directional modes like DC (or mean) mode and planar mode, or directional modes as defined in VVC.
  • intra prediction mode of the planar mode may also be modified using a position-dependent intra prediction combination (PDPC) method.
  • the intra-frame prediction unit 254 is configured to generate an intra-frame prediction block 265 using reconstructed pixels of adjacent blocks of the same current image according to the intra-frame prediction mode in the intra-frame prediction mode set.
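• As the simplest illustration of generating such a prediction block from reconstructed neighbouring pixels, the sketch below implements only the DC (mean) mode; the planar and angular modes mentioned above use the same reference pixels but combine or interpolate them directionally. Function and argument names are illustrative.
```python
import numpy as np

def dc_predict(left_col: np.ndarray, top_row: np.ndarray, size: int) -> np.ndarray:
    """DC intra prediction: fill the block with the mean of the reference pixels."""
    dc = int(round((int(left_col.sum()) + int(top_row.sum())) / (left_col.size + top_row.size)))
    return np.full((size, size), dc, dtype=left_col.dtype)
```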
• the intra prediction unit 254 (or generally the mode selection unit 260) is further configured to output intra prediction parameters (in this embodiment of the present application, the intra prediction parameters include information on the target intra prediction mode of the current image block and the probability vector of the current image block) to the entropy encoding unit 270 in the form of syntax elements 266 for inclusion into the encoded image data 21, so that the video decoder 30 can, for example, receive and use the prediction parameters for decoding.
• the set of inter-prediction modes depends on the available reference images (i.e., for example, the aforementioned at least partially previously decoded images stored in DPB 230) and other inter-prediction parameters, for example on whether the entire reference image or only a part of the reference image (e.g., the search window area around the area of the current block) is used to search for the best matching reference block, and/or, for example, on whether pixel interpolation such as half-pixel, quarter-pixel and/or sixteenth-pixel interpolation is performed.
  • skip mode and/or direct mode may also be employed.
  • the merge candidate list for this mode consists of the following five candidate types in order: spatial MVP from spatially adjacent CUs, temporal MVP from collocated CUs, history-based MVP from a FIFO table, pairwise average MVP, and zero MVs.
  • Decoder side motion vector refinement (DMVR) based on bilateral matching can be used to increase the accuracy of MV for merged mode.
  • the merge mode with motion vector difference (MMVD) is derived from the merge mode. The MMVD flag is sent immediately after the skip flag and the merge flag to specify whether the CU uses MMVD mode.
  • a CU-level adaptive motion vector resolution (AMVR) scheme may be used. AMVR supports the MVD of the CU to be encoded in different precisions.
  • the MVD resolution of the current CU is adaptively selected.
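  • As an illustration of how an AMVR-style scheme can represent the MVD at different precisions, the Python sketch below rounds an MVD stored in quarter-sample units to a coarser resolution before coding; the chosen precision set, shift values and function names are assumptions for illustration, not the exact VVC procedure.

```python
def round_mvd_to_precision(mvd_quarter_pel: int, shift: int) -> int:
    """Round an MVD stored in quarter-sample units to a coarser precision.

    shift = 0 keeps quarter-sample precision, shift = 2 gives integer-sample,
    shift = 4 gives four-sample precision (values chosen for illustration).
    """
    if shift == 0:
        return mvd_quarter_pel
    offset = 1 << (shift - 1)
    sign = 1 if mvd_quarter_pel >= 0 else -1
    return sign * (((abs(mvd_quarter_pel) + offset) >> shift) << shift)


# A quarter-pel MVD of 11 (2.75 samples) becomes 12 (3 samples) at integer precision.
assert round_mvd_to_precision(11, 2) == 12
```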
  • a combined inter/intra prediction (CIIP) mode may be applied to the current CU.
  • a weighted average is performed on the inter and intra prediction signals to obtain the CIIP prediction.
  • the affine motion field of the block is described by the motion information of two control point motion vectors (4 parameters) or three control point motion vectors (6 parameters).
  • Subblock-based temporal motion vector prediction (SbTMVP) is similar to temporal motion vector prediction (TMVP) in HEVC, but predicts the motion vectors of the sub-CUs within the current CU.
  • Bi-directional optical flow (BDOF), formerly known as BIO, is a simplified version that reduces computation, especially in terms of the number of multiplications and the size of the multipliers.
  • in the triangular partition mode, the CU is evenly divided into two triangular parts using one of two partitions: diagonal partition or anti-diagonal partition.
  • the bidirectional prediction mode is extended on the basis of simple averaging to support weighted average of two prediction signals.
  • Inter prediction unit 244 may include a motion estimation (ME) unit and a motion compensation (MC) unit (both not shown in FIG. 2 ).
  • the motion estimation unit may be used to receive or obtain the image block 203 (the current image block 203 of the current image 17 ) and the decoded image 231 , or at least one or more previously reconstructed blocks, eg, one or more other/different previously decoded images 231 .
  • the reconstructed blocks used for motion estimation may come from the current image and the previously decoded images 231; in other words, the current image and the previously decoded images 231 may be part of, or form, the sequence of images forming the video sequence.
  • the encoder 20 may be operable to select a reference block from a plurality of reference blocks of the same or different pictures among a plurality of other pictures, and provide the reference picture (or reference picture index) and/or the offset (spatial offset) between the position (x, y coordinates) of the reference block and the position of the current block to the motion estimation unit as inter prediction parameters.
  • This offset is also called a motion vector (MV).
  • the motion compensation unit is used to obtain, eg, receive, inter-prediction parameters, and perform inter-prediction based on or using the inter-prediction parameters, resulting in the inter-prediction block 246 .
  • the motion compensation performed by the motion compensation unit may involve extracting or generating prediction blocks based on the motion/block vectors determined through motion estimation, and may also include performing interpolation to sub-pixel precision. Interpolation filtering can generate additional pixel samples from known pixel samples, thereby potentially increasing the number of candidate prediction blocks that can be used to encode an image block.
  • the motion compensation unit may locate the prediction block pointed to by the motion vector in one of the reference image lists.
  • the motion compensation unit may also generate block- and video slice-related syntax elements for use by video decoder 30 in decoding image blocks of the video slice.
  • coding block groups and/or coding blocks and corresponding syntax elements may be generated or used.
  • the entropy coding unit 270 is used to apply an entropy coding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC), an arithmetic coding scheme, a binarization algorithm, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, loop filter parameters and/or other syntax elements, resulting in encoded image data 21 that can be output through output 272 in the form of an encoded bitstream 21 or the like, so that the video decoder 30 or the like can receive and use these parameters for decoding.
  • the intra prediction parameters include a target intra prediction mode and a probability vector of the current image block
  • the entropy encoding unit 270 selects a first reference value or a second reference value corresponding to the target intra prediction mode from the probability vector, and then encodes the first reference value or the second reference value into the code stream, where the first reference value or the second reference value is used to indicate the target intra prediction mode of the current image block.
  • video encoder 20 may be used to encode the video stream.
  • the non-transform based encoder 20 may directly quantize the residual signal without transform processing unit 206 for certain blocks or frames.
  • encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
  • FIG. 3 illustrates an example video decoder 30 for implementing the techniques of this application.
  • the video decoder 30 is adapted to receive the encoded image data 21 (eg, the encoded bitstream 21 ) encoded by the encoder 20 , for example, to obtain a decoded image 331 .
  • the encoded image data or bitstream includes information for decoding the encoded image data, such as data representing image blocks of an encoded video slice (and/or encoded block groups or encoded blocks) and associated syntax elements.
  • decoder 30 includes entropy decoding unit 304, inverse quantization unit 310, inverse transform processing unit 312, reconstruction unit 314 (e.g., summer 314), loop filter 320, decoded picture buffer (DPB) 330, a mode application unit 360, an inter prediction unit 344 and an intra prediction unit 354.
  • Inter prediction unit 344 may be or include a motion compensation unit.
  • video decoder 30 may perform a decoding process that is substantially the inverse of the encoding process described with reference to video encoder 100 of FIG. 2 .
  • the intra prediction module includes a trained target model (also called a neural network), which is used to process the first data block obtained by splicing or concatenating the surrounding image blocks of the current image block, or the second data block, to generate a probability vector for the current image block.
  • a trained target model also called a neural network
  • the inverse quantization unit 310 may be functionally the same as the inverse quantization unit 210
  • the inverse transform processing unit 312 may be functionally the same as the inverse transform processing unit 212
  • the reconstruction unit 314 may be functionally the same as the reconstruction unit 214
  • the loop filter 320 may be functionally identical to the loop filter 220
  • decoded image buffer 330 may be functionally identical to decoded image buffer 230 . Therefore, the explanations of the corresponding units and functions of the video encoder 20 apply correspondingly to the corresponding units and functions of the video decoder 30 .
  • the entropy decoding unit 304 is used to parse the bitstream 21 (or generally the encoded image data 21) and perform entropy decoding on the encoded image data 21 to obtain quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3), such as any or all of inter prediction parameters (such as reference picture indices and motion vectors), intra prediction parameters (such as intra prediction mode or index), transform parameters, quantization parameters, loop filter parameters and/or other syntax elements.
  • the entropy decoding unit 304 may be configured to apply a decoding algorithm or scheme corresponding to the encoding scheme of the entropy encoding unit 270 of the encoder 20 .
  • Entropy decoding unit 304 may also be used to provide inter-prediction parameters, intra-prediction parameters, and/or other syntax elements to mode application unit 360 , as well as other parameters to other units of decoder 30 .
  • Video decoder 30 may receive syntax elements at the video slice and/or video block level. In addition, or instead of slices and corresponding syntax elements, encoded block groups and/or encoded blocks and corresponding syntax elements may be received or used.
  • the intra prediction parameters that the entropy decoding unit 304 may be configured to provide to the mode application unit 360 include a target reference value for indicating a target intra prediction mode of the current image block for performing intra prediction.
  • Inverse quantization unit 310 may be configured to receive quantization parameters (QPs) (or, in general, information related to inverse quantization) and quantized coefficients from encoded image data 21 (e.g., parsed and/or decoded by entropy decoding unit 304), and, based on the quantization parameters, inverse quantize the decoded quantized coefficients 309 to obtain dequantized coefficients 311, which may also be referred to as transform coefficients 311.
  • the inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization, as well as the degree of inverse quantization that needs to be performed.
  • An inverse transform processing unit 312 may be configured to receive dequantized coefficients 311, also referred to as transform coefficients 311, and apply a transform to the dequantized coefficients 311 to obtain a reconstructed residual block 313 in the pixel domain.
  • the reconstructed residual block 313 may also be referred to as a transform block 313.
  • the transform may be an inverse transform, such as an inverse DCT, an inverse DST, an inverse integer transform, or a conceptually similar inverse transform process.
  • Inverse transform processing unit 312 may also be operable to receive transform parameters or corresponding information from encoded image data 21 (eg, parsed and/or decoded by entropy decoding unit 304 ) to determine transforms to apply to dequantized coefficients 311 .
  • the reconstruction unit 314 (e.g., summer 314) is used to add the reconstructed residual block 313 to the prediction block 365 to obtain the reconstructed block 315 in the pixel domain, for example, by adding the pixel values of the reconstructed residual block 313 and the pixel values of the prediction block 365.
  • the loop filter unit 320 (in or after the coding loop) is used to filter the reconstructed block 315 to obtain a filtered block 321, so as to smooth pixel transitions or otherwise improve video quality.
  • the loop filter unit 320 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • the loop filter unit 220 may include a deblocking filter, a SAO filter, and an ALF filter. The order of the filtering process can be deblocking filter, SAO filter and ALF filter.
  • LMCS luma mapping with chroma scaling
  • SBT sub-block transform
  • ISP intra sub-partition
  • although the loop filter unit 320 is shown in FIG. 3 as an in-loop filter, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video block 321 in one picture is then stored in a decoded picture buffer 330 which stores the decoded picture 331 as a reference picture for subsequent motion compensation of other pictures and/or output display respectively.
  • the decoder 30 is configured to output the decoded image 331 through the output terminal 312 or the like, for display to the user or for the user to view.
  • the inter prediction unit 344 may be functionally the same as the inter prediction unit 244 (in particular, the motion compensation unit), and the intra prediction unit 354 may be functionally the same as the intra prediction unit 254; partitioning and prediction are decided and performed based on the partitioning and/or prediction parameters or corresponding information received from the encoded image data 21 (e.g., parsed and/or decoded by the entropy decoding unit 304).
  • the intra prediction unit 354 may determine the target intra prediction mode of the current image block for intra prediction based on the target reference value obtained by the entropy decoding unit 304 and the probability vector obtained from the neural network model.
  • the mode application unit 360 may be configured to perform prediction (intra or inter prediction) of each block according to the reconstructed image, block or corresponding pixel points (filtered or unfiltered), resulting in a prediction block 365 .
  • the intra prediction unit 354 in the mode application unit 360 is used to generate the prediction block 365 based on the signalled intra prediction mode and data from previously decoded blocks of the current image.
  • an inter-prediction unit 344 eg, a motion compensation unit
  • the inter prediction unit 344 (e.g., motion compensation unit) in the mode application unit 360 is used to generate the prediction block 365 for a video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304.
  • these prediction blocks may be generated from one of the reference pictures in one of the reference picture lists.
  • Video decoder 30 may construct reference frame List 0 and List 1 from reference pictures stored in DPB 330 using default construction techniques.
  • slices eg, video slices
  • the same or similar process may be applied to embodiments of coding block groups (eg, video coding block groups) and/or coding blocks (eg, video coding blocks),
  • video may be encoded using I, P, or B encoding block groups and/or encoding blocks.
  • Mode application unit 360 is configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and to use the prediction information to generate a prediction block for the current video block being decoded. For example, mode application unit 360 uses some of the received syntax elements to determine a prediction mode (e.g., intra prediction or inter prediction) used to encode the video blocks of the video slice, an inter-prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more reference picture lists of the slice, motion vectors for each inter-coded video block of the slice, the inter-prediction status for each inter-coded video block of the slice, and other information, in order to decode the video blocks within the current video slice.
  • a prediction mode eg, intra-prediction or inter-prediction
  • an inter-prediction slice type, e.g., B slice, P slice, or GPB slice
  • coding block groups eg, video coding block groups
  • coding blocks eg, video coding blocks
  • video may be encoded using I, P, or B encoding block groups and/or encoding blocks.
  • the video decoder 30 shown in FIG. 3 may also be used to segment and/or decode an image using slices (also referred to as video slices), where an image may be segmented or decoded using one or more slices (typically non-overlapping).
  • slices also referred to as video slices
  • Each slice may include one or more blocks (e.g., CTUs) or one or more groups of blocks (e.g., coded blocks in the H.265/HEVC/VVC standards and bricks in the VVC standard).
  • the video decoder 30 shown in FIG. 3 may also be used to segment and/or decode an image using slices/coding block groups (also referred to as video coding block groups) and/or coding blocks (also referred to as video coding blocks), wherein the image may be segmented or decoded using one or more slices/coding block groups (usually non-overlapping); each slice/coding block group may include one or more blocks (e.g., CTUs) or one or more coding blocks, etc., wherein each coding block may be rectangular or the like and may include one or more full or partial blocks (e.g., CTUs).
  • slice/coding block groups also referred to as video coding block groups
  • coding blocks also referred to as video coding blocks
  • video decoder 30 may be used to decode the encoded image data 21 .
  • decoder 30 may generate the output video stream without loop filter unit 320 .
  • the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames.
  • video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
  • the processing result of the current step can be further processed, and then output to the next step.
  • further operations such as clip or shift operations, may be performed on the processing results of interpolation filtering, motion vector derivation or loop filtering.
  • the value of the motion vector is limited to a predefined range according to the representation bits of the motion vector. If the representation bit depth of the motion vector is bitDepth, the range is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where "^" represents exponentiation. For example, if bitDepth is set to 16, the range is -32768 to 32767; if bitDepth is set to 18, the range is -131072 to 131071.
  • the value of the derived motion vector (e.g., the MVs of the four 4x4 sub-blocks in an 8x8 block) is limited such that the maximum difference between the integer parts of the four 4x4 sub-block MVs does not exceed N pixels, e.g., does not exceed 1 pixel.
  • There are two ways to limit motion vectors based on bitDepth.
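  • As an illustration of the first constraint (clipping each motion vector component to the bitDepth-dependent range given above), a minimal Python sketch follows; the function name and plain-integer representation are illustrative, not part of the embodiment.

```python
def clip_mv_component(mv: int, bit_depth: int = 16) -> int:
    """Clip one motion vector component to the signed range implied by bit_depth.

    For bit_depth = 16 the allowed range is -32768 .. 32767, matching the
    -2^(bitDepth-1) .. 2^(bitDepth-1)-1 range described above.
    """
    lo = -(1 << (bit_depth - 1))
    hi = (1 << (bit_depth - 1)) - 1
    return max(lo, min(mv, hi))


# Example: a derived MV component of 40000 is clipped to 32767 when bitDepth is 16.
assert clip_mv_component(40000, 16) == 32767
assert clip_mv_component(-140000, 18) == -131072
```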
  • embodiments of the coding system 10, the encoder 20 and the decoder 30, as well as other embodiments described herein, may also be used for still image processing or coding, that is, the processing or coding of a single image independently of any previous or consecutive images in video coding.
  • if image processing is limited to a single image 17, the inter prediction unit 244 (encoder) and the inter prediction unit 344 (decoder) may not be available.
  • All other functions (also referred to as tools or techniques) of video encoder 20 and video decoder 30 are also available for still image processing, such as residual calculation 204/304, transform 206, quantization 208, inverse quantization 210/310, (inverse ) transform 212/312, partition 262/362, intra prediction 254/354 and/or loop filtering 220/320, entropy encoding 270 and entropy decoding 304.
  • FIG. 4 is a schematic diagram of a video decoding apparatus 400 according to an embodiment of the present invention.
  • Video coding apparatus 400 is suitable for implementing the disclosed embodiments described herein.
  • the video coding apparatus 400 may be a decoder, such as the video decoder 30 in FIG. 1A, or an encoder, such as the video encoder 20 in FIG. 1A.
  • the video decoding apparatus 400 includes: an ingress port 410 (or input port 410) and a receiver unit (Rx) 420 for receiving data; a processor, logic unit or central processing unit (CPU) 430 for processing data (for example, the processor 430 here may be a neural network processor 430); a transmitter unit (Tx) 440 and an egress port 450 (or output port 450) for transmitting data; and a memory 460 for storing data.
  • the video coding apparatus 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress port 410, the receiver unit 420, the transmitter unit 440 and the egress port 450, serving as an egress or ingress for optical or electrical signals.
  • OE optical-to-electrical
  • EO electrical-to-optical
  • the processor 430 is implemented by hardware and software.
  • Processor 430 may be implemented as one or more processor chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs.
  • the processor 430 communicates with the ingress port 410 , the receiving unit 420 , the sending unit 440 , the egress port 450 and the memory 460 .
  • the processor 430 includes a decoding module 470 (eg, a neural network NN based decoding module 470).
  • the decoding module 470 implements the embodiments disclosed above. For example, the decoding module 470 performs, processes, prepares or provides various coding operations.
  • decoding module 470 is implemented as instructions stored in memory 460 and executed by processor 430 .
  • Memory 460 includes one or more magnetic disks, tape drives, and solid-state drives, and may serve as an overflow data storage device for storing programs when such programs are selected for execution, and for storing instructions and data read during program execution.
  • Memory 460 may be volatile and/or non-volatile, and may be read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM) and/or static random-access memory (SRAM).
  • FIG. 5 is a simplified block diagram of an apparatus 500 provided by an exemplary embodiment, and the apparatus 500 can be used as either or both of the source device 12 and the destination device 14 in FIG. 1A .
  • the processor 502 in the apparatus 500 may be a central processing unit.
  • the processor 502 may be any other type of device or devices, existing or to be developed in the future, capable of manipulating or processing information.
  • although the disclosed implementations may be implemented using a single processor, such as processor 502 as shown, speed and efficiency advantages may be achieved by using more than one processor.
  • the memory 504 in the apparatus 500 may be a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may be used as memory 504 .
  • Memory 504 may include code and data 506 accessed by processor 502 via bus 512 .
  • the memory 504 may also include an operating system 508 and application programs 510 including at least one program that allows the processor 502 to perform the methods described herein.
  • applications 510 may include applications 1 through N, and also include video coding applications that perform the methods described herein.
  • Apparatus 500 may also include one or more output devices, such as display 518 .
  • display 518 may be a touch-sensitive display that combines a display with touch-sensitive elements that may be used to sense touch input.
  • Display 518 may be coupled to processor 502 through bus 512 .
  • although the bus 512 of the apparatus 500 is described herein as a single bus, the bus 512 may include multiple buses.
  • secondary storage may be directly coupled to other components of the device 500 or accessed through a network, and may include a single integrated unit, such as a memory card, or multiple units, such as multiple memory cards. Accordingly, the apparatus 500 may have various configurations.
  • Neural Network is a machine learning model.
  • a neural network can be composed of neural units.
  • a neural unit can refer to an operation unit that takes xs and intercept 1 as inputs.
  • the output of the operation unit can be: f(W_1·x_1 + W_2·x_2 + ... + W_n·x_n + b), that is, f applied to the weighted sum of the inputs plus the bias, where:
  • s = 1, 2, ..., n, where n is a natural number greater than 1
  • Ws is the weight of xs
  • b is the bias of the neural unit.
  • f is the activation function of the neural unit, which is used to introduce nonlinear characteristics into the neural network to convert the input signal in the neural unit into an output signal. The output signal of this activation function can be used as the input of the next convolutional layer.
  • the activation function can be a sigmoid function.
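  • As a minimal illustration of the neural unit just described, the following Python sketch computes f(Σ_s W_s·x_s + b) with a sigmoid activation; the function and variable names are illustrative only.

```python
import math

def neural_unit(xs, weights, bias):
    """Compute f(sum_s W_s * x_s + b) with a sigmoid activation f."""
    z = sum(w * x for w, x in zip(weights, xs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example: two inputs with weights 0.5 and -0.25 and bias 0.1.
print(neural_unit([1.0, 2.0], [0.5, -0.25], 0.1))  # ~0.525
```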
  • a neural network is a network formed by connecting many of the above single neural units together, that is, the output of one neural unit can be the input of another neural unit.
  • the input of each neural unit can be connected with the local receptive field of the previous layer to extract the features of the local receptive field, and the local receptive field can be an area composed of several neural units.
  • Deep Neural Network also known as multi-layer neural network
  • DNN Deep Neural Network
  • the layers inside a DNN can be divided into three categories: the input layer, hidden layers, and the output layer.
  • the first layer is the input layer
  • the last layer is the output layer
  • the middle layers are all hidden layers.
  • the layers are fully connected, that is, any neuron in the i-th layer must be connected to any neuron in the i+1-th layer.
  • the coefficient from the k-th neuron in the (L-1)-th layer to the j-th neuron in the L-th layer is defined as W_jk^L. It should be noted that the input layer does not have a W parameter.
  • more hidden layers allow the network to better capture the complexities of the real world.
  • a model with more parameters is more complex and has a larger "capacity", which means that it can complete more complex learning tasks.
  • Training the deep neural network is the process of learning the weight matrix, and its ultimate goal is to obtain the weight matrix of all layers of the trained deep neural network (the weight matrix formed by the vectors W of many layers).
  • CNN Convolutional neural network
  • a convolutional neural network consists of a feature extractor consisting of convolutional and pooling layers. The feature extractor can be viewed as a filter, and the convolution process can be viewed as convolution with an input image or a convolutional feature map using a trainable filter.
  • the convolutional layer refers to the neuron layer in the convolutional neural network that convolves the input signal.
  • the convolution layer can include many convolution operators.
  • the convolution operator is also called the kernel. Its role in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • the convolution operator can essentially be a weight matrix, which is usually predefined. During the convolution operation on the image, the weight matrix is usually moved along the horizontal direction of the input image one pixel at a time (or two pixels at a time, depending on the value of the stride) to process the image, so as to complete the work of extracting specific features from the image.
  • the size of the weight matrix should be related to the size of the image.
  • the depth dimension of the weight matrix is the same as the depth dimension of the input image.
  • the weight matrix extends over the entire depth of the input image. Therefore, convolution with a single weight matrix results in a convolutional output with a single depth dimension, but in most cases a single weight matrix is not used; instead, multiple weight matrices of the same size (rows × columns), that is, multiple matrices of the same type, are applied.
  • the output of each weight matrix is stacked to form the depth dimension of the convolutional image, where the dimension can be understood as determined by the "multiple" described above. Different weight matrices can be used to extract different features in the image.
  • one weight matrix is used to extract image edge information
  • another weight matrix is used to extract specific colors of the image
  • another weight matrix is used to blur unwanted noise in the image, and so on.
  • the multiple weight matrices have the same size (rows × columns), and the feature maps extracted by these weight matrices of the same size also have the same size; the multiple extracted feature maps of the same size are then combined to form the output of the convolution operation.
  • the weight values in these weight matrices need to be obtained through a lot of training in practical applications, and each weight matrix formed by the weight values obtained by training can be used to extract information from the input image, so that the convolutional neural network can make correct predictions.
  • the initial convolutional layers often extract more general features, which can also be called low-level features; as the depth of the convolutional neural network increases, the features extracted by the later convolutional layers become more and more complex, such as high-level semantic features, and features with higher-level semantics are more applicable to the problem to be solved.
  • a pooling layer is often added after a convolutional layer; this can be one convolutional layer followed by one pooling layer, or multiple convolutional layers followed by one or more pooling layers.
  • the pooling layer may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain a smaller size image.
  • the average pooling operator can calculate the pixel values in the image within a certain range to produce an average value as the result of average pooling.
  • the max pooling operator can take the pixel with the largest value in the range as the result of max pooling.
  • the operators in the pooling layer should also be related to the size of the image.
  • the size of the output image after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.
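  • As a small illustration of how a convolutional layer followed by pooling reduces the spatial size of its input, the following Python (PyTorch) sketch checks the output shape; the channel counts and kernel sizes are arbitrary choices, not taken from the embodiment.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)                      # one 3-channel 32x32 input
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)   # keeps the 32x32 spatial size
pool = nn.MaxPool2d(kernel_size=2)                 # halves each spatial dimension

y = pool(conv(x))
print(y.shape)                                     # torch.Size([1, 8, 16, 16])
```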
  • after processing by the convolutional layers/pooling layers, the convolutional neural network is still not able to output the required output information, because, as mentioned before, the convolutional/pooling layers only extract features and reduce the parameters brought by the input image. However, in order to generate the final output information (the required class information or other relevant information), the convolutional neural network needs to use neural network layers to generate one output, or a set of outputs, of the required number of classes. Therefore, the neural network layers may include multiple hidden layers, and the parameters contained in these hidden layers may be obtained by pre-training on the relevant training data of a specific task type; for example, the task type may include image recognition, image classification, image super-resolution reconstruction, and so on.
  • the output layer of the entire convolutional neural network has a loss function similar to categorical cross-entropy, which is specifically used to calculate the prediction error; once the forward propagation of the entire convolutional neural network is completed, back propagation starts to update the weight values and biases of the aforementioned layers, so as to reduce the loss of the convolutional neural network, that is, the error between the result output by the convolutional neural network through the output layer and the ideal result.
  • FIG. 6 illustrates an example architecture 600 of a target model, such as a neural network for prediction, or prediction network for short.
  • the neural network uses the convolution layer to process the input data, and uses the Softmax layer to output the probability vector of the current image block.
  • ResNet deep residual network
  • the ResNet consists of 9 residual blocks; each residual block has two base layers, and each base layer consists of a convolution layer, a non-linear activation function (rectified linear unit, ReLU) layer, and a batch normalization (BN) layer.
  • convolution layer convolution layer
  • BN batch normalization
  • a convolutional layer is placed between the data input and the first residual block, which can speed up the convergence of neural network model training and alleviate the influence of gradient vanishing or gradient explosion on training.
  • a convolutional layer and an adaptive average pooling layer are then connected in turn to convert the feature map into a multi-dimensional vector, and finally the probability distribution over the 67 intra prediction modes is output through the Softmax layer; this probability distribution is a multi-dimensional vector whose dimension equals the number of intra prediction modes shown in FIG. 9, that is, 67.
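  • The following Python (PyTorch) sketch shows one possible realization of the prediction network described above: a stem convolution, nine residual blocks whose base layers are convolution + ReLU + batch normalization, then a convolution, adaptive average pooling and a Softmax over the 67 modes. The channel width, kernel sizes and exact residual wiring are assumptions for illustration only and are not taken from the embodiment.

```python
import torch
import torch.nn as nn

class BaseLayer(nn.Sequential):
    """Convolution + ReLU + batch normalization, as in each base layer above."""
    def __init__(self, channels):
        super().__init__(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(channels),
        )

class ResidualBlock(nn.Module):
    """Two base layers with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(BaseLayer(channels), BaseLayer(channels))

    def forward(self, x):
        return x + self.body(x)

class PredictionNetwork(nn.Module):
    def __init__(self, in_channels=3, channels=64, num_modes=67):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(9)])
        self.head_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, num_modes)

    def forward(self, x):
        x = self.stem(x)
        x = self.blocks(x)
        x = self.pool(self.head_conv(x)).flatten(1)
        return torch.softmax(self.fc(x), dim=1)   # 67-dimensional probability vector

# Example: a 3-channel 16x8 input data block yields a (1, 67) probability vector.
probs = PredictionNetwork()(torch.randn(1, 3, 16, 8))
print(probs.shape, float(probs.sum()))            # sums to ~1.0
```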
  • the neural network model used in the embodiments of the present application is only a specific example, which is not limited in the embodiments of the present application.
  • Those skilled in the art can also construct a neural network model through other architectures, so as to achieve the same technical effect as the neural network model in the embodiments of the present application, for example, a convolutional neural network architecture such as AlexNet, ZFNet, VGGNet, GoogLeNet, etc., a neural network model consisting of one or more of these architectures, or a non-convolutional neural network architecture.
  • those skilled in the art can also achieve the same technical effect as the embodiment of the present application by changing part of the structure of the neural network model in the embodiment of the present application, for example, changing the number of residual blocks of the neural network model in the present application.
  • the training process of the neural network model in the embodiment of the present application is as follows:
  • the initial neural network model is trained on the image processor.
  • the training database of the neural network model includes a training set, a validation set and a test set, and the training set, the validation set and the test set respectively include a series of image sets.
  • the first data block or the second data block input into the neural network model is called the target data block.
  • Each image is a data block obtained after compression, and the target data is the first identifier corresponding to the target intra prediction mode of each image.
  • the compressed data block is input into the neural network model to obtain a 67-dimensional probability vector corresponding to the data block, and the cross-entropy loss function shown in formula (1-2) is used to measure the error between the output of the neural network model and the expected output, so as to judge whether to end the training of the neural network model.
  • taking the case where the target data of the current image is 30 as an example: when i is 30, the value of T(i) is 1; when i takes any other value (0-29 or 31-66), the value of T(i) is 0; P_i represents the value of the i-th dimension in the probability distribution output by the Softmax layer. It should be understood that the larger the value of P_i, that is, the larger the probability of using the target intra prediction mode to perform intra prediction, the smaller the loss function value.
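  • Because T(i) is 1 only for the target mode, the cross-entropy of formula (1-2) reduces to -log of the probability assigned to the target mode; a minimal Python sketch of this computation follows, with illustrative function names.

```python
import math

def cross_entropy_loss(probabilities, target_mode):
    """Cross-entropy between a one-hot target T and the 67-dim Softmax output P.

    Because T(i) is 1 only for the target intra prediction mode, the sum
    -sum_i T(i) * log(P_i) collapses to -log(P_target): the larger P_target,
    the smaller the loss, as noted above.
    """
    return -math.log(probabilities[target_mode])

# Example: if the network assigns probability 0.8 to target mode 30, the loss is ~0.223.
p = [0.2 / 66] * 67
p[30] = 0.8
print(cross_entropy_loss(p, 30))
```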
  • for each cross-entropy loss function value, its relationship to the preset loss value is determined; when the cross-entropy loss function value is greater than the preset loss value, the weights of the parameters in the neural network model are adjusted according to the cross-entropy loss function value, and the adjusted neural network model is used for the next training round, until the cross-entropy loss function value is less than or equal to the preset loss value, at which point the training of the neural network model ends. Then, the verification set and the test set of the target data block are used to verify and test the neural network model respectively; after verification and testing, the parameters of the neural network model corresponding to the target data block can be determined, that is, the neural network model corresponding to the target data block is obtained.
  • the embodiments of the present application may adopt different training databases according to different application scenarios, which are not specifically limited in the embodiments of the present application.
  • the training method for the neural network model in the embodiment of the present application is only a specific example, which is not specifically limited in the present application.
  • other methods can also be used to train the neural network model. For example, the initial neural network model is first trained, verified and tested using a training set, a validation set and a test set containing images of a specific size, and an intermediate neural network model is determined; then, using the method described in the above embodiment, the intermediate neural network model is trained, verified and tested separately for target data blocks of different sizes, and neural network models with different parameter configurations corresponding to target data blocks of different sizes are obtained, wherein the above-mentioned specific size may be any size selected according to the actual scene, which is not specifically limited here.
  • target data blocks of different sizes correspond to neural network models with different parameter configurations.
  • the size of the second data block is less than the size of the first data block.
  • FIG. 7 is a flowchart of a method 700 for encoding a target intra prediction mode of a current image block in an embodiment of the present application.
  • the method 700 may be performed by the video encoder 20 , and in particular, may be performed by the intra prediction unit 254 and the entropy encoding unit 270 of the video encoder 20 .
  • the method 700 is described as a series of steps or operations, and it should be understood that the execution order of some steps of the method 700 is not limited, such as steps S701 and S702.
  • the method 700 shown in FIG. 7 includes steps S701, S702, S703 and S704, which will be described in detail below.
  • Step S701 acquiring at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block.
  • Each surrounding image block includes a reconstruction block, a residual block and a prediction block, and at least two of the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block are selected.
  • the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block may be selected for subsequent splicing or concatenation operations.
  • Step S702: obtain a probability vector of the current image block according to at least two of the reconstruction block, prediction block and residual block of the surrounding image blocks and the prediction mode probability model, where the multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes.
  • any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
  • the target intra-frame prediction mode of the current image block is the non-matrix weighted intra-frame prediction MIP mode
  • when the target intra prediction mode of the current image block is the non-matrix weighted intra prediction (MIP) mode, the probability vector of the current image block is obtained according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks, the current image block, and the prediction mode probability model.
  • the above-mentioned obtaining of the probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model specifically includes: splicing or concatenating at least two of the blocks to obtain a first data block; and obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model. It should be noted that the sizes of the first data blocks obtained by the two methods of splicing and concatenation are different.
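  • As an illustration of the two combination methods, the following Python (numpy) sketch splices four equally sized blocks into a 2M×2N plane and, alternatively, concatenates them along a channel axis at size M×N. The spatial layout used for splicing and the use of single-channel blocks are assumptions for illustration; the channel count of a real first data block depends on which of the reconstruction, residual and prediction blocks are selected, as described below.

```python
import numpy as np

def splice(above_left, above, left, current):
    """Spatially splice four MxN blocks into one 2Mx2N plane (one channel)."""
    top = np.concatenate([above_left, above], axis=1)
    bottom = np.concatenate([left, current], axis=1)
    return np.concatenate([top, bottom], axis=0)          # shape (2M, 2N)

def concatenate_channels(*blocks):
    """Stack MxN blocks along a channel axis, giving a (C, M, N) data block."""
    return np.stack(blocks, axis=0)

M, N = 8, 8
blocks = [np.zeros((M, N)) for _ in range(4)]
print(splice(*blocks).shape)                # (16, 16)  -> spliced 2Mx2N plane
print(concatenate_channels(*blocks).shape)  # (4, 8, 8) -> concatenated channels
```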
  • the process of obtaining the first data block can be divided into four cases according to the positional relationship of the current image block within the current frame: that is, when the current image block is located at four different positions of the current frame, namely the upper-left edge, the left edge, the upper edge and a non-edge position (corresponding to the four positional relationships in Figure 11-1 to Figure 11-4), the three surrounding image blocks of the current image block are selected in different ways.
  • for the detailed description, the width and height of the current image block are M and N respectively, where M and N are positive integers.
  • the first positional relationship: as shown in Figure 11-1, when the current image block is located at a non-edge position of the current frame, that is, the distances between the current image block and the left edge and the upper edge of the current frame are greater than or equal to M and N respectively, the three surrounding image blocks on the left, upper-left and upper sides that are equal in size to the current image block and adjacent to it can be directly selected from the current frame.
  • each of these image blocks contains a reconstruction block, a residual block and a prediction block; for the current image block, there is no data yet in its reconstruction block, residual block and prediction block, so the residual block, the reconstruction block and the prediction block of the current image block are filled with a default value or 0, where the default value is 2^n - 1 and n is the pixel bit depth inside the video encoder 100.
  • when the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and the blocks are spliced, the obtained first data block is a three-channel data block with a size of 2M×2N, and the three channels of the first data block are filled with the reconstruction blocks, residual blocks and prediction blocks of the current image block and the surrounding image blocks, respectively. When the reconstruction block and prediction block, or the reconstruction block and residual block, or the prediction block and residual block of each of the three surrounding image blocks and the current image block are selected, and the three surrounding coding blocks and the current coding block are spliced, the obtained first data block is a two-channel data block with a size of 2M×2N, and the two channels of the first data block are filled with the two selected kinds of blocks of the current image block and the surrounding image blocks, respectively. When the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and the blocks are concatenated, the obtained first data block is a data block with nine channels and a size of M×N; when the reconstruction block and prediction block of each of the three surrounding image blocks and the current image block are selected and the three surrounding coding blocks and the current coding block are concatenated, the obtained first data block is a four-channel data block with a size of M×N. It should be understood that when the data blocks selected for concatenation in the surrounding image blocks and the current image block differ, and the data filled into the data blocks of the current image block differ, the number of channels of the first data block obtained by concatenation differs; the cases other than the two listed above can be derived in the same way.
  • the second positional relationship: when the current image block is located at the upper-left edge of the current frame, that is, when the distances between the current image block and the left edge and the upper edge of the current frame are smaller than M and N respectively, the three surrounding image blocks of the same size as the current image block and adjacent to it on the left, upper-left and upper sides cannot be selected from the current frame. In this case, the areas adjacent to the left, upper-left and upper sides of the current image block are expanded outward respectively to obtain three surrounding image blocks of the same size as the current image block.
  • the residual block in each expanded image block is filled with 0, and 0 is also filled into the reconstruction block, residual block and prediction block of the current image block. When the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected, and the three surrounding coding blocks and the current coding block are spliced, the obtained first data block is a two-channel data block with a size of 2M×2N; when the residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and spliced with the three surrounding coding blocks and the current coding block, the obtained first data block is a one-channel data block with a size of 2M×2N. It should be understood that when the data blocks selected for splicing in the surrounding image blocks and the current image block differ, and the data filled into the data blocks of the current image block and the surrounding image blocks differ, the number of channels of the first data block obtained by splicing differs accordingly. When the blocks are concatenated instead, the obtained first data block is, for example, a data block with four channels and a size of M×N; likewise, when the data blocks selected for concatenation differ and the filled data differ, the number of channels of the first data block obtained by concatenation differs, and the cases other than those listed above can be easily derived in the same way.
  • the third positional relationship: when the current image block is located at the left edge of the current frame, that is, the distance between the current image block and the left edge of the current frame is less than M, and the distance between the current image block and the upper edge of the current frame is greater than or equal to N, an upper image block adjacent to the current image block and of the same size can be selected from the current frame, and the areas adjacent to the left and upper-left sides of the current image block are expanded outward respectively to obtain two surrounding image blocks of the same size as the current image block.
  • the reconstruction block and the prediction block in each of the expanded left and upper-left image blocks are filled with the default value 2^n - 1, the residual block in each of these image blocks is filled with 0 or 2^n - 1, and the reconstruction block, residual block and prediction block of the current image block can be filled with 0 or 2^n - 1.
  • alternatively, the residual block in each image block is filled with 0, and 0 is filled into the reconstruction block, residual block and prediction block of the current image block. When the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected, and the three surrounding coding blocks and the current coding block are spliced, the obtained first data block is a two-channel data block with a size of 2M×2N; when the residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and spliced with the three surrounding coding blocks and the current coding block, the obtained first data block is a one-channel data block with a size of 2M×2N. It should be understood that when the data blocks selected for splicing in the surrounding image blocks and the current image block differ, and the data filled into the data blocks of the current image block differ, the number of channels of the first data block obtained by splicing differs accordingly.
  • alternatively, 0 is filled into the reconstruction block and residual block of the current image block, and 2^n - 1 is filled into its prediction block;
  • when the blocks are concatenated, the obtained first data block is, for example, a data block with ten channels and a size of M×N; it should be understood that when the data blocks selected for concatenation in the surrounding image blocks and the current image block differ, and the data filled into the data blocks of the current image block and the surrounding image blocks differ, the number of channels of the first data block obtained by concatenation differs, and the cases other than those listed above can be derived in the same way.
  • the fourth positional relationship: when the current image block is located at the upper edge of the current frame, that is, the distance between the current image block and the left edge of the current frame is greater than or equal to M, and the distance between the current image block and the upper edge of the current frame is less than N, a left image block of the same size adjacent to the current image block can be selected from the current frame, and the areas adjacent to the upper and upper-left sides of the current image block are then expanded outward respectively, so that two surrounding image blocks of the same size as the current image block are obtained.
  • the reconstruction block and the prediction block in each of the expanded upper and upper-left image blocks are filled with the default value 2^n - 1, the residual block in each of these image blocks can be filled with 0 or 2^n - 1, and the reconstruction block, residual block and prediction block of the current image block can be filled with 0 or 2^n - 1.
  • alternatively, the residual block in each image block is filled with 0, and 0 is filled into the reconstruction block, residual block and prediction block of the current image block. When the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected, and the three surrounding coding blocks and the current coding block are spliced, the obtained first data block is a two-channel data block with a size of 2M×2N; when the residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and spliced, the obtained first data block is a one-channel data block with a size of 2M×2N. It should be understood that when the data blocks selected for splicing in the surrounding image blocks and the current image block differ, and the data filled into them differ, the number of channels of the first data block obtained by splicing differs accordingly.
  • alternatively, 0 is filled into the reconstruction block and residual block of the current image block, and 2^n - 1 is filled into its prediction block;
  • when the blocks are concatenated, the obtained first data block is, for example, a data block with ten channels and a size of M×N; it should be understood that when the data blocks selected for concatenation in the surrounding image blocks and the current image block differ, and the data filled into the data blocks of the current image block and the surrounding image blocks differ, the number of channels of the first data block obtained by concatenation differs, and the cases other than those listed above can be derived in the same way.
  • the image block described in this application may be understood as, but not limited to, a prediction unit (prediction unit, PU), a coding unit CU, or a transformation unit TU, and so on.
  • a CU may include one or more prediction unit PUs.
  • one CU may include multiple PUs, but in VVC, one CU corresponds to one PU.
  • Image blocks may have fixed or variable sizes, and vary in size according to different video compression codec standards.
  • the current image block refers to an image block to be encoded or decoded currently, such as a prediction unit to be encoded or decoded.
  • obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model may include: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and then inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
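  • A possible Python (numpy) sketch of such a size transformation is shown below; the use of nearest-neighbour resampling and the transposition rule are assumptions for illustration, since the embodiment only requires that the output match the target size.

```python
import numpy as np

def size_transform(block, target_h, target_w):
    """Bring a data block to the target size by transposition and/or scaling.

    Transposition swaps the two sides; scaling here is a simple nearest-neighbour
    resampling along each axis (an illustrative choice, not mandated above).
    """
    h, w = block.shape
    # Transpose first if the block's orientation differs from the target's.
    if (h >= w) != (target_h >= target_w):
        block = block.T
        h, w = w, h
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return block[np.ix_(rows, cols)]           # shape (target_h, target_w)

# A 16x8 first data block scaled to a 32x16 target, matching the example given later.
print(size_transform(np.arange(16 * 8).reshape(16, 8), 32, 16).shape)  # (32, 16)
```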
  • the above-mentioned prediction mode probability model may be the neural network model shown in FIG. 6, another form of neural network model, or a mathematical model, which is not specifically limited in this application.
  • Table 1 Size categories of the first data block
  • the sizes of the first data blocks obtained are different.
  • there are 17 types of sizes of the first data block as shown in Table 1
  • a size transformation operation is performed on the first data block to obtain a second data block
  • the number of size types of the second data block is less than 17.
  • the process of performing a size transformation operation on the first data block to obtain the second data block can be divided into four ways:
  • the first data block is proportionally scaled in both horizontal and vertical directions to obtain a second data block;
  • one of the sizes in Table 1 whose ratio of the first side length to the second side length is 1:1, 1:2, 2:1, 1:4, 4:1, 1:8 or 8:1 is selected as the target size.
  • the size of the first data block is selected as the target size.
  • one of the above methods (1), (2) or (3) can be used.
  • the second data block is obtained.
  • the process of obtaining the second data block in the above embodiment is described in detail by taking a first data block whose first side length and second side length are 16 and 8 respectively as an example: if the ratio of the first side length to the second side length in the target size is 2:1, and the first side length and the second side length of the target size are 32 and 16 respectively, then the first side length and the second side length of the first data block are enlarged in the horizontal and vertical directions respectively to obtain a second data block whose first side length and second side length are 32 and 16 respectively.
  • there may be four types of target sizes, that is, one size is selected from the sizes in Table 1 whose ratio of the first side length to the second side length is 1:1, one from the sizes of 1:2 or 2:1, one from the sizes of 1:4 or 4:1, and one from the sizes of 1:8 or 8:1.
  • in this way, the 17 size types of the first data block can be reduced to 7 or 4 size types of the second data block, thereby effectively reducing the number of neural network model types subsequently needed for second data blocks of these sizes, saving the storage space for storing the neural network models and improving the coding performance.
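  • As an illustration of collapsing the 17 sizes into four target categories by side-length ratio, a small Python sketch follows; it assumes that only the ratios named above (1:1, 1:2/2:1, 1:4/4:1, 1:8/8:1) occur in Table 1.

```python
def target_category(first_side: int, second_side: int) -> str:
    """Map a first data block size to one of the four target categories by its
    side-length ratio, illustrating how 17 sizes can collapse to 4 target sizes.

    Only the ratios named in the text (1, 2, 4, 8) are handled here.
    """
    ratio = max(first_side, second_side) // min(first_side, second_side)
    return {1: "1:1", 2: "1:2 / 2:1", 4: "1:4 / 4:1", 8: "1:8 / 8:1"}[ratio]

print(target_category(16, 8))   # 1:2 / 2:1
print(target_category(64, 8))   # 1:8 / 8:1
```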
  • the target size may be one of the 17 sizes in Table 1, or one other than the 17 sizes in Table 1.
  • the size of the first data block is compared with this target size, and according to the relationship between the two, the above-mentioned method (4) can be used to obtain the second data block.
  • taking a target size whose first side length and second side length are 4 and 8 respectively as an example, the specific process of the above embodiment is described in detail: when the first side length and the second side length of the first data block are 4 and 4 respectively, the second side length of the first data block is enlarged in the vertical direction to obtain a second data block whose first side length and second side length are 4 and 8 respectively; when the first side length and the second side length of the first data block are 16 and 4 respectively, the first side length of the first data block is reduced in the horizontal direction and the second side length is enlarged in the vertical direction, to obtain a second data block whose first side length and second side length are 4 and 8 respectively.
• in this way, the size types of the first data block can be reduced from 17 to 1, minimizing the number of size types of the second data block, so that only one neural network model corresponding to the single second data block size needs to be stored subsequently, which significantly reduces the storage space for storing the neural network model and improves the coding efficiency; at the same time, after a first data block with simple texture and large size is reduced to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving the coding performance.
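The example above scales every first data block to the single 4×8 target. The sketch below is one possible realisation of that resampling; the description does not prescribe an interpolation method, so nearest-neighbour sampling and the row-major height × width storage convention are assumptions made here.

```python
def resize_nearest(block, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list `block` to out_h x out_w samples.

    Nearest-neighbour sampling is an assumption of this sketch; any scaling
    filter that produces the target size would serve the same purpose.
    """
    in_h, in_w = len(block), len(block[0])
    return [
        [block[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

if __name__ == "__main__":
    # An 8 x 16 first data block (arbitrary sample values) reduced to 4 x 8.
    first = [[r * 16 + c for c in range(16)] for r in range(8)]
    for row in resize_nearest(first, 4, 8):
        print(row)
```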
  • obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model may include: when the size of the first data block is the target size, inputting the first data block into the prediction The pattern probability model is processed to obtain the probability vector of the current image block, or;
• or, when the size of the first data block is not the target size, a size transformation operation is performed on the first data block to obtain a second data block, where the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both horizontal and vertical directions; the second data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block.
• specifically, the target size corresponding to the first data block is first determined according to the size relationship between the first side length and the second side length of the first data block, and the relationship between the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient and a preset threshold; when the size of the first data block is not the target size, a size transformation operation is performed on the first data block to obtain the second data block.
• the value of the preset threshold may be 4*M*N, where the first side length and the second side length of the current image block are M and N respectively, and the first side length and the second side length of the first data block are 2M and 2N respectively.
  • the horizontal gradient of the current image block and the vertical gradient of the current image block are calculated by the Sobel operator.
• it should be noted that the value of the preset threshold can be determined according to the actual situation, which is not limited in this embodiment of the present application; in addition, the operator for calculating the horizontal gradient and the vertical gradient is not limited to the Sobel operator and can be determined according to the actual situation; for example, the Scharr operator, the Laplacian operator or other operators with similar functions may also be selected.
  • the gradient of the image represents the change of the image gray level
  • the horizontal gradient and vertical gradient of the current image block respectively represent the gray level change of the current image block in the horizontal and vertical directions relative to the surrounding image blocks in the current frame.
• the image can be understood as a two-dimensional discrete function f(x, y), and the gradient of the image f(x, y) at the point (x, y) is a vector with both magnitude and direction; letting Gx and Gy denote the gradients of the current image block in the horizontal and vertical directions respectively, the gradient vector can be expressed as ∇f(x, y) = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T.
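As an illustration of the gradient test, the sketch below computes Gx and Gy with the usual 3×3 Sobel kernels and compares the sum of their absolute values with the preset threshold 4*M*N; operating on the block in isolation with replicated borders is a simplification made in this sketch, not something stated above.

```python
# A minimal sketch: Sobel gradients of a block and the threshold test
# described above.  Edge replication and treating the block in isolation
# are simplifying assumptions of this sketch.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def _sample(block, y, x):
    h, w = len(block), len(block[0])
    return block[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def gradient_sums(block):
    """Return (sum(|Gx|), sum(|Gy|)) over all samples of the block."""
    h, w = len(block), len(block[0])
    abs_gx = abs_gy = 0
    for y in range(h):
        for x in range(w):
            gx = gy = 0
            for dy in range(3):
                for dx in range(3):
                    s = _sample(block, y + dy - 1, x + dx - 1)
                    gx += SOBEL_X[dy][dx] * s
                    gy += SOBEL_Y[dy][dx] * s
            abs_gx += abs(gx)
            abs_gy += abs(gy)
    return abs_gx, abs_gy

def texture_is_simple(block):
    """True when sum(|Gx|) + sum(|Gy|) < 4 * M * N (M, N: block side lengths)."""
    m, n = len(block[0]), len(block)   # the threshold 4*M*N is symmetric in M and N
    gx, gy = gradient_sums(block)
    return gx + gy < 4 * m * n

if __name__ == "__main__":
    flat = [[128] * 8 for _ in range(8)]
    print(texture_is_simple(flat))   # a constant block has zero gradient -> True
```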
• when the size of the first data block is not the target size, the process of performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block can be divided into two cases:
• Case 1: the size transformation operation includes scaling, and performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, the first data block is scaled to obtain the second data block, and the first side length and the second side length of the second data block are 4M/N and 4 respectively; when the first side length M is greater than or equal to the second side length N, and the sum of the absolute values is greater than or equal to the preset threshold, the first data block is scaled to obtain the second data block, and the first side length and the second side length of the second data block are 8M/N and 8 respectively; when the first side length M is smaller than the second side length N, and the sum of the absolute values is less than the preset threshold, the first data block is scaled to obtain a second data block whose first side length and second side length are 4 and 4N/M respectively; when the first side length M is smaller than the second side length N, and the sum of the absolute values is greater than or equal to the preset threshold, the first data block is scaled to obtain a second data block whose first side length and second side length are 8 and 8N/M respectively.
• Case 2: the size transformation operation includes at least one of scaling and transposition, and performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, the first data block is scaled to obtain a second data block whose first side length and second side length are 4M/N and 4 respectively; when the first side length M is greater than or equal to the second side length N, and the sum of the absolute values is greater than or equal to the preset threshold, the first data block is scaled to obtain a second data block whose first side length and second side length are 8M/N and 8 respectively; when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute values is less than the preset threshold, the first data block is transposed and scaled to obtain a second data block whose first side length and second side length are 4N/M and 4 respectively; when the first side length M is smaller than the second side length N, and the sum of the absolute values is greater than or equal to the preset threshold, the first data block is transposed and scaled to obtain a second data block whose first side length and second side length are 8N/M and 8 respectively; wherein, M and N are positive integers.
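The two cases above reduce to a small decision table. The sketch below mirrors that table, assuming (as for the block sizes used in practice) that M and N are powers of two so that 4M/N and 8M/N are integers; it only computes the second data block's side lengths, not the resampled samples themselves.

```python
def second_block_size(m, n, grad_sum, threshold, allow_transpose=False):
    """Return (first_side, second_side, transposed) of the second data block.

    Mirrors the two cases above: with allow_transpose=False only scaling is
    used; with allow_transpose=True the M < N branches transpose the block
    first.  m, n are the first data block's first/second side lengths.
    """
    small = grad_sum < threshold
    if m >= n:
        return (4 * m // n, 4, False) if small else (8 * m // n, 8, False)
    if allow_transpose:
        return (4 * n // m, 4, True) if small else (8 * n // m, 8, True)
    return (4, 4 * n // m, False) if small else (8, 8 * n // m, False)

if __name__ == "__main__":
    # Example: a 32 x 8 first data block with a gradient sum below the
    # threshold maps to a 16 x 4 second data block (4M/N x 4).
    print(second_block_size(32, 8, grad_sum=500, threshold=4 * 32 * 8))
    # The same block with a busy texture maps to 32 x 8 (8M/N x 8).
    print(second_block_size(32, 8, grad_sum=5000, threshold=4 * 32 * 8))
```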
• it can be seen that the process of determining the second data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, makes full use of the self-information of the current image block and enhances the correlation between the second data block and the current image block, thereby improving the accuracy of the probability vector subsequently obtained according to the second data block and thus improving the coding performance; in addition, the number of size types of the second data block can be reduced, and the number of neural network models corresponding to second data blocks of different sizes can also be reduced, thereby saving the storage space for storing the neural network models and improving the coding performance.
• Step S703: Determine the target intra prediction mode of the current image block.
• the target intra-frame prediction mode used for intra-frame prediction of the current image block is searched out from the 67 intra-frame prediction modes (as shown in FIG. 9), and the mode value (0-66) of the target intra-frame prediction mode in FIG. 9 is the corresponding first identifier.
• the 67 intra-frame prediction modes in VVC include the direct current (DC) prediction mode, the planar (Planar) prediction mode and 65 angle prediction modes; the first identifiers of the DC and Planar prediction modes are 1 and 0 respectively, and the first identifiers of the remaining 65 angle prediction modes are 2-66 from the lower left corner to the upper right corner, respectively.
• Step S704: Encode the target intra-frame prediction mode into the code stream according to the target intra-frame prediction mode and the probability vector.
  • encoding the target intra-frame prediction mode into the code stream can be divided into two cases:
  • the mode value of the target intra prediction mode in FIG. 9 is the corresponding first identifier
  • the probability vector is a 67-dimensional vector
  • each element in the probability vector corresponds to a probability interval on the interval (0, 1).
• the elements in the probability vector correspond one-to-one with the 67 mode values in FIG. 9.
• (1) a first reference value is determined according to the first identifier of the target intra-frame prediction mode and the probability vector, and the first reference value is encoded to obtain the code stream corresponding to the target intra-frame prediction mode; specifically, the element corresponding to the first identifier in the probability vector is determined according to the first identifier of the target intra-frame prediction mode, and then a value in the probability interval corresponding to this element is arbitrarily selected as the first reference value to be encoded into the code stream; the first reference value can be used to characterize the target intra prediction mode of the current image block.
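The following sketch illustrates case (1): the probability vector is laid out as consecutive sub-intervals of (0, 1) and a value inside the interval of the first identifier is chosen as the first reference value. The cumulative interval layout and the choice of the interval midpoint are assumptions of this sketch; the description only requires that the value lie somewhere in the corresponding interval.

```python
def probability_intervals(prob_vector):
    """Map each mode identifier to a half-open sub-interval of (0, 1).

    The probabilities are assumed positive and summing to 1; the simple
    cumulative layout used here is one possible choice, not a requirement.
    """
    intervals, low = [], 0.0
    for p in prob_vector:
        intervals.append((low, low + p))
        low += p
    return intervals

def encode_mode(first_identifier, prob_vector):
    """Pick a representative value inside the interval of `first_identifier`.

    Any value in the interval would do; the midpoint is used for reproducibility.
    """
    low, high = probability_intervals(prob_vector)[first_identifier]
    return (low + high) / 2.0

if __name__ == "__main__":
    # Toy 4-mode probability vector standing in for the 67-dimensional one.
    probs = [0.1, 0.5, 0.3, 0.1]
    print(encode_mode(2, probs))   # falls inside (0.6, 0.9), the interval of identifier 2
```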
• (2) when the size transformation operation includes transposition and the target intra-frame prediction mode is an angle prediction mode, a second identifier is determined according to a preset constant and the first identifier of the target intra-frame prediction mode, a second reference value is determined according to the second identifier and the probability vector, and the second reference value is encoded to obtain the code stream corresponding to the target intra-frame prediction mode; when the size transformation operation includes transposition and the target intra-frame prediction mode is a non-angle prediction mode, the first reference value is determined according to the first identifier of the target intra prediction mode and the probability vector, and the first reference value is encoded to obtain the code stream corresponding to the target intra prediction mode.
  • the preset constant is the number of the multiple intra-frame prediction modes shown in FIG. 3 , that is, 67
  • the angle prediction mode is the intra-frame prediction mode with mode values 2-66
• the non-angle prediction mode is the DC mode or the Planar mode.
• specifically, the first identifier is subtracted from the preset constant to obtain the second identifier, the element corresponding to the second identifier is selected from the probability vector, and then a value in the probability interval corresponding to this element is arbitrarily selected as the second reference value and encoded into the code stream; when the size transformation operation includes transposition and the target intra-frame prediction mode is a non-angle prediction mode, the coding method of the target intra-frame prediction mode is the same as that of method (1), which is not repeated here.
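A minimal sketch of the identifier handling in case (2), using the preset constant 67 and the angle-mode identifiers 2-66 given above; everything else about the surrounding codec is omitted.

```python
NUM_MODES = 67                 # the preset constant: number of intra prediction modes
ANGULAR_IDS = range(2, 67)     # identifiers of the 65 angle prediction modes

def identifier_to_encode(first_identifier, transposed):
    """Return the identifier whose probability interval is actually signalled.

    Angle modes of a transposed block are mapped to a second identifier
    (constant minus first identifier); non-angle modes and non-transposed
    blocks keep the first identifier, as described above.
    """
    if transposed and first_identifier in ANGULAR_IDS:
        return NUM_MODES - first_identifier
    return first_identifier

if __name__ == "__main__":
    print(identifier_to_encode(34, transposed=True))    # 67 - 34 = 33
    print(identifier_to_encode(34, transposed=False))   # 34
    print(identifier_to_encode(1, transposed=True))     # DC keeps its identifier
```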
• the embodiment of the present application can make full use of the relevant information of the surrounding image blocks, so that the probability vector of the current image block generated according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks is more accurate, which improves the accuracy of the data encoded into the code stream and improves the encoding performance; performing the size transformation operation to obtain the second data block can effectively reduce the number of size types of the second data block, thereby reducing the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving the coding performance; at the same time, for a first data block with simple texture and large size, after it is reduced by the size transformation operation to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving the coding efficiency.
  • FIG. 8 is a flowchart of another method 800 for encoding a target intra prediction mode of a current image block in the present application. As shown in Figure 8, the method 800 includes the following six steps:
• each of the three surrounding image blocks contains a reconstruction block, a residual block and a prediction block, and one or more of the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block can be selected for the subsequent splicing or cascading operation.
  • S803 perform a size transformation operation on the first data block to obtain a second data block, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or both horizontal and vertical scaling A proportional scaling of the direction.
  • S805 Determine the target intra prediction mode of the current image block.
  • steps S802 to S804 are respectively the same as the processes corresponding to step S702, and steps S805 and S806 are respectively the same as the corresponding steps in steps S703 and S704, which will not be repeated here.
  • the size types of the second data block can be effectively reduced , thereby reducing the number of neural network models corresponding to second data blocks of different sizes, thereby saving the storage space of neural network models and improving coding performance; at the same time, for the first data block with simpler texture and larger size , after the size transformation operation is reduced to a second data block with a smaller size, the computational complexity of the subsequent calculation of the probability vector by using the neural network model can be effectively reduced, thereby improving the coding efficiency.
  • FIG. 12 is a flowchart of an encoder encoding related syntax elements of an intra prediction mode of a current image block according to an embodiment of the present application.
• the syntax elements related to the intra prediction mode include the MIP flag bit, the multi-reference line MRL index, the flags corresponding to the MIP mode and the ISP mode, the first identifier corresponding to the target intra prediction mode of the current image block, and the probability vector of the current image block; when the entropy encoder 103 encodes the syntax elements related to the intra prediction of the current image block, it proceeds in the following steps: according to the description in step S704, the first reference value or the second reference value corresponding to the target intra-frame prediction mode is selected from the probability vector; the MIP flag bit is encoded using the conventional encoding method, and it is determined whether the MIP flag bit is the first preset value; if it is, the MIP mode is encoded using the bypass encoding method; if the MIP flag
  • the information such as the MIP flag bit, the MRL index and the ISP mode have been determined in the process of searching the target intra prediction mode of the current image block, and the MIP flag bit is used to indicate whether the current image block adopts the MIP mode for intra prediction.
  • the MRL index indicates the index of the reference line used for intra prediction of the current image block
• the ISP mode indicates the sub-block division method of the current image block, wherein all sub-blocks of the current image block share one intra prediction mode, and when the current image block adopts MRL, ISP is not performed.
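A rough, non-normative sketch of the beginning of the flow described above is given below. The bit-writer interface (write_regular / write_bypass) and the value of the first preset value are hypothetical stand-ins, and the branches after the MIP-flag test are omitted because their description is truncated in this text.

```python
class ToyBitWriter:
    """Hypothetical stand-in for an entropy coder; it only records symbols."""
    def __init__(self):
        self.symbols = []
    def write_regular(self, name, value):   # stands in for context-coded bins
        self.symbols.append(("regular", name, value))
    def write_bypass(self, name, value):    # stands in for bypass-coded bins
        self.symbols.append(("bypass", name, value))

def encode_intra_syntax(writer, mip_flag, mip_mode, reference_value,
                        first_preset_value=1):   # the actual preset value is not given here
    # Step 1: the first or second reference value has already been selected
    # from the probability vector as in step S704.
    # Step 2: encode the MIP flag with the conventional (regular) method.
    writer.write_regular("mip_flag", mip_flag)
    # Step 3: if the flag equals the first preset value, encode the MIP mode
    # with the bypass method; the remaining branches are not reproduced here.
    if mip_flag == first_preset_value:
        writer.write_bypass("mip_mode", mip_mode)
    return reference_value   # carried forward to the rest of the flow

if __name__ == "__main__":
    w = ToyBitWriter()
    encode_intra_syntax(w, mip_flag=1, mip_mode=5, reference_value=0.42)
    print(w.symbols)
```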
  • the encoder 20 described in the embodiments of the present application uses the context-adaptive binary arithmetic coding CABAC method to encode the syntax elements related to the intra prediction of the current image block.
• the syntax elements related to the intra prediction of the current image block may also be encoded in other ways, which is not specifically limited here.
  • FIG. 13 is a flowchart of a method 1300 for decoding a target intra prediction mode of a current image block in an embodiment of the present application.
  • the method 1300 shown in FIG. 13 includes steps S1301 , S1302 and S1303 , these steps are described in detail below.
  • Step S1301 Acquire a code stream corresponding to the current image block and at least two of the reconstruction blocks, prediction blocks and residual blocks of surrounding image blocks of the current image block.
  • the decoder 30 obtains the video code stream encoded by the entropy encoder 304; the specific process of obtaining at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block is the same as that of step S701. It is not repeated here.
• Step S1302: Obtain a probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, where multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes.
  • any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
  • step S1302 is the same as that of step S702, which is not repeated here.
  • Step S1303 Determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and determine the prediction block of the current image block according to the target intra prediction mode.
  • the code stream is decoded to obtain the target reference value; the target intra prediction mode of the current image block is determined according to the target reference value.
  • the above-mentioned determination of the target intra prediction mode of the current image block according to the target reference value includes two situations:
  • the target intra prediction mode is determined according to the target reference value and the probability vector.
  • the target reference value is between 0 and 1
  • the corresponding identifier in the probability vector is determined according to the probability interval in which the target reference value falls
  • the target intra prediction mode of the current image block is determined according to the corresponding identifier.
• specifically, the first identifier of the element corresponding to the target reference value in the probability vector is determined according to the probability interval in which the target reference value falls; when the intra prediction mode corresponding to the first identifier is an angle prediction mode, the first identifier is subtracted from the preset constant to obtain the second identifier, and the intra-frame prediction mode corresponding to the second identifier in FIG. 9 is the target intra-frame prediction mode of the current image block; when the intra-frame prediction mode corresponding to the first identifier is not an angle prediction mode, the intra prediction mode corresponding to the first identifier in FIG. 9 is the target intra prediction mode of the current image block.
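A decoding-side counterpart to the encoding sketch given earlier, assuming the same illustrative cumulative-interval layout and the preset constant 67; the toy probability vector stands in for the 67-dimensional one.

```python
def decode_mode(target_reference_value, prob_vector, transposed,
                num_modes=67, angular_ids=range(2, 67)):
    """Recover the target intra prediction mode identifier from the reference value.

    The interval layout must match the one used at the encoder (see the
    encoding sketch above); the transposition handling follows the
    description of this step.
    """
    low = 0.0
    first_identifier = len(prob_vector) - 1
    for idx, p in enumerate(prob_vector):
        if low <= target_reference_value < low + p:
            first_identifier = idx
            break
        low += p
    if transposed and first_identifier in angular_ids:
        return num_modes - first_identifier    # map the second identifier back
    return first_identifier

if __name__ == "__main__":
    probs = [0.1, 0.5, 0.3, 0.1]           # toy stand-in for the 67 modes
    print(decode_mode(0.75, probs, transposed=False))  # identifier 2
    print(decode_mode(0.75, probs, transposed=True))   # 67 - 2 = 65 (angle mode)
```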
  • FIG. 14 is a flowchart of a method 1400 for decoding a target intra prediction mode of a current image block in an embodiment of the present application.
• the method 1400 shown in FIG. 14 includes steps S1401, S1402, S1403, S1404 and S1405; these steps will be described in detail below.
  • S1401 Acquire a code stream corresponding to the current image block and surrounding image blocks of the current image block.
  • acquiring the code stream corresponding to the current image block above is the same as the corresponding step in step S1301, and acquiring the surrounding image blocks of the current image block above is the same as the corresponding step in step S801, which is not repeated here.
  • S1403 perform a size transformation operation on the first data block to obtain a second data block, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or both horizontal and vertical scaling A proportional scaling of the direction.
  • S1404 Input the second data block into the prediction mode probability model for processing to obtain a probability vector of the current image block.
• Multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one, and any element in the probability vector is used to indicate the probability of using the intra prediction mode corresponding to that element when predicting the current image block.
  • S1405 Determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and determine the prediction block of the current image block according to the target intra prediction mode.
  • steps S1402 , S1403 , S1404 and S1405 are the same as those of S802 , S803 , S804 and S1303 respectively, and will not be repeated here.
• FIG. 15 is a schematic block diagram of an encoding apparatus 1500 provided by the present application. It should be understood that the encoding apparatus 1500 here may correspond to the encoder 20 in FIG. 1A, and the encoding apparatus 1500 may include:
  • the obtaining unit 1501 is configured to obtain at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block.
  • the processing unit 1502 is configured to obtain a probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, and multiple elements in the probability vector are related to multiple intra-frame values.
  • the prediction modes are in one-to-one correspondence, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
  • the determining unit 1503 is configured to determine the target intra prediction mode of the current image block.
  • the encoding unit 1504 is configured to encode the target intra-frame prediction mode into the code stream according to the target intra-frame prediction mode and the probability vector.
  • the above processing unit is specifically configured to: when the target intra prediction mode is the non-matrix weighted intra prediction MIP mode, according to at least one of the reconstruction block, prediction block and residual block of the surrounding image blocks Two, the current image block and the prediction mode probability model to obtain the probability vector of the current image block.
  • the above processing unit is specifically configured to: splicing or concatenating at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks to obtain the first data block;
• the probability vector of the current image block is obtained according to the first data block and the prediction mode probability model.
• the processing unit is specifically configured to: when the size of the first data block is the target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, perform a size transformation operation on the first data block to obtain a second data block, where the size of the second data block is equal to the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling or proportional scaling in both horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
• the processing unit is specifically configured to: when the size of the first data block is the target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, perform a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain a second data block, where the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling or proportional scaling in both horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
• in the aspect in which the size transformation operation includes scaling and the size transformation operation is performed on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4M/N and 4 respectively;
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8M/N and 8 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4 and 4N/M respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8 and 8N/M respectively; wherein, M and N are positive integers.
• in the aspect in which the size transformation operation includes at least one of scaling and transposition and the size transformation operation is performed on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4M/N and 4 respectively;
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8M/N and 8 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, transpose and scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4N/M and 4 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, transpose and scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8N/M and 8 respectively; wherein, M and N are positive integers.
• the above encoding unit is specifically configured to: when the size transformation operation does not include transposition, determine the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encode the first reference value to obtain the code stream corresponding to the target intra prediction mode;
• when the size transformation operation includes transposition, the second identifier is determined according to the preset constant and the first identifier, the second reference value is determined according to the second identifier and the probability vector, and the second reference value is encoded to obtain the code stream corresponding to the target intra prediction mode.
• the encoding apparatus 1500 specifically performs the method embodiment described in FIG. 13; similarly, the encoding apparatus 1500 can also be used to implement all or any of the steps in the method embodiment shown in FIG. 8, which is not repeated here.
• FIG. 16 is a schematic block diagram of a decoding apparatus 1600 provided by the present application. It should be understood that the decoding apparatus 1600 here may correspond to the decoder 30 in FIG. 1A or FIG. 3, and the decoding apparatus 1600 may include:
  • the obtaining unit 1601 is configured to obtain a code stream corresponding to the current image block and at least two of the reconstruction blocks, prediction blocks and residual blocks of surrounding image blocks of the current image block.
  • the processing unit 1602 is configured to obtain a probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, and multiple elements in the probability vector are related to multiple intra-frame values.
  • the prediction modes are in one-to-one correspondence, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
  • the decoding unit 1603 is configured to determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and determine the prediction block of the current image block according to the target intra prediction mode.
  • the above processing unit is specifically configured to: splicing or concatenating at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks to obtain the first data block;
• the probability vector of the current image block is obtained according to the first data block and the prediction mode probability model.
  • the processing unit is specifically configured to: when the size of the first data block is the target size, The first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block, or;
• when the size of the first data block is not the target size, a size transformation operation is performed on the first data block to obtain a second data block, where the size of the second data block is equal to the target size, and the size transformation operation includes at least one of scaling and transposition.
  • scaling includes horizontal scaling, vertical scaling, or equal scaling in both horizontal and vertical directions; inputting the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block.
  • the processing unit is specifically configured to: when the size of the first data block is the target size, The first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block, or;
• when the size of the first data block is not the target size, a size transformation operation is performed on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain a second data block; the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, and the size transformation operation includes at least one of scaling and transposition, where the scaling includes horizontal scaling, vertical scaling or proportional scaling in both horizontal and vertical directions; the second data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  • the size transformation operation includes scaling.
  • the processing unit is specifically used for:
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4M/N and 4 respectively;
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8M/N and 8 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4 and 4N/M respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8 and 8N/M respectively; wherein, M and N are positive integers.
• in the aspect in which the size transformation operation includes at least one of scaling and transposition and the size transformation operation is performed on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4M/N and 4 respectively;
• when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8M/N and 8 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, transpose and scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 4N/M and 4 respectively;
• when the first side length M of the first data block is smaller than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, transpose and scale the first data block to obtain a second data block, where the first side length and the second side length of the second data block are 8N/M and 8 respectively; wherein, M and N are positive integers.
• the above-mentioned decoding unit is specifically configured to: decode the code stream to obtain the target reference value; when the size transformation operation does not include transposition, determine the target intra-frame prediction mode according to the target reference value and the probability vector;
• when the size transformation operation includes transposition, the first identifier is determined according to the target reference value and the probability vector; when the intra-frame prediction mode corresponding to the first identifier is an angle prediction mode, the target intra-frame prediction mode is determined according to the preset constant and the first identifier; when the intra-frame prediction mode corresponding to the first identifier is a non-angle prediction mode, the target intra-frame prediction mode is determined according to the target reference value and the probability vector.
• the decoding apparatus 1600 specifically performs the method embodiment described in FIG. 13; similarly, the decoding apparatus 1600 can also be used to implement all or any of the steps in the method embodiment described in FIG. 14, which is not repeated here.
  • Computer-readable media may include computer-readable storage media, which corresponds to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol) .
  • a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave.
  • Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this application.
  • the computer program product may comprise a computer-readable medium.
• such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
• for example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio and microwave are included in the definition of medium.
• Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks typically reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
• the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs) or other equivalent integrated or discrete logic circuits.
  • the techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (eg, a chip set).
• Various components, modules, or units are described herein to emphasize functional aspects of means for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit, or provided by interoperating hardware units (including one or more processors as described above) in conjunction with suitable software and/or firmware.

Abstract

Disclosed in the present application are an intra prediction mode coding method and apparatus. The present invention relates to the technical field of artificial intelligence (AI)-based video or image compression, and specifically to the technical field of neural-network-based video compression. The method comprises an encoding method and a decoding method. The encoding method comprises: obtaining a probability vector for a current image block according to at least two among a residual block, a predicted block, and a reconstructed block of peripheral image blocks of the current image block as well as a prediction mode probability model; and encoding a target intra prediction mode of the current image block into a code stream according to the target intra prediction mode and the probability vector. The decoding method comprises: obtaining a probability vector for the current image block using the same method as the encoding end; and determining the target intra prediction mode according to the code stream and the probability vector. When encoding or decoding is performed on the target intra prediction mode of the current image block, accuracy can be improved, storage overhead can be reduced, and computational complexity can be lowered.

Description

Decoding method and apparatus for intra prediction mode
This application claims priority to the Chinese patent application with application number 202011387076.6, entitled "Decoding Method and Apparatus for Intra Prediction Mode", filed with the Chinese Patent Office on November 30, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of video or image compression based on artificial intelligence (AI), and in particular, to a decoding method and apparatus for an intra prediction mode.
Background
Video coding (video encoding and decoding) is widely used in digital video applications, for example broadcast digital TV, video transmission over the Internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVDs and Blu-ray discs, video content acquisition and editing systems, and security applications of camcorders.
Even a relatively short video requires a large amount of video data to be described, which can create difficulties when the data is to be sent or otherwise transmitted over a network with limited bandwidth capacity. Therefore, video data is usually compressed before being transmitted in modern telecommunication networks. Since memory resources may be limited, the size of the video can also be an issue when the video is stored on a storage device. Video compression devices typically use software and/or hardware at the source side to encode the video data prior to transmission or storage, thereby reducing the amount of data required to represent digital video images. The compressed data is then received at the destination side by a video decompression device. With limited network resources and an ever-growing demand for higher video quality, there is a need for improved compression and decompression techniques that can increase the compression ratio with little impact on image quality.
In recent years, the application of deep learning in the field of image and video encoding and decoding has gradually become a trend.
Among them, a neural-network-based decoding method for the intra prediction mode of HEVC introduces a neural network model into the intra prediction process: during encoding, the syntax elements related to the intra prediction mode of the current image block are obtained by means of the neural network model, and the obtained syntax elements are encoded; during decoding, the intra prediction mode of the current image block is determined according to the syntax elements decoded from the code stream and the neural network model.
However, when the intra prediction mode of the current image block is encoded and decoded in the prior art, there are problems such as limited accuracy, large storage overhead, and high computational complexity.
Summary of the Invention
The present application provides a neural-network-based method and apparatus for decoding an intra prediction mode, which can improve the accuracy of encoding and decoding the intra prediction mode of a current image block, save storage space, and reduce computational complexity, thereby improving the encoding and decoding performance.
Specific embodiments are outlined in the appended independent claims, and other embodiments are outlined in the dependent claims.
In a first aspect, the present application provides an encoding method for an intra prediction mode, the method comprising: acquiring at least two of a reconstruction block, a prediction block and a residual block of surrounding image blocks of a current image block; obtaining a probability vector of the current image block according to the at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and a prediction mode probability model, where multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to that element when the current image block is predicted; determining a target intra prediction mode of the current image block; and encoding the target intra prediction mode into a code stream according to the target intra prediction mode and the probability vector.
It can be seen that, in this embodiment of the present application, at least two of the reconstruction blocks, prediction blocks and residual blocks of the three surrounding image blocks of the current image block are obtained; since, for each surrounding image block, any one of the reconstruction block, the prediction block and the residual block can be calculated from the other two, the embodiment of the present application can make full use of the relevant information of the surrounding image blocks, so that the probability vector of the current image block generated according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks is more accurate, which improves the accuracy of the data encoded into the code stream and improves the encoding performance.
In a possible design, obtaining the probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks, the current image block and the prediction mode probability model includes: when the target intra prediction mode is not the matrix weighted intra prediction (MIP) mode, obtaining the probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks, the current image block and the prediction mode probability model.
It can be seen that, when the target intra prediction mode is a non-MIP mode, the method in this embodiment of the present application is used to encode the intra prediction mode of the current image block; by obtaining at least two of the reconstruction block, the prediction block and the residual block of the three surrounding image blocks of the current image block, the embodiment of the present application can make full use of the relevant information of the surrounding image blocks, so that the probability vector of the current image block generated according to the at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks is more accurate, thereby improving the accuracy of the data encoded into the code stream and improving the encoding performance when the target intra prediction mode is a non-MIP mode.
In a possible design, obtaining the probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model includes: splicing or concatenating at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks to obtain a first data block; and obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model.
It should be understood that the prediction mode probability model in this embodiment of the present application may be a neural network model, and second data blocks of different sizes correspond to different neural network models.
It can be seen that, in this embodiment of the present application, since at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks can fully characterize the relevant information of the surrounding image blocks, the first data block obtained by splicing or concatenating at least two of the reconstruction block, the prediction block and the residual block of the three surrounding image blocks can fully reflect the relevant information of the current image block; the probability vector of the current image block obtained according to the first data block is therefore more accurate, which improves the accuracy of the data encoded into the code stream and improves the encoding performance.
In a possible design, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or,
when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block, where the size of the second data block is equal to the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
In a possible design, when the size of the first data block is not the target size, performing the size transformation operation on the first data block to obtain the second data block may include four methods: (1) when the size of the first data block is not the target size, and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the first side length to the second side length of the target size, proportionally scaling the first data block in both horizontal and vertical directions to obtain the second data block; (2) when the size of the first data block is not the target size, and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the second side length to the first side length of the target size, transposing the first data block and proportionally scaling it in both horizontal and vertical directions to obtain the second data block; (3) when the size of the first data block is not the target size, and the first side length and the second side length of the first data block are respectively equal to the second side length and the first side length of the target size, transposing the first data block to obtain the second data block; (4) when the size of the first data block is not the target size, scaling the first data block to obtain the second data block.
It should be understood that the target sizes used in the above four methods in this embodiment of the present application may be different, and there may be one or more target sizes; meanwhile, the number of target sizes is smaller than the number of size types of the first data block. The above first side length and second side length may be the width and the height respectively, or the height and the width respectively. In addition, the transposition and scaling operations performed on the first data block in the above second method are not in a fixed order: whether transposition is performed before scaling or scaling is performed before transposition, the size of the second data block obtained is the same.
It can be seen that, in the above embodiment, since the first data block may have many sizes, performing the size transformation operation on the first data block to obtain the second data block can effectively reduce the number of size types of the second data block, thereby reducing the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving the coding performance. In addition, for a first data block with simple texture and large size, after it is reduced by the size transformation operation to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving the coding efficiency.
In a possible design, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or,
when the size of the first data block is not the target size, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and vertical gradient of the current image block, to obtain a second data block, where the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
It can be seen that in this embodiment of the application, determining the second data block based on the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image block itself and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block, and therefore the accuracy of the data in the bitstream, which improves coding performance.
In a possible design, the size transformation operation includes scaling, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively; when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively, where M and N are positive integers.
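A minimal sketch of the scaling-only rule above is given below, assuming that the first side length is the block width, that M and N divide evenly (as they do for the usual power-of-two block sizes), and that a simple nearest-neighbour resampling stands in for the unspecified scaling filter.

```python
import numpy as np

def scaled_target_size(M, N, grad_h_abs, grad_v_abs, threshold):
    """Return (first side, second side) of the second data block for the scaling-only design."""
    base = 4 if (grad_h_abs + grad_v_abs) < threshold else 8   # 4 for smooth blocks, 8 otherwise
    if M >= N:
        return base * M // N, base      # e.g. M=32, N=16, smooth -> (8, 4)
    return base, base * N // M          # e.g. M=8,  N=32, detailed -> (8, 32)

def nearest_resize(block, out_h, out_w):
    """Nearest-neighbour resize, a stand-in for the unspecified scaling filter."""
    in_h, in_w = block.shape
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    return block[np.ix_(rows, cols)]
```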
It can be seen that in this embodiment of the application, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the bitstream. At the same time, the scaling reduces the number of distinct second data block sizes, which also reduces the number of neural network models corresponding to second data blocks of different sizes, saves the storage space used to store the neural network models, and improves coding performance.
In a possible design, the size transformation operation includes at least one of scaling and transposition, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; and when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block: when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively; and when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively, where M and N are positive integers.
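The scaling-plus-transposition variant above can be sketched as follows. It reuses the `nearest_resize` helper from the previous sketch; the reading of the first side length as the block width is again an assumption, and the actual scaling filter is not specified by this design.

```python
def transform_first_block(first_block, grad_h_abs, grad_v_abs, threshold):
    """Scaling-plus-transposition variant: blocks whose first side is shorter are
    transposed first, so the longer side always becomes the first side before scaling."""
    N, M = first_block.shape            # first side M = width, second side N = height (assumed)
    base = 4 if (grad_h_abs + grad_v_abs) < threshold else 8
    transposed = False
    if M < N:
        first_block = first_block.T     # transpose the tall block
        M, N = N, M
        transposed = True
    # target size of the second data block: first side base*M/N, second side base
    second_block = nearest_resize(first_block, out_h=base, out_w=base * M // N)
    return second_block, transposed
```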
It can be seen that in this embodiment of the application, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself to determine the size of the second data block and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the bitstream. At the same time, performing the scaling and transposition operations reduces the number of distinct second data block sizes and the number of neural network models corresponding to second data blocks of different sizes, which saves the storage space used to store the neural network models and improves coding performance.
In a possible design, encoding the target intra prediction mode into the bitstream according to the target intra prediction mode and the probability vector includes: when the size transformation operation does not include transposition, determining a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the bitstream corresponding to the target intra prediction mode; when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the bitstream corresponding to the target intra prediction mode; and when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, determining the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the bitstream corresponding to the target intra prediction mode.
It should be understood that in this embodiment of the application the preset constant is equal to the number of intra prediction modes in the Versatile Video Coding (VVC) technology, namely 67; the first identifier may be the mode value corresponding to the target intra prediction mode; the angular prediction modes are the 65 intra prediction modes of VVC with mode values 2 to 66, and the non-angular prediction modes are the planar prediction mode with mode value 0 and the DC prediction mode with mode value 1.
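The branch structure of this design can be sketched as follows. The design does not spell out how the second identifier is derived from the preset constant and the first identifier, nor how a reference value is obtained from the probability vector, so the mapping `second = 68 - first` (which swaps horizontal-like and vertical-like angular directions) and the use of a mode's probability as its reference value are purely illustrative assumptions, and `entropy_encode` is a placeholder.

```python
PRESET_CONSTANT = 67      # number of VVC intra prediction modes
PLANAR, DC = 0, 1         # mode values of the non-angular prediction modes

def identifier_to_code(first_id, transposed):
    """Identifier actually used for coding; the transposed-angular mapping is an assumption."""
    if transposed and first_id not in (PLANAR, DC):
        return PRESET_CONSTANT + 1 - first_id   # hypothetical second identifier (68 - first)
    return first_id

def encode_target_mode(first_id, transposed, probability_vector, entropy_encode):
    """Select the reference value for the coded identifier and hand it to the entropy coder."""
    ident = identifier_to_code(first_id, transposed)
    reference_value = probability_vector[ident]   # assumed: reference value = mode probability
    return entropy_encode(ident, reference_value)
```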
It can be seen that in this embodiment of the application, when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, the first identifier of the target intra prediction mode is processed using the preset constant to obtain the second identifier, and the second reference value is then determined from the second identifier. This ensures that, when the first data block has been transposed, the correct corresponding second reference value is encoded into the bitstream, which guarantees the correctness of the encoding process.
In a second aspect, this application provides an intra prediction mode encoding method. The method includes: obtaining surrounding image blocks of a current image block; splicing or concatenating the surrounding image blocks to obtain a first data block; performing a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; inputting the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, where the elements of the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and each element of the probability vector represents the probability that the intra prediction mode corresponding to that element is used to predict the current image block; determining a target intra prediction mode of the current image block; and encoding the target intra prediction mode into a bitstream according to the target intra prediction mode and the probability vector.
It should be understood that the prediction mode probability model in this embodiment of the application may be a neural network model, and second data blocks of different sizes correspond to different neural network models.
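Under the same assumptions as the sketches above (and reusing `transform_first_block` and `encode_target_mode` from them), the overall encoder-side flow of this second aspect could look as follows; the concatenation layout of the surrounding blocks, the finite-difference gradients, and the per-size model registry are all illustrative choices rather than anything fixed by the method.

```python
import numpy as np

def encode_intra_mode(current_block, surrounding_blocks, models,
                      target_mode, grad_threshold, entropy_encode):
    """models: dict mapping a (height, width) second-block size to a model that
    returns the probability vector over the intra prediction modes."""
    # 1. splice / concatenate the surrounding blocks into the first data block
    first_block = np.concatenate(surrounding_blocks, axis=1)

    # 2. horizontal / vertical gradients of the current image block (assumed: finite differences)
    grad_h = np.abs(np.diff(current_block.astype(np.int32), axis=1)).sum()
    grad_v = np.abs(np.diff(current_block.astype(np.int32), axis=0)).sum()

    # 3. size transformation (scaling and/or transposition)
    second_block, transposed = transform_first_block(first_block, grad_h, grad_v, grad_threshold)

    # 4. probability vector from the model that matches the second block's size
    probability_vector = models[second_block.shape](second_block)

    # 5. encode the target intra prediction mode using the probability vector
    return encode_target_mode(target_mode, transposed, probability_vector, entropy_encode)
```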
It can be seen that in this embodiment of the application, scaling the first data block to obtain a smaller second data block effectively reduces the computational complexity of the subsequent probability vector calculation that uses the second data block, thereby improving coding efficiency. At the same time, the size transformation operation reduces the number of distinct second data block sizes and therefore the number of neural network models corresponding to second data blocks of different sizes, which saves the storage space of the neural network models and improves coding performance.
In a possible design, performing the size transformation operation on the first data block to obtain the second data block includes: when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is not the target size, performing the size transformation operation on the first data block to obtain the second data block, where the size of the second data block is equal to the target size.
It should be understood that the target size may be set according to the actual scenario, and the number of target sizes is smaller than the number of sizes of the first data block.
It can be seen that in this embodiment of the application, because the first data block may have many different sizes, performing the size transformation operation on the first data block to obtain the second data block effectively reduces the number of distinct second data block sizes; accordingly, when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode, the number of neural network models corresponding to second data blocks of different sizes is reduced, which saves the storage space of the neural network models and improves coding performance. At the same time, scaling the first data block yields a smaller second data block, so that when the target intra prediction mode is not a MIP mode, the computational complexity of the subsequent probability vector calculation that uses the second data block is effectively reduced, thereby improving coding efficiency.
In a possible design, performing the size transformation operation on the first data block to obtain the second data block includes: when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is not the target size, performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, where the size of the second data block is equal to the target size.
It should be understood that the first side length and the second side length may be the width and the height, respectively, or the height and the width, respectively.
It can be seen that in this embodiment of the application, determining the second data block based on the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image block itself and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block, and therefore the accuracy of the data in the bitstream, which improves coding performance.
In a possible design, when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is equal to the target size, the first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block.
It can be seen that in this embodiment of the application, when the size of the first data block is equal to the set target size, no size transformation operation is performed and the first data block is used directly to generate the probability vector, which simplifies the process of encoding the target intra prediction mode and improves coding efficiency.
In a possible design, the size transformation operation includes scaling, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively; when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively, where M and N are positive integers.
It can be seen that in this embodiment of the application, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself, which improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the bitstream. At the same time, the scaling reduces the number of distinct second data block sizes, which also reduces the number of neural network models corresponding to second data blocks of different sizes, saves the storage space used to store the neural network models, and improves coding performance.
In a possible design, the size transformation operation includes at least one of scaling and transposition, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; and when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block: when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively; and when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively, where M and N are positive integers.
It can be seen that in this embodiment of the application, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself to determine the size of the second data block and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the bitstream. At the same time, performing the scaling and transposition operations reduces the number of distinct second data block sizes and the number of neural network models corresponding to second data blocks of different sizes, which saves the storage space used to store the neural network models and improves coding performance.
In a possible design, encoding the target intra prediction mode into the bitstream according to the target intra prediction mode and the probability vector includes: when the size transformation operation does not include transposition, determining a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the bitstream corresponding to the target intra prediction mode; when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the bitstream corresponding to the target intra prediction mode; and when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, determining the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the bitstream corresponding to the target intra prediction mode.
It should be understood that in this embodiment of the application the preset constant is equal to the number of intra prediction modes in the Versatile Video Coding (VVC) technology, namely 67; the first identifier may be the mode value corresponding to the target intra prediction mode; the angular prediction modes are the 65 intra prediction modes of VVC with mode values 2 to 66, and the non-angular prediction modes are the planar prediction mode with mode value 0 and the DC prediction mode with mode value 1.
It can be seen that in this embodiment of the application, when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, the first identifier of the target intra prediction mode is processed using the preset constant to obtain the second identifier, and the second reference value is then determined from the second identifier. This ensures that, when the first data block has been transposed, the correct corresponding second reference value is encoded into the bitstream, which guarantees the correctness of the encoding process.
In a third aspect, this application provides an intra prediction mode decoding method. The method includes: obtaining a bitstream corresponding to a current image block, and at least two of the reconstructed blocks, prediction blocks and residual blocks of the surrounding image blocks of the current image block; obtaining a probability vector of the current image block according to at least two of the reconstructed blocks, prediction blocks and residual blocks of the surrounding image blocks and a prediction mode probability model, where the elements of the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and each element of the probability vector represents the probability that the intra prediction mode corresponding to that element is used to predict the current image block; and determining a target intra prediction mode of the current image block according to the bitstream and the probability vector, and determining the prediction block of the current image block according to the target intra prediction mode.
It can be seen that in this embodiment of the application, at least two of the reconstructed blocks, prediction blocks and residual blocks of three surrounding image blocks of the current image block are obtained. Because any one of the reconstructed block, prediction block and residual block of a surrounding image block can be computed from the other two, this embodiment can make full use of the relevant information of the surrounding image blocks, so the probability vector of the current image block generated from at least two of the reconstructed blocks, prediction blocks and residual blocks of the surrounding image blocks is more accurate; the target intra prediction mode of the current image block can then be determined accurately from this probability vector and the bitstream, improving decoding performance.
In a possible design, obtaining the probability vector of the current image block according to at least two of the reconstructed blocks, prediction blocks and residual blocks of the surrounding image blocks and the prediction mode probability model includes: splicing or concatenating at least two of the reconstructed blocks, prediction blocks and residual blocks of the surrounding image blocks to obtain a first data block; and obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model.
It should be understood that the prediction mode probability model in this embodiment of the application may be a neural network model, and second data blocks of different sizes correspond to different neural network models.
It can be seen that in this embodiment of the application, because at least two of the reconstructed block, the prediction block and the residual block of a surrounding image block fully characterize the relevant information of that block, splicing or concatenating at least two of the reconstructed blocks, prediction blocks and residual blocks of the three surrounding image blocks to obtain the first data block means that the first data block fully reflects the information relevant to the current image block. The probability vector of the current image block obtained from this first data block is therefore more accurate, and the target intra prediction mode of the current image block can subsequently be determined accurately from this probability vector and the bitstream, improving decoding performance.
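A small sketch of how the decoder-side first data block could be assembled is given below. It assumes the surrounding blocks have been brought to a common height, that the reconstructed and prediction blocks are available while the residual can be derived from them (any missing one of the three follows from the other two), and that the components are stacked channel-wise; none of these layout choices is fixed by this design.

```python
import numpy as np

def build_first_block(recon_blocks, pred_blocks, resid_blocks=None):
    """recon_blocks / pred_blocks / resid_blocks: lists of 2-D arrays, one per surrounding block."""
    if resid_blocks is None:
        # the residual is the difference between reconstruction and prediction
        resid_blocks = [r.astype(np.int32) - p.astype(np.int32)
                        for r, p in zip(recon_blocks, pred_blocks)]
    planes = [np.concatenate(blocks, axis=1)       # splice each component side by side
              for blocks in (recon_blocks, pred_blocks, resid_blocks)]
    return np.stack(planes, axis=0)                # shape: (3 components, height, total width)
```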
In a possible design, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block, where the size of the second data block is equal to the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
It should be understood that the specific implementation of each step in this embodiment of the application is the same as the corresponding step in the first aspect, and details are not repeated here.
It can be seen that in this embodiment of the application, because the first data block may have many different sizes, performing the size transformation operation on the first data block to obtain the second data block effectively reduces the number of distinct second data block sizes, which in turn reduces the number of neural network models corresponding to second data blocks of different sizes, saves the storage space of the neural network models, and improves decoding performance. In addition, for a first data block with relatively simple texture and a relatively large size, shrinking it to a smaller second data block through the size transformation operation effectively reduces the computational complexity of the subsequent probability vector calculation with the neural network model, thereby improving decoding efficiency.
In a possible design, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model includes: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, where the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
It can be seen that in this embodiment of the application, determining the second data block based on the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image block itself and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block, so that the target intra prediction mode of the current image block can subsequently be determined accurately from this probability vector and the bitstream, improving decoding performance.
In a possible design, the size transformation operation includes scaling, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively; when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively, where M and N are positive integers.
It can be seen that in this embodiment of the application, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself, which improves the accuracy of the probability vector subsequently obtained from the second data block, so that the target intra prediction mode of the current image block subsequently determined from this probability vector and the bitstream is more accurate. At the same time, the scaling reduces the number of distinct second data block sizes, which also reduces the number of neural network models corresponding to second data blocks of different sizes, saves the storage space used to store the neural network models, and improves decoding performance.
In a possible design, the size transformation operation includes at least one of scaling and transposition, and performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; and when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block: when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively; and when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively, where M and N are positive integers.
It can be seen that in the above embodiment, determining the second data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself to determine the size of the second data block and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block and therefore the accuracy of the data in the bitstream. At the same time, performing the scaling and transposition operations reduces the number of distinct second data block sizes and the number of neural network models corresponding to second data blocks of different sizes, which saves the storage space used to store the neural network models and improves coding performance.
In a possible design, determining the target intra prediction mode of the current image block according to the bitstream and the probability vector includes: decoding the bitstream to obtain a target reference value; when the size transformation operation does not include transposition, determining the target intra prediction mode according to the target reference value and the probability vector; when the size transformation operation includes transposition, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
It can be seen that in this embodiment of the application, when the size transformation operation applied to the first data block includes transposition and the intra prediction mode corresponding to the first identifier is an angular prediction mode, this indicates that the first identifier was transposed during encoding. In this case the first identifier is transposed back using the preset constant to obtain a second identifier, and the intra prediction mode corresponding to the second identifier is the target intra prediction mode of the current image block. This ensures that, after the first data block has been transposed, the correct target intra prediction mode is obtained during decoding, and the true prediction block of the current image block is then obtained according to the target intra prediction mode.
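A decoder-side counterpart of the earlier encoding sketch is shown below. `entropy_decode` is a placeholder that decodes the target reference value from the bitstream and maps it, via the probability vector, to the first identifier, and the inverse angular mapping mirrors the hypothetical `68 - first` mapping assumed at the encoder; neither is the normative derivation.

```python
PRESET_CONSTANT = 67      # same values as in the encoding sketch
PLANAR, DC = 0, 1

def decode_target_mode(bitstream, probability_vector, transposed, entropy_decode):
    """Recover the target intra prediction mode, undoing the transposed-angular mapping if needed."""
    first_id = entropy_decode(bitstream, probability_vector)   # placeholder reference-value lookup
    if transposed and first_id not in (PLANAR, DC):
        # angular mode coded after transposition: map it back using the preset constant
        return PRESET_CONSTANT + 1 - first_id                  # inverse of the assumed 68 - first mapping
    return first_id
```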
In a fourth aspect, this application provides an intra prediction mode decoding method. The method includes: obtaining a bitstream corresponding to a current image block and surrounding image blocks of the current image block; splicing or concatenating the surrounding image blocks to obtain a first data block; performing a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; inputting the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, where the elements of the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and each element of the probability vector represents the probability that the intra prediction mode corresponding to that element is used to predict the current image block; and determining a target intra prediction mode of the current image block according to the bitstream and the probability vector, and determining the prediction block of the current image block according to the target intra prediction mode.
It can be seen that in this embodiment of the application, for a first data block with relatively simple texture and a relatively large size, shrinking it to a smaller second data block through the size transformation operation effectively reduces the computational complexity of the subsequent probability vector calculation with the neural network model, improving decoding efficiency. In addition, because the first data block may have many different sizes, performing the size transformation operation on the first data block to obtain the second data block effectively reduces the number of distinct second data block sizes, which reduces the number of neural network models corresponding to second data blocks of different sizes, saves the storage space of the neural network models, and improves decoding performance.
In a possible design, performing the size transformation operation on the first data block to obtain the second data block includes: when the size of the first data block is not the target size, performing the size transformation operation on the first data block to obtain the second data block, where the size of the second data block is equal to the target size.
It can be seen that in this embodiment of the application the target size may be set according to the actual application scenario, and changing the size of the first data block to the target size through the size transformation operation effectively meets the requirements of the actual scenario. At the same time, because the number of target sizes is smaller than the number of sizes of the first data block, the size transformation operation effectively reduces the number of distinct second data block sizes and the number of neural network models corresponding to second data blocks of different sizes, which saves the storage space of the neural network models and improves decoding performance.
In a possible design, performing the size transformation operation on the first data block to obtain the second data block includes: when the size of the first data block is not the target size, performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, where the size of the second data block is equal to the target size.
It can be seen that in this embodiment of the application, determining the second data block based on the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image block itself and strengthens the correlation between the second data block and the current image block. This improves the accuracy of the probability vector subsequently obtained from the second data block, so that the target intra prediction mode of the current image block can subsequently be determined accurately from this probability vector and the bitstream, improving decoding performance.
In a possible design, the method further includes: when the size of the first data block is equal to the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
It can be seen that in this embodiment of the application, when the size of the first data block is equal to the set target size, no size transformation operation is performed and the first data block is used directly to generate the probability vector, which simplifies the generation of the probability vector of the current image block and improves decoding efficiency.
In a possible design, the size transformation operation includes scaling, and performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively; when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively; and when the first side length M of the first data block is less than the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively, where M and N are positive integers.
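Purely as an illustration of the side-length rule described above, a minimal Python sketch is given below. The array layout (first side length × second side length), the nearest-neighbour resampling and all helper names are assumptions made for the sketch; the design itself does not prescribe how the scaling is carried out.

```python
import numpy as np

def select_target_shape(M, N, grad_h_abs, grad_v_abs, threshold):
    """Return (first side length, second side length) of the second data block."""
    base = 4 if (grad_h_abs + grad_v_abs) < threshold else 8
    if M >= N:
        return base * M // N, base      # 4M/N x 4 or 8M/N x 8
    return base, base * N // M          # 4 x 4N/M or 8 x 8N/M

def nearest_resize(block, new_h, new_w):
    """Nearest-neighbour resize of a 2-D array (assumed resampling method)."""
    h, w = block.shape
    return block[np.arange(new_h) * h // new_h][:, np.arange(new_w) * w // new_w]

# Example: first data block with first side length M=16 and second side length N=8.
first_block = np.arange(16 * 8, dtype=np.float32).reshape(16, 8)
M, N = first_block.shape
side1, side2 = select_target_shape(M, N, grad_h_abs=30.0, grad_v_abs=10.0, threshold=100.0)
second_block = nearest_resize(first_block, side1, side2)
print(second_block.shape)               # (8, 4), i.e. 4M/N x 4
```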
It should be understood that the specific process of scaling the first data block to obtain the second data block and its beneficial effects are the same as the corresponding process in the third aspect, and details are not repeated here.
In a possible design, the size transformation operation includes at least one of scaling and transposition, and performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively; and when the first side length M of the first data block is greater than or equal to the second side length N of the first data block and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively.
When the first side length M of the first data block is less than the second side length N of the first data block, the following operations are performed on the first data block to obtain the second data block: when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively; and when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transposing and scaling the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively, where M and N are positive integers.
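For the variant that combines transposition with scaling, a correspondingly hedged sketch (same assumptions as the previous sketch) could look as follows; whether transposition was applied has to be remembered, because the design below uses that fact again when mapping the decoded mode.

```python
import numpy as np

def nearest_resize(block, new_h, new_w):
    h, w = block.shape
    return block[np.arange(new_h) * h // new_h][:, np.arange(new_w) * w // new_w]

def transform_with_transpose(first_block, grad_h_abs, grad_v_abs, threshold):
    """Sketch: transpose first when M < N, then scale so the shorter side becomes 4 or 8."""
    M, N = first_block.shape                     # assumed layout: first side x second side
    base = 4 if (grad_h_abs + grad_v_abs) < threshold else 8
    transposed = M < N
    block = first_block.T if transposed else first_block
    M, N = block.shape                           # after transposition M >= N holds
    second_block = nearest_resize(block, base * M // N, base)
    return second_block, transposed              # the flag is needed later for mode mapping

second, was_transposed = transform_with_transpose(
    np.ones((8, 16), dtype=np.float32), grad_h_abs=80.0, grad_v_abs=40.0, threshold=100.0)
print(second.shape, was_transposed)              # (16, 8), True -> 8N/M x 8 after transposition
```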
It should be understood that the specific process of scaling the first data block to obtain the second data block and its beneficial effects are the same as the corresponding process in the third aspect, and details are not repeated here.
In a possible design, determining the target intra prediction mode of the current image block according to the code stream and the probability vector includes: decoding the code stream to obtain a target reference value; when the size transformation operation does not include transposition, determining the target intra prediction mode according to the target reference value and the probability vector; when the size transformation operation includes transposition, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
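The summary above does not fix how the target reference value, the first identifier and the preset constant interact numerically, so the following sketch is only one plausible reading: the probability vector orders the candidate modes, the target reference value indexes into that order, and for angular modes of a transposed block the target mode is obtained by mirroring the identifier with the preset constant. Every constant and helper name here is an assumption.

```python
import numpy as np

PRESET_CONSTANT = 66        # hypothetical value of the preset constant
ANGULAR_MODE_START = 2      # hypothetical: modes 0 and 1 are non-angular (e.g. Planar/DC)

def decode_target_mode(target_ref_value, prob_vector, transposed):
    order = np.argsort(-prob_vector)        # candidate modes sorted by descending probability
    mode = int(order[target_ref_value])     # mode selected via the decoded reference value
    if not transposed:
        return mode                         # no transposition: this is the target mode
    first_id = mode                         # with transposition this is the first identifier
    if first_id >= ANGULAR_MODE_START:      # angular mode: map it using the preset constant
        return PRESET_CONSTANT - first_id
    return first_id                         # non-angular mode: keep the decoded value

probs = np.array([0.05, 0.10, 0.50, 0.20, 0.15])
print(decode_target_mode(target_ref_value=0, prob_vector=probs, transposed=True))   # 64
```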
It should be understood that the process of determining the target intra prediction mode of the current image block according to the code stream and the probability vector, and its beneficial effects, are the same as the corresponding process in the third aspect, and details are not repeated here.
In a fifth aspect, the present application provides an encoding apparatus. For its beneficial effects, refer to the description of the first aspect; details are not repeated here. The encoding apparatus has the function of implementing the behavior in the method example of the first aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the encoding apparatus includes: an obtaining unit, configured to obtain at least two of a reconstructed block, a prediction block, and a residual block of surrounding image blocks of a current image block; a processing unit, configured to obtain a probability vector of the current image block according to the at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks and a prediction mode probability model, where a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when predicting the current image block; a determining unit, configured to determine a target intra prediction mode of the current image block; and an encoding unit, configured to encode the target intra prediction mode into a code stream according to the target intra prediction mode and the probability vector. These modules can perform the corresponding functions in the method example of the first aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In a sixth aspect, the present application provides an encoding apparatus. For its beneficial effects, refer to the description of the second aspect; details are not repeated here. The encoding apparatus has the function of implementing the behavior in the method example of the second aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the encoding apparatus includes: an obtaining unit, configured to obtain surrounding image blocks of a current image block; a processing unit, configured to splice or concatenate the surrounding image blocks to obtain a first data block, perform a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and input the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, where a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when predicting the current image block; a determining unit, configured to determine a target intra prediction mode of the current image block; and an encoding unit, configured to encode the target intra prediction mode into a code stream according to the target intra prediction mode and the probability vector. These modules can perform the corresponding functions in the method example of the second aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In a seventh aspect, the present application provides a decoding apparatus. For its beneficial effects, refer to the description of the third aspect; details are not repeated here. The decoding apparatus has the function of implementing the behavior in the method example of the third aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the decoding apparatus includes: an obtaining unit, configured to obtain a code stream corresponding to a current image block and at least two of a reconstructed block, a prediction block, and a residual block of surrounding image blocks of the current image block; a processing unit, configured to obtain a probability vector of the current image block according to the at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks and a prediction mode probability model, where a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when predicting the current image block; and a decoding unit, configured to determine a target intra prediction mode of the current image block according to the code stream and the probability vector, and determine a prediction block of the current image block according to the target intra prediction mode. These modules can perform the corresponding functions in the method example of the third aspect; for details, refer to the detailed description in the method example, which is not repeated here.
In an eighth aspect, the present application provides a decoding apparatus. For its beneficial effects, refer to the description of the fourth aspect; details are not repeated here. The decoding apparatus has the function of implementing the behavior in the method example of the fourth aspect. The function may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing function. In a possible design, the decoding apparatus includes: an obtaining unit, configured to obtain a code stream corresponding to a current image block and surrounding image blocks of the current image block; a processing unit, configured to splice or concatenate the surrounding image blocks to obtain a first data block, perform a size transformation operation on the first data block to obtain a second data block, where the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and input the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, where a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when predicting the current image block; and a decoding unit, configured to determine a target intra prediction mode of the current image block according to the code stream and the probability vector, and determine a prediction block of the current image block according to the target intra prediction mode. These modules can perform the corresponding functions in the method example of the fourth aspect; for details, refer to the detailed description in the method example, which is not repeated here.
The method described in the first aspect of the present application may be performed by the apparatus described in the fifth aspect of the present application. Other features and implementations of the method described in the first aspect directly depend on the functionality and implementation of the apparatus described in the fifth aspect.

The method described in the second aspect of the present application may be performed by the apparatus described in the sixth aspect of the present application. Other features and implementations of the method described in the second aspect directly depend on the functionality and implementation of the apparatus described in the sixth aspect.

The method described in the third aspect of the present application may be performed by the apparatus described in the seventh aspect of the present application. Other features and implementations of the method described in the third aspect directly depend on the functionality and implementation of the apparatus described in the seventh aspect.

The method described in the fourth aspect of the present application may be performed by the apparatus described in the eighth aspect of the present application. Other features and implementations of the method described in the fourth aspect directly depend on the functionality and implementation of the apparatus described in the eighth aspect.
In a ninth aspect, the present application provides an encoder (20) including processing circuitry, where the encoder is configured to perform the method in any one or all of the possible embodiments of the first aspect and the second aspect.

In a tenth aspect, the present application provides a decoder (30) including processing circuitry, where the decoder is configured to perform the method in any one or all of the possible embodiments of the third aspect and the fourth aspect.

In an eleventh aspect, the present application provides an encoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the encoder to perform the method in any one or all of the possible embodiments of the first aspect or the second aspect.

In a twelfth aspect, the present application provides a decoder, including: one or more processors; and a non-transitory computer-readable storage medium coupled to the processors and storing a program for execution by the processors, where the program, when executed by the processors, causes the decoder to perform the method in any one or all of the possible embodiments of the third aspect or the fourth aspect.

In a thirteenth aspect, the present application provides a non-transitory computer-readable storage medium including program code that, when run on a computer device, performs the method in any one or all of the possible embodiments of the first aspect, the second aspect, the third aspect, and the fourth aspect.

In a fourteenth aspect, the present application provides a non-transitory computer-readable storage medium including a bitstream encoded according to the method in any one or all of the possible embodiments of the first aspect or the second aspect.

In a fifteenth aspect, the present application provides a computer program product including program code that, when run on a computer or a processor, performs the method in any one or all of the possible embodiments of the first aspect, the second aspect, the third aspect, and the fourth aspect.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, the drawings, and the claims.
Description of the Drawings

The accompanying drawings used in the embodiments of the present application are described below.

FIG. 1A is a block diagram of an example of a video coding system for implementing embodiments of the present application, where the system uses a neural network to encode or decode video images;

FIG. 1B is a block diagram of another example of a video coding system for implementing embodiments of the present application, where the video encoder and/or the video decoder uses a neural network to encode or decode video images;

FIG. 2 is a block diagram of an example of a video encoder for implementing embodiments of the present application, where the video encoder 20 uses a neural network to encode video images;

FIG. 3 is a block diagram of an example of a video decoder for implementing embodiments of the present application, where the video decoder 30 uses a neural network to decode video images;

FIG. 4 is a schematic block diagram of a video coding apparatus for implementing embodiments of the present application;

FIG. 5 is a schematic block diagram of a video coding apparatus for implementing embodiments of the present application;

FIG. 6 is a schematic diagram of a neural network structure used to obtain a probability vector of a current image block in an embodiment of the present application;

FIG. 7 is a flowchart of a method for encoding a target intra prediction mode of a current image block in an embodiment of the present application;

FIG. 8 is a flowchart of another method for encoding a target intra prediction mode of a current image block in an embodiment of the present application;

FIG. 9 is a schematic diagram of a plurality of intra prediction modes in an embodiment of the present application;

FIG. 10 is a schematic diagram of a splicing manner of a current image block in an embodiment of the present application;

FIG. 11-1 to FIG. 11-4 are schematic diagrams of four positional relationships of a current image block within the current frame in an embodiment of the present application;

FIG. 12 is a flowchart of an encoder encoding syntax elements related to the intra prediction mode of a current image block in an embodiment of the present application;

FIG. 13 is a flowchart of a method for decoding a target intra prediction mode of a current image block in an embodiment of the present application;

FIG. 14 is a flowchart of another method for decoding a target intra prediction mode of a current image block in an embodiment of the present application;

FIG. 15 is a schematic block diagram of an encoding apparatus in an embodiment of the present application;

FIG. 16 is a schematic block diagram of a decoding apparatus in an embodiment of the present application.
Detailed Description of Embodiments
本申请实施例提供一种基于AI的视频图像压缩技术,尤其是提供一种基于神经网络的帧内预测模式的编解码技术,以改进传统的混合视频编解码系统。Embodiments of the present application provide an AI-based video image compression technology, and in particular, provide a neural network-based intra-frame prediction mode encoding and decoding technology to improve traditional hybrid video encoding and decoding systems.
视频编码通常是指处理形成视频或视频序列的图像序列。在视频编码领域,术语“图像(picture)”、“帧(frame)”或“图片(image)”可以用作同义词。视频编码(或通常称为编码)包括视频编码和视频解码两部分。视频编码在源侧执行,通常包括处理(例如,压缩)原始视频图像以减少表示该视频图像所需的数据量(从而更高效存储和/或传输)。视频解码在目的地侧执行,通常包括相对于编码器作逆处理,以重建视频图像。实施例涉及的视频图像(或通常称为图像)的“编码”应理解为视频图像或视频序列的“编码”或“解码”。编码部分和解码部分也合称为编解码(编码和解码,CODEC)。Video coding generally refers to the processing of sequences of images that form a video or video sequence. In the field of video coding, the terms "picture", "frame" or "image" may be used as synonyms. Video encoding (or commonly referred to as encoding) includes two parts: video encoding and video decoding. Video encoding is performed on the source side and typically involves processing (eg, compressing) the original video image to reduce the amount of data required to represent the video image (and thus store and/or transmit more efficiently). Video decoding is performed on the destination side and typically involves inverse processing relative to the encoder to reconstruct the video image. The "encoding" of a video image (or commonly referred to as an image) in relation to the embodiments should be understood as the "encoding" or "decoding" of a video image or a video sequence. The encoding part and the decoding part are also collectively referred to as codec (encoding and decoding, CODEC).
In the case of lossless video coding, the original video image can be reconstructed, that is, the reconstructed video image has the same quality as the original video image (assuming no transmission loss or other data loss during storage or transmission). In the case of lossy video coding, further compression is performed, for example through quantization, to reduce the amount of data required to represent the video image, and the decoder side cannot completely reconstruct the video image, that is, the quality of the reconstructed video image is lower or worse than that of the original video image.
Several video coding standards belong to "lossy hybrid video codecs" (that is, they combine spatial and temporal prediction in the pixel domain with 2D transform coding in the transform domain for applying quantization). Each image of a video sequence is usually partitioned into a set of non-overlapping blocks, and coding is usually performed at the block level. In other words, the encoder usually processes, that is, encodes, video at the block (video block) level, for example, generates a prediction block through spatial (intra) prediction and temporal (inter) prediction, subtracts the prediction block from the current block (the block currently being processed or to be processed) to obtain a residual block, and transforms and quantizes the residual block in the transform domain to reduce the amount of data to be transmitted (compressed), while the decoder side applies inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. In addition, the encoder needs to repeat the processing steps of the decoder, so that the encoder and the decoder generate the same prediction (for example, intra prediction and inter prediction) and/or reconstructed pixels for processing, that is, encoding, subsequent blocks.
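The block-level processing described in this paragraph can be summarised by the following schematic Python sketch. It is not tied to any particular standard: the flat quantization step, the toy "prediction" and the use of SciPy's floating-point DCT are illustrative assumptions (real codecs use integer transforms and far more elaborate prediction and entropy coding).

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, prediction, qstep=16.0):
    """Schematic hybrid coding of one block: predict, form residual, transform, quantize."""
    residual = block - prediction                       # pixel-domain residual
    coeffs = dctn(residual, type=2, norm='ortho')       # 2-D transform
    return np.round(coeffs / qstep)                     # quantized levels (the lossy step)

def reconstruct_block(levels, prediction, qstep=16.0):
    """Inverse processing shared by the decoder and the encoder's built-in decoder."""
    coeffs = levels * qstep                             # inverse quantization
    residual = idctn(coeffs, type=2, norm='ortho')      # inverse transform
    return np.clip(prediction + residual, 0, 255)       # reconstructed block

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(np.float32)
prediction = np.full((8, 8), block.mean(), dtype=np.float32)   # toy "prediction"
recon = reconstruct_block(encode_block(block, prediction), prediction)
print(float(np.abs(recon - block).mean()))              # small but non-zero distortion
```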
在以下译码系统10的实施例中,编码器20和解码器30根据图1A至图3进行描述。In the following embodiments of the decoding system 10, the encoder 20 and the decoder 30 are described with respect to FIGS. 1A-3.
图1A为示例性译码系统10的示意性框图,例如可以利用本申请技术的视频译码系统10(或简称为译码系统10)。视频译码系统10中的视频编码器20(或简称为编码器20)和视频解码器30(或简称为解码器30)代表可用于根据本申请中描述的各种示例执行各技术的设备等。1A is a schematic block diagram of an exemplary coding system 10, such as a video coding system 10 (or simply coding system 10) that may utilize the techniques of this application. Video encoder 20 (or encoder 20 for short) and video decoder 30 (or decoder 30 for short) in video coding system 10 represent devices, etc. that may be used to perform techniques in accordance with the various examples described in this application .
如图1A所示,译码系统10包括源设备12,源设备12用于将编码图像等编码图像数据21提供给用于对编码图像数据21进行解码的目的设备14。As shown in FIG. 1A , the decoding system 10 includes a source device 12 for providing encoded image data 21 such as encoded images to a destination device 14 for decoding the encoded image data 21 .
源设备12包括编码器20,另外即可选地,可包括图像源16、图像预处理器等预处理器(或预处理单元)18、通信接口(或通信单元)22。The source device 12 includes an encoder 20 and, alternatively, an image source 16 , a preprocessor (or preprocessing unit) 18 such as an image preprocessor, and a communication interface (or communication unit) 22 .
The image source 16 may include or be any type of image capture device for capturing a real-world image or the like, and/or any type of image generation device, for example, a computer graphics processor for generating a computer-animated image, or any type of device for obtaining and/or providing a real-world image or a computer-generated image (for example, screen content, a virtual reality (VR) image, and/or any combination thereof, for example, an augmented reality (AR) image). The image source may be any type of memory or storage that stores any of the foregoing images.
为了区分预处理器(或预处理单元)18执行的处理,图像(或图像数据)17也可称为原始图像(或原始图像数据)17。To distinguish the processing performed by the preprocessor (or preprocessing unit) 18 , the image (or image data) 17 may also be referred to as the original image (or original image data) 17 .
预处理器18用于接收(原始)图像数据17,并对图像数据17进行预处理,得到预处理图像(或预处理图像数据)19。例如,预处理器18执行的预处理可包括修剪、颜色格式转换(例如从RGB转换为YCbCr)、调色或去噪。可以理解的是,预处理单元18可以为可选组件。The preprocessor 18 is used to receive the (raw) image data 17 and preprocess the image data 17 to obtain a preprocessed image (or preprocessed image data) 19 . For example, the preprocessing performed by the preprocessor 18 may include trimming, color format conversion (eg, from RGB to YCbCr), toning, or denoising. It is understood that the preprocessing unit 18 may be an optional component.
视频编码器(或编码器)20用于接收预处理图像数据19并提供编码图像数据21(下面将根据图2等进一步描述)。A video encoder (or encoder) 20 is used to receive preprocessed image data 19 and to provide encoded image data 21 (described further below with respect to FIG. 2 etc.).
源设备12中的通信接口22可用于:接收编码图像数据21并通过通信信道13向目的设备14等另一设备或任何其它设备发送编码图像数据21(或其它任意处理后的版本),以便存储或直接重建。The communication interface 22 in the source device 12 can be used to: receive the encoded image data 21 and send the encoded image data 21 (or any other processed version) over the communication channel 13 to another device such as the destination device 14 or any other device for storage or rebuild directly.
目的设备14包括解码器30,另外即可选地,可包括通信接口(或通信单元)28、后处理器(或后处理单元)32和显示设备34。The destination device 14 includes a decoder 30 and may additionally, alternatively, include a communication interface (or communication unit) 28 , a post-processor (or post-processing unit) 32 and a display device 34 .
目的设备14中的通信接口28用于直接从源设备12或从存储设备等任意其它源设备接收编码图像数据21(或其它任意处理后的版本),例如,存储设备为编码图像数据存储设备,并将编码图像数据21提供给解码器30。The communication interface 28 in the destination device 14 is used to receive the encoded image data 21 (or any other processed version) directly from the source device 12 or from any other source device such as a storage device, for example, the storage device is an encoded image data storage device, The encoded image data 21 is supplied to the decoder 30 .
通信接口22和通信接口28可用于通过源设备12与目的设备14之间的直连通信链路,例如直接有线或无线连接等,或者通过任意类型的网络,例如有线网络、无线网络或其任意组合、任意类型的私网和公网或其任意类型的组合,发送或接收编码图像数据(或编码数据)21。Communication interface 22 and communication interface 28 may be used through a direct communication link between source device 12 and destination device 14, such as a direct wired or wireless connection, etc., or through any type of network, such as a wired network, a wireless network, or any Combination, any type of private network and public network, or any type of combination, send or receive encoded image data (or encoded data) 21 .
例如,通信接口22可用于将编码图像数据21封装为报文等合适的格式,和/或使用任意类型的传输编码或处理来处理所述编码后的图像数据,以便在通信链路或通信网络上进行传 输。For example, the communication interface 22 may be used to encapsulate the encoded image data 21 into a suitable format such as a message, and/or use any type of transfer encoding or processing to process the encoded image data for transmission over a communication link or communication network transfer on.
通信接口28与通信接口22对应,例如,可用于接收传输数据,并使用任意类型的对应传输解码或处理和/或解封装对传输数据进行处理,得到编码图像数据21。The communication interface 28 corresponds to the communication interface 22 and may be used, for example, to receive transmission data and process the transmission data using any type of corresponding transmission decoding or processing and/or decapsulation to obtain encoded image data 21 .
通信接口22和通信接口28均可配置为如图1A中从源设备12指向目的设备14的对应通信信道13的箭头所指示的单向通信接口,或双向通信接口,并且可用于发送和接收消息等,以建立连接,确认并交换与通信链路和/或例如编码后的图像数据传输等数据传输相关的任何其它信息,等等。Both the communication interface 22 and the communication interface 28 can be configured as a one-way communication interface as indicated by the arrows from the source device 12 to the corresponding communication channel 13 of the destination device 14 in FIG. 1A, or a two-way communication interface, and can be used to send and receive messages etc. to establish a connection, acknowledge and exchange any other information related to a communication link and/or data transfer such as encoded image data transfer, etc.
视频解码器(或解码器)30用于接收编码图像数据21并提供解码图像数据(或解码图像数据)31(下面将根据图3等进一步描述)。A video decoder (or decoder) 30 is used to receive encoded image data 21 and to provide decoded image data (or decoded image data) 31 (described further below with reference to FIG. 3 etc.).
后处理器32用于对解码后的图像等解码图像数据31(也称为重建后的图像数据)进行后处理,得到后处理后的图像等后处理图像数据33。后处理单元32执行的后处理可以包括例如颜色格式转换(例如从YCbCr转换为RGB)、调色、修剪或重采样,或者用于产生供显示设备34等显示的解码图像数据31等任何其它处理。The post-processor 32 is configured to perform post-processing on the decoded image data 31 (also referred to as reconstructed image data) such as a decoded image to obtain post-processed image data 33 such as a post-processed image. Post-processing performed by post-processing unit 32 may include, for example, color format conversion (eg, from YCbCr to RGB), toning, trimming, or resampling, or any other processing used to generate decoded image data 31 for display by display device 34, etc. .
显示设备34用于接收后处理图像数据33,以向用户或观看者等显示图像。显示设备34可以为或包括任意类型的用于表示重建后图像的显示器,例如,集成或外部显示屏或显示器。例如,显示屏可包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微型LED显示器、硅基液晶显示器(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任意类型的其它显示屏。A display device 34 is used to receive post-processed image data 33 to display the image to a user or viewer or the like. Display device 34 may be or include any type of display for representing the reconstructed image, eg, an integrated or external display screen or display. For example, the display screen may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display ), digital light processor (DLP), or any other type of display.
译码系统10还包括训练引擎25,训练引擎25用于训练编码器20(尤其是编码器20中的帧内预测单元254)或解码器30(尤其是解码器30中的帧内预测单元344)以处理根据当前图像块的周围图像块拼接或级联得到的第一数据块或第二数据块,以生成当前图像块的概率向量。The coding system 10 also includes a training engine 25 for training the encoder 20 (in particular the intra prediction unit 254 in the encoder 20 ) or the decoder 30 (in particular the intra prediction unit 344 in the decoder 30 ) ) to process the first data block or the second data block obtained by splicing or concatenating the surrounding image blocks of the current image block to generate a probability vector of the current image block.
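The specific network structure trained by the training engine 25 is described later with reference to FIG. 6 and is not repeated here. Purely to illustrate the interface of such a prediction mode probability model, that is, a fixed-size data block in and a probability vector over the intra prediction modes out, a toy PyTorch sketch is shown below; the layer sizes, the assumed 67 intra prediction modes and the softmax output are illustrative assumptions, not the trained model of this application.

```python
import torch
import torch.nn as nn

NUM_MODES = 67   # assumed number of intra prediction modes

class PredictionModeProbabilityModel(nn.Module):
    """Toy model: maps a second data block (1 x H x W) to a probability vector."""
    def __init__(self, num_modes=NUM_MODES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),             # copes with the few allowed target sizes
        )
        self.head = nn.Linear(32, num_modes)

    def forward(self, x):                         # x: (batch, 1, H, W)
        z = self.features(x).flatten(1)
        return torch.softmax(self.head(z), dim=-1)    # one probability per mode, sums to 1

model = PredictionModeProbabilityModel()
second_block = torch.rand(1, 1, 8, 8)             # e.g. an 8x8 second data block
prob_vector = model(second_block)
print(prob_vector.shape, float(prob_vector.sum()))    # torch.Size([1, 67]) ~1.0
```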
尽管图1A示出了源设备12和目的设备14作为独立的设备,但设备实施例也可以同时包括源设备12和目的设备14或同时包括源设备12和目的设备14的功能,即同时包括源设备12或对应功能和目的设备14或对应功能。在这些实施例中,源设备12或对应功能和目的设备14或对应功能可以使用相同硬件和/或软件或通过单独的硬件和/或软件或其任意组合来实现。Although FIG. 1A shows source device 12 and destination device 14 as separate devices, device embodiments may include both source device 12 and destination device 14 or the functions of both source device 12 and destination device 14, ie, include both source device 12 and destination device 14. Device 12 or corresponding function and destination device 14 or corresponding function. In these embodiments, source device 12 or corresponding functionality and destination device 14 or corresponding functionality may be implemented using the same hardware and/or software or by separate hardware and/or software, or any combination thereof.
根据描述,图1A所示的源设备12和/或目的设备14中的不同单元或功能的存在和(准确)划分可能根据实际设备和应用而有所不同,这对技术人员来说是显而易见的。As will be apparent to the skilled person from the description, the existence and (exact) division of different units or functions in the source device 12 and/or destination device 14 shown in FIG. 1A may vary depending on the actual device and application .
编码器20(例如视频编码器20)或解码器30(例如视频解码器30)或两者都可通过如图1B所示的处理电路实现,例如一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件、视频编码专用处理器或其任意组合。编码器20可以通过处理电路46实现,以包含参照图2编码器20论述的各种模块和/或本文描述的任何其它编码器系统或子系统。解码器30可以通过处理电路46实现,以包含参照图3解码器30论述的各种模块和/或本文描述的任何其它解码器系统或子系统。所述处理电路46可用于执行下文论述的各种操作。如图5所示,如果部分技术在软件中实施,则设备可以将软件的指令存储在合适的非瞬时性计算机可读存储介质中,并且使用一个或多个处理器在硬件中执行指令,从而执行本发明技术。视频编码器20和视频解码器30中的其中一个可作为组合编解码器(encoder/decoder,CODEC)的一部分集成在单个设备中,如图1B 所示。Encoder 20 (eg, video encoder 20 ) or decoder 30 (eg, video decoder 30 ) or both may be implemented by processing circuitry as shown in FIG. 1B , eg, one or more microprocessors, digital signal processors (digital signal processor, DSP), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), discrete logic, hardware, special-purpose processor for video encoding, or any combination thereof . Encoder 20 may be implemented by processing circuitry 46 to include the various modules discussed with reference to encoder 20 of FIG. 2 and/or any other encoder system or subsystem described herein. Decoder 30 may be implemented by processing circuitry 46 to include the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein. The processing circuitry 46 may be used to perform various operations discussed below. As shown in FIG. 5, if parts of the techniques are implemented in software, a device may store the instructions of the software in a suitable non-transitory computer-readable storage medium and execute the instructions in hardware using one or more processors, thereby Implement the techniques of the present invention. One of the video encoder 20 and the video decoder 30 may be integrated in a single device as part of a combined codec (encoder/decoder, CODEC), as shown in FIG. 1B .
源设备12和目的设备14可包括各种设备中的任一种,包括任意类型的手持设备或固定设备,例如,笔记本电脑或膝上型电脑、手机、智能手机、平板或平板电脑、相机、台式计算机、机顶盒、电视机、显示设备、数字媒体播放器、视频游戏控制台、视频流设备(例如,内容业务服务器或内容分发服务器)、广播接收设备、广播发射设备,等等,并可以不使用或使用任意类型的操作系统。在一些情况下,源设备12和目的设备14可配备用于无线通信的组件。因此,源设备12和目的设备14可以是无线通信设备。Source device 12 and destination device 14 may include any of a variety of devices, including any type of handheld or stationary device, such as a laptop or laptop, cell phone, smartphone, tablet or tablet, camera, Desktop computers, set-top boxes, televisions, display devices, digital media players, video game consoles, video streaming devices (eg, content service servers or content distribution servers), broadcast receiving equipment, broadcast transmitting equipment, etc., and may not Use or use any type of operating system. In some cases, source device 12 and destination device 14 may be equipped with components for wireless communication. Thus, source device 12 and destination device 14 may be wireless communication devices.
In some cases, the video coding system 10 shown in FIG. 1A is merely an example, and the techniques provided in this application may be applicable to video coding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between an encoding device and a decoding device. In other examples, data is retrieved from a local memory, sent over a network, and so on. A video encoding device may encode data and store the data in a memory, and/or a video decoding device may retrieve data from the memory and decode the data. In some examples, encoding and decoding are performed by devices that do not communicate with each other but simply encode data to a memory and/or retrieve data from the memory and decode the data.
图1B是根据一示例性实施例的包含图2的视频编码器20和/或图3的视频解码器30的视频译码系统40的实例的说明图。视频译码系统40可以包含成像设备41、视频编码器20、视频解码器30(和/或藉由处理电路46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个内存存储器44和/或显示设备45。1B is an illustrative diagram of an example of a video coding system 40 including video encoder 20 of FIG. 2 and/or video decoder 30 of FIG. 3, according to an exemplary embodiment. Video coding system 40 may include imaging device 41, video encoder 20, video decoder 30 (and/or video encoder/decoder implemented by processing circuit 46), antenna 42, one or more processors 43, a or multiple memory stores 44 and/or display devices 45 .
如图1B所示,成像设备41、天线42、处理电路46、视频编码器20、视频解码器30、处理器43、内存存储器44和/或显示设备45能够互相通信。在不同实例中,视频译码系统40可以只包含视频编码器20或只包含视频解码器30。As shown in Figure IB, imaging device 41, antenna 42, processing circuit 46, video encoder 20, video decoder 30, processor 43, memory storage 44, and/or display device 45 can communicate with each other. In different examples, video coding system 40 may include only video encoder 20 or only video decoder 30 .
In some examples, the antenna 42 may be used to transmit or receive an encoded bitstream of video data. In addition, in some examples, the display device 45 may be used to present video data. The processing circuit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. The video coding system 40 may also include an optional processor 43, which may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. In addition, the memory storage 44 may be any type of memory, for example, a volatile memory (for example, a static random access memory (SRAM), a dynamic random access memory (DRAM), or the like) or a non-volatile memory (for example, a flash memory or the like). In a non-limiting example, the memory storage 44 may be implemented by a cache memory. In other examples, the processing circuit 46 may include a memory (for example, a cache or the like) for implementing an image buffer or the like.
在一些实例中,通过逻辑电路实施的视频编码器20可以包含(例如,通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频编码器20,以实施参照图2和/或本文中所描述的任何其它编码器系统或子系统所论述的各种模块。逻辑电路可以用于执行本文所论述的各种操作。In some examples, video encoder 20 implemented by logic circuitry may include an image buffer (eg, implemented by processing circuitry 46 or memory memory 44 ) and a graphics processing unit (eg, implemented by processing circuitry 46 ). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video encoder 20 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein. Logic circuits may be used to perform the various operations discussed herein.
在一些实例中,视频解码器30可以以类似方式通过处理电路46实施,以实施参照图3的视频解码器30和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。在一些实例中,逻辑电路实施的视频解码器30可以包含(通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频解码器30,以实 施参照图3和/或本文中所描述的任何其它解码器系统或子系统所论述的各种模块。In some examples, video decoder 30 may be implemented by processing circuitry 46 in a similar manner to implement various of the types discussed with reference to video decoder 30 of FIG. 3 and/or any other decoder systems or subsystems described herein. module. In some examples, logic circuit-implemented video decoder 30 may include an image buffer (implemented by processing circuit 46 or memory memory 44) and a graphics processing unit (eg, implemented by processing circuit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video decoder 30 implemented by processing circuitry 46 to implement the various modules discussed with reference to Figure 3 and/or any other decoder systems or subsystems described herein.
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的视频解码器30。显示设备45用于呈现视频帧。In some examples, antenna 42 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data, indicators, index values, mode selection data, etc., as discussed herein related to encoded video frames, such as data related to encoded partitions (eg, transform coefficients or quantized transform coefficients). , (as discussed) optional indicators, and/or data defining the encoding split). Video coding system 40 may also include video decoder 30 coupled to antenna 42 for decoding the encoded bitstream. Display device 45 is used to present video frames.
应理解,本申请实施例中对于参考视频编码器20所描述的实例,视频解码器30可以用于执行相反过程。关于信令语法元素,视频解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,视频编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,视频解码器30可以解析这种语法元素,并相应地解码相关视频数据。It should be understood that for the examples described with reference to the video encoder 20 in the embodiments of the present application, the video decoder 30 may be used to perform the opposite process. With regard to signaling syntax elements, video decoder 30 may be operable to receive and parse such syntax elements, decoding the associated video data accordingly. In some examples, video encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, video decoder 30 may parse such syntax elements and decode related video data accordingly.
For ease of description, the embodiments of the present invention are described with reference to the Versatile Video Coding (VVC) reference software or High-Efficiency Video Coding (HEVC) developed by the Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Motion Picture Experts Group (MPEG). A person of ordinary skill in the art understands that the embodiments of the present invention are not limited to HEVC or VVC.
Encoders and Encoding Methods
图2为用于实现本申请技术的视频编码器20的示例的示意性框图。在图2的示例中,视频编码器20包括输入端(或输入接口)201、残差计算单元204、变换处理单元206、量化单元208、反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、模式选择单元260、熵编码单元270和输出端(或输出接口)272。模式选择单元260可包括帧间预测单元244、帧内预测单元254和分割单元262。帧间预测单元244可包括运动估计单元和运动补偿单元(未示出)。图2所示的视频编码器20也可称为混合型视频编码器或基于混合型视频编解码器的视频编码器。FIG. 2 is a schematic block diagram of an example of a video encoder 20 for implementing the techniques of this application. In the example of FIG. 2 , the video encoder 20 includes an input terminal (or input interface) 201 , a residual calculation unit 204 , a transform processing unit 206 , a quantization unit 208 , an inverse quantization unit 210 , an inverse transform processing unit 212 , and a reconstruction unit 214 , a loop filter 220 , a decoded picture buffer (DPB) 230 , a mode selection unit 260 , an entropy encoding unit 270 and an output terminal (or output interface) 272 . Mode selection unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and partition unit 262 . Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown). The video encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a hybrid video codec-based video encoder.
参见图2,帧内预测模块包括经过训练的目标模型(亦称为神经网络),该神经网络用于处理根据当前图像块的周围图像块拼接或级联得到的第一数据块或第二数据块,以生成当前图像块的概率向量。Referring to FIG. 2 , the intra-frame prediction module includes a trained target model (also called a neural network), which is used to process the first data block or the second data obtained by splicing or concatenating the surrounding image blocks of the current image block or the second data block. block to generate a probability vector for the current image block.
The residual calculation unit 204, the transform processing unit 206, the quantization unit 208, and the mode selection unit 260 form the forward signal path of the encoder 20, while the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, the inter prediction unit 244, and the intra prediction unit 254 form the backward signal path of the encoder, where the backward signal path of the encoder 20 corresponds to the signal path of the decoder (see the decoder 30 in FIG. 3). The inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer 230, the inter prediction unit 244, and the intra prediction unit 254 also form the "built-in decoder" of the video encoder 20.
Image and image segmentation (images and blocks)
The encoder 20 may be configured to receive, through the input 201 or the like, an image (or image data) 17, for example, an image in a sequence of images forming a video or a video sequence. The received image or image data may also be a preprocessed image (or preprocessed image data) 19. For simplicity, the following description uses the image 17. The image 17 may also be referred to as the current image or the image to be encoded (in particular when, in video coding, the current image is distinguished from other images, for example, previously encoded and/or decoded images of the same video sequence, that is, the video sequence that also includes the current image).
(数字)图像为或可以视为具有强度值的像素点组成的二维阵列或矩阵。阵列中的像素点也可以称为像素(pixel或pel)(图像元素的简称)。阵列或图像在水平方向和垂直方向(或轴线)上的像素点数量决定了图像的大小和/或分辨率。为了表示颜色,通常采用三个颜色分量,即图像可以表示为或包括三个像素点阵列。在RBG格式或颜色空间中,图像包括对应的红色、绿色和蓝色像素点阵列。但是,在视频编码中,每个像素通常以亮度/色度格式或颜色空间表示,例如YCbCr,包括Y指示的亮度分量(有时也用L表示)以及Cb、Cr表示的两个色度分量。亮度(luma)分量Y表示亮度或灰度水平强度(例如,在灰度等级图像中两者相同),而两个色度(chrominance,简写为chroma)分量Cb和Cr表示色度或颜色信息分量。相应地,YCbCr格式的图像包括亮度像素点值(Y)的亮度像素点阵列和色度值(Cb和Cr)的两个色度像素点阵列。RGB格式的图像可以转换或变换为YCbCr格式,反之亦然,该过程也称为颜色变换或转换。如果图像是黑白的,则该图像可以只包括亮度像素点阵列。相应地,图像可以为例如单色格式的亮度像素点阵列或4:2:0、4:2:2和4:4:4彩色格式的亮度像素点阵列和两个相应的色度像素点阵列。A (digital) image is or can be viewed as a two-dimensional array or matrix of pixel points with intensity values. The pixels in the array may also be called pixels or pels (short for picture elements). The number of pixels in the array or image in the horizontal and vertical directions (or axes) determines the size and/or resolution of the image. In order to represent color, three color components are usually used, that is, an image can be represented as or include an array of three pixel points. In RBG format or color space, an image includes an array of corresponding red, green and blue pixel points. However, in video coding, each pixel is usually represented in a luma/chroma format or color space, such as YCbCr, including a luma component denoted by Y (and sometimes L) and two chroma components denoted by Cb and Cr. The luminance (luma) component Y represents the luminance or gray level intensity (eg, both are the same in a grayscale image), while the two chrominance (chroma) components Cb and Cr represent the chrominance or color information components . Correspondingly, an image in YCbCr format includes a luminance pixel array of luminance pixel value (Y) and two chrominance pixel arrays of chrominance values (Cb and Cr). Images in RGB format can be converted or transformed to YCbCr format and vice versa, the process is also known as color transformation or conversion. If the image is black and white, the image may only include an array of luminance pixels. Correspondingly, the image may be, for example, a luminance pixel array in monochrome format or a luminance pixel array and two corresponding chrominance pixel arrays in 4:2:0, 4:2:2 and 4:4:4 color formats .
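As a concrete example of the RGB-to-YCbCr conversion mentioned above, the following sketch uses the ITU-R BT.601 full-range coefficients with an 8-bit chroma offset of 128; the choice of matrix is an illustrative assumption, since the paragraph above does not prescribe a particular conversion.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to YCbCr (BT.601, full range, illustrative)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b          # luma component Y
    cb = 128.0 + 0.564 * (b - y)                     # blue-difference chroma Cb
    cr = 128.0 + 0.713 * (r - y)                     # red-difference chroma Cr
    return np.stack([y, cb, cr], axis=-1)

pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)    # a single pure-red pixel
print(rgb_to_ycbcr(pixel))                            # approx. [[[76.2, 85.0, 255.5]]]
```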
在一个实施例中,视频编码器20的实施例可包括图像分割单元(图2中未示出),用于将图像17分割成多个(通常不重叠)图像块203。这些块在H.265/HEVC和VVC标准中也可以称为根块、宏块(H.264/AVC)或编码树块(Coding Tree Block,CTB),或编码树单元(Coding Tree Unit,CTU)。分割单元可用于对视频序列中的所有图像使用相同的块大小和使用限定块大小的对应网格,或在图像或图像子集或图像组之间改变块大小,并将每个图像分割成对应块。In one embodiment, an embodiment of the video encoder 20 may include an image segmentation unit (not shown in FIG. 2 ) for segmenting the image 17 into a plurality of (generally non-overlapping) image blocks 203 . These blocks may also be referred to as root blocks, macroblocks (H.264/AVC) or Coding Tree Blocks (CTBs), or Coding Tree Units (CTUs) in the H.265/HEVC and VVC standards ). The segmentation unit can be used to use the same block size for all images in a video sequence and use a corresponding grid that defines the block size, or to vary the block size between images or subsets of images or groups of images, and to segment each image into corresponding piece.
在其它实施例中,视频编码器可用于直接接收图像17的块203,例如,组成所述图像17的一个、几个或所有块。图像块203也可以称为当前图像块或待编码图像块。In other embodiments, the video encoder may be used to directly receive blocks 203 of the image 17 , eg, one, several or all of the blocks that make up the image 17 . The image block 203 may also be referred to as a current image block or an image block to be encoded.
Like the image 17, the image block 203 again is or can be regarded as a two-dimensional array or matrix of samples with intensity values (sample values), although smaller in dimension than the image 17. In other words, the block 203 may include one sample array (e.g. a luma array in the case of a monochrome image 17, or a luma array or a chroma array in the case of a color image), three sample arrays (e.g. one luma array and two chroma arrays in the case of a color image 17), or any other number and/or type of arrays depending on the color format applied. The numbers of samples in the horizontal and vertical directions (or axes) of the block 203 define the size of the block 203. Accordingly, a block may be an M×N (M columns by N rows) array of samples, an M×N array of transform coefficients, or the like.
In one embodiment, the video encoder 20 shown in FIG. 2 is configured to encode the image 17 block by block, e.g. to perform encoding and prediction for each block 203.
In one embodiment, the video encoder 20 shown in FIG. 2 may further be configured to partition and/or encode the image using slices (also referred to as video slices), where an image may be partitioned into or encoded using one or more (typically non-overlapping) slices. Each slice may include one or more blocks (e.g. coding tree units, CTUs) or one or more block groups (e.g. tiles in the H.265/HEVC/VVC standards and bricks in the VVC standard).
In one embodiment, the video encoder 20 shown in FIG. 2 may further be configured to partition and/or encode the image using slices/tile groups (also referred to as video tile groups) and/or tiles (also referred to as video tiles), where an image may be partitioned into or encoded using one or more (typically non-overlapping) slices/tile groups, and each slice/tile group may include one or more blocks (e.g. CTUs) or one or more tiles, where each tile may be of rectangular or other shape and may include one or more complete or partial blocks (e.g. CTUs).
Residual calculation
The residual calculation unit 204 is configured to calculate a residual block 205 based on the image block 203 and a prediction block 265 (the prediction block 265 is described in detail later), e.g. by subtracting the sample values of the prediction block 265 from the sample values of the image block 203 sample by sample (pixel by pixel), to obtain the residual block 205 in the sample domain.
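A minimal sketch of this sample-wise subtraction is shown below; the function name and buffer layout are illustrative only and not part of the described encoder:

```cpp
#include <cstdint>
#include <vector>

// Sample-wise residual: residual = original block - prediction block.
// Residual samples are signed even when the input samples are unsigned.
std::vector<int16_t> computeResidual(const std::vector<uint8_t>& original,
                                     const std::vector<uint8_t>& prediction) {
    std::vector<int16_t> residual(original.size());
    for (size_t i = 0; i < original.size(); ++i) {
        residual[i] = static_cast<int16_t>(original[i]) -
                      static_cast<int16_t>(prediction[i]);
    }
    return residual;
}
```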
Transform
The transform processing unit 206 is configured to apply a transform, e.g. a discrete cosine transform (DCT) or a discrete sine transform (DST), to the sample values of the residual block 205 to obtain transform coefficients 207 in a transform domain. The transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
The transform processing unit 206 may be configured to apply integer approximations of DCT/DST, such as the transforms specified for H.265/HEVC. Compared to an orthonormal DCT transform, such integer approximations are typically scaled by a certain factor. In order to preserve the norm of the residual block that is processed by the forward and inverse transforms, additional scaling factors are applied as part of the transform process. The scaling factors are typically chosen based on certain constraints, e.g. the scaling factors being a power of two for shift operations, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, and so on. For example, specific scaling factors are specified for the inverse transform by the inverse transform processing unit 212 at the encoder 20 side (and for the corresponding inverse transform by, e.g., the inverse transform processing unit 312 at the decoder 30 side), and correspondingly, scaling factors for the forward transform may be specified by the transform processing unit 206 at the encoder 20 side.
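As a concrete illustration of an integer approximation of the DCT, the sketch below applies the 4×4 integer core transform used in H.264/AVC; it is shown here only to illustrate the principle, since H.265/HEVC and VVC use larger integer transform matrices with shift-based scaling, and the normalization factor is deliberately left out, being folded into the quantization scaling as described above:

```cpp
#include <array>
#include <cstdint>

// 4x4 integer core transform in the style of H.264/AVC: Y = C * X * C^T.
// The scaling that turns this into an approximation of an orthonormal DCT
// is not applied here; real codecs fold it into quantization/dequantization.
using Block4x4 = std::array<std::array<int32_t, 4>, 4>;

Block4x4 forwardCore4x4(const Block4x4& x) {
    static const int C[4][4] = {{1,  1,  1,  1},
                                {2,  1, -1, -2},
                                {1, -1, -1,  1},
                                {1, -2,  2, -1}};
    Block4x4 tmp{}, y{};
    // tmp = C * x
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                tmp[i][j] += C[i][k] * x[k][j];
    // y = tmp * C^T
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                y[i][j] += tmp[i][k] * C[j][k];
    return y;
}
```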
In one embodiment, the video encoder 20 (correspondingly, the transform processing unit 206) may be configured to output transform parameters, e.g. the type of the transform or transforms, for example directly or after being encoded or compressed by the entropy encoding unit 270, e.g. so that the video decoder 30 can receive and use the transform parameters for decoding.
Quantization
The quantization unit 208 is configured to quantize the transform coefficients 207, e.g. by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209. The quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209.
The quantization process may reduce the bit depth associated with some or all of the transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. The degree of quantization may be modified by adjusting a quantization parameter (QP). For example, for scalar quantization, different scaling may be applied to achieve finer or coarser quantization: smaller quantization step sizes correspond to finer quantization, whereas larger quantization step sizes correspond to coarser quantization. The applicable quantization step size may be indicated by a quantization parameter (QP); the quantization parameter may, for example, be an index into a predefined set of applicable quantization step sizes. For example, a small quantization parameter may correspond to fine quantization (a small quantization step size) and a large quantization parameter may correspond to coarse quantization (a large quantization step size), or vice versa. The quantization may include division by a quantization step size, and the corresponding or inverse dequantization, e.g. performed by the inverse quantization unit 210, may include multiplication by the quantization step size. Embodiments according to some standards, e.g. HEVC, may be configured to use the quantization parameter to determine the quantization step size. In general, the quantization step size may be calculated from the quantization parameter using a fixed-point approximation of an equation including division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which might otherwise be modified because of the scaling used in the fixed-point approximation of the equation for the quantization step size and the quantization parameter. In one example implementation, the scaling of the inverse transform and the dequantization may be combined. Alternatively, customized quantization tables may be used and signaled from the encoder to the decoder, e.g. in the bitstream. Quantization is a lossy operation, and the loss increases with increasing quantization step size.
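The sketch below illustrates the QP-to-step-size relationship and the divide/multiply structure described above, assuming an HEVC-like mapping Qstep ≈ 2^((QP-4)/6); real codecs replace the floating-point arithmetic with fixed-point multipliers and shifts (the scaling factors discussed above), and the rounding offset is an encoder choice:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdlib>

// Illustrative scalar quantization with an HEVC-like QP-to-step mapping:
//     Qstep(QP) = 2^((QP - 4) / 6)
double qStepFromQp(int qp) { return std::pow(2.0, (qp - 4) / 6.0); }

// Quantize a transform coefficient: divide by the step size and round.
int32_t quantize(int32_t coeff, int qp, double roundingOffset = 0.5) {
    const double qStep = qStepFromQp(qp);
    const int32_t level =
        static_cast<int32_t>(std::floor(std::abs(coeff) / qStep + roundingOffset));
    return coeff < 0 ? -level : level;
}

// Dequantize (inverse quantization): multiply by the step size.
int32_t dequantize(int32_t level, int qp) {
    return static_cast<int32_t>(std::lround(level * qStepFromQp(qp)));
}
```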
In one embodiment, the video encoder 20 (correspondingly, the quantization unit 208) may be configured to output a quantization parameter (QP), for example directly or after being encoded or compressed by the entropy encoding unit 270, e.g. so that the video decoder 30 can receive and use the quantization parameter for decoding.
Inverse quantization
The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain dequantized coefficients 211, e.g. to apply, based on or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208. The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 and correspond to the transform coefficients 207, although the dequantized coefficients 211 are typically not identical to the transform coefficients because of the loss caused by quantization.
Inverse transform
The inverse transform processing unit 212 is configured to apply the inverse transform of the transform applied by the transform processing unit 206, e.g. an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain a reconstructed residual block 213 (or corresponding dequantized coefficients 213) in the sample domain. The reconstructed residual block 213 may also be referred to as a transform block 213.
Reconstruction
The reconstruction unit 214 (e.g. the summer 214) is configured to add the transform block 213 (i.e. the reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, e.g. by adding the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
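A minimal sketch of this addition is shown below; the clipping to the valid sample range for a given bit depth is an implementation assumption added for illustration, not something stated in the paragraph above:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Reconstruction: reconstructed = clip(prediction + reconstructed residual),
// where the clip keeps sample values inside the valid range for the given
// bit depth (e.g. [0, 255] for 8-bit content).
std::vector<uint16_t> reconstruct(const std::vector<uint16_t>& prediction,
                                  const std::vector<int16_t>& residual,
                                  int bitDepth) {
    const int maxVal = (1 << bitDepth) - 1;
    std::vector<uint16_t> rec(prediction.size());
    for (size_t i = 0; i < prediction.size(); ++i) {
        const int v = static_cast<int>(prediction[i]) + residual[i];
        rec[i] = static_cast<uint16_t>(std::clamp(v, 0, maxVal));
    }
    return rec;
}
```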
Filtering
The loop filter unit 220 (or "loop filter" 220 for short) is configured to filter the reconstructed block 215 to obtain a filtered block 221, or in general to filter reconstructed samples to obtain filtered sample values, e.g. to smooth pixel transitions or otherwise improve the video quality. The loop filter unit 220 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof. For example, the loop filter unit 220 may include a deblocking filter, an SAO filter and an ALF filter, and the filtering process may be performed in the order deblocking filter, SAO filter, ALF filter. In another example, a process called luma mapping with chroma scaling (LMCS) (i.e. an adaptive in-loop reshaper) is added; this process is performed before deblocking. In another example, the deblocking filtering process may also be applied to internal sub-block edges, e.g. affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges. Although the loop filter unit 220 is shown in FIG. 2 as an in-loop filter, in other configurations the loop filter unit 220 may be implemented as a post-loop filter. The filtered block 221 may also be referred to as a filtered reconstructed block 221.
In one embodiment, the video encoder 20 (correspondingly, the loop filter unit 220) may be configured to output loop filter parameters (e.g. SAO filter parameters, ALF filter parameters or LMCS parameters), for example directly or after entropy encoding by the entropy encoding unit 270, e.g. so that the decoder 30 can receive and use the same or different loop filter parameters for decoding.
Decoded picture buffer
The decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by the video encoder 20 in encoding video data. The DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. The decoded picture buffer 230 may be configured to store one or more filtered blocks 221. The decoded picture buffer 230 may further be configured to store other previously filtered blocks, e.g. previously reconstructed and filtered blocks 221, of the same current image or of different images, e.g. previously reconstructed images, and may provide complete previously reconstructed, i.e. decoded, images (and the corresponding reference blocks and samples) and/or a partially reconstructed current image (and the corresponding reference blocks and samples), for example for inter prediction. The decoded picture buffer 230 may also be configured to store one or more unfiltered reconstructed blocks 215, or in general unfiltered reconstructed samples, e.g. reconstructed blocks 215 that are not filtered by the loop filter unit 220, or reconstructed blocks or samples that have not undergone any other processing.
Mode selection (partitioning and prediction)
The mode selection unit 260 includes a partitioning unit 262, an inter prediction unit 244 and an intra prediction unit 254, and is configured to receive or obtain original image data, e.g. an original block 203 (the current block 203 of the current image 17), and reconstructed image data, e.g. filtered and/or unfiltered reconstructed samples or blocks of the same (current) image and/or of one or more previously decoded images, from the decoded picture buffer 230 or other buffers (e.g. a line buffer, not shown in the figure). The reconstructed image data is used as reference image data required for prediction, e.g. inter prediction or intra prediction, to obtain a prediction block 265 or predictor 265.
The mode selection unit 260 may be configured to determine or select a partitioning for the current block (including no partitioning) and a prediction mode (e.g. an intra or inter prediction mode), and to generate the corresponding prediction block 265, which is used for the calculation of the residual block 205 and for the reconstruction of the reconstructed block 215.
In one embodiment, the mode selection unit 260 may be configured to select the partitioning and the prediction mode (e.g. from the prediction modes supported by or available to the mode selection unit 260) that provide the best match, i.e. the minimum residual (a minimum residual means better compression for transmission or storage), or a minimum signaling overhead (a minimum signaling overhead means better compression for transmission or storage), or that consider or balance both. The mode selection unit 260 may be configured to determine the partitioning and the prediction mode based on rate distortion optimization (RDO), i.e. to select the prediction mode that provides the minimum rate distortion cost. Terms such as "best", "lowest" and "optimal" in this context do not necessarily refer to an overall "best", "lowest" or "optimal" result, but may also refer to the fulfillment of a termination or selection criterion; for example, a value exceeding or falling below a threshold or another constraint may lead to a "suboptimal selection" while reducing complexity and processing time.
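Rate distortion optimization is commonly expressed as minimizing a Lagrangian cost J = D + λ·R over the candidate choices. The sketch below illustrates this selection loop; the CandidateResult structure and the way distortion and rate are obtained are assumptions made only for illustration:

```cpp
#include <limits>
#include <vector>

// Rate-distortion optimized mode selection in its simplest form: evaluate
// each candidate, compute the cost J = D + lambda * R (distortion plus
// lambda times the rate in bits), and keep the candidate with the lowest cost.
struct CandidateResult {
    int mode;          // candidate prediction mode / partitioning choice
    double distortion; // e.g. SSE between original and reconstructed block
    double rateBits;   // estimated number of bits to signal this choice
};

int selectBestMode(const std::vector<CandidateResult>& candidates, double lambda) {
    double bestCost = std::numeric_limits<double>::max();
    int bestMode = -1;
    for (const auto& c : candidates) {
        const double cost = c.distortion + lambda * c.rateBits;
        if (cost < bestCost) {
            bestCost = cost;
            bestMode = c.mode;
        }
    }
    return bestMode;
}
```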
In other words, the partitioning unit 262 may be configured to partition an image of the video sequence into a sequence of coding tree units (CTUs), and a CTU 203 may be further partitioned into smaller block parts or sub-blocks (which again form blocks), for example by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning or triple-tree (TT) partitioning, or any combination thereof, and to perform, for example, prediction for each of the block parts or sub-blocks, where the mode selection includes selecting the tree structure of the partitioned block 203 and selecting the prediction mode applied to each of the block parts or sub-blocks.
The partitioning (e.g. performed by the partitioning unit 262) and the prediction processing (e.g. performed by the inter prediction unit 244 and the intra prediction unit 254) performed by the video encoder 20 are described in more detail below.
Partitioning
The partitioning unit 262 may partition (or split) a coding tree unit 203 into smaller parts, e.g. smaller blocks of square or rectangular shape. For an image that has three sample arrays, a CTU consists of an N×N block of luma samples together with two corresponding blocks of chroma samples. The maximum allowed size of the luma block in a CTU is specified as 128×128 in the Versatile Video Coding (VVC) standard under development, but may be specified in the future as a value different from 128×128, e.g. 256×256. The CTUs of an image may be clustered/grouped into slices/tile groups, tiles or bricks. A tile covers a rectangular region of an image, and a tile may be divided into one or more bricks. A brick consists of a number of CTU rows within a tile. A tile that is not partitioned into multiple bricks may also be referred to as a brick; however, a brick that is a true subset of a tile is not referred to as a tile. VVC supports two tile group modes, namely the raster-scan slice/tile group mode and the rectangular slice mode. In the raster-scan tile group mode, a slice/tile group contains a sequence of tiles in the tile raster scan of the image. In the rectangular slice mode, a slice contains a number of bricks of the image that collectively form a rectangular region of the image, and the bricks within the rectangular slice are ordered in the brick raster scan order of the slice. These smaller blocks (which may also be referred to as sub-blocks) may be further partitioned into even smaller parts. This is also referred to as tree partitioning or hierarchical tree partitioning, where a root block, e.g. at root tree level 0 (hierarchy level 0, depth 0), may be recursively partitioned into two or more blocks of the next lower tree level, e.g. nodes at tree level 1 (hierarchy level 1, depth 1). These blocks may in turn be partitioned into two or more blocks of the next lower level, e.g. tree level 2 (hierarchy level 2, depth 2), and so on, until the partitioning terminates because a termination criterion is fulfilled, e.g. a maximum tree depth or a minimum block size is reached. Blocks that are not further partitioned are also referred to as leaf blocks or leaf nodes of the tree. A tree using partitioning into two parts is referred to as a binary tree (BT), a tree using partitioning into three parts is referred to as a ternary tree (TT), and a tree using partitioning into four parts is referred to as a quad tree (QT).
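The hierarchical tree partitioning described above can be sketched as a simple recursion. The example below shows only the quad-tree case with a placeholder split decision; binary and ternary splits would add further branches to the same recursion, and the type and function names are illustrative only:

```cpp
#include <functional>
#include <vector>

// Recursive quad-tree partitioning of a block: either keep it as a leaf
// (a coding unit) or split it into four equally sized sub-blocks and recurse.
// shouldSplit() stands in for the encoder's RDO-based split decision.
struct Block { int x, y, width, height; };

void partitionQuadTree(const Block& blk, int minSize,
                       const std::function<bool(const Block&)>& shouldSplit,
                       std::vector<Block>& leaves) {
    if (blk.width <= minSize || blk.height <= minSize || !shouldSplit(blk)) {
        leaves.push_back(blk);  // leaf node: becomes a coding unit
        return;
    }
    const int hw = blk.width / 2, hh = blk.height / 2;
    partitionQuadTree({blk.x,      blk.y,      hw, hh}, minSize, shouldSplit, leaves);
    partitionQuadTree({blk.x + hw, blk.y,      hw, hh}, minSize, shouldSplit, leaves);
    partitionQuadTree({blk.x,      blk.y + hh, hw, hh}, minSize, shouldSplit, leaves);
    partitionQuadTree({blk.x + hw, blk.y + hh, hw, hh}, minSize, shouldSplit, leaves);
}
```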
For example, a coding tree unit (CTU) may be or include a CTB of luma samples and two corresponding CTBs of chroma samples of an image that has three sample arrays, or a CTB of samples of a monochrome image or of an image that is coded using three separate color planes and syntax structures (used to code the samples). Correspondingly, a coding tree block (CTB) may be an N×N block of samples, where N may be set to a certain value such that a component is divided into CTBs, which is the partitioning. A coding unit (CU) may be or include a coding block of luma samples and two corresponding coding blocks of chroma samples of an image that has three sample arrays, or a coding block of samples of a monochrome image or of an image that is coded using three separate color planes and syntax structures (used to code the samples). Correspondingly, a coding block (CB) may be an M×N block of samples, where M and N may be set to certain values such that a CTB is divided into coding blocks, which is the partitioning.
For example, in an embodiment according to HEVC, a coding tree unit (CTU) may be split into CUs by using a quad-tree structure denoted as a coding tree. The decision whether to code an image region using inter (temporal) or intra (spatial) prediction is made at the leaf-CU level. Each leaf CU may be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied, and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a leaf CU may be partitioned into transform units (TUs) according to another quad-tree structure similar to the coding tree for the CU.
For example, in an embodiment according to the latest video coding standard currently under development, referred to as Versatile Video Coding (VVC), a combined quad-tree with nested multi-type tree (e.g. binary and ternary trees) is used to partition the segmentation structure of a coding tree unit. In the coding tree structure within a coding tree unit, a CU may be square or rectangular. For example, the coding tree unit (CTU) is first partitioned by a quad-tree structure, and the quad-tree leaf nodes are then further partitioned by a multi-type tree structure. There are four splitting types in the multi-type tree structure: vertical binary splitting (SPLIT_BT_VER), horizontal binary splitting (SPLIT_BT_HOR), vertical ternary splitting (SPLIT_TT_VER) and horizontal ternary splitting (SPLIT_TT_HOR). The multi-type tree leaf nodes are referred to as coding units (CUs), and unless a CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. In most cases, this means that the CU, PU and TU have the same block size in the coding block structure of the quad-tree with nested multi-type tree; the exception occurs when the maximum supported transform length is smaller than the width or height of the color component of the CU. VVC specifies a unique signaling mechanism for the partition splitting information in the coding structure of the quad-tree with nested multi-type tree. In this signaling mechanism, the coding tree unit (CTU), as the root of the quad-tree, is first partitioned by the quad-tree structure. Each quad-tree leaf node (when sufficiently large to allow it) is then further partitioned by the multi-type tree structure. In the multi-type tree structure, a first flag (mtt_split_cu_flag) indicates whether the node is further split; when the node is further split, a second flag (mtt_split_cu_vertical_flag) indicates the splitting direction, and a third flag (mtt_split_cu_binary_flag) indicates whether the split is a binary split or a ternary split. Based on the values of mtt_split_cu_vertical_flag and mtt_split_cu_binary_flag, the decoder can derive the multi-type tree splitting mode (MttSplitMode) of the CU from a predefined rule or table. It should be noted that, for certain designs, e.g. the 64×64 luma block and 32×32 chroma pipeline design in VVC hardware decoders, TT splitting is not allowed when the width or height of a luma coding block is larger than 64, and TT splitting is likewise not allowed when the width or height of a chroma coding block is larger than 32. The pipeline design divides an image into a plurality of virtual pipeline data units (VPDUs), which are defined as mutually non-overlapping units in the image. In hardware decoders, successive VPDUs are processed simultaneously in multiple pipeline stages. In most pipeline stages, the VPDU size is roughly proportional to the buffer size, so it is important to keep the VPDU small. In most hardware decoders, the VPDU size can be set to the maximum transform block (TB) size. However, in VVC, ternary tree (TT) and binary tree (BT) partitioning may increase the size of the VPDU.
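A minimal sketch of the MttSplitMode derivation described above is given below; it follows the mapping in which the vertical flag selects the split direction and the binary flag distinguishes binary from ternary splitting (a simplified reading of the VVC rule, shown here only for illustration):

```cpp
// Derivation of the multi-type tree split mode from the two signaled flags.
enum class MttSplitMode { SPLIT_TT_HOR, SPLIT_BT_HOR, SPLIT_TT_VER, SPLIT_BT_VER };

MttSplitMode deriveMttSplitMode(bool mttSplitCuVerticalFlag, bool mttSplitCuBinaryFlag) {
    if (!mttSplitCuVerticalFlag) {
        // Horizontal split: binary flag chooses BT versus TT.
        return mttSplitCuBinaryFlag ? MttSplitMode::SPLIT_BT_HOR
                                    : MttSplitMode::SPLIT_TT_HOR;
    }
    // Vertical split: binary flag chooses BT versus TT.
    return mttSplitCuBinaryFlag ? MttSplitMode::SPLIT_BT_VER
                                : MttSplitMode::SPLIT_TT_VER;
}
```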
In addition, it should be noted that when a portion of a tree node block exceeds the bottom or the right image boundary, the tree node block is forced to be split until all samples of every coded CU are located inside the image boundaries.
For example, the intra sub-partitions (ISP) tool may split a luma intra-predicted block vertically or horizontally into two or four sub-partitions depending on the block size.
In one example, the mode selection unit 260 of the video encoder 20 may be configured to perform any combination of the partitioning techniques described above.
As described above, the video encoder 20 is configured to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes. The set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
Intra prediction
The set of intra prediction modes may include 35 different intra prediction modes, e.g. non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in HEVC, or may include 67 different intra prediction modes, e.g. non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in VVC. For example, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for non-square blocks as defined in VVC. As another example, to avoid division operations in DC prediction, only the longer side is used to compute the average for non-square blocks. Furthermore, the intra prediction result of the planar mode may additionally be modified by a position dependent intra prediction combination (PDPC) method.
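The "longer side only" DC average mentioned above can be sketched as follows; the divisor is then a power of two, so it can be implemented as a shift (plain division is kept below for readability, and the function signature is an assumption made for illustration):

```cpp
#include <cstdint>
#include <vector>

// DC intra prediction value for a W x H block: for non-square blocks only the
// reference samples along the longer side are averaged; for square blocks both
// sides are used. The resulting count is a power of two when W and H are.
uint16_t dcPredictionValue(const std::vector<uint16_t>& aboveRef, // W samples
                           const std::vector<uint16_t>& leftRef,  // H samples
                           int width, int height) {
    uint32_t sum = 0;
    int count = 0;
    if (width >= height) { for (int x = 0; x < width;  ++x) sum += aboveRef[x]; count += width; }
    if (height >= width) { for (int y = 0; y < height; ++y) sum += leftRef[y];  count += height; }
    return static_cast<uint16_t>((sum + count / 2) / count);  // rounded average
}
```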
The intra prediction unit 254 is configured to use reconstructed samples of neighboring blocks of the same current image to generate an intra prediction block 265 according to an intra prediction mode of the set of intra prediction modes.
The intra prediction unit 254 (or, in general, the mode selection unit 260) is further configured to output intra prediction parameters (in the embodiments of the present application, the intra prediction parameters include information on the target intra prediction mode of the current image block and the probability vector of the current image block) to the entropy encoding unit 270 in the form of syntax elements 266 for inclusion in the encoded image data 21, so that the video decoder 30 may perform operations such as receiving and using the prediction parameters for decoding.
Inter prediction
In a possible implementation, the set of inter prediction modes depends on the available reference images (i.e., for example, the aforementioned at least partially previously decoded images stored in the DPB 230) and on other inter prediction parameters, e.g. on whether the whole reference image or only a part of the reference image, e.g. a search window area around the area of the current block, is used to search for a best-matching reference block, and/or, for example, on whether pixel interpolation, e.g. half-pel, quarter-pel and/or 1/16-pel interpolation, is applied.
In addition to the above prediction modes, a skip mode and/or a direct mode may also be applied.
For example, in extended merge prediction, the merge candidate list of this mode is constructed by including the following five types of candidates in order: spatial MVPs from spatially neighboring CUs, a temporal MVP from collocated CUs, history-based MVPs from an FIFO table, pairwise average MVPs, and zero MVs. Decoder side motion vector refinement (DMVR) based on bilateral matching may be applied to increase the accuracy of the MVs of the merge mode. Merge mode with MVD (MMVD) is a merge mode with motion vector differences; an MMVD flag is signaled right after the skip flag and the merge flag to specify whether the MMVD mode is used for a CU. A CU-level adaptive motion vector resolution (AMVR) scheme may be applied; AMVR allows the MVD of the CU to be coded with different precisions, and the MVD precision of the current CU is adaptively selected according to the prediction mode of the current CU. When a CU is coded in merge mode, a combined inter/intra prediction (CIIP) mode may be applied to the current CU; a weighted average of the inter and intra prediction signals is computed to obtain the CIIP prediction. For affine motion compensated prediction, the affine motion field of a block is described by the motion information of two control-point (4-parameter) or three control-point (6-parameter) motion vectors. Subblock-based temporal motion vector prediction (SbTMVP) is similar to the temporal motion vector prediction (TMVP) in HEVC, but predicts the motion vectors of the sub-CUs within the current CU. Bi-directional optical flow (BDOF), previously referred to as BIO, is a simplified version that requires less computation, especially in terms of the number of multiplications and the size of the multipliers. In the triangle partition mode, a CU is split evenly into two triangular parts using either the diagonal or the anti-diagonal split. In addition, the bi-prediction mode is extended beyond simple averaging to allow a weighted average of the two prediction signals.
The inter prediction unit 244 may include a motion estimation (ME) unit and a motion compensation (MC) unit (both not shown in FIG. 2). The motion estimation unit may be configured to receive or obtain the image block 203 (the current image block 203 of the current image 17) and a decoded image 231, or at least one or more previously reconstructed blocks, e.g. reconstructed blocks of one or more other/different previously decoded images 231, for motion estimation. For example, a video sequence may include the current image and the previously decoded images 231, or in other words, the current image and the previously decoded images 231 may be part of, or form, the sequence of images forming the video sequence.
For example, the encoder 20 may be configured to select a reference block from a plurality of reference blocks of the same or different images among a plurality of other images, and to provide the reference image (or reference image index) and/or the offset (spatial offset) between the position (x, y coordinates) of the reference block and the position of the current block to the motion estimation unit as inter prediction parameters. This offset is also referred to as a motion vector (MV).
The motion compensation unit is configured to obtain, e.g. receive, the inter prediction parameters and to perform inter prediction based on or using the inter prediction parameters to obtain an inter prediction block 246. The motion compensation performed by the motion compensation unit may involve fetching or generating the prediction block based on the motion/block vector determined by motion estimation, and may also include performing interpolation with sub-pixel precision. Interpolation filtering may generate additional samples from known samples, thereby potentially increasing the number of candidate prediction blocks that may be used to code an image block. Upon receiving the motion vector corresponding to the PU of the current image block, the motion compensation unit may locate the prediction block to which the motion vector points in one of the reference image lists.
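A minimal sketch of locating and copying the prediction block for an integer-pel motion vector is shown below; sub-pel interpolation and picture boundary handling are omitted, and the buffer layout and function name are assumptions made for illustration:

```cpp
#include <cstdint>
#include <vector>

// Minimal motion compensation for an integer-pel motion vector: copy the block
// of the reference picture that the motion vector points to. Real codecs also
// run interpolation filters for half-, quarter- or 1/16-pel positions.
// refStride is the width of the reference picture buffer.
std::vector<uint8_t> motionCompensateInt(const std::vector<uint8_t>& refPicture,
                                         int refStride, int blockX, int blockY,
                                         int mvX, int mvY, int width, int height) {
    std::vector<uint8_t> pred(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            pred[static_cast<size_t>(y) * width + x] =
                refPicture[static_cast<size_t>(blockY + mvY + y) * refStride +
                           (blockX + mvX + x)];
    return pred;
}
```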
The motion compensation unit may also generate syntax elements associated with the blocks and the video slice for use by the video decoder 30 in decoding the image blocks of the video slice. In addition to or as an alternative to slices and the corresponding syntax elements, tile groups and/or tiles and the corresponding syntax elements may be generated or used.
Entropy coding
The entropy encoding unit 270 is configured to apply an entropy encoding algorithm or scheme (e.g. a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC), an arithmetic coding scheme, a binarization algorithm, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding method or technique) to the quantized residual coefficients 209, the inter prediction parameters, the intra prediction parameters, the loop filter parameters and/or other syntax elements, to obtain encoded image data 21 that can be output via the output 272, e.g. in the form of an encoded bitstream 21, so that the video decoder 30 or the like can receive and use the parameters for decoding. The encoded bitstream 21 may be transmitted to the video decoder 30, or stored in memory for later transmission to or retrieval by the video decoder 30.
In the embodiments of the present application, the intra prediction parameters include the target intra prediction mode and the probability vector of the current image block. The entropy encoding unit 270 selects, from the probability vector, a first reference value or a second reference value corresponding to the target intra prediction mode, and then encodes the first reference value or the second reference value into the bitstream, where the first reference value or the second reference value is used to indicate the target intra prediction mode of the current image block.
Other structural variations of the video encoder 20 may be used to encode the video stream. For example, a non-transform-based encoder 20 may quantize the residual signal directly, without the transform processing unit 206, for certain blocks or frames. In another implementation, the encoder 20 may have the quantization unit 208 and the inverse quantization unit 210 combined into a single unit.
Decoder and decoding method
FIG. 3 shows an exemplary video decoder 30 for implementing the techniques of the present application. The video decoder 30 is configured to receive encoded image data 21 (e.g. the encoded bitstream 21), e.g. encoded by the encoder 20, to obtain a decoded image 331. The encoded image data or bitstream includes information for decoding the encoded image data, e.g. data representing the image blocks of an encoded video slice (and/or tile groups or tiles) and the associated syntax elements.
In the example of FIG. 3, the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (e.g. a summer 314), a loop filter 320, a decoded picture buffer (DPB) 330, a mode application unit 360, an inter prediction unit 344 and an intra prediction unit 354. The inter prediction unit 344 may be or include a motion compensation unit. In some examples, the video decoder 30 may perform a decoding process that is substantially the inverse of the encoding process described with reference to the video encoder 100 of FIG. 2.
Referring to FIG. 3, the intra prediction module includes a trained target model (also referred to as a neural network), where the neural network is used to process a first data block or a second data block obtained by splicing or concatenating surrounding image blocks of the current image block, so as to generate the probability vector of the current image block.
As described with respect to the encoder 20, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer DPB 230, the inter prediction unit 344 and the intra prediction unit 354 also form the "built-in decoder" of the video encoder 20. Accordingly, the inverse quantization unit 310 may be functionally identical to the inverse quantization unit 110, the inverse transform processing unit 312 may be functionally identical to the inverse transform processing unit 122, the reconstruction unit 314 may be functionally identical to the reconstruction unit 214, the loop filter 320 may be functionally identical to the loop filter 220, and the decoded picture buffer 330 may be functionally identical to the decoded picture buffer 230. Therefore, the explanations of the corresponding units and functions of the video encoder 20 apply correspondingly to the corresponding units and functions of the video decoder 30.
Entropy decoding
The entropy decoding unit 304 is configured to parse the bitstream 21 (or, in general, the encoded image data 21) and to perform entropy decoding on the encoded image data 21 to obtain quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3), e.g. any or all of inter prediction parameters (e.g. reference image indices and motion vectors), intra prediction parameters (e.g. intra prediction modes or indices), transform parameters, quantization parameters, loop filter parameters and/or other syntax elements. The entropy decoding unit 304 may be configured to apply a decoding algorithm or scheme corresponding to the encoding scheme of the entropy encoding unit 270 of the encoder 20. The entropy decoding unit 304 may further be configured to provide the inter prediction parameters, the intra prediction parameters and/or other syntax elements to the mode application unit 360, and to provide other parameters to other units of the decoder 30. The video decoder 30 may receive syntax elements at the video slice level and/or the video block level. In addition to or as an alternative to slices and the corresponding syntax elements, tile groups and/or tiles and the corresponding syntax elements may be received or used.
In the embodiments of the present application, the intra prediction parameters that the entropy decoding unit 304 may provide to the mode application unit 360 include a target reference value used to indicate the target intra prediction mode with which the current image block performs intra prediction.
Inverse quantization
The inverse quantization unit 310 may be configured to receive a quantization parameter (QP) (or, in general, information related to inverse quantization) and quantized coefficients from the encoded image data 21 (e.g. parsed and/or decoded by the entropy decoding unit 304), and to inverse quantize the decoded quantized coefficients 309 based on the quantization parameter to obtain dequantized coefficients 311, which may also be referred to as transform coefficients 311. The inverse quantization process may include using the quantization parameter calculated by the video encoder 20 for each video block of the video slice to determine the degree of quantization and, likewise, the degree of inverse quantization that needs to be applied.
Inverse transform
The inverse transform processing unit 312 may be configured to receive the dequantized coefficients 311, also referred to as transform coefficients 311, and to apply a transform to the dequantized coefficients 311 to obtain a reconstructed residual block 313 in the sample domain. The reconstructed residual block 313 may also be referred to as a transform block 313. The transform may be an inverse transform, e.g. an inverse DCT, an inverse DST, an inverse integer transform, or a conceptually similar inverse transform process. The inverse transform processing unit 312 may further be configured to receive transform parameters or corresponding information from the encoded image data 21 (e.g. parsed and/or decoded by the entropy decoding unit 304) to determine the transform to be applied to the dequantized coefficients 311.
Reconstruction
The reconstruction unit 314 (e.g. the summer 314) is configured to add the reconstructed residual block 313 to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, e.g. by adding the sample values of the reconstructed residual block 313 and the sample values of the prediction block 365.
Filtering
The loop filter unit 320 (either in the coding loop or after it) is configured to filter the reconstructed block 315 to obtain a filtered block 321, e.g. to smooth pixel transitions or otherwise improve the video quality. The loop filter unit 320 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof. For example, the loop filter unit 320 may include a deblocking filter, an SAO filter and an ALF filter, and the filtering process may be performed in the order deblocking filter, SAO filter, ALF filter. In another example, a process called luma mapping with chroma scaling (LMCS) (i.e. an adaptive in-loop reshaper) is added; this process is performed before deblocking. In another example, the deblocking filtering process may also be applied to internal sub-block edges, e.g. affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges. Although the loop filter unit 320 is shown in FIG. 3 as an in-loop filter, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
Decoded picture buffer
The decoded video blocks 321 of an image are then stored in the decoded picture buffer 330, which stores decoded images 331 as reference images for subsequent motion compensation of other images and/or for output or display, respectively.
The decoder 30 is configured to output the decoded image 311, e.g. via an output 312, for presentation to or viewing by a user.
Prediction
The inter prediction unit 354 may be functionally identical to the inter prediction unit 244 (in particular the motion compensation unit), and the intra prediction unit 344 may be functionally identical to the intra prediction unit 254; they decide on the splitting or partitioning and perform prediction based on the partitioning and/or prediction parameters or corresponding information received from the encoded image data 21 (e.g. parsed and/or decoded by the entropy decoding unit 304). In the present application, the intra prediction unit 344 may determine, based on the target reference value obtained by the entropy decoding unit 304 and the probability vector obtained from the neural network model, the target intra prediction mode used by the current image block for intra prediction. The mode application unit 360 may be configured to perform prediction (intra or inter prediction) for each block based on the reconstructed images, blocks or corresponding samples (filtered or unfiltered) to obtain the prediction block 365.
When a video slice is coded as an intra coded (I) slice, the intra prediction unit 354 of the mode application unit 360 is configured to generate a prediction block 365 for an image block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current image. When the video image is coded as an inter coded (i.e. B or P) slice, the inter prediction unit 344 (e.g. the motion compensation unit) of the mode application unit 360 is configured to generate a prediction block 365 for a video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304. For inter prediction, the prediction blocks may be generated from one of the reference images in one of the reference image lists. The video decoder 30 may construct the reference frame lists, list 0 and list 1, from the reference images stored in the DPB 330 using default construction techniques. The same or a similar process may be applied to or by embodiments using tile groups (e.g. video tile groups) and/or tiles (e.g. video tiles) in addition to or instead of slices (e.g. video slices); for example, video may be coded using I, P or B tile groups and/or tiles.
The mode application unit 360 is configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and to use the prediction information to generate the prediction block for the current video block being decoded. For example, the mode application unit 360 uses some of the received syntax elements to determine the prediction mode (e.g. intra or inter prediction) used to code the video blocks of the video slice, the inter prediction slice type (e.g. B slice, P slice or GPB slice), the construction information for one or more of the reference image lists of the slice, the motion vector for each inter-coded video block of the slice, the inter prediction status for each inter-coded video block of the slice, and other information, in order to decode the video blocks of the current video slice. The same or a similar process may be applied to or by embodiments using tile groups (e.g. video tile groups) and/or tiles (e.g. video tiles) in addition to or instead of slices (e.g. video slices); for example, video may be coded using I, P or B tile groups and/or tiles.
In one embodiment, the video decoder 30 shown in FIG. 3 may further be configured to partition and/or decode an image using slices (also referred to as video slices), where an image may be partitioned into or decoded using one or more (typically non-overlapping) slices. Each slice may include one or more blocks (e.g. CTUs) or one or more block groups (e.g. tiles in the H.265/HEVC/VVC standards and bricks in the VVC standard).
在一个实施例中,图3所示的视频解码器30还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或解码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或解码,每个片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中每个编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。In one embodiment, the video decoder 30 shown in FIG. 3 may also be used to use slice/coding block groups (also referred to as video coding block groups) and/or coding blocks (also referred to as video coding blocks) ) to segment and/or decode an image, wherein the image may be segmented or decoded using one or more slices/encoded block groups (usually non-overlapping), each slice/encoded block group may include one or more A block (eg, CTU) or one or more coding blocks, etc., wherein each coding block may be rectangular or the like, and may include one or more full or partial blocks (eg, CTUs).
视频解码器30的其它变型可用于对编码图像数据21进行解码。例如,解码器30可以在没有环路滤波器单元320的情况下产生输出视频流。例如,基于非变换的解码器30可以在某些块或帧没有逆变换处理单元312的情况下直接反量化残差信号。在另一种实现方式中,视频解码器30可以具有组合成单个单元的反量化单元310和逆变换处理单元312。Other variations of the video decoder 30 may be used to decode the encoded image data 21 . For example, decoder 30 may generate the output video stream without loop filter unit 320 . For example, the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames. In another implementation, video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
应理解,在编码器20和解码器30中,可以对当前步骤的处理结果进一步处理,然后输出到下一步骤。例如,在插值滤波、运动矢量推导或环路滤波之后,可以对插值滤波、运动矢量推导或环路滤波的处理结果进行进一步的运算,例如裁剪(clip)或移位(shift)运算。It should be understood that in the encoder 20 and the decoder 30, the processing result of the current step can be further processed, and then output to the next step. For example, after interpolation filtering, motion vector derivation or loop filtering, further operations, such as clip or shift operations, may be performed on the processing results of interpolation filtering, motion vector derivation or loop filtering.
应该注意的是,可以对当前块的推导运动矢量(包括但不限于仿射模式的控制点运动矢量、仿射、平面、ATMVP模式的子块运动矢量、时间运动矢量等)进行进一步运算。例如,根据运动矢量的表示位将运动矢量的值限制在预定义范围。如果运动矢量的表示位为bitDepth,则范围为-2^(bitDepth-1)至2^(bitDepth-1)-1,其中“^”表示幂次方。例如,如果bitDepth设置为16,则范围为-32768~32767;如果bitDepth设置为18,则范围为-131072~131071。例如,推导运动矢量的值(例如一个8×8块中的4个4×4子块的MV)被限制,使得所述4个4×4子块MV的整数部分之间的最大差值不超过N个像素,例如不超过1个像素。这里提供了两种根据bitDepth限制运动矢量的方法。It should be noted that further operations may be performed on the derived motion vectors of the current block (including but not limited to control point motion vectors in affine mode, affine, plane, sub-block motion vectors in ATMVP mode, temporal motion vectors, etc.). For example, the value of the motion vector is limited to a predefined range according to the representation bits of the motion vector. If the representation bit of the motion vector is bitDepth, the range is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where "^" represents a power. For example, if bitDepth is set to 16, the range is -32768 to 32767; if bitDepth is set to 18, the range is -131072 to 131071. For example, the value of the derived motion vector (eg, the MVs of four 4x4 subblocks in an 8x8 block) is limited such that the maximum difference between the integer parts of the four 4x4 subblock MVs does not More than N pixels, eg no more than 1 pixel. There are two ways to limit motion vectors based on bitDepth.
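The two bitDepth-based limiting methods referred to above are not reproduced in this excerpt. As a non-authoritative illustration only, the following Python sketch shows two common ways of keeping a motion vector value inside the range -2^(bitDepth-1) to 2^(bitDepth-1)-1: a wrap-around into the signed bitDepth-bit range and a saturating clip. The function names and the 18-bit default are assumptions made for the example, not taken from the text.

```python
def wrap_mv(mv: int, bit_depth: int = 18) -> int:
    # Wrap-around style: keep only bit_depth bits and reinterpret as a signed
    # value, so overflow wraps instead of saturating.
    u = (mv + (1 << bit_depth)) % (1 << bit_depth)
    return u - (1 << bit_depth) if u >= (1 << (bit_depth - 1)) else u

def clip_mv(mv: int, bit_depth: int = 18) -> int:
    # Clipping style: saturate to [-2^(bitDepth-1), 2^(bitDepth-1) - 1].
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, mv))

if __name__ == "__main__":
    print(clip_mv(140000, 18))   # 131071, saturated to the upper bound
    print(wrap_mv(140000, 18))   # -122144, wrapped into the signed 18-bit range
```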
尽管上述实施例主要描述了视频编解码,但应注意的是,译码系统10、编码器20和解码器30的实施例以及本文描述的其它实施例也可以用于静止图像处理或编解码,即视频编解码中独立于任何先前或连续图像的单个图像的处理或编解码。一般情况下,如果图像处理仅限于单个图像17,帧间预测单元244(编码器)和帧间预测单元344(解码器)可能不可用。视频编码器20和视频解码器30的所有其它功能(也称为工具或技术)同样可用于静态图像处理,例如残差计算204/304、变换206、量化208、反量化210/310、(逆)变换212/312、分割262/362、帧内预测254/354和/或环路滤波220/320、熵编码270和熵解码304。Although the above embodiments have primarily described video codecs, it should be noted that embodiments of the coding system 10, encoder 20 and decoder 30, as well as other embodiments described herein, may also be used for still image processing or codecs, That is, the processing or coding of a single image in video codecs that is independent of any previous or consecutive images. In general, if image processing is limited to a single image 17, inter prediction unit 244 (encoder) and inter prediction unit 344 (decoder) may not be available. All other functions (also referred to as tools or techniques) of video encoder 20 and video decoder 30 are also available for still image processing, such as residual calculation 204/304, transform 206, quantization 208, inverse quantization 210/310, (inverse ) transform 212/312, partition 262/362, intra prediction 254/354 and/or loop filtering 220/320, entropy encoding 270 and entropy decoding 304.
图4为本发明实施例提供的视频译码设备400的示意图。视频译码设备400适用于实现本文描述的公开实施例。在一个实施例中,视频译码设备400可以是解码器,例如图1A中的视频解码器30,也可以是编码器,例如图1A中的视频编码器20。FIG. 4 is a schematic diagram of a video decoding apparatus 400 according to an embodiment of the present invention. Video coding apparatus 400 is suitable for implementing the disclosed embodiments described herein. In one embodiment, the video coding apparatus 400 may be a decoder, such as the video decoder 30 in FIG. 1A, or an encoder, such as the video encoder 20 in FIG. 1A.
视频译码设备400包括:用于接收数据的入端口410(或输入端口410)和接收单元(receiver unit,Rx)420;用于处理数据的处理器、逻辑单元或中央处理器(central processing unit,CPU)430;例如,这里的处理器430可以是神经网络处理器430;用于传输数据的发送单元(transmitter unit,Tx)440和出端口450(或输出端口450);用于存储数据的存储器460。视频译码设备400还可包括耦合到入端口410、接收单元420、发送单元440和出端口450的光电(optical-to-electrical,OE)组件和电光(electrical-to-optical,EO)组件,用于光信号或电信号的出口或入口。The video decoding apparatus 400 includes: an input port 410 (or input port 410) for receiving data and a receiver unit (receiver unit, Rx) 420; a processor, a logic unit or a central processing unit (central processing unit) for processing data , CPU) 430; for example, the processor 430 here can be a neural network processor 430; a transmitter unit (transmitter unit, Tx) 440 for transmitting data and an output port 450 (or output port 450); memory 460. The video coding apparatus 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the input port 410, the receiving unit 420, the transmitting unit 440, and the output port 450, Exit or entrance for optical or electrical signals.
处理器430通过硬件和软件实现。处理器430可实现为一个或多个处理器芯片、核(例如,多核处理器)、FPGA、ASIC和DSP。处理器430与入端口410、接收单元420、发送单元440、出端口450和存储器460通信。处理器430包括译码模块470(例如,基于神经网络NN的译码模块470)。译码模块470实施上文所公开的实施例。例如,译码模块470执行、处理、准备或提供各种编码操作。因此,通过译码模块470为视频译码设备400的功能提供了实质性的改进,并且影响了视频译码设备400到不同状态的切换。或者,以存储在存储器460中并由处理器430执行的指令来实现译码模块470。The processor 430 is implemented by hardware and software. Processor 430 may be implemented as one or more processor chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs. The processor 430 communicates with the ingress port 410 , the receiving unit 420 , the sending unit 440 , the egress port 450 and the memory 460 . The processor 430 includes a decoding module 470 (eg, a neural network NN based decoding module 470). The decoding module 470 implements the embodiments disclosed above. For example, the transcoding module 470 performs, processes, prepares or provides various encoding operations. Thus, a substantial improvement in the functionality of the video coding apparatus 400 is provided by the coding module 470, and switching of the video coding apparatus 400 to different states is affected. Alternatively, decoding module 470 is implemented as instructions stored in memory 460 and executed by processor 430 .
存储器460包括一个或多个磁盘、磁带机和固态硬盘,可以用作溢出数据存储设备,用于在选择执行程序时存储此类程序,并且存储在程序执行过程中读取的指令和数据。存储器460可以是易失性和/或非易失性的,可以是只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、三态内容寻址存储器(ternary content-addressable memory,TCAM)和/或静态随机存取存储器(static random-access memory,SRAM)。 Memory 460 includes one or more magnetic disks, tape drives, and solid-state drives, and may serve as an overflow data storage device for storing programs when such programs are selected for execution, and for storing instructions and data read during program execution. Memory 460 may be volatile and/or non-volatile, and may be read-only memory (ROM), random access memory (RAM), ternary content addressable memory (ternary) content-addressable memory, TCAM) and/or static random-access memory (SRAM).
图5为示例性实施例提供的装置500的简化框图,装置500可用作图1A中的源设备12和目的设备14中的任一个或两个。FIG. 5 is a simplified block diagram of an apparatus 500 provided by an exemplary embodiment, and the apparatus 500 can be used as either or both of the source device 12 and the destination device 14 in FIG. 1A .
装置500中的处理器502可以是中央处理器。或者,处理器502可以是现有的或今后将研发出的能够操控或处理信息的任何其它类型设备或多个设备。虽然可以使用如图所示的处理器502等单个处理器来实施已公开的实现方式,但使用一个以上的处理器速度更快和效率更高。The processor 502 in the apparatus 500 may be a central processing unit. Alternatively, the processor 502 may be any other type of device or devices, existing or to be developed in the future, capable of manipulating or processing information. Although the disclosed implementations may be implemented using a single processor, such as processor 502 as shown, using more than one processor is faster and more efficient.
在一种实现方式中,装置500中的存储器504可以是只读存储器(ROM)设备或随机存取存储器(RAM)设备。任何其它合适类型的存储设备都可以用作存储器504。存储器504可以包括处理器502通过总线512访问的代码和数据506。存储器504还可包括操作系统508和应用程序510,应用程序510包括允许处理器502执行本文所述方法的至少一个程序。例如,应用程序510可以包括应用1至N,还包括执行本文所述方法的视频译码应用。In one implementation, the memory 504 in the apparatus 500 may be a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may be used as memory 504 . Memory 504 may include code and data 506 accessed by processor 502 via bus 512 . The memory 504 may also include an operating system 508 and application programs 510 including at least one program that allows the processor 502 to perform the methods described herein. For example, applications 510 may include applications 1 through N, and also include video coding applications that perform the methods described herein.
装置500还可以包括一个或多个输出设备,例如显示器518。在一个示例中,显示器518可以是将显示器与可用于感测触摸输入的触敏元件组合的触敏显示器。显示器518可以通过总线512耦合到处理器502。 Apparatus 500 may also include one or more output devices, such as display 518 . In one example, display 518 may be a touch-sensitive display that combines a display with touch-sensitive elements that may be used to sense touch input. Display 518 may be coupled to processor 502 through bus 512 .
虽然装置500中的总线512在本文中描述为单个总线,但是总线512可以包括多个总线。此外,辅助储存器可以直接耦合到装置500的其它组件或通过网络访问,并且可以包括存储卡等单个集成单元或多个存储卡等多个单元。因此,装置500可以具有各种各样的配置。Although bus 512 in device 500 is described herein as a single bus, bus 512 may include multiple buses. In addition, secondary storage may be directly coupled to other components of the device 500 or accessed through a network, and may include a single integrated unit, such as a memory card, or multiple units, such as multiple memory cards. Accordingly, the apparatus 500 may have various configurations.
由于本申请实施例涉及神经网络的应用,为了便于理解,下面先对本申请实施例所使用到的一些名词或术语进行解释说明,该名词或术语也作为发明内容的一部分。Since the embodiments of the present application involve the application of neural networks, for ease of understanding, some nouns or terms used in the embodiments of the present application are explained below, and the nouns or terms are also part of the content of the invention.
(1)神经网络(1) Neural network
神经网络(Neural Network,NN)是机器学习模型,神经网络可以是由神经单元组成的,神经单元可以是指以xs和截距1为输入的运算单元,该运算单元的输出可以为:Neural Network (NN) is a machine learning model. A neural network can be composed of neural units. A neural unit can refer to an operation unit that takes xs and intercept 1 as inputs. The output of the operation unit can be:
$h_{W,b}(x)=f\left(W^{T}x\right)=f\left(\sum_{s=1}^{n}W_{s}x_{s}+b\right)$
其中,s=1、2、……n,n为大于1的自然数,Ws为xs的权重,b为神经单元的偏置。f为神经单元的激活函数(activation functions),用于将非线性特性引入神经网络中,来将神经单元中的输入信号转换为输出信号。该激活函数的输出信号可以作为下一层卷积层的输入。激活函数可以是sigmoid函数。神经网络是将许多个上述单一的神经单元联结在一起形成的网络,即一个神经单元的输出可以是另一个神经单元的输入。每个神经单元的输入可以与前一层的局部接受域相连,来提取局部接受域的特征,局部接受域可以是由若干个神经单元组成的区域。Among them, s=1, 2,...n, n is a natural number greater than 1, Ws is the weight of xs, and b is the bias of the neural unit. f is the activation function of the neural unit, which is used to introduce nonlinear characteristics into the neural network to convert the input signal in the neural unit into an output signal. The output signal of this activation function can be used as the input of the next convolutional layer. The activation function can be a sigmoid function. A neural network is a network formed by connecting many of the above single neural units together, that is, the output of one neural unit can be the input of another neural unit. The input of each neural unit can be connected with the local receptive field of the previous layer to extract the features of the local receptive field, and the local receptive field can be an area composed of several neural units.
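As a small illustration of the neural unit described above (a sketch only; the symbols xs, Ws, b and f follow the definitions in the preceding paragraph, and the sigmoid default is just one possible activation function):

```python
import math

def neural_unit(xs, ws, b, f=lambda z: 1.0 / (1.0 + math.exp(-z))):
    """Single neural unit: output = f(sum_s Ws*xs + b), sigmoid by default."""
    z = sum(w * x for w, x in zip(ws, xs)) + b
    return f(z)

# Example: three inputs xs with weights Ws and a bias b
print(neural_unit([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], b=1.0))
```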
(2)深度神经网络(2) Deep neural network
深度神经网络(Deep Neural Network,DNN)，也称多层神经网络，可以理解为具有很多层隐含层的神经网络，这里的"很多"并没有特别的度量标准。从DNN按不同层的位置划分，DNN内部的神经网络可以分为三类：输入层，隐含层，输出层。一般来说第一层是输入层，最后一层是输出层，中间的层数都是隐含层。层与层之间是全连接的，也就是说，第i层的任意一个神经元一定与第i+1层的任意一个神经元相连。虽然DNN看起来很复杂，但是就每一层的工作来说，其实并不复杂，简单来说就是如下线性关系表达式：$\vec{y}=\alpha\left(W\vec{x}+\vec{b}\right)$，其中，$\vec{x}$是输入向量，$\vec{y}$是输出向量，$\vec{b}$是偏移向量，W是权重矩阵(也称系数)，α()是激活函数。每一层仅仅是对输入向量$\vec{x}$经过如此简单的操作得到输出向量$\vec{y}$。由于DNN层数多，则系数W和偏移向量$\vec{b}$的数量也就很多了。这些参数在DNN中的定义如下所述：以系数W为例，假设在一个三层的DNN中，第二层的第4个神经元到第三层的第2个神经元的线性系数定义为$W_{24}^{3}$，上标3代表系数W所在的层数，而下标对应的是输出的第三层索引2和输入的第二层索引4。总结就是：第L-1层的第k个神经元到第L层的第j个神经元的系数定义为$W_{jk}^{L}$。需要注意的是，输入层是没有W参数的。在深度神经网络中，更多的隐含层让网络更能够刻画现实世界中的复杂情形。理论上而言，参数越多的模型复杂度越高，"容量"也就越大，也就意味着它能完成更复杂的学习任务。训练深度神经网络的也就是学习权重矩阵的过程，其最终目的是得到训练好的深度神经网络的所有层的权重矩阵(由很多层的向量W形成的权重矩阵)。A deep neural network (DNN), also known as a multi-layer neural network, can be understood as a neural network with many hidden layers; there is no special metric for "many" here. Dividing a DNN by the positions of its layers, the layers inside a DNN fall into three categories: the input layer, hidden layers and the output layer. Generally, the first layer is the input layer, the last layer is the output layer, and all layers in between are hidden layers. The layers are fully connected, that is, any neuron in the i-th layer is connected to every neuron in the (i+1)-th layer. Although a DNN looks complicated, the work of each layer is actually simple; it is the following linear relationship: $\vec{y}=\alpha\left(W\vec{x}+\vec{b}\right)$, where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset (bias) vector, W is the weight matrix (also called the coefficients), and α() is the activation function. Each layer simply applies this operation to the input vector $\vec{x}$ to obtain the output vector $\vec{y}$. Since a DNN has many layers, there are many coefficient matrices W and offset vectors $\vec{b}$. These parameters are defined in the DNN as follows, taking the coefficient W as an example: suppose that in a three-layer DNN, the linear coefficient from the 4th neuron of the second layer to the 2nd neuron of the third layer is defined as $W_{24}^{3}$; the superscript 3 represents the layer in which the coefficient W is located, and the subscripts correspond to the output index 2 in the third layer and the input index 4 in the second layer. In summary, the coefficient from the k-th neuron of layer L-1 to the j-th neuron of layer L is defined as $W_{jk}^{L}$. It should be noted that the input layer has no W parameters. In a deep neural network, more hidden layers allow the network to better characterize complex real-world situations. In theory, a model with more parameters has higher complexity and larger "capacity", which means that it can complete more complex learning tasks. Training a deep neural network is the process of learning the weight matrices; its ultimate goal is to obtain the weight matrices of all layers of the trained deep neural network (the weight matrices formed by the vectors W of the many layers).
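The per-layer operation described above can be illustrated with a short sketch. This is not the patent's model, only a minimal example of y = α(Wx + b) applied layer by layer; the ReLU activation, the layer sizes and the random weights are assumptions made for the example.

```python
import numpy as np

def dnn_forward(x, weights, biases, act=lambda z: np.maximum(z, 0.0)):
    """Forward pass of a fully connected DNN: each layer computes y = act(W @ x + b).
    weights[L][j, k] plays the role of W^L_{jk}: the coefficient from neuron k of
    layer L-1 to neuron j of layer L (the input layer itself has no W)."""
    y = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        y = act(W @ y + b)
    return y

# Toy three-layer example: 4 inputs -> 5 hidden neurons -> 2 outputs
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((5, 4)), rng.standard_normal((2, 5))]
bs = [np.zeros(5), np.zeros(2)]
print(dnn_forward(rng.standard_normal(4), Ws, bs))
```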
(3)卷积神经网络(3) Convolutional Neural Network
卷积神经网络(convolutional neuron network,CNN)是一种带有卷积结构的深度神经网络,是一种深度学习(deep learning)架构,深度学习架构是指通过机器学习的算法,在不同的抽象层级上进行多个层次的学习。作为一种深度学习架构,CNN是一种前馈(feed-forward)人工神经网络,该前馈人工神经网络中的各个神经元可以对输入其中的图像作出响应。卷积神经网络包含了一个由卷积层和池化层构成的特征抽取器。该特征抽取器可以看作是滤波器,卷积过程可以看作是使用一个可训练的滤波器与一个输入的图像或者卷积特征平面(feature map)做卷积。Convolutional neural network (CNN) is a deep neural network with a convolutional structure and a deep learning architecture. Learning at multiple levels. As a deep learning architecture, CNN is a feed-forward artificial neural network in which individual neurons can respond to images fed into it. A convolutional neural network consists of a feature extractor consisting of convolutional and pooling layers. The feature extractor can be viewed as a filter, and the convolution process can be viewed as convolution with an input image or a convolutional feature map using a trainable filter.
卷积层是指卷积神经网络中对输入信号进行卷积处理的神经元层。卷积层可以包括很多个卷积算子,卷积算子也称为核,其在图像处理中的作用相当于一个从输入图像矩阵中提取特定信息的过滤器,卷积算子本质上可以是一个权重矩阵,这个权重矩阵通常被预先定义,在对图像进行卷积操作的过程中,权重矩阵通常在输入图像上沿着水平方向一个像素接着一个像素(或两个像素接着两个像素……这取决于步长stride的取值)的进行处理,从而完成从图像中提取特定特征的工作。该权重矩阵的大小应该与图像的大小相关,需要注意的是,权重矩阵的纵深维度(depth dimension)和输入图像的纵深维度是相同的,在进行卷积运算的过程中,权重矩阵会延伸到输入图像的整个深度。因此,和一个单一的权重矩阵进行卷积会产生一个单一纵深维度的卷积化输出,但是大多数情况下不使用单一权重矩阵,而是应用多个尺寸(行×列)相同的权重矩阵,即多个同型矩阵。每个权重矩阵的输出被堆叠起来形成卷积图像的纵深维度,这里的维度可以理解为由上面所述的“多个”来决定。不同的权重矩阵可以用来提取图像中不同的特征,例如一个权重矩阵用来提取图像边缘信息,另一个权重矩阵用来提取图像的特定颜色,又一个权重矩阵用来对图像中不需要的噪点进行模糊化等。该多个权重矩阵尺寸(行×列)相同,经过该多个尺寸相同的权重矩阵提取后的特征图的尺寸也相同,再将提取到的多个尺寸相同的特征图合并形成卷积运算的输出。这些权重矩阵中的权重值在实际应用中需要经过大量的训练得到,通过训练得到的权重值形成的各个权重矩阵可以用来从输入图像中提取信息,从而使得卷积神经网络进行正确的预测。当卷积神经网络有多个卷积层的时候,初始的卷积层往往提取较多的一般特征,该一般特征也可以称之为低级别的特征;随着卷积神经网络深度的加深,越往后的卷积层提取到的特征越来越复杂,比如高级别的语义之类的特征,语义越高的特征越适用于待解决的问题。The convolutional layer refers to the neuron layer in the convolutional neural network that convolves the input signal. The convolution layer can include many convolution operators. The convolution operator is also called the kernel. Its role in image processing is equivalent to a filter that extracts specific information from the input image matrix. The convolution operator can essentially is a weight matrix, which is usually pre-defined, during the convolution operation on the image, the weight matrix is usually one pixel by one pixel (or two pixels by two pixels) along the horizontal direction on the input image... ...it depends on the value of stride) to process, so as to complete the work of extracting specific features from the image. The size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image. During the convolution operation, the weight matrix will be extended to Enter the entire depth of the image. Therefore, convolution with a single weight matrix will result in a single depth dimension of the convolutional output, but in most cases a single weight matrix is not used, but multiple weight matrices of the same size (row × column) are applied, That is, multiple isotype matrices. The output of each weight matrix is stacked to form the depth dimension of the convolutional image, where the dimension can be understood as determined by the "multiple" described above. Different weight matrices can be used to extract different features in the image. For example, one weight matrix is used to extract image edge information, another weight matrix is used to extract specific colors of the image, and another weight matrix is used to extract unwanted noise in the image. Blur, etc. The multiple weight matrices have the same size (row×column), and the size of the feature maps extracted from the multiple weight matrices with the same size is also the same, and then the multiple extracted feature maps with the same size are combined to form a convolution operation. output. The weight values in these weight matrices need to be obtained through a lot of training in practical applications, and each weight matrix formed by the weight values obtained by training can be used to extract information from the input image, so that the convolutional neural network can make correct predictions. 
When the convolutional neural network has multiple convolutional layers, the initial convolutional layer often extracts more general features, which can also be called low-level features; as the depth of the convolutional neural network deepens, The features extracted by the later convolutional layers are more and more complex, such as features such as high-level semantics, and the features with higher semantics are more suitable for the problem to be solved.
由于常常需要减少训练参数的数量,因此卷积层之后常常需要周期性的引入池化层,可以是一层卷积层后面跟一层池化层,也可以是多层卷积层后面接一层或多层池化层。在图像处理过程中,池化层的唯一目的就是减少图像的空间大小。池化层可以包括平均池化算子和/或最大池化算子,以用于对输入图像进行采样得到较小尺寸的图像。平均池化算子可以在特定范围内对图像中的像素值进行计算产生平均值作为平均池化的结果。最大池化算子可以在 特定范围内取该范围内值最大的像素作为最大池化的结果。另外,就像卷积层中用权重矩阵的大小应该与图像尺寸相关一样,池化层中的运算符也应该与图像的大小相关。通过池化层处理后输出的图像尺寸可以小于输入池化层的图像的尺寸,池化层输出的图像中每个像素点表示输入池化层的图像的对应子区域的平均值或最大值。Since it is often necessary to reduce the number of training parameters, it is often necessary to periodically introduce a pooling layer after the convolutional layer, which can be a convolutional layer followed by a pooling layer, or a multi-layer convolutional layer followed by a layer or multiple pooling layers. During image processing, the only purpose of pooling layers is to reduce the spatial size of the image. The pooling layer may include an average pooling operator and/or a max pooling operator for sampling the input image to obtain a smaller size image. The average pooling operator can calculate the pixel values in the image within a certain range to produce an average value as the result of average pooling. The max pooling operator can take the pixel with the largest value in the range as the result of max pooling. Also, just as the size of the weight matrix used in the convolutional layer should be related to the size of the image, the operators in the pooling layer should also be related to the size of the image. The size of the output image after processing by the pooling layer can be smaller than the size of the image input to the pooling layer, and each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.
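As a hedged illustration of the convolution and pooling operations just described, the PyTorch sketch below applies four 3×3 kernels with stride 1 to a single-channel input and then halves the spatial size with 2×2 max pooling; all sizes and channel counts are example values, not taken from the text.

```python
import torch
import torch.nn as nn

# A single-channel 8x8 input "image" (batch size 1).
x = torch.randn(1, 1, 8, 8)

# Convolution: 4 kernels (weight matrices) of size 3x3, moving one pixel at a
# time (stride=1); the outputs of the 4 kernels are stacked along the depth
# dimension, giving a 4-channel feature map of the same spatial size.
conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3, stride=1, padding=1)

# Pooling: 2x2 max pooling halves the spatial size of the feature map.
pool = nn.MaxPool2d(kernel_size=2)

feat = pool(torch.relu(conv(x)))
print(feat.shape)  # torch.Size([1, 4, 4, 4])
```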
在经过卷积层/池化层的处理后,卷积神经网络还不足以输出所需要的输出信息。因为如前所述,卷积层/池化层只会提取特征,并减少输入图像带来的参数。然而为了生成最终的输出信息(所需要的类信息或其他相关信息),卷积神经网络需要利用神经网络层来生成一个或者一组所需要的类的数量的输出。因此,在神经网络层中可以包括多层隐含层,该多层隐含层中所包含的参数可以根据具体的任务类型的相关训练数据进行预先训练得到,例如该任务类型可以包括图像识别,图像分类,图像超分辨率重建等等。After processing by the convolutional layer/pooling layer, the convolutional neural network is not enough to output the required output information. Because as mentioned before, convolutional/pooling layers will only extract features and reduce the parameters brought by the input image. However, in order to generate the final output information (required class information or other relevant information), the convolutional neural network needs to utilize neural network layers to generate one or a set of outputs of the required number of classes. Therefore, the neural network layer may include multiple hidden layers, and the parameters contained in the multiple hidden layers may be obtained by pre-training according to the relevant training data of a specific task type. For example, the task type may include image recognition, Image classification, image super-resolution reconstruction, and more.
可选的,在神经网络层中的多层隐含层之后,还包括整个卷积神经网络的输出层,该输出层具有类似分类交叉熵的损失函数,具体用于计算预测误差,一旦整个卷积神经网络的前向传播完成,反向传播就会开始更新前面提到的各层的权重值以及偏差,以减少卷积神经网络的损失,及卷积神经网络通过输出层输出的结果和理想结果之间的误差。Optionally, after the multi-layer hidden layers in the neural network layer, it also includes the output layer of the entire convolutional neural network, which has a loss function similar to categorical cross-entropy, specifically for calculating the prediction error, once the entire volume The forward propagation of the convolutional neural network is completed, and the backpropagation will start to update the weight values and biases of the aforementioned layers to reduce the loss of the convolutional neural network, and the result and ideal output of the convolutional neural network through the output layer. error between results.
(4)损失函数(4) Loss function
在训练深度神经网络的过程中,因为希望深度神经网络的输出尽可能的接近真正想要预测的值,所以可以通过比较当前网络的预测值和真正想要的目标值,再根据两者之间的差异情况来更新每一层神经网络的权重向量(当然,在第一次更新之前通常会有初始化的过程,即为深度神经网络中的各层预先配置参数),比如,如果网络的预测值高了,就调整权重向量让它预测低一些,不断的调整,直到深度神经网络能够预测出真正想要的目标值或与真正想要的目标值非常接近的值。因此,就需要预先定义“如何比较预测值和目标值之间的差异”,这便是损失函数(loss function)或目标函数(objective function),它们是用于衡量预测值和目标值的差异的重要方程。其中,以损失函数举例,损失函数的输出值(loss)越高表示差异越大,那么深度神经网络的训练就变成了尽可能缩小这个loss的过程。In the process of training a deep neural network, because it is hoped that the output of the deep neural network is as close as possible to the value you really want to predict, you can compare the predicted value of the current network with the target value you really want, and then based on the difference between the two to update the weight vector of each layer of neural network (of course, there is usually an initialization process before the first update, that is, to pre-configure parameters for each layer in the deep neural network), for example, if the predicted value of the network If it is high, adjust the weight vector to make its prediction lower, and keep adjusting until the deep neural network can predict the real desired target value or a value that is very close to the real desired target value. Therefore, it is necessary to pre-define "how to compare the difference between the predicted value and the target value", which is the loss function (loss function) or objective function (objective function), which are used to measure the difference between the predicted value and the target value. important equation. Among them, taking the loss function as an example, the higher the output value of the loss function (loss), the greater the difference, then the training of the deep neural network becomes the process of reducing the loss as much as possible.
下面将结合图6详细地描述用于帧内预测的目标模型(亦称为神经网络)。图6示意目标模型(例如用于预测的神经网络,简称预测网络)的示例架构600。以第一数据块或第二数据块作为神经网络的输入,神经网络使用卷积层处理输入数据,使用Softmax层输出当前图像块的概率向量。The target model (also referred to as a neural network) for intra prediction will be described in detail below with reference to FIG. 6 . FIG. 6 illustrates an example architecture 600 of a target model, such as a neural network for prediction, or prediction network for short. Taking the first data block or the second data block as the input of the neural network, the neural network uses the convolution layer to process the input data, and uses the Softmax layer to output the probability vector of the current image block.
该神经网络模型的主体结构为卷积神经网络中的深度残差网络(deep residual network,ResNet),ResNet由9个残差块(residual block)组成;每个残差块有两个基本层,每个基本层包含一个卷积层(convolution layer)、一个非线性激活函数(rectified linear unit,ReLU)层和一个批标准化(batch normalization,BN)层。此外,在第一个残差块和数据输入之间还存在一个卷积层,该卷积层可以加快神经网络模型训练的收敛速度,减轻梯度消失或梯度爆炸现象对训练的影响。在最后一个残差块之后,还依次连接有一个卷积层和一个自适应平均池化(adaptive average pooling)层,来将特征图转化为多维向量,最后通过Softmax层输出67种帧内预测模式的概率分布,此概率分布为一个多维向量,其维度为图9所示的帧内预测模式数目之和,即67。The main structure of the neural network model is the deep residual network (ResNet) in the convolutional neural network. ResNet consists of 9 residual blocks (residual blocks); each residual block has two basic layers, Each base layer consists of a convolution layer (convolution layer), a non-linear activation function (rectified linear unit, ReLU) layer, and a batch normalization (BN) layer. In addition, there is a convolutional layer between the first residual block and the data input, which can speed up the convergence speed of neural network model training and alleviate the influence of gradient disappearance or gradient explosion phenomenon on training. After the last residual block, a convolutional layer and an adaptive average pooling layer are connected in turn to convert the feature map into a multi-dimensional vector, and finally 67 intra-frame prediction modes are output through the Softmax layer. The probability distribution of , which is a multi-dimensional vector whose dimension is the sum of the number of intra-frame prediction modes shown in FIG. 9 , that is, 67.
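A minimal sketch of a prediction network along the lines described above is given below, assuming 64 feature channels, 3×3 convolutions and a 3-channel 16×16 input (none of these hyperparameters are specified in the text): an initial convolution, nine residual blocks each built from two basic layers (convolution, ReLU, batch normalization), a final convolution, adaptive average pooling, and a Softmax over the 67 intra prediction modes.

```python
import torch
import torch.nn as nn

class BasicLayer(nn.Sequential):
    # One "basic layer": convolution + ReLU + batch normalization.
    def __init__(self, ch):
        super().__init__(nn.Conv2d(ch, ch, 3, padding=1),
                         nn.ReLU(inplace=True),
                         nn.BatchNorm2d(ch))

class ResidualBlock(nn.Module):
    # Each residual block contains two basic layers plus a skip connection.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(BasicLayer(ch), BasicLayer(ch))
    def forward(self, x):
        return x + self.body(x)

class PredictionNetwork(nn.Module):
    def __init__(self, in_channels=3, ch=64, num_modes=67):
        super().__init__()
        self.head = nn.Conv2d(in_channels, ch, 3, padding=1)   # conv before the first residual block
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(9)])
        self.tail = nn.Conv2d(ch, num_modes, 3, padding=1)     # conv after the last residual block
        self.pool = nn.AdaptiveAvgPool2d(1)                    # feature map -> vector
    def forward(self, x):
        y = self.tail(self.blocks(self.head(x)))
        y = self.pool(y).flatten(1)
        return torch.softmax(y, dim=1)                         # 67-dimensional probability vector

probs = PredictionNetwork()(torch.randn(1, 3, 16, 16))
print(probs.shape, float(probs.sum()))  # torch.Size([1, 67]), ~1.0
```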
应当理解,本申请实施例中的使用的神经网络模型只是给出的一种具体的实例,本申请实施例对此并不限定。本领域的技术人员也可通过其它的架构来构建神经网络模型,从而达到与本申请实施例中的神经网络模型相同的技术效果,例如:由包括AlexNet、ZFNet、VGGNet、 GoogleNet等卷积神经网络架构或非卷积神经网络架构中的一种或多种所构成的神经网络模型。此外,本领域的技术人员还可以通过改变本申请实施例中神经网络模型的部分结构来达到与本申请实施例相同的技术效果,例如:改变本申请中神经网络模型的残差块数量等。It should be understood that the neural network model used in the embodiments of the present application is only a specific example, which is not limited in the embodiments of the present application. Those skilled in the art can also construct a neural network model through other architectures, so as to achieve the same technical effect as the neural network model in the embodiments of the present application, for example: a convolutional neural network including AlexNet, ZFNet, VGGNet, GoogleNet, etc. A neural network model consisting of one or more of the architectures or non-convolutional neural network architectures. In addition, those skilled in the art can also achieve the same technical effect as the embodiment of the present application by changing part of the structure of the neural network model in the embodiment of the present application, for example, changing the number of residual blocks of the neural network model in the present application.
在设计好用来计算概率向量的初始神经网络模型结构之后,开始对神经网络模型进行训练,本申请实施例中的神经网络模型进行训练过程如下:After designing the initial neural network model structure for calculating the probability vector, start training the neural network model. The training process of the neural network model in the embodiment of the present application is as follows:
本申请实施例在图像处理器上对初始神经网络模型进行训练。神经网络模型的训练数据库包括训练集、验证集和测试集,训练集、验证集和测试集分别包括一系列的图像集合。下面将输入神经网络模型中的第一数据块或第二数据块称为目标数据块,针对每一种尺寸的目标数据块,分别从训练集、验证集和测试集中选出与该目标数据块尺寸相同的图像作为该目标数据块的训练集、验证集和测试集,对该目标数据块的训练集中的每个图像进行压缩,提取出神经网络模型的输入数据和目标数据,输入数据为每个图像压缩后得到的数据块,目标数据为每个图像的目标帧内预测模式对应的第一标识。在每次训练的过程中,将压缩后的数据块输入神经网络模型,得到与该数据块对应的一个67维概率向量,采用公式(1-2)所示的交叉熵损失函数来衡量在该次训练过程中,神经网络模型的输出和预期之间的误差,从而判断是否结束神经网络模型的训练。In this embodiment of the present application, the initial neural network model is trained on the image processor. The training database of the neural network model includes a training set, a validation set and a test set, and the training set, the validation set and the test set respectively include a series of image sets. In the following, the first data block or the second data block in the input neural network model is called the target data block. For each size of the target data block, select the target data block from the training set, validation set and test set respectively. Images with the same size are used as the training set, validation set and test set of the target data block, each image in the training set of the target data block is compressed, and the input data and target data of the neural network model are extracted. Each image is a data block obtained after compression, and the target data is the first identifier corresponding to the target intra prediction mode of each image. In the process of each training, the compressed data block is input into the neural network model, and a 67-dimensional probability vector corresponding to the data block is obtained, and the cross entropy loss function shown in formula (1-2) is used to measure the In the second training process, the error between the output of the neural network model and the expected, so as to judge whether to end the training of the neural network model.
$\mathrm{Loss}=-\sum_{i=0}^{M-1}T(i)\log\left(P_{i}\right)\qquad(1\text{-}2)$
具体地,在采用公式(1-2)所示的交叉熵损失函数计算每次训练后的交叉熵损失函数值时,M表示样本的类别数目,这里M为图像的帧内预测模式总数,即M=67;i表示每次训练所使用的图像的目标数据,即该图像的目标帧内预测模式对应的第一标识;T(i)为指示函数,当指示的类别与样本类别相同时为1,否则其值为0,例如,在当前图像的目标数据为30的情况下,当i取30时,T(i)值为1;当i取其余值(0-29或31-66中任意值)时,T(i)值为0;P i表示Softmax层输出的概率分布中第i维的取值。应当理解,P i值越大,即采用目标帧内预测模式来进行帧内预测时的概率越大时,损失函数值将会越小。 Specifically, when the cross-entropy loss function shown in formula (1-2) is used to calculate the value of the cross-entropy loss function after each training, M represents the number of categories of samples, where M is the total number of intra-frame prediction modes of the image, that is, M=67; i represents the target data of the image used in each training, that is, the first identifier corresponding to the target intra prediction mode of the image; T(i) is an indicator function, and when the indicated category is the same as the sample category, it is 1, otherwise its value is 0. For example, when the target data of the current image is 30, when i is 30, the value of T(i) is 1; when i is the rest of the values (0-29 or 31-66) Any value), the value of T(i) is 0; P i represents the value of the i-th dimension in the probability distribution output by the Softmax layer. It should be understood that the larger the value of Pi , that is, the larger the probability of using the target intra prediction mode to perform intra prediction, the smaller the loss function value will be.
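A small sketch of formula (1-2) as described above (illustrative only; the random Softmax output and the helper name are assumptions): since T(i) is 1 only for the target mode, the sum reduces to the negative log of the probability assigned to that mode.

```python
import numpy as np

def cross_entropy_loss(probs, target_mode, num_modes=67):
    """Loss of formula (1-2): -sum_i T(i) * log(P_i), where T(i) equals 1 only
    for the target intra prediction mode, so the sum reduces to -log(P_target)."""
    t = np.zeros(num_modes)
    t[target_mode] = 1.0                      # indicator function T(i)
    return float(-np.sum(t * np.log(probs)))

# Example: a Softmax output over the 67 modes and a target mode of 30.
logits = np.random.default_rng(0).standard_normal(67)
p = np.exp(logits) / np.exp(logits).sum()     # P_i, the Softmax probabilities
print(cross_entropy_loss(p, target_mode=30))
```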
在结束每次训练及每次的交叉熵损失函数值的计算后,判断每次交叉熵损失函数值与预设损失值之间的关系,当每次交叉熵损失函数值大于预设损失值时,根据每次的交叉熵损失函数值来调整神经网络模型中各参数的权重,利用调整后的神经网络模型进行下一次训练,直到交叉熵损失函数值小于或等于预设损失值时,结束神经网络模型的训练。然后,利用该目标数据块的验证集和测试集对神经网络模型分别进行验证和测试;在经过验证和测试后,便可确定与该目标数据块对应的神经网络模型的参数,也即得到与该目标数据块对应的神经网络模型。针对每种尺寸的目标数据块分别进行上述操作,即可得到与不同尺寸的目标数据块分别对应的不同参数配置的神经网络模型。After each training and the calculation of each cross-entropy loss function value, determine the relationship between each cross-entropy loss function value and the preset loss value, when each cross-entropy loss function value is greater than the preset loss value , adjust the weight of each parameter in the neural network model according to the value of each cross-entropy loss function, and use the adjusted neural network model for the next training, until the cross-entropy loss function value is less than or equal to the preset loss value, end the neural network Training of the network model. Then, use the verification set and test set of the target data block to verify and test the neural network model respectively; after verification and testing, the parameters of the neural network model corresponding to the target data block can be determined, that is, the parameters corresponding to the target data block can be obtained. The neural network model corresponding to the target data block. The above operations are respectively performed on target data blocks of each size, and then neural network models with different parameter configurations corresponding to target data blocks of different sizes can be obtained.
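The stopping rule described above can be sketched as follows. This is only an assumed outline, not the patent's training procedure: the optimizer, learning rate and data loader are placeholders, and the model is assumed to output raw scores because nn.CrossEntropyLoss applies Softmax internally.

```python
import torch
import torch.nn as nn

def train_until_threshold(model, loader, preset_loss=0.1, lr=1e-3, max_steps=100_000):
    """Adjust the model weights while the cross-entropy loss stays above the
    preset loss value; stop once the loss is <= preset_loss (or after max_steps).
    `loader` is assumed to yield (data_block, target_mode_index) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()           # applies Softmax internally
    step = 0
    while step < max_steps:
        for block, target_mode in loader:
            loss = criterion(model(block), target_mode)
            if loss.item() <= preset_loss:      # stopping rule from the text
                return model
            opt.zero_grad()
            loss.backward()                     # adjust weights based on this loss value
            opt.step()
            step += 1
            if step >= max_steps:
                break
    return model
```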
应当理解,本申请实施例可根据不同的应用场景采用不同的训练数据库,本申请实施例对此不做具体限定。此外,本申请实施例中对神经网络模型的训练方法只是给出的一个具体实例,本申请对此并不做具体限定。除上述方法外,还可以采用其它方法对神经网络模型进行训练,例如,首先采用包含特定尺寸图像的训练集、验证集和测试集分别对初始神经网络模型进行训练、验证和测试,确定一个中间神经网络模型,然后再采用上述实施例中所描述的方法,针对不同尺寸的目标数据块,分别对中间神经网络模型进行训练、验证和测试,得到与不同尺寸的目标数据块分别对应的不同参数配置的神经网络模型,其中,上述特定尺寸可以是根据实际场景所选择的任一尺寸,此处不做具体限定。It should be understood that the embodiments of the present application may adopt different training databases according to different application scenarios, which are not specifically limited in the embodiments of the present application. In addition, the training method for the neural network model in the embodiment of the present application is only a specific example, which is not specifically limited in the present application. In addition to the above methods, other methods can also be used to train the neural network model. For example, firstly, the initial neural network model is trained, verified and tested by using the training set, the validation set and the test set containing images of a certain size, and an intermediate model is determined. Neural network model, and then using the method described in the above embodiment, for target data blocks of different sizes, the intermediate neural network model is trained, verified and tested respectively, and different parameters corresponding to target data blocks of different sizes are obtained. The configured neural network model, wherein the above-mentioned specific size may be any size selected according to the actual scene, which is not specifically limited here.
应当理解,不同尺寸的目标数据块对应不同参数配置的神经网络模型,经过上述尺寸变换操作后,第二数据块的尺寸种类少于第一数据块的尺寸种类,因而可以节省用于存储与目标数 据块对应的神经网络模型的存储空间。It should be understood that target data blocks of different sizes correspond to neural network models with different parameter configurations. After the above-mentioned size transformation operation, the size of the second data block is less than the size of the first data block. The storage space of the neural network model corresponding to the data block.
下文将详细的阐述利用本申请所提出的方法对当前图像块的帧内预测模式进行编解码的过程。The process of encoding and decoding the intra prediction mode of the current image block by using the method proposed in this application will be described in detail below.
图7是本申请实施例中一种用于对当前图像块的目标帧内预测模式进行编码的方法700的流程图。方法700可由视频编码器20执行,具体地,可以由视频编码器20的帧内预测单元254和熵编码单元270来执行。方法700描述为一系列的步骤或者操作,应当理解的是,方法700的部分步骤没有执行顺序的限制,如步骤S701和步骤S702。图7所示的方法700包括步骤S701、S702、S703和S704,下面将对这些步骤进行详细地描述。FIG. 7 is a flowchart of a method 700 for encoding a target intra prediction mode of a current image block in an embodiment of the present application. The method 700 may be performed by the video encoder 20 , and in particular, may be performed by the intra prediction unit 254 and the entropy encoding unit 270 of the video encoder 20 . The method 700 is described as a series of steps or operations, and it should be understood that the execution order of some steps of the method 700 is not limited, such as steps S701 and S702. The method 700 shown in FIG. 7 includes steps S701, S702, S703 and S704, which will be described in detail below.
步骤S701,获取当前图像块的周围图像块的重建块、预测块和残差块中的至少两个。Step S701, acquiring at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block.
具体地,获取与当前图像块相邻的左侧、左上和上侧的三个周围图像块,三个周围图像块的尺寸与当前图像块相同(如图10所示);三个周围图像块中每个周围图像块包含一个重建块、残差块和预测块,选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块中的至少两个。在一种可行的实施方式中,可以选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块来进行后续的拼接或级联操作。Specifically, three surrounding image blocks on the left, upper left and upper sides adjacent to the current image block are obtained, and the size of the three surrounding image blocks is the same as that of the current image block (as shown in FIG. 10 ); Each surrounding image block includes a reconstruction block, a residual block and a prediction block, and at least two of the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block are selected. In a feasible implementation manner, the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block may be selected for subsequent splicing or concatenation operations.
步骤S702,根据周围图像块的重建块、预测块和残差块中的至少两个和预测模式概率模型得到当前图像块的概率向量,概率向量中的多个元素与多个帧内预测模式一一对应,概率向量中的任一元素用于表征对当前图像块进行预测时采用该任一元素对应的帧内预测模式的概率。Step S702, obtain a probability vector of the current image block according to at least two of the reconstruction block, prediction block and residual block of the surrounding image blocks and the prediction mode probability model, and multiple elements in the probability vector are one with multiple intra prediction modes. One-to-one correspondence, any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
在一种可行的实施方式中,在当前图像块的目标帧内预测模式为非矩阵加权帧内预测MIP模式的情况下,根据周围图像块的重建块、预测块和残差块中的至少两个、当前图像块和预测模式概率模型得到所述当前图像块的概率向量。In a feasible implementation manner, when the target intra-frame prediction mode of the current image block is the non-matrix weighted intra-frame prediction MIP mode, according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image block , the current image block and the prediction mode probability model to obtain the probability vector of the current image block.
上述根据周围图像块的重建块、预测块和残差块中的至少两个和预测模式概率模型得到当前图像块的概率向量,具体包括:通过对周围图像块的重建块、预测块和残差块中的至少两个进行拼接或者级联,得到第一数据块;根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量。应当注意,通过拼接和级联两种方式时得到的第一数据块的尺寸不同。The above-mentioned obtaining the probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, specifically includes: At least two of the blocks are spliced or concatenated to obtain a first data block; and a probability vector of the current image block is obtained according to the first data block and the prediction mode probability model. It should be noted that the sizes of the first data blocks obtained by the two methods of splicing and concatenation are different.
其中,通过对周围图像块的重建块、预测块和残差块中的至少两个进行拼接或者级联,得到第一数据块的过程可以根据当前图像块在当前帧内的位置关系分为四种情况:即在当前图像块分别位于当前帧的左上侧边缘、左侧边缘、上侧边缘和非边缘这四种不同的位置时(对应图11-1到图11-4中的四种位置关系),选取当前图像块的三个周围图像块的方式不同,在本申请的实施例中,以当前图像块的宽和高分别为M和N来进行详细描述,M和N为正整数。Wherein, by splicing or concatenating at least two of the reconstruction block, prediction block and residual block of the surrounding image blocks, the process of obtaining the first data block can be divided into four parts according to the positional relationship of the current image block in the current frame This is the case: that is, when the current image block is located in four different positions of the upper left edge, left edge, upper edge and non-edge of the current frame (corresponding to the four positions in Figure 11-1 to Figure 11-4) relationship), the three surrounding image blocks of the current image block are selected in different ways. In the embodiments of the present application, the width and height of the current image block are respectively M and N for detailed description, and M and N are positive integers.
第一种位置关系，如图11-1所示，在当前图像块位于当前帧的非边缘位置时，即当前图像块与当前帧的左侧边缘和上侧边缘的距离分别大于或等于M、N时，可以直接从当前帧中选取与当前图像块大小相等且相邻的左侧、左上和上侧三个周围图像块，此时三个周围图像块已进行帧内预测，三个周围图像块中的每个图像块都包含一个重建块、残差块和预测块；对于当前图像块而言，当前图像块的残差块、重建块和预测块中无数据，向当前图像块的残差块、重建块和预测块中填充默认值或0，其中，默认值为2^(n-1)，n为视频编码器100内部的像素比特深度。In the first positional relationship, as shown in Figure 11-1, the current image block is located at a non-edge position of the current frame, that is, the distances between the current image block and the left edge and the upper edge of the current frame are greater than or equal to M and N, respectively. In this case, the three surrounding image blocks on the left, upper left and upper side that are equal in size to the current image block and adjacent to it can be selected directly from the current frame; at this point the three surrounding image blocks have already been intra predicted, and each of them contains a reconstruction block, a residual block and a prediction block. For the current image block, there is no data in its residual block, reconstruction block and prediction block, so the residual block, reconstruction block and prediction block of the current image block are filled with a default value or 0, where the default value is 2^(n-1) and n is the pixel bit depth inside the video encoder 100.
可选地,在第一种位置关系中,当选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个三通道且尺寸为2M×2N的数据块,该第一数据块的三个通道中分别填入的是当前图像块和周围图像块的重建块、残差块和预测块;当选取三个周围图像块和当前图像块中每个图像块的重建块和预测块,或者重建块和残差块,或者预测块和残差块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个二通道且尺寸为2M×2N的数据块,该第一数据块的两个通道中分别填入的是当前图像块和周围图像块的重建块和预测块。Optionally, in the first positional relationship, when the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block are selected, and the three surrounding coded blocks and the current coded block are selected. When splicing, the obtained first data block is a three-channel data block with a size of 2M×2N, and the three channels of the first data block are filled with the reconstruction blocks of the current image block and the surrounding image blocks, respectively. Residual block and prediction block; when selecting the reconstruction block and prediction block of each image block in the three surrounding image blocks and the current image block, or the reconstruction block and the residual block, or the prediction block and the residual block, and for the three When the surrounding coding block and the current coding block are spliced, the obtained first data block is a two-channel data block with a size of 2M×2N, and the two channels of the first data block are filled with the current image block and the current image block respectively. Reconstructed and predicted blocks of surrounding image blocks.
可选地,在第一种位置关系中,若当前图像块的残差块、重建块和预测块中填充的数据都为0,选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行级联时,得到的第一数据块为一个九通道且尺寸为M×N的数据块;当选取三个周围图像块和当前图像块中每个图像块的重建块和预测块,并对三个周围编码块和当前编码块进行级联时,得到的第一数据块为一个四通道且尺寸为M×N的数据块;应当理解,当周围图像块和当前图像块中的每个图像块中选取用来级联的数据块不同,且当前图像块的每个数据块中填充的数据不同时,级联所得到的第一数据块的通道数不同,除上述所列举的两种情况外,其余情况下,对周围图像块和当前图像块进行级联得到的第一数据块的通道数很容易推导得出,此处不再赘述。Optionally, in the first positional relationship, if the data filled in the residual block, the reconstruction block and the prediction block of the current image block are all 0, select the three surrounding image blocks and the data of each image block in the current image block. When the reconstruction block, the residual block and the prediction block are concatenated, and the three surrounding coding blocks and the current coding block are concatenated, the obtained first data block is a data block with nine channels and a size of M×N; when selecting three The reconstruction block and prediction block of each image block in the surrounding image block and the current image block, and when the three surrounding coding blocks and the current coding block are concatenated, the first data block obtained is a four-channel and the size is M× N data blocks; it should be understood that when the data blocks selected for cascading in each image block in the surrounding image block and the current image block are different, and the data filled in each data block of the current image block is different, the level The number of channels of the first data block obtained by concatenating is different. Except for the two cases listed above, in other cases, the number of channels of the first data block obtained by concatenating the surrounding image blocks and the current image block is easy to derive. obtained, and will not be repeated here.
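For the first positional relationship, the difference between splicing and concatenation can be sketched as follows (illustrative only): the quadrant layout used for splicing (above-left, above, left, current), the 2^(n-1) padding value, and the choice to stack only the three neighbours' planes in the nine-channel concatenation case are assumptions made to match the sizes quoted above, not a normative construction.

```python
import numpy as np

M, N, n = 8, 8, 8                     # current block width M, height N, pixel bit depth n
default = 1 << (n - 1)                # assumed default padding value 2^(n-1)

def planes(fill):
    # Reconstruction / prediction / residual planes of one block, shape (N, M).
    return {c: np.full((N, M), fill, dtype=np.int32) for c in ("rec", "pred", "res")}

above_left, above, left = planes(default), planes(default), planes(default)
current = planes(0)                   # the current block carries no data yet

# Splicing: for each selected component, tile the four co-sized blocks into one
# 2N x 2M plane (assumed layout: above-left | above over left | current), giving
# a 3-channel 2M x 2N first data block when all three components are selected.
spliced = np.stack([
    np.block([[above_left[c], above[c]], [left[c], current[c]]])
    for c in ("rec", "pred", "res")
])
print(spliced.shape)       # (3, 16, 16)

# Concatenation: stack the selected M x N components along the channel axis
# instead of tiling them spatially; with the three components of the three
# neighbours this gives the nine-channel M x N case described above.
concatenated = np.stack([blk[c] for blk in (above_left, above, left)
                         for c in ("rec", "pred", "res")])
print(concatenated.shape)  # (9, 8, 8)
```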
第二种位置关系,如图11-2所示,在当前图像块位于当前帧的左上侧边缘时,即当前图像块与当前帧的左侧边缘和上侧边缘的距离分别小于M、N时,无法从当前帧中选取与当前图像块尺寸相同且相邻的左侧、左上和上侧三个周围图像块,此时,向当前图像块相邻的左侧、左上和上侧分别扩充出与当前图像块的尺寸相同的三个周围图像块;在此种位置关系中,三个周围图像块的每个图像块中的重建块和预测块中填入默认值2 n-1,每个图像块中的残差块填入0或2 n-1;当前图像块的重建块、残差块和预测块中可以填入0或2 n-1。 The second positional relationship, as shown in Figure 11-2, is when the current image block is located at the upper left edge of the current frame, that is, when the distance between the current image block and the left edge and upper edge of the current frame is smaller than M and N respectively , the three surrounding image blocks on the left, upper left and upper sides that are the same size as the current image block and adjacent to the current image block cannot be selected from the current frame. At this time, the adjacent left, upper left and upper sides of the current image block are respectively expanded out. Three surrounding image blocks with the same size as the current image block; in this positional relationship, the reconstruction block and prediction block in each image block of the three surrounding image blocks are filled with the default value 2 n -1, each The residual block in the image block is filled with 0 or 2 n -1; the reconstruction block, residual block and prediction block of the current image block can be filled with 0 or 2 n -1.
可选地,在第二种位置关系中,当向三个周围图像块的每个图像块中的重建块和预测块中填入默认值2 n-1,每个图像块中的残差块填入0,向当前图像块的重建块、残差块和预测块中都填入0;当选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个二通道且尺寸为2M×2N的数据块;当选取三个周围图像块和当前图像块中每个图像块的残差块和预测块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个一通道且尺寸为2M×2N的数据块;应当理解,当周围图像块和当前图像块中的每个图像块中选取用来拼接的数据块不同,且当前图像块和三个周围图像块的每个数据块中填充的数据不同时,拼接所得到的第一数据块的通道数不同,除上述所列举的两种情况外,其余情况下,对周围图像块和当前图像块进行拼接得到的第一数据块的通道数很容易推导得出,此处不再赘述。 Optionally, in the second positional relationship, when the reconstruction block and the prediction block in each of the three surrounding image blocks are filled with a default value of 2 n -1, the residual block in each image block is Fill in 0, and fill in 0 into the reconstruction block, residual block and prediction block of the current image block; when selecting the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block , and when splicing the three surrounding coding blocks and the current coding block, the obtained first data block is a two-channel data block with a size of 2M×2N; when selecting each of the three surrounding image blocks and the current image block When the residual block and prediction block of the image block are spliced together with the three surrounding coding blocks and the current coding block, the obtained first data block is a one-channel data block with a size of 2M × 2N; it should be understood that when the surrounding The image block and each image block in the current image block have different data blocks selected for splicing, and the current image block and the data filled in each of the three surrounding image blocks are different, the first image block obtained by splicing is different. The number of channels of the data blocks is different. Except for the two cases listed above, in other cases, the number of channels of the first data block obtained by splicing the surrounding image blocks and the current image block can be easily derived. Repeat.
可选地,在第二种位置关系中,当向三个周围图像块的每个图像块中的重建块和预测块中填入默认值2 n-1,每个图像块中的残差块填入0,向当前图像块的重建块、残差块和预测块中都填入0;当选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行级联时,得到的第一数据块为一个四通道且尺寸为M×N的数据块;应当理解,当周围图像块和当前图像块中的每个图像块中选取用来级联的数据块不同,且当前图像块和周围图像块中的每个数据块中填充的数据不同时,级联所得到的第一数据块的通道数不同,除上述所列举的情况外,其余情况下,对周围图像块和当前图像块进行级联得到的第一数据块的通道数很容易推导得出,此处不再赘述。 Optionally, in the second positional relationship, when the reconstruction block and the prediction block in each of the three surrounding image blocks are filled with a default value of 2 n -1, the residual block in each image block is Fill in 0, and fill in 0 into the reconstruction block, residual block and prediction block of the current image block; when selecting the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block , and when the three surrounding coding blocks and the current coding block are concatenated, the obtained first data block is a data block with four channels and a size of M×N; it should be understood that when the surrounding image blocks and the current image block are When the data blocks selected for concatenation in each image block are different, and the data filled in each data block in the current image block and the surrounding image blocks are different, the number of channels of the first data block obtained by concatenation is different. Except for the cases listed above, in other cases, the number of channels of the first data block obtained by concatenating the surrounding image blocks and the current image block can be easily derived, which will not be repeated here.
第三种位置关系,如图11-3所示,在当前图像块位于当前帧的左侧边缘,即当前图像块 与当前帧的左侧边缘的距离小于M,且当前图像块与当前帧的上侧边缘的距离大于或等于N时,此时,可以从当前帧中选取一个与当前图像块相邻且尺寸相同的上侧图像块,向当前图像块相邻的左侧、左上侧分别扩充出与当前图像块的尺寸大小相同的两个周围图像块;在此种位置关系中中,扩充出的左侧和左上侧图像块的每个图像块中的重建块和预测块中填入默认值2 n-1,每个图像块中的残差块填入0或2 n-1;当前图像块的重建块、残差块和预测块中可以填入0或2 n-1。 The third positional relationship, as shown in Figure 11-3, is when the current image block is located at the left edge of the current frame, that is, the distance between the current image block and the left edge of the current frame is less than M, and the distance between the current image block and the current frame is less than M. When the distance of the upper edge is greater than or equal to N, at this time, an upper image block adjacent to the current image block and with the same size can be selected from the current frame, and expanded to the left and upper left adjacent to the current image block respectively. two surrounding image blocks with the same size as the current image block; in this positional relationship, the reconstruction block and prediction block in each image block of the extended left and upper left image blocks are filled with default The value is 2 n -1, the residual block in each image block is filled with 0 or 2 n -1; the reconstruction block, residual block and prediction block of the current image block can be filled with 0 or 2 n -1.
可选地,在第三种位置关系中,当向左侧和左上侧图像块的每个图像块中的重建块和预测块中填入默认值2 n-1,每个图像块中的残差块填入0,向当前图像块的重建块、残差块和预测块中都填入0;当选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个二通道且尺寸为2M×2N的数据块;当选取三个周围图像块和当前图像块中每个图像块的残差块和预测块,并对三个周围编码块和当前编码块进行拼接时,得到的第一数据块为一个一通道且尺寸为2M×2N的数据块;应当理解,当周围图像块和当前图像块中的每个图像块中选取用来拼接的数据块不同,且当前图像块和三个周围图像块的每个数据块中填充的数据不同时,拼接所得到的第一数据块的通道数不同,除上述所列举的两种情况外,其余情况下,对周围图像块和当前图像块进行拼接得到的第一数据块的通道数很容易推导得出,此处不再赘述。 Optionally, in the third positional relationship, when a default value of 2 n -1 is filled in the reconstruction block and the prediction block in each image block of the left and upper left image blocks, the residual value in each image block is The difference block is filled with 0, and 0 is filled in the reconstruction block, residual block and prediction block of the current image block; when three surrounding image blocks and the reconstruction block, residual block and When predicting the block and splicing the three surrounding coding blocks and the current coding block, the obtained first data block is a two-channel data block with a size of 2M×2N; when selecting the three surrounding image blocks and the current image block When splicing the residual block and prediction block of each image block, and splicing the three surrounding coding blocks and the current coding block, the obtained first data block is a one-channel data block with a size of 2M×2N; it should be understood that, When the data blocks selected for splicing are different between the surrounding image block and each image block in the current image block, and the data filled in each data block of the current image block and the three surrounding image blocks are different, the splicing result The number of channels of the first data block is different. Except for the two cases listed above, in other cases, the number of channels of the first data block obtained by splicing the surrounding image block and the current image block can be easily derived. Here No longer.
可选地,在第三种位置关系中,当左侧和左上侧图像块的每个图像块中的重建块和预测块和残差块中填入默认值2 n-1,向当前图像块的重建块、残差块中都填入0,预测块中填入2 n-1;当选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块,并对三个周围编码块和当前编码块进行级联时,得到的第一数据块为一个十通道且尺寸为M×N的数据块;应当理解,当周围图像块和当前图像块中的每个图像块中选取用来级联的数据块不同,且当前图像块和周围图像块中的每个数据块中填充的数据不同时,级联所得到的第一数据块的通道数不同,除上述所列举的情况外,其余情况下,对周围图像块和当前图像块进行级联得到的第一数据块的通道数很容易推导得出,此处不再赘述。 Optionally, in the third positional relationship, when the reconstruction block, prediction block and residual block in each image block of the left and upper left image blocks are filled with a default value of 2 n -1, the current image block is sent to the current image block. 0 is filled in the reconstructed block and residual block of , and 2 n -1 is filled in the prediction block; when three surrounding image blocks and the reconstruction block, residual block and prediction block of each image block in the current image block are selected, When the three surrounding coding blocks and the current coding block are concatenated, the obtained first data block is a data block with ten channels and a size of M×N; it should be understood that when each of the surrounding image blocks and the current image block is When the data blocks selected for concatenation are different from the image blocks, and the data filled in each data block in the current image block and the surrounding image blocks are different, the number of channels of the first data block obtained by concatenation is different, except In addition to the cases listed above, in other cases, the number of channels of the first data block obtained by concatenating the surrounding image blocks and the current image block can be easily derived, and details are not repeated here.
The fourth positional relationship, as shown in FIG. 11-4, applies when the distance between the current image block and the left edge of the current frame is greater than or equal to M and the distance between the current image block and the upper edge of the current frame is less than N (that is, a full-size left neighbouring block is available within the frame while full-size upper and upper-left neighbouring blocks are not). In this case, a left image block adjacent to the current image block and of the same size can be selected from the current frame, and two surrounding image blocks of the same size as the current image block are expanded on the adjacent upper side and upper-left side of the current image block. In this positional relationship, the reconstruction block and the prediction block of each of the expanded upper and upper-left image blocks are filled with the default value 2^n-1, the residual block of each of these image blocks can be filled with 0 or 2^n-1, and the reconstruction block, residual block and prediction block of the current image block can be filled with 0 or 2^n-1.
Optionally, in the fourth positional relationship, when the reconstruction block and the prediction block of each of the upper and upper-left image blocks are filled with the default value 2^n-1, the residual block of each of these image blocks is filled with 0, and the reconstruction block, residual block and prediction block of the current image block are all filled with 0: if the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and the three surrounding blocks and the current block are spliced, the resulting first data block is a two-channel data block of size 2M×2N; if the residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and the three surrounding blocks and the current block are spliced, the resulting first data block is a one-channel data block of size 2M×2N. It should be understood that when different data blocks are selected for splicing from the surrounding image blocks and the current image block, or when the data filled into the data blocks of the current image block and the three surrounding image blocks differs, the number of channels of the first data block obtained by splicing differs; apart from the two cases listed above, the number of channels of the first data block obtained by splicing the surrounding image blocks and the current image block can easily be derived for the remaining cases, and details are not repeated here.
Optionally, in the fourth positional relationship, when the reconstruction block, prediction block and residual block of each of the upper and upper-left image blocks are filled with the default value 2^n-1, the reconstruction block and residual block of the current image block are filled with 0, and the prediction block of the current image block is filled with 2^n-1: if the reconstruction block, residual block and prediction block of each of the three surrounding image blocks and the current image block are selected and the three surrounding blocks and the current block are concatenated, the resulting first data block is a ten-channel data block of size M×N. It should be understood that when different data blocks are selected for concatenation from the surrounding image blocks and the current image block, or when the data filled into the data blocks of the current image block and the surrounding image blocks differs, the number of channels of the first data block obtained by concatenation differs; apart from the case listed above, the number of channels of the first data block obtained by concatenating the surrounding image blocks and the current image block can easily be derived for the remaining cases, and details are not repeated here.
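As a rough illustration only (not part of the described embodiments), the sketch below contrasts the two combination operations used above: spatial splicing of the four M×N blocks into a single 2M×2N plane, and channel-wise concatenation of selected M×N blocks, with unavailable blocks padded with a default value such as 2^n-1 (n is assumed here to be the bit depth) or 0. Which components are selected, and therefore how many channels the first data block has, follows the cases described above; the helper names are assumptions.

```python
import numpy as np

def pad_block(m, n, value):
    """An unavailable M x N data block filled with a default value (e.g. 2**n - 1 or 0)."""
    return np.full((n, m), value, dtype=np.int32)   # rows = second side N, cols = first side M

def splice(top_left, top, left, current):
    """Spatial splicing of four M x N blocks into one 2M x 2N plane (one channel)."""
    return np.block([[top_left, top], [left, current]])

def concatenate(blocks):
    """Channel-wise concatenation: stack selected M x N blocks along a channel axis."""
    return np.stack(blocks, axis=0)

m, n, bitdepth = 8, 4, 8
default = 2 ** bitdepth - 1
plane = splice(pad_block(m, n, default), pad_block(m, n, default),
               pad_block(m, n, default), pad_block(m, n, 0))
print(plane.shape)                                   # (8, 16) -> 2N x 2M
print(concatenate([pad_block(m, n, 0)] * 10).shape)  # (10, 4, 8) -> ten channels of N x M
```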
In addition, the image block described in this application may be understood as, but is not limited to, a prediction unit (PU), a coding unit (CU), a transform unit (TU), or the like. Depending on the video compression coding standard, a CU may contain one or more PUs; for example, in HEVC one CU may contain multiple PUs, whereas in VVC one CU corresponds to one PU. Image blocks may have fixed or variable sizes, and their sizes differ between video compression coding standards. The current image block refers to the image block currently to be encoded or decoded, for example a prediction unit to be encoded or decoded.
In a feasible implementation, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model may include: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or, when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block whose size is equal to the target size, the size transformation operation including at least one of scaling and transposition, where scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions, and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
The above prediction mode probability model may be the neural network model shown in FIG. 6, a neural network model of another form, or a mathematical model, which is not specifically limited in this application.
表1:第一数据块的尺寸种类Table 1: Size categories of the first data block
Figure PCTCN2021128000-appb-000012
For current image blocks of different sizes, the first data blocks obtained by splicing or concatenating the current image block and its three surrounding coding blocks have different sizes. In the embodiment of this application, there are 17 size categories for the first data block (as shown in Table 1); a size transformation operation is performed on the first data block to obtain a second data block, and the number of size categories of the second data block is less than 17.
When the size of the first data block is not the target size, the process of performing a size transformation operation on the first data block to obtain the second data block can be divided into the following four ways (a decision sketch is given after this list):
(1) When the size of the first data block is not the target size, and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the first side length to the second side length of the target size, the first data block is proportionally scaled in both the horizontal and vertical directions to obtain the second data block;
(2) When the size of the first data block is not the target size, and the ratio of the first side length to the second side length of the first data block is equal to the ratio of the second side length to the first side length of the target size, the first data block is transposed and proportionally scaled in both the horizontal and vertical directions to obtain the second data block;
(3) When the size of the first data block is not the target size, and the first side length and the second side length of the first data block are respectively equal to the second side length and the first side length of the target size, the first data block is transposed to obtain the second data block;
(4) When the size of the first data block is not the target size, the first data block is scaled to obtain the second data block.
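A minimal sketch of how the applicable way among (1)-(4) might be selected from the side lengths alone is given below; the function name and the return convention are illustrative assumptions, not part of the described embodiment.

```python
def choose_transform(first_w, first_h, target_w, target_h):
    """Return which of the four ways (1)-(4) applies to reach the target size."""
    if (first_w, first_h) == (target_w, target_h):
        return None  # already the target size, no transformation needed
    if first_w * target_h == first_h * target_w:
        # Same aspect ratio as the target: way (1), proportional scaling only.
        return 1
    if (first_w, first_h) == (target_h, target_w):
        # Exactly the transposed target size: way (3), transpose only.
        return 3
    if first_w * target_w == first_h * target_h:
        # Aspect ratio equals the transposed target's: way (2), transpose plus scaling.
        return 2
    # Otherwise fall back to way (4): generic (possibly non-proportional) scaling.
    return 4

# Example from the text: a 16x8 first data block and a 32x16 target -> way (1).
print(choose_transform(16, 8, 32, 16))  # 1
```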
Optionally, there may be 7 target sizes, that is, one size is selected from Table 1 for each of the first-side to second-side ratios 1:1, 1:2, 2:1, 1:4, 4:1, 1:8 and 8:1. The size of the first data block is compared with the 7 target sizes, and according to the relationship between the size of the first data block and the 7 target sizes, any one of ways (1), (2) or (3) above can be used to obtain the second data block.
For example, taking a first data block whose first and second side lengths are 16×8 to describe in detail the process of obtaining the second data block in the above embodiment: if the target size whose first-side to second-side ratio is 2:1 has first and second side lengths of 32×16, the first data block is proportionally enlarged in both the horizontal and vertical directions to obtain a second data block with first and second side lengths of 32×16; if the target size whose first-side to second-side ratio is 2:1 has first and second side lengths of 8×4, the first data block is proportionally reduced in both the horizontal and vertical directions to obtain a second data block with first and second side lengths of 8×4. It should be understood that the specific scaling process mentioned elsewhere in the embodiments of this application is the same as described here.
Optionally, there may be 4 target sizes, that is, one size is selected from Table 1 whose first-side to second-side ratio is 1:1, one whose ratio is 1:2 or 2:1, one whose ratio is 1:4 or 4:1, and one whose ratio is 1:8 or 8:1, four sizes in total, as the target sizes. The size of the first data block is compared with the 4 target sizes, and according to the relationship between the size of the first data block and the 4 target sizes, any one of ways (1), (2) or (3) above can be used to obtain the second data block.
It should be noted that, in the embodiments of this application, when the first data block needs to be both transposed and scaled, there is no required order between the transpose and scaling operations; the second data block obtained has the same size regardless of the order in which the operations are performed.
It can be seen that, in the embodiment of this application, through the above two implementations, the number of size categories of the first data block can be reduced from 17 to the 7 or 4 size categories of the second data block, which effectively reduces the number of neural network models that subsequently correspond to second data blocks of different sizes, thereby saving the storage space used for storing the neural network models and improving coding performance.
Optionally, the target size may be one of the 17 sizes in Table 1, or a size other than the 17 sizes in Table 1. The size of the first data block is compared with this single target size, and according to the relationship between the size of the first data block and the target size, way (4) above can be used to obtain the second data block.
For example, taking a target size whose first and second side lengths are 4×8 to describe in detail the specific process implemented in the above embodiment: when the first and second side lengths of the first data block are 4×4, the second side of the first data block is enlarged in the vertical direction to obtain a second data block whose first and second side lengths are 4×8; when the first and second side lengths of the first data block are 16×4, the first side of the first data block is reduced in the horizontal direction and the second side is enlarged in the vertical direction to obtain a second data block whose first and second side lengths are 4×8.
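For illustration, the sketch below resizes a first data block to a single target size such as the 4×8 of the example above; nearest-neighbour sampling is an assumption made here for simplicity, since the embodiment does not prescribe a particular interpolation filter, and the helper name is illustrative.

```python
import numpy as np

def resize_to_target(block, target_w, target_h):
    """Nearest-neighbour resize; block is indexed as [row][col] = [second side][first side]."""
    src_h, src_w = block.shape
    rows = np.arange(target_h) * src_h // target_h   # vertical scaling of the second side
    cols = np.arange(target_w) * src_w // target_w   # horizontal scaling of the first side
    return block[np.ix_(rows, cols)]

# Examples mirroring the text (first side = width, second side = height):
# a 4x4 first data block is enlarged vertically to 4x8,
# a 16x4 first data block is reduced horizontally and enlarged vertically to 4x8.
print(resize_to_target(np.zeros((4, 4)), 4, 8).shape)    # (8, 4) -> first side 4, second side 8
print(resize_to_target(np.zeros((4, 16)), 4, 8).shape)   # (8, 4)
```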
It can be seen that, in the embodiment of this application, the number of size categories of the first data block can be reduced from 17 to 1, which reduces the size categories of the second data block to the greatest extent, so that subsequently only one neural network model corresponding to the size of the second data block is needed, which significantly reduces the storage space used for storing the neural network models and improves coding efficiency; at the same time, for a first data block with relatively simple texture and a relatively large size, after it is reduced to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving coding performance.
In a feasible implementation, obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model may include: when the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or,
when the size of the first data block is not the target size, performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block whose size is the target size, the size transformation operation including at least one of scaling and transposition, where scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
Specifically, the target size corresponding to the first data block is first determined according to the size relationship between the first side length and the second side length of the first data block, and the relationship between the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient and a preset threshold; when the size of the first data block is not the target size, a size transformation operation is performed on the first data block to obtain the second data block.
Optionally, the preset threshold may take the value 4*M*N, where the first and second side lengths of the current image block are M and N respectively and the first and second side lengths of the first data block are 2M and 2N respectively; the horizontal gradient and the vertical gradient of the current image block are calculated with the Sobel operator. It should be understood that the value of the preset threshold may be determined according to the actual situation, which is not limited in the embodiments of this application; in addition, the operator used to calculate the horizontal and vertical gradients is not limited to the Sobel operator and may be determined according to the actual situation; for example, the Scharr operator, the Laplacian operator or other operators with similar functions may also be selected.
The gradient of an image represents the amount of change of the image intensity. The horizontal gradient and the vertical gradient of the current image block respectively represent, within the current frame, the change of intensity of the current image block relative to the surrounding image blocks in the horizontal direction and in the vertical direction. An image can be understood as a two-dimensional discrete function f(x, y), and the gradient of the image f(x, y) at a point (x, y) is a vector with a magnitude and a direction. Let Gx and Gy respectively denote the gradients of the current image block in the horizontal and vertical directions; the gradient vector can be expressed as:
∇f(x, y) = (Gx, Gy)
The magnitude of this vector is:
|∇f(x, y)| = √(Gx² + Gy²)
The direction angle of this vector is:
θ(x, y) = arctan(Gy / Gx)
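The sketch below illustrates, under stated assumptions, how the Sobel gradients and the comparison with the preset threshold 4*M*N might be computed for a block; aggregating the per-sample absolute gradients over the whole block is an assumption made here, and the helper names are illustrative.

```python
import numpy as np

# 3x3 Sobel kernels for the horizontal (Gx) and vertical (Gy) gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.int64)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.int64)

def block_gradients(block):
    """Return (sum(|Gx|), sum(|Gy|)) over the interior samples of the block."""
    h, w = block.shape
    gx = np.zeros((h - 2, w - 2), dtype=np.int64)
    gy = np.zeros((h - 2, w - 2), dtype=np.int64)
    for i in range(h - 2):
        for j in range(w - 2):
            window = block[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * SOBEL_X)
            gy[i, j] = np.sum(window * SOBEL_Y)
    return np.abs(gx).sum(), np.abs(gy).sum()

def texture_is_simple(block, m, n):
    """Compare the gradient sum with the preset threshold 4*M*N from the text."""
    abs_gx, abs_gy = block_gradients(block.astype(np.int64))
    return abs_gx + abs_gy < 4 * m * n

# Example: a flat 16x8 current block (M=16, N=8) has zero gradient, so it counts as simple texture.
print(texture_is_simple(np.full((8, 16), 128), m=16, n=8))  # True
```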
When the size of the first data block is not the target size, the process of performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block can be divided into two cases (a selection sketch is given after case (2)):
(1) The size transformation operation includes scaling. Performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes: when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 4M/N and 4 respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 8M/N and 8 respectively; when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 4 and 4N/M respectively; when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 8 and 8N/M respectively; where M and N are positive integers.
(2) The size transformation operation includes at least one of scaling and transposition. Performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block includes:
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 4M/N and 4 respectively; when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scaling the first data block to obtain a second data block whose first and second side lengths are 8M/N and 8 respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block: when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transposing and scaling the first data block to obtain a second data block whose first and second side lengths are 4N/M and 4 respectively; when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transposing and scaling the first data block to obtain a second data block whose first and second side lengths are 8N/M and 8 respectively; where M and N are positive integers.
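A compact sketch of the target-size selection in case (2) is given below; the tuple-returning helper and its name are illustrative assumptions, not part of the described embodiment.

```python
def select_target_size(m, n, abs_grad_sum, threshold):
    """Return (transpose, target_first_side, target_second_side) for case (2).

    m, n are the first and second side lengths of the first data block;
    abs_grad_sum is sum(|Gx|) + sum(|Gy|) of the current image block.
    """
    base = 4 if abs_grad_sum < threshold else 8      # finer target when texture is rich
    if m >= n:
        # No transpose: the second side becomes `base`, the first side base*M/N.
        return False, base * m // n, base
    # m < n: transpose first, then scale, so the first side becomes base*N/M.
    return True, base * n // m, base

# Example: a 16x32 first data block (current block 8x16, threshold 4*8*16) with low gradient
# energy is transposed and scaled to first side 4*32/16 = 8, second side 4.
print(select_target_size(16, 32, abs_grad_sum=100, threshold=4 * 8 * 16))  # (True, 8, 4)
```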
It can be seen that, in the embodiment of this application, the process of determining the second data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, makes full use of the information of the current image itself to determine the size of the second data block and strengthens the correlation between the second data block and the current image block, thereby improving the accuracy of the probability vector subsequently obtained from the second data block and hence the accuracy of the data in the code stream; at the same time, by performing scaling and transposition operations, the number of size categories of the second data block can be reduced, which also reduces the number of neural network models corresponding to second data blocks of different sizes, thereby saving the storage space used for storing the neural network models and improving coding performance.
步骤S703,确定当前图像块的目标帧内预测模式。Step S703, determining the target intra prediction mode of the current image block.
Optionally, the target intra prediction mode for intra prediction of the current image block is searched out, by RDO, a neural network, or other possible means, from the 67 intra prediction modes of versatile video coding (VVC) (as shown in FIG. 9) and the MIP mode; the mode value (0-66) of the target intra prediction mode in FIG. 9 is its corresponding first identifier.
Referring to FIG. 9, the 67 intra prediction modes in VVC include the direct current (DC) prediction mode, the planar prediction mode and 65 angular prediction modes; the first identifiers of the DC and planar prediction modes are 1 and 0 respectively, and the first identifiers of the remaining 65 angular prediction modes are 2-66 from the lower-left corner to the upper-right corner.
步骤S704,根据目标帧内预测模式和概率向量,将目标帧内预测模式编入码流。Step S704, according to the target intra-frame prediction mode and the probability vector, encode the target intra-frame prediction mode into the code stream.
上述根据目标帧内预测模式和概率向量,将目标帧内预测模式编入码流可分为两种情况:According to the target intra-frame prediction mode and the probability vector, encoding the target intra-frame prediction mode into the code stream can be divided into two cases:
(1) When the size transformation operation does not include transposition, a first reference value is determined according to the first identifier of the target intra prediction mode and the probability vector, and the first reference value is encoded to obtain the code stream corresponding to the target intra prediction mode.
The mode value of the target intra prediction mode in FIG. 9 is its corresponding first identifier; the probability vector is a 67-dimensional vector, each element of the probability vector corresponds to a probability sub-interval of the interval (0, 1), and the elements of the probability vector correspond one-to-one to the 67 mode values in FIG. 9.
Determining the first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode, specifically means: determining, according to the first identifier of the target intra prediction mode, the element of the probability vector corresponding to that first identifier, and then arbitrarily selecting a value within the probability sub-interval corresponding to that element as the first reference value to be encoded into the code stream; this first reference value can be used to represent the target intra prediction mode of the current image block.
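The sketch below illustrates one possible "arbitrary" choice of the reference value, namely the midpoint of the sub-interval assigned to the mode; a toy 4-mode probability vector stands in for the 67-dimensional vector, and the helper names are assumptions.

```python
def mode_interval(prob_vector, mode_id):
    """Probability sub-interval of (0, 1) assigned to mode_id by the probability vector."""
    low = sum(prob_vector[:mode_id])
    return low, low + prob_vector[mode_id]

def reference_value(prob_vector, mode_id):
    """Pick one value inside the mode's sub-interval (here: the midpoint)."""
    low, high = mode_interval(prob_vector, mode_id)
    return (low + high) / 2.0

# Toy 4-mode example in place of the 67-dimensional vector.
probs = [0.5, 0.2, 0.2, 0.1]
print(mode_interval(probs, 2))     # ≈ (0.7, 0.9)
print(reference_value(probs, 2))   # ≈ 0.8 -> encoded into the bitstream
```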
(2) When the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, a second identifier is determined according to a preset constant and the first identifier of the target intra prediction mode, a second reference value is determined according to the second identifier and the probability vector, and the second reference value is encoded to obtain the code stream corresponding to the target intra prediction mode; when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, a first reference value is determined according to the first identifier of the target intra prediction mode and the probability vector, and the first reference value is encoded to obtain the code stream corresponding to the target intra prediction mode.
The preset constant is the number of the multiple intra prediction modes shown in FIG. 3, that is, 67; the angular prediction modes are the intra prediction modes with mode values 2-66, and the non-angular prediction modes are the DC mode and the planar mode.
Specifically, when the size transformation operation includes transposition and the target intra prediction mode is an angular prediction mode, the first identifier is subtracted from the preset constant to obtain the second identifier, the element corresponding to the second identifier is selected in the probability vector, and a value within the probability sub-interval corresponding to that element is arbitrarily selected as the second reference value to be encoded into the code stream; when the size transformation operation includes transposition and the target intra prediction mode is a non-angular prediction mode, the target intra prediction mode is encoded in the same manner as in case (1), and details are not repeated here.
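The remapping of the identifier for transposed angular modes can be sketched as follows; the constant and function names are illustrative assumptions.

```python
NUM_MODES = 67                 # preset constant: number of intra prediction modes
ANGULAR_RANGE = range(2, 67)   # mode values 2-66 are angular; 0 = planar, 1 = DC

def identifier_for_coding(first_id, transposed):
    """Return the identifier whose probability sub-interval is used for coding."""
    if transposed and first_id in ANGULAR_RANGE:
        return NUM_MODES - first_id     # second identifier for transposed angular modes
    return first_id                     # non-angular modes, or no transposition

print(identifier_for_coding(2, transposed=True))    # 65
print(identifier_for_coding(0, transposed=True))    # 0 (planar, unchanged)
```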
It can be seen that, in the embodiment shown in FIG. 7, at least two of the reconstruction blocks, prediction blocks and residual blocks of the three surrounding image blocks of the current image block are obtained; since the third one can be calculated from any two of the reconstruction block, prediction block and residual block of each surrounding image block, the embodiment of this application can make full use of the relevant information of the surrounding image blocks, so the probability vector of the current image block generated from at least two of the reconstruction blocks, prediction blocks and residual blocks of the surrounding image blocks is more accurate, which improves the accuracy of the data encoded into the code stream and improves coding performance. In addition, since the first data block may have many sizes, performing a size transformation operation on the first data block to obtain the second data block can effectively reduce the number of size categories of the second data block, thereby reducing the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving coding performance; at the same time, for a first data block with relatively simple texture and a relatively large size, after it is reduced by the size transformation operation to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving coding efficiency.
请参见图8,图8为本申请中另一种对当前图像块的目标帧内预测模式进行编码的方法800的流程图。如图8所示,方法800分别包括如下六个步骤:Please refer to FIG. 8 , which is a flowchart of another method 800 for encoding a target intra prediction mode of a current image block in the present application. As shown in Figure 8, the method 800 includes the following six steps:
S801,获取当前图像块的周围图像块。S801, acquiring surrounding image blocks of the current image block.
具体地,可以获取与当前图像块相邻的左侧、左上和上侧的三个周围图像块,三个周围图像块的尺寸与当前图像块相同(如图10所示),三个周围图像块中每个周围图像块包含一个重建块、残差块和预测块,可以选取三个周围图像块和当前图像块中每个图像块的重建块、残差块和预测块中的一个或多个来进行后续的拼接或级联操作。Specifically, three surrounding image blocks on the left, upper left and upper sides adjacent to the current image block can be obtained, the size of the three surrounding image blocks is the same as that of the current image block (as shown in FIG. 10 ), and the three surrounding image blocks are Each surrounding image block in the block contains a reconstruction block, a residual block and a prediction block, and one or more of the reconstruction block, residual block and prediction block of each image block in the three surrounding image blocks and the current image block can be selected. for subsequent splicing or cascading operations.
S802,对周围图像块进行拼接或者级联,得到第一数据块。S802, splicing or concatenating surrounding image blocks to obtain a first data block.
S803: Perform a size transformation operation on the first data block to obtain a second data block, the size transformation operation including at least one of scaling and transposition, where scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions.
S804: Input the second data block into the prediction mode probability model for processing to obtain a probability vector of the current image block, where the multiple elements of the probability vector correspond one-to-one to multiple intra prediction modes, and any element of the probability vector is used to represent the probability that the intra prediction mode corresponding to that element is used when predicting the current image block.
S805,确定当前图像块的目标帧内预测模式。S805: Determine the target intra prediction mode of the current image block.
S806,根据目标帧内预测模式和概率向量,将目标帧内预测模式编入码流。S806, according to the target intra-frame prediction mode and the probability vector, encode the target intra-frame prediction mode into the code stream.
具体地,步骤S802到S804分别与步骤S702相对应的过程相同,步骤S805和步骤S806分别与步骤S703和S704中对应的步骤相同,此处不再赘述。Specifically, steps S802 to S804 are respectively the same as the processes corresponding to step S702, and steps S805 and S806 are respectively the same as the corresponding steps in steps S703 and S704, which will not be repeated here.
It can be seen that, in the embodiment shown in FIG. 8, the first data block may have many sizes; performing a size transformation operation on the first data block to obtain the second data block can effectively reduce the number of size categories of the second data block, thereby reducing the number of neural network models corresponding to second data blocks of different sizes, saving the storage space of the neural network models and improving coding performance; at the same time, for a first data block with relatively simple texture and a relatively large size, after it is reduced by the size transformation operation to a second data block with a smaller size, the computational complexity of subsequently calculating the probability vector with the neural network model can be effectively reduced, thereby improving coding efficiency.
Referring to FIG. 12, FIG. 12 is a flowchart of an encoder encoding the syntax elements related to the intra prediction mode of the current image block in an embodiment of this application. As shown in FIG. 12, in VVC, for a current image block, the syntax elements related to its intra prediction mode include the MIP flag, the multi-reference-line (MRL) index, the identifiers respectively corresponding to the MIP mode and the ISP mode, the first identifier corresponding to the target intra prediction mode of the current image block, and the probability vector of the current image block. When the entropy encoder 103 encodes the syntax elements related to the intra prediction of the current image block, the following steps are performed: according to the description in step S704, the first reference value or the second reference value corresponding to the target intra prediction mode is selected from the probability vector; the MIP flag is encoded in the regular coding mode, and it is determined whether the MIP flag is a first preset value; if so, the MIP mode is encoded in the bypass coding mode; if the MIP flag is a second preset value, the MRL index and the identifier corresponding to the ISP mode are encoded in sequence in the regular coding mode, and the first reference value or the second reference value indicating that the current image block uses the target intra prediction mode for intra prediction is encoded in the bypass coding mode, where the first preset value and the second preset value are 1 and 0 respectively. It should be noted that the above first preset value and second preset value are only a specific example in the embodiments of this application, and this application does not specifically limit them.
The MIP flag, the MRL index, the ISP mode and other information have already been determined in the process of searching for the target intra prediction mode of the current image block. The MIP flag is used to indicate whether the current image block uses the MIP mode for intra prediction, the MRL index indicates the index of the reference line used when the current image block performs intra prediction, and the ISP mode indicates the sub-block partitioning manner of the current image block, where all sub-blocks of the current image block share one intra prediction mode, and when the current image block uses MRL, ISP is not performed.
It should be understood that the encoder 20 described in the embodiments of this application uses context-adaptive binary arithmetic coding (CABAC) to encode the syntax elements related to the intra prediction of the current image block; in this application, the syntax elements related to the intra prediction of the current image block may also be encoded in other ways, which is not specifically limited here.
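Purely as an illustration of the ordering described above, the sketch below walks through the flag-dependent encoding of the syntax elements; enc.encode_regular and enc.encode_bypass are hypothetical stand-ins for the entropy encoder's regular (context-coded) and bypass coding engines, and blk is a hypothetical container for the decisions made during the mode search.

```python
def encode_intra_mode_syntax(enc, blk):
    """Hypothetical ordering of the intra-mode syntax elements described above."""
    MIP_ON, MIP_OFF = 1, 0                      # first / second preset values from the text
    enc.encode_regular(blk.mip_flag)            # MIP flag is context coded
    if blk.mip_flag == MIP_ON:
        enc.encode_bypass(blk.mip_mode)         # MIP mode, bypass coded
    else:                                       # MIP flag equals the second preset value
        enc.encode_regular(blk.mrl_index)       # multi-reference-line index
        enc.encode_regular(blk.isp_mode)        # ISP sub-block partition identifier
        enc.encode_bypass(blk.reference_value)  # first or second reference value from the probability vector
```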
请参见图13,图13是本申请施例中一种用于对当前图像块的目标帧内预测模式进行解码的方法1300的流程图,图13所示的方法1300包括步骤S1301、S1302和S1303,下面将对这些步骤进行详细介绍。Please refer to FIG. 13 . FIG. 13 is a flowchart of a method 1300 for decoding a target intra prediction mode of a current image block in an embodiment of the present application. The method 1300 shown in FIG. 13 includes steps S1301 , S1302 and S1303 , these steps are described in detail below.
步骤S1301,获取当前图像块对应的码流,以及当前图像块的周围图像块的重建块、预测块和残差块中的至少两个。Step S1301: Acquire a code stream corresponding to the current image block and at least two of the reconstruction blocks, prediction blocks and residual blocks of surrounding image blocks of the current image block.
具体地,解码器30获取经熵编码器304编码的视频码流;获取当前图像块的周围图像块的重建块、预测块和残差块中的至少两个的具体过程与步骤S701相同,此处不再赘述。Specifically, the decoder 30 obtains the video code stream encoded by the entropy encoder 304; the specific process of obtaining at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block is the same as that of step S701. It is not repeated here.
Step S1302: Obtain the probability vector of the current image block according to at least two of the reconstruction blocks, prediction blocks and residual blocks of the surrounding image blocks and the prediction mode probability model, where the multiple elements of the probability vector correspond one-to-one to multiple intra prediction modes, and any element of the probability vector is used to represent the probability that the intra prediction mode corresponding to that element is used when predicting the current image block.
具体地,步骤S1302的具体执行过程与步骤S702相同,此处不再赘述。Specifically, the specific execution process of step S1302 is the same as that of step S702, which is not repeated here.
步骤S1303,根据码流和概率向量,确定当前图像块的目标帧内预测模式,并根据目标帧内预测模式,确定当前图像块的预测块。Step S1303: Determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and determine the prediction block of the current image block according to the target intra prediction mode.
具体地,对码流进行解码,得到目标参考值;根据目标参考值确定当前图像块的目标帧内预测模式。Specifically, the code stream is decoded to obtain the target reference value; the target intra prediction mode of the current image block is determined according to the target reference value.
上述根据目标参考值确定当前图像块的目标帧内预测模式包括两种情况:The above-mentioned determination of the target intra prediction mode of the current image block according to the target reference value includes two situations:
(1)当尺寸变换操作不包含转置时,根据目标参考值和概率向量确定目标帧内预测模式。(1) When the size transformation operation does not include transposition, the target intra prediction mode is determined according to the target reference value and the probability vector.
具体地,目标参考值位于0和1之间,根据目标参考值落入的概率区间确定其在概率向量中对应的标识,根据该对应的标识确定当前图像块的目标帧内预测模式。Specifically, the target reference value is between 0 and 1, the corresponding identifier in the probability vector is determined according to the probability interval in which the target reference value falls, and the target intra prediction mode of the current image block is determined according to the corresponding identifier.
(2) When the size transformation operation includes transposition, a first identifier is determined according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, the target intra prediction mode is determined according to the preset constant and the first identifier; when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, the target intra prediction mode is determined according to the target reference value and the probability vector.
Specifically, the first identifier of the element corresponding to the target reference value in the probability vector is determined according to the probability sub-interval into which the target reference value falls; when the intra prediction mode corresponding to this first identifier is an angular prediction mode, the first identifier is subtracted from the preset constant to obtain a second identifier, and the intra prediction mode corresponding to this second identifier in FIG. 9 is the target intra prediction mode of the current image block; when the intra prediction mode corresponding to this first identifier is a non-angular prediction mode, the intra prediction mode corresponding to this first identifier in FIG. 9 is the target intra prediction mode of the current image block.
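A decoder-side sketch of mapping the decoded reference value back to a mode identifier, including the inverse remapping for transposed angular modes, is given below; the toy probability vector and the helper name are assumptions.

```python
def decode_mode(prob_vector, target_value, transposed):
    """Recover the target intra prediction mode from the decoded reference value."""
    # Find the probability sub-interval of (0, 1) that the reference value falls into.
    low = 0.0
    mode_id = 0
    for mode_id, p in enumerate(prob_vector):
        if low <= target_value < low + p:
            break
        low += p
    # Undo the angular-mode remapping; the preset constant is len(prob_vector), i.e. 67 in the text.
    if transposed and mode_id in range(2, len(prob_vector)):
        return len(prob_vector) - mode_id
    return mode_id

# Toy 4-mode example consistent with the encoder-side sketch.
probs = [0.5, 0.2, 0.2, 0.1]
print(decode_mode(probs, 0.8, transposed=False))  # 2
```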
请参见图14,图14是本申请施例中一种用于对当前图像块的目标帧内预测模式进行解码的方法1400的流程图,图14所示的方法1400包括步骤S1401、S1402、1403、S1404和S1405,下面将对这些步骤进行详细介绍。Please refer to FIG. 14 . FIG. 14 is a flowchart of a method 1400 for decoding a target intra prediction mode of a current image block in an embodiment of the present application. The method 1400 shown in FIG. 14 includes steps S1401 , S1402 and 1403 , S1404 and S1405, these steps will be described in detail below.
S1401,获取当前图像块对应的码流,以及当前图像块的周围图像块。S1401: Acquire a code stream corresponding to the current image block and surrounding image blocks of the current image block.
具体地,上述获取当前图像块对应的码流与步骤S1301中对应步骤相同,上述获取当前图像块的周围图像块与步骤S801中对应步骤相同,此处不再赘述。Specifically, acquiring the code stream corresponding to the current image block above is the same as the corresponding step in step S1301, and acquiring the surrounding image blocks of the current image block above is the same as the corresponding step in step S801, which is not repeated here.
S1402,对周围图像块进行拼接或者级联,得到第一数据块。S1402, splicing or concatenating surrounding image blocks to obtain a first data block.
S1403,对第一数据块进行尺寸变换操作,以得到第二数据块,尺寸变换操作包括缩放和 转置中的至少一项,缩放包括水平方向缩放、竖直方向缩放或者水平和竖直两个方向的等比缩放。S1403, perform a size transformation operation on the first data block to obtain a second data block, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or both horizontal and vertical scaling A proportional scaling of the direction.
S1404,将第二数据块输入预测模式概率模型进行处理,以得到当前图像块的概率向量,概率向量中的多个元素与多个帧内预测模式一一对应,概率向量中的任一元素用于表征对当前图像块进行预测时采用该任一元素对应的帧内预测模式的概率。S1404: Input the second data block into the prediction mode probability model for processing to obtain a probability vector of the current image block. Multiple elements in the probability vector correspond to multiple intra prediction modes one-to-one, and any element in the probability vector uses Indicates the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
S1405,根据码流和概率向量,确定当前图像块的目标帧内预测模式,并根据目标帧内预测模式,确定当前图像块的预测块。S1405: Determine the target intra prediction mode of the current image block according to the code stream and the probability vector, and determine the prediction block of the current image block according to the target intra prediction mode.
具体地,步骤S1402、S1403、S1404和S1405具体的执行过程分别与S802、S803、S804和S1303相同,此处不再赘述。Specifically, the specific execution processes of steps S1402 , S1403 , S1404 and S1405 are the same as those of S802 , S803 , S804 and S1303 respectively, and will not be repeated here.
Referring to FIG. 15, FIG. 15 is a schematic block diagram of an encoding apparatus 1500 provided by this application. It should be understood that the encoding apparatus 1500 here may correspond to the encoder 20 in FIG. 1A or in FIG. 2. The encoding apparatus 1500 may include:
获取单元1501,用于获取当前图像块的周围图像块的重建块、预测块和残差块中的至少两个。The obtaining unit 1501 is configured to obtain at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block.
处理单元1502,用于根据周围图像块的重建块、预测块和残差块中的至少两个和预测模式概率模型得到当前图像块的概率向量,概率向量中的多个元素与多个帧内预测模式一一对应,概率向量中的任一元素用于表征对当前图像块进行预测时采用任一元素对应的帧内预测模式的概率。The processing unit 1502 is configured to obtain a probability vector of the current image block according to at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks and the prediction mode probability model, and multiple elements in the probability vector are related to multiple intra-frame values. The prediction modes are in one-to-one correspondence, and any element in the probability vector is used to represent the probability of using the intra prediction mode corresponding to any element when predicting the current image block.
确定单元1503,用于确定当前图像块的目标帧内预测模式。The determining unit 1503 is configured to determine the target intra prediction mode of the current image block.
编码单元1504,用于根据目标帧内预测模式和概率向量,将目标帧内预测模式编入码流。The encoding unit 1504 is configured to encode the target intra-frame prediction mode into the code stream according to the target intra-frame prediction mode and the probability vector.
In a possible implementation, the above processing unit is specifically configured to: when the target intra prediction mode is not the matrix-weighted intra prediction (MIP) mode, obtain the probability vector of the current image block according to at least two of the reconstruction blocks, prediction blocks and residual blocks of the surrounding image blocks, the current image block, and the prediction mode probability model.
在一种可能的实现方式中,上述处理单元具体用于:对周围图像块的重建块、预测块和残差块中的至少两个进行拼接或者级联,得到第一数据块;根据第一数据块和预测模式概率模型得到当前图像块的概率向量。In a possible implementation manner, the above processing unit is specifically configured to: splicing or concatenating at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks to obtain the first data block; The data block and prediction mode probability model obtains the probability vector of the current image block.
在一种可能的实现方式中,在上述根据第一数据块和预测模式概率模型得到当前图像块的概率向量的方面,处理单元具体用于:当第一数据块的尺寸为目标尺寸时,将第一数据块输入预测模式概率模型进行处理,以得到当前图像块的概率向量,或者;当第一数据块的尺寸不为目标尺寸时,对第一数据块进行尺寸变换操作,以得到第二数据块,第二数据块的尺寸等于目标尺寸,尺寸变换操作包括缩放和转置中的至少一项,缩放包括水平方向缩放、竖直方向缩放或者水平和竖直两个方向的等比缩放;将第二数据块输入预测模式概率模型进行处理,以得到当前图像块的概率向量。In a possible implementation manner, in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to: when the size of the first data block is the target size, The first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block, or; when the size of the first data block is not the target size, a size transformation operation is performed on the first data block to obtain the second data block. data block, the size of the second data block is equal to the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling or equal scaling in both horizontal and vertical directions; The second data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block.
在一种可能的实现方式中,在上述根据第一数据块和预测模式概率模型得到当前图像块的概率向量的方面,处理单元具体用于:当第一数据块的尺寸为目标尺寸时,将第一数据块输入预测模式概率模型进行处理,以得到当前图像块的概率向量,或者;当第一数据块的尺寸不为目标尺寸时,根据第一数据块的第一边长与第二边长的大小关系,和/或当前图像块的水平梯度和垂直梯度,对第一数据块进行尺寸变换操作,以得到第二数据块,第二数据块的尺寸为目标尺寸,尺寸变换操作包括缩放和转置中的至少一项,缩放包括水平方向缩放、竖直方向缩放或者水平和竖直两个方向的等比缩放;将第二数据块输入预测模式概率模型进行 处理,以得到当前图像块的概率向量。In a possible implementation manner, in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to: when the size of the first data block is the target size, The first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block, or; when the size of the first data block is not the target size, according to the length of the first side and the second side of the first data block long size relationship, and/or the horizontal gradient and vertical gradient of the current image block, perform a size transformation operation on the first data block to obtain a second data block, the size of the second data block is the target size, and the size transformation operation includes scaling and at least one of transposition, the scaling includes horizontal scaling, vertical scaling or equal scaling in both horizontal and vertical directions; input the second data block into the prediction mode probability model for processing to obtain the current image block probability vector.
In a possible implementation, the size transformation operation includes scaling, and with respect to performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively;
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
where M and N are positive integers.
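As an illustration of the target-size rule above, the following Python sketch computes the dimensions of the second data block from the side lengths M and N of the first data block and the gradients of the current image block. The mapping of the first side length to the horizontal dimension and the nearest-neighbour resampling are assumptions made for this sketch only; the application does not prescribe a particular interpolation filter.

```python
# Minimal sketch of the scaling rule described above (not a normative
# implementation). Assumptions: the first side length M is the horizontal
# dimension, and nearest-neighbour resampling stands in for the unspecified
# scaling filter.
def scale_first_data_block(block, grad_h, grad_v, threshold):
    N = len(block)       # second side length (number of rows, assumed vertical)
    M = len(block[0])    # first side length (row width, assumed horizontal)
    base = 4 if abs(grad_h) + abs(grad_v) < threshold else 8
    if M >= N:
        dst_w, dst_h = base * M // N, base   # second data block: 4M/N x 4 or 8M/N x 8
    else:
        dst_w, dst_h = base, base * N // M   # second data block: 4 x 4N/M or 8 x 8N/M
    # placeholder nearest-neighbour resampling to dst_w x dst_h
    return [[block[y * N // dst_h][x * M // dst_w] for x in range(dst_w)]
            for y in range(dst_h)]
```

For example, a 32x8 first data block whose gradient sum is below the threshold would be resampled to 16x4 under these assumptions, while the same block with a gradient sum at or above the threshold would be resampled to 32x8.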
In a possible implementation, the size transformation operation includes at least one of scaling and transposition, and with respect to performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively;
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transpose and scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively;
when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transpose and scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively;
where M and N are positive integers.
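The variant above differs from the previous one in that a first data block whose first side is shorter than its second side is transposed before scaling, so that the resized block again has its longer side along the first dimension. A possible sketch, reusing the hypothetical scale_first_data_block() helper from the previous example:

```python
# Sketch of the combined transpose-and-scale variant (assumptions as above).
# When M < N the block is transposed first, so the scaled result is
# 4N/M x 4 or 8N/M x 8; the transposed flag is kept because the mode
# identifier is remapped later when transposition was applied.
def transform_first_data_block(block, grad_h, grad_v, threshold):
    M, N = len(block[0]), len(block)
    transposed = False
    if M < N:
        block = [list(row) for row in zip(*block)]   # swap the two sides
        transposed = True
    second_block = scale_first_data_block(block, grad_h, grad_v, threshold)
    return second_block, transposed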
In a possible implementation, the encoding unit is specifically configured to: when the size transformation operation does not include transposition, determine a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encode the first reference value to obtain the code stream corresponding to the target intra prediction mode;
when the size transformation operation includes transposition, determine a second identifier according to a preset constant and the first identifier, determine a second reference value according to the second identifier and the probability vector, and encode the second reference value to obtain the code stream corresponding to the target intra prediction mode.
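Purely as an illustration, the sketch below shows how the encoding unit could handle the two cases. The form of the identifier remapping (preset_constant minus the first identifier) and the rank-based reference value are assumptions; the application only requires that the second identifier be derived from the preset constant and the first identifier, and that the reference value be derived from the identifier and the probability vector. In some embodiments the remapping is applied only to angular prediction modes, which the is_angular flag below reflects.

```python
def encode_target_mode(first_id, prob_vector, transposed, is_angular,
                       preset_constant, entropy_encoder):
    # Remap the identifier when the size transformation included transposition
    # (in some embodiments only angular modes are remapped). preset_constant is
    # assumed to be chosen so that the remapped identifier remains a valid
    # mode index in [0, len(prob_vector)).
    mode_id = preset_constant - first_id if (transposed and is_angular) else first_id
    # Assumed form of the reference value: the rank of the identifier when the
    # modes are sorted by descending probability, so likelier modes receive
    # smaller values and hence cheaper codes.
    order = sorted(range(len(prob_vector)), key=lambda i: -prob_vector[i])
    reference_value = order.index(mode_id)
    entropy_encoder.encode(reference_value)   # hypothetical entropy-coder interface
    return reference_value
```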
The foregoing is the process in which the encoding apparatus 1500 specifically performs the method embodiment described in FIG. 13. Similarly, the encoding apparatus 1500 may also be used to implement all or any of the steps in the method embodiment shown in FIG. 8; details are not repeated here.
Refer to FIG. 16, which is a schematic block diagram of an encoding apparatus 1600 provided by this application. It should be understood that the encoding apparatus 1600 here may correspond to the decoder 30 in FIG. 1A or FIG. 3. The encoding apparatus 1600 may include:
an obtaining unit 1601, configured to obtain a code stream corresponding to the current image block, and at least two of the reconstruction blocks, prediction blocks, and residual blocks of image blocks surrounding the current image block;
a processing unit 1602, configured to obtain a probability vector of the current image block according to at least two of the reconstruction blocks, prediction blocks, and residual blocks of the surrounding image blocks and a prediction mode probability model, where multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes, and any element in the probability vector is used to represent the probability that the intra prediction mode corresponding to that element is used when the current image block is predicted; and
a decoding unit 1603, configured to determine a target intra prediction mode of the current image block according to the code stream and the probability vector, and determine a prediction block of the current image block according to the target intra prediction mode.
In a possible implementation, the processing unit is specifically configured to: splice or concatenate at least two of the reconstruction blocks, prediction blocks, and residual blocks of the surrounding image blocks to obtain a first data block; and obtain the probability vector of the current image block according to the first data block and the prediction mode probability model.
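One possible, non-normative way to realise the splicing or concatenation is to stack the available blocks along a channel axis, so that the prediction mode probability model (for example, a neural-network model) receives a single multi-channel input; the channel ordering and the use of NumPy below are assumptions of this sketch, and the blocks are assumed to cover the same spatial region.

```python
import numpy as np

def build_first_data_block(recon=None, pred=None, resid=None):
    """Stack at least two of the reconstruction, prediction and residual
    blocks of the surrounding image blocks into one multi-channel array."""
    channels = [np.asarray(b, dtype=np.float32)
                for b in (recon, pred, resid) if b is not None]
    if len(channels) < 2:
        raise ValueError("at least two of the three blocks are required")
    return np.stack(channels, axis=0)   # shape: (channels, height, width)
```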
In a possible implementation, with respect to obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to: when the size of the first data block is the target size, input the first data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block; or,
when the size of the first data block is not the target size, perform a size transformation operation on the first data block to obtain a second data block, where the size of the second data block is equal to the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
In a possible implementation, with respect to obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to: when the size of the first data block is the target size, input the first data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block; or,
when the size of the first data block is not the target size, perform a size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, where the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, the size transformation operation includes at least one of scaling and transposition, and the scaling includes horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
In a possible implementation, the size transformation operation includes scaling, and with respect to performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively;
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
where M and N are positive integers.
In a possible implementation, the size transformation operation includes at least one of scaling and transposition, and with respect to performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4M/N and 4, respectively;
when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8M/N and 8, respectively;
when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, transpose and scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 4N/M and 4, respectively;
when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, transpose and scale the first data block to obtain the second data block, where the first side length and the second side length of the second data block are 8N/M and 8, respectively;
where M and N are positive integers.
In a possible implementation, the decoding unit is specifically configured to: decode the code stream to obtain a target reference value; when the size transformation operation does not include transposition, determine the target intra prediction mode according to the target reference value and the probability vector;
when the size transformation operation includes transposition, determine a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determine the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determine the target intra prediction mode according to the target reference value and the probability vector.
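To mirror the encoder-side sketch given earlier, the decoder-side handling could look as follows; the rank-based mapping, the is_angular_mode predicate and the preset_constant parameter are the same illustrative assumptions as before, not the normative procedure.

```python
def decode_target_mode(reference_value, prob_vector, transposed,
                       is_angular_mode, preset_constant):
    # Inverse of the rank-based mapping assumed at the encoder: the parsed
    # reference value selects the identifier at that rank when the modes are
    # sorted by descending probability.
    order = sorted(range(len(prob_vector)), key=lambda i: -prob_vector[i])
    first_id = order[reference_value]
    # Undo the transposition-related remapping for angular modes only.
    if transposed and is_angular_mode(first_id):
        return preset_constant - first_id   # target intra prediction mode
    return first_id
```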
The foregoing is the process in which the encoding apparatus 1600 specifically performs the method embodiment described in FIG. 13. Similarly, the encoding apparatus 1600 may also be used to implement all or any of the steps in the method embodiment described in FIG. 14; details are not repeated here.
本领域技术人员能够领会,结合本文公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输,且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体,其对应于有形媒体,例如数据存储媒体,或包括任何促进将计算机程序从一处传送到另一处的媒体(例如,根据通信协议)的通信媒体。以此方式,计算机可读媒体大体上可对应于(1)非暂时性的有形计算机可读存储媒体,或(2)通信媒体,例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码 和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。Those skilled in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions described by the various illustrative logical blocks, modules, and steps may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol) . In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this application. The computer program product may comprise a computer-readable medium.
By way of example and not limitation, such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source through a coaxial cable, a fiber-optic cable, a twisted pair, a digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of the medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, whereas discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementing the techniques described herein. In addition, in some aspects, the functions described for the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Moreover, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (for example, a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of apparatuses configured to perform the disclosed techniques, but they do not necessarily need to be implemented by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit in conjunction with suitable software and/or firmware, or provided by a collection of interoperating hardware units, including one or more processors as described above.
The foregoing descriptions are merely exemplary specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement that a person skilled in the art can readily conceive of within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (65)

  1. 一种帧内预测模式的编码方法,其特征在于,所述方法包括:A method for encoding an intra-frame prediction mode, wherein the method comprises:
    获取当前图像块的周围图像块的重建块、预测块和残差块中的至少两个;obtaining at least two of the reconstruction block, the prediction block and the residual block of the surrounding image blocks of the current image block;
    obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and a prediction mode probability model, wherein multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes, and any element in the probability vector is used to represent the probability that the intra prediction mode corresponding to the any element is used when the current image block is predicted;
    确定所述当前图像块的目标帧内预测模式;determining the target intra prediction mode of the current image block;
    根据所述目标帧内预测模式和所述概率向量,将所述目标帧内预测模式编入码流。The target intra-frame prediction mode is encoded into the code stream according to the target intra-frame prediction mode and the probability vector.
  2. The method according to claim 1, wherein the obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and a prediction mode probability model comprises:
    when the target intra prediction mode is a non-matrix weighted intra prediction (MIP) mode, obtaining the probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and the prediction mode probability model.
  3. The method according to claim 1 or 2, wherein the obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and a prediction mode probability model comprises:
    对所述周围图像块的重建块、预测块和残差块中的至少两个进行拼接或者级联,得到第一数据块;splicing or concatenating at least two of the reconstruction block, prediction block and residual block of the surrounding image blocks to obtain a first data block;
    根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量。The probability vector of the current image block is obtained according to the first data block and the prediction mode probability model.
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量,包括:The method according to claim 3, wherein the obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model comprises:
    当所述第一数据块的尺寸为目标尺寸时,将所述第一数据块输入所述预测模式概率模型进行处理,以得到所述当前图像块的概率向量,或者;When the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain a probability vector of the current image block, or;
    when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block, wherein the size of the second data block is equal to the target size, the size transformation operation comprises at least one of scaling and transposition, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
  5. 根据权利要求3所述的方法,其特征在于,所述根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量,包括:The method according to claim 3, wherein the obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model comprises:
    当所述第一数据块的尺寸为目标尺寸时,将所述第一数据块输入所述预测模式概率模型进行处理,以得到所述当前图像块的概率向量,或者;When the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain a probability vector of the current image block, or;
    when the size of the first data block is not the target size, performing a size transformation operation on the first data block according to a magnitude relationship between a first side length and a second side length of the first data block, and/or a horizontal gradient and a vertical gradient of the current image block, to obtain a second data block, wherein the size of the second data block is the target size, the size transformation operation comprises at least one of scaling and transposition, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
  6. The method according to claim 5, wherein the size transformation operation comprises the scaling, and the performing a size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  7. The method according to claim 4, wherein the size transformation operation comprises at least one of the scaling and the transposition, and the performing a size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, performing the transposition and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, performing the transposition and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  8. 根据权利要求4-7中任一项所述方法,其特征在于,所述根据所述目标帧内预测模式和所述概率向量,将所述目标帧内预测模式编入码流,包括:The method according to any one of claims 4-7, wherein the encoding the target intra-frame prediction mode into a code stream according to the target intra-frame prediction mode and the probability vector comprises:
    当所述尺寸变换操作不包含所述转置时,根据所述目标帧内预测模式的第一标识和所述概率向量,确定第一参考值;对所述第一参考值进行编码,以得到所述目标帧内预测模式对应的码流;When the size transformation operation does not include the transposition, a first reference value is determined according to the first identifier of the target intra-frame prediction mode and the probability vector; and the first reference value is encoded to obtain the code stream corresponding to the target intra prediction mode;
    when the size transformation operation comprises the transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the code stream corresponding to the target intra prediction mode; and when the size transformation operation comprises the transposition and the target intra prediction mode is a non-angular prediction mode, determining a first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode.
  9. 一种帧内预测模式的编码方法,其特征在于,所述方法包括:A method for encoding an intra-frame prediction mode, wherein the method comprises:
    获取当前图像块的周围图像块;Get the surrounding image blocks of the current image block;
    对所述周围图像块进行拼接或者级联,得到第一数据块;splicing or concatenating the surrounding image blocks to obtain a first data block;
    performing a size transformation operation on the first data block to obtain a second data block, wherein the size transformation operation comprises at least one of scaling and transposition, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions;
    inputting the second data block into a prediction mode probability model for processing, to obtain a probability vector of the current image block, wherein multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes, and any element in the probability vector is used to represent the probability that the intra prediction mode corresponding to the any element is used when the current image block is predicted;
    确定所述当前图像块的目标帧内预测模式;determining the target intra prediction mode of the current image block;
    根据所述目标帧内预测模式和所述概率向量,将所述目标帧内预测模式编入码流。The target intra-frame prediction mode is encoded into the code stream according to the target intra-frame prediction mode and the probability vector.
  10. 根据权利要求9所述的方法,其特征在于,所述对所述第一数据块进行尺寸变换操作,以得到第二数据块,包括:The method according to claim 9, wherein, performing a size transformation operation on the first data block to obtain a second data block, comprising:
    when the target intra prediction mode is a non-matrix weighted intra prediction (MIP) mode and the size of the first data block is not a target size, performing the size transformation operation on the first data block to obtain the second data block, wherein the size of the second data block is equal to the target size.
  11. 根据权利要求9所述的方法,其特征在于,所述对所述第一数据块进行尺寸变换操作,以得到第二数据块,包括:The method according to claim 9, wherein, performing a size transformation operation on the first data block to obtain a second data block, comprising:
    when the target intra prediction mode is a non-matrix weighted intra prediction (MIP) mode and the size of the first data block is not a target size, performing the size transformation operation on the first data block according to a magnitude relationship between a first side length and a second side length of the first data block, and/or a horizontal gradient and a vertical gradient of the current image block, to obtain the second data block, wherein the size of the second data block is equal to the target size.
  12. 根据权利要求10或11所述的方法,其特征在于,所述方法还包括:The method according to claim 10 or 11, wherein the method further comprises:
    when the target intra prediction mode is a non-matrix weighted intra prediction (MIP) mode and the size of the first data block is equal to a target size, inputting the first data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
  13. The method according to claim 11 or 12, wherein the size transformation operation comprises the scaling, and the performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  14. The method according to claim 11 or 12, wherein the size transformation operation comprises at least one of the scaling and the transposition, and the performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, performing the transposition and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, performing the transposition and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  15. 根据权利要求9-14中任一项所述方法,其特征在于,所述根据所述目标帧内预测模式和所述概率向量,将所述目标帧内预测模式编入码流,包括:The method according to any one of claims 9-14, wherein the encoding the target intra-frame prediction mode into a code stream according to the target intra-frame prediction mode and the probability vector comprises:
    当所述尺寸变换操作不包含所述转置时,根据所述目标帧内预测模式的第一标识和所述概率向量,确定第一参考值;对所述第一参考值进行编码,以得到所述目标帧内预测模式对应的码流;When the size transformation operation does not include the transposition, a first reference value is determined according to the first identifier of the target intra-frame prediction mode and the probability vector; and the first reference value is encoded to obtain the code stream corresponding to the target intra prediction mode;
    when the size transformation operation comprises the transposition and the target intra prediction mode is an angular prediction mode, determining a second identifier according to a preset constant and the first identifier of the target intra prediction mode, determining a second reference value according to the second identifier and the probability vector, and encoding the second reference value to obtain the code stream corresponding to the target intra prediction mode; and when the size transformation operation comprises the transposition and the target intra prediction mode is a non-angular prediction mode, determining a first reference value according to the first identifier of the target intra prediction mode and the probability vector, and encoding the first reference value to obtain the code stream corresponding to the target intra prediction mode.
  16. 一种帧内预测模式的解码方法,其特征在于,所述方法包括:A decoding method for intra prediction mode, characterized in that the method comprises:
    获取当前图像块对应的码流,以及所述当前图像块的周围图像块的重建块、预测块和残差块中的至少两个;Obtain the code stream corresponding to the current image block, and at least two of the reconstruction blocks, prediction blocks and residual blocks of the surrounding image blocks of the current image block;
    obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and a prediction mode probability model, wherein multiple elements in the probability vector are in one-to-one correspondence with multiple intra prediction modes, and any element in the probability vector is used to represent the probability that the intra prediction mode corresponding to the any element is used when the current image block is predicted; and
    根据所述码流和所述概率向量,确定所述当前图像块的目标帧内预测模式,并根据所述目标帧内预测模式,确定所述当前图像块的预测块。A target intra prediction mode of the current image block is determined according to the code stream and the probability vector, and a prediction block of the current image block is determined according to the target intra prediction mode.
  17. The method according to claim 16, wherein the obtaining a probability vector of the current image block according to at least two of the reconstruction block, prediction block, and residual block of the surrounding image blocks and a prediction mode probability model comprises:
    对所述周围图像块的重建块、预测块和残差块中的至少两个进行拼接或者级联,得到第一数据块;splicing or concatenating at least two of the reconstruction block, prediction block and residual block of the surrounding image blocks to obtain a first data block;
    根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量。The probability vector of the current image block is obtained according to the first data block and the prediction mode probability model.
  18. 根据权利要求17所述的方法,其特征在于,所述根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量,包括:The method according to claim 17, wherein the obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model comprises:
    当所述第一数据块的尺寸为目标尺寸时,将所述第一数据块输入所述预测模式概率模型进行处理,以得到所述当前图像块的概率向量,或者;When the size of the first data block is the target size, inputting the first data block into the prediction mode probability model for processing to obtain a probability vector of the current image block, or;
    when the size of the first data block is not the target size, performing a size transformation operation on the first data block to obtain a second data block, wherein the size of the second data block is equal to the target size, the size transformation operation comprises at least one of scaling and transposition, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
  19. 根据权利要求17所述方法,其特征在于,所述根据所述第一数据块和所述预测模式概率模型得到所述当前图像块的概率向量,包括:The method according to claim 17, wherein the obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model comprises:
    当所述第一数据块的尺寸为目标尺寸时,将所述第一数据块输入所述预测模式概率模型 进行处理,以得到所述当前图像块的概率向量,或者;When the size of the first data block is the target size, the first data block is input into the prediction mode probability model for processing to obtain the probability vector of the current image block, or;
    when the size of the first data block is not the target size, performing a size transformation operation on the first data block according to a magnitude relationship between a first side length and a second side length of the first data block, and/or a horizontal gradient and a vertical gradient of the current image block, to obtain a second data block, wherein the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, the size transformation operation comprises at least one of scaling and transposition, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and inputting the second data block into the prediction mode probability model for processing, to obtain the probability vector of the current image block.
  20. The method according to claim 19, wherein the size transformation operation comprises the scaling, and the performing a size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  21. The method according to claim 19, wherein the size transformation operation comprises at least one of the scaling and the transposing, and the performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, performing the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, performing the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
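The branch structure of claim 21 (scaling when M ≥ N, transposing followed by scaling when M < N) can be sketched as follows. Mapping the first side length to array rows, and using nearest-neighbour resampling as the scaling filter, are assumptions made only for illustration; the claim does not specify the resampling method.

```python
import numpy as np

def size_transform(block: np.ndarray, grad_h: float, grad_v: float,
                   threshold: float) -> np.ndarray:
    """Sketch of the claim-21 size transformation (assumes the first side
    length M is the row count and the second side length N is the column count)."""
    M, N = block.shape
    base = 4 if abs(grad_h) + abs(grad_v) < threshold else 8
    if M < N:                        # transpose first so the longer side leads
        block = block.T
        M, N = N, M
    out_rows, out_cols = base * M // N, base       # (4M/N, 4) or (8M/N, 8)
    rows = (np.arange(out_rows) * M) // out_rows   # nearest-neighbour row picks
    cols = (np.arange(out_cols) * N) // out_cols   # nearest-neighbour column picks
    return block[np.ix_(rows, cols)]
```

Under these assumptions, an 8x32 first data block with a small gradient sum is first transposed to 32x8 and then scaled to 16x4 (4N/M = 16).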
  22. The method according to any one of claims 18 to 21, wherein the determining a target intra prediction mode of the current image block according to the bitstream and the probability vector comprises:
    decoding the bitstream to obtain a target reference value;
    when the size transformation operation does not comprise the transposing, determining the target intra prediction mode according to the target reference value and the probability vector; and
    when the size transformation operation comprises the transposing, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
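Claim 22 specifies only the branching between the transposed and non-transposed cases; how the target reference value and the probability vector jointly select a mode is left open. Purely as an assumption for illustration, the sketch below treats the reference value as a rank into the modes sorted by descending probability and models the angular remapping as `preset_constant - mode`.

```python
def decode_target_mode(reference_value: int, probabilities: list[float],
                       transposed: bool, preset_constant: int,
                       angular_modes: set[int]) -> int:
    """Branching of claim 22 under the rank-based assumption described above."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda m: probabilities[m], reverse=True)
    first_identifier = ranked[reference_value]   # mode selected via the probability vector
    if not transposed:
        return first_identifier
    if first_identifier in angular_modes:        # angular mode: compensate the transposition
        return preset_constant - first_identifier
    return first_identifier                      # non-angular modes need no compensation
```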
  23. An intra prediction mode decoding method, wherein the method comprises:
    obtaining a bitstream corresponding to a current image block and surrounding image blocks of the current image block;
    splicing or concatenating the surrounding image blocks to obtain a first data block;
    performing a size transformation operation on the first data block to obtain a second data block, wherein the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions;
    inputting the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, wherein a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when the current image block is predicted; and
    determining a target intra prediction mode of the current image block according to the bitstream and the probability vector, and determining a prediction block of the current image block according to the target intra prediction mode.
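Putting the steps of claim 23 together, a decoder-side flow might look like the sketch below, which reuses the `size_transform` and `decode_target_mode` sketches above. The horizontal concatenation of the surrounding blocks, and the rule that transposition occurs exactly when the first data block has M < N, follow the assumptions made in those sketches and are not mandated by the claim.

```python
import numpy as np

def decode_intra_mode(neighbour_blocks: list[np.ndarray], model,
                      reference_value: int, grad_h: float, grad_v: float,
                      threshold: float, preset_constant: int,
                      angular_modes: set[int]) -> int:
    """End-to-end sketch of the claim-23 flow.  `model` is any callable mapping
    a data block to a probability vector over the intra prediction modes; the
    target reference value is assumed to have been parsed from the bitstream."""
    first = np.concatenate(neighbour_blocks, axis=1)   # splice/concatenate (equal heights assumed)
    transposed = first.shape[0] < first.shape[1]       # M < N triggers transposition (claim 21 rule)
    second = size_transform(first, grad_h, grad_v, threshold)
    probabilities = list(model(second))                # probability vector of the current image block
    return decode_target_mode(reference_value, probabilities, transposed,
                              preset_constant, angular_modes)
```

For quick experimentation, `model` can be any stand-in that returns a normalized vector, for example a flatten-and-softmax over the second data block.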
  24. The method according to claim 23, wherein the performing a size transformation operation on the first data block to obtain a second data block comprises:
    when the size of the first data block is not a target size, performing the size transformation operation on the first data block to obtain the second data block, wherein the size of the second data block is equal to the target size.
  25. The method according to claim 23, wherein the performing a size transformation operation on the first data block to obtain a second data block comprises:
    when the size of the first data block is not a target size, performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, wherein the size of the second data block is equal to the target size.
  26. The method according to claim 24 or 25, wherein the method further comprises:
    when the size of the first data block is equal to the target size, inputting the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  27. The method according to claim 25 or 26, wherein the size transformation operation comprises the scaling, and the performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  28. The method according to claim 25 or 26, wherein the size transformation operation comprises at least one of the scaling and the transposing, and the performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block comprises:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, performing the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, performing the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, performing the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, performing the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  29. The method according to any one of claims 23 to 28, wherein the determining a target intra prediction mode of the current image block according to the bitstream and the probability vector comprises:
    decoding the bitstream to obtain a target reference value;
    when the size transformation operation does not comprise the transposing, determining the target intra prediction mode according to the target reference value and the probability vector; and
    when the size transformation operation comprises the transposing, determining a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determining the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determining the target intra prediction mode according to the target reference value and the probability vector.
  30. An encoding apparatus, wherein the apparatus comprises:
    an obtaining unit, configured to obtain at least two of a reconstructed block, a prediction block, and a residual block of surrounding image blocks of a current image block;
    a processing unit, configured to obtain a probability vector of the current image block according to the at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks and a prediction mode probability model, wherein a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when the current image block is predicted;
    a determining unit, configured to determine a target intra prediction mode of the current image block; and
    an encoding unit, configured to encode the target intra prediction mode into a bitstream according to the target intra prediction mode and the probability vector.
  31. The apparatus according to claim 30, wherein the processing unit is specifically configured to:
    when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode, obtain the probability vector of the current image block according to the at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks, the current image block, and the prediction mode probability model.
  32. The apparatus according to claim 30 or 31, wherein the processing unit is specifically configured to:
    splice or concatenate at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks to obtain a first data block; and
    obtain the probability vector of the current image block according to the first data block and the prediction mode probability model.
  33. The apparatus according to claim 32, wherein in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to:
    when the size of the first data block is a target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or
    when the size of the first data block is not the target size, perform a size transformation operation on the first data block to obtain a second data block, wherein the size of the second data block is equal to the target size, the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  34. The apparatus according to claim 32, wherein in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to:
    when the size of the first data block is a target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or
    when the size of the first data block is not the target size, perform a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, wherein the size of the second data block is the target size, the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  35. The apparatus according to claim 34, wherein the size transformation operation comprises the scaling, and in the aspect of performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  36. The apparatus according to claim 34, wherein the size transformation operation comprises at least one of the scaling and the transposing, and in the aspect of performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  37. The apparatus according to any one of claims 33 to 36, wherein the encoding unit is specifically configured to:
    when the size transformation operation does not comprise the transposing, determine a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encode the first reference value to obtain a bitstream corresponding to the target intra prediction mode; and
    when the size transformation operation comprises the transposing, determine a second identifier according to a preset constant and the first identifier, determine a second reference value according to the second identifier and the probability vector, and encode the second reference value to obtain the bitstream corresponding to the target intra prediction mode.
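Claims 37 and 44 are the encoder-side mirror of the decoder-side branching sketched earlier. Under the same illustrative assumptions (rank-based reference values, `preset_constant - identifier` as the transposition remapping), a sketch could read:

```python
def encode_mode_reference(first_identifier: int, probabilities: list[float],
                          transposed: bool, preset_constant: int) -> int:
    """Sketch of the claim-37/44 branching: derive the reference value that
    would be entropy-coded for the target intra prediction mode."""
    identifier = (first_identifier if not transposed
                  else preset_constant - first_identifier)   # assumed second-identifier rule
    ranked = sorted(range(len(probabilities)),
                    key=lambda m: probabilities[m], reverse=True)
    return ranked.index(identifier)   # rank in the probability-sorted list = reference value
```

Note that claim 37 recites the second-identifier derivation without the angular/non-angular split used on the decoder side (claims 22, 29 and 51); the sketch follows the claim text literally.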
  38. An encoding apparatus, wherein the apparatus comprises:
    an obtaining unit, configured to obtain surrounding image blocks of a current image block;
    a processing unit, configured to: splice or concatenate the surrounding image blocks to obtain a first data block; perform a size transformation operation on the first data block to obtain a second data block, wherein the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, wherein a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when the current image block is predicted;
    a determining unit, configured to determine a target intra prediction mode of the current image block; and
    an encoding unit, configured to encode the target intra prediction mode into a bitstream according to the target intra prediction mode and the probability vector.
  39. The apparatus according to claim 38, wherein in the aspect of performing a size transformation operation on the first data block to obtain a second data block, the processing unit is specifically configured to:
    when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is not a target size, perform the size transformation operation on the first data block to obtain the second data block, wherein the size of the second data block is equal to the target size.
  40. The apparatus according to claim 38, wherein in the aspect of performing a size transformation operation on the first data block to obtain a second data block, the processing unit is specifically configured to:
    when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is not a target size, perform the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, wherein the size of the second data block is equal to the target size.
  41. The apparatus according to claim 39 or 40, wherein the processing unit is further configured to:
    when the target intra prediction mode is not a matrix weighted intra prediction (MIP) mode and the size of the first data block is equal to the target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  42. The apparatus according to claim 40 or 41, wherein the size transformation operation comprises the scaling, and in the aspect of performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  43. The apparatus according to claim 40 or 41, wherein the size transformation operation comprises at least one of the scaling and the transposing, and in the aspect of performing the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  44. The apparatus according to any one of claims 38 to 41, wherein the encoding unit is specifically configured to:
    when the size transformation operation does not comprise the transposing, determine a first reference value according to a first identifier of the target intra prediction mode and the probability vector, and encode the first reference value to obtain a bitstream corresponding to the target intra prediction mode; and
    when the size transformation operation comprises the transposing, determine a second identifier according to a preset constant and the first identifier, determine a second reference value according to the second identifier and the probability vector, and encode the second reference value to obtain the bitstream corresponding to the target intra prediction mode.
  45. A decoding apparatus, wherein the apparatus comprises:
    an obtaining unit, configured to obtain a bitstream corresponding to a current image block and at least two of a reconstructed block, a prediction block, and a residual block of surrounding image blocks of the current image block;
    a processing unit, configured to obtain a probability vector of the current image block according to the at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks and a prediction mode probability model, wherein a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when the current image block is predicted; and
    a decoding unit, configured to determine a target intra prediction mode of the current image block according to the bitstream and the probability vector, and determine a prediction block of the current image block according to the target intra prediction mode.
  46. The apparatus according to claim 45, wherein the processing unit is specifically configured to:
    splice or concatenate at least two of the reconstructed block, the prediction block, and the residual block of the surrounding image blocks to obtain a first data block; and
    obtain the probability vector of the current image block according to the first data block and the prediction mode probability model.
  47. The apparatus according to claim 46, wherein in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to:
    when the size of the first data block is a target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or
    when the size of the first data block is not the target size, perform a size transformation operation on the first data block to obtain a second data block, wherein the size of the second data block is equal to the target size, the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  48. The apparatus according to claim 46, wherein in the aspect of obtaining the probability vector of the current image block according to the first data block and the prediction mode probability model, the processing unit is specifically configured to:
    when the size of the first data block is a target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block; or
    when the size of the first data block is not the target size, perform a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, wherein the first side length and the second side length are the lengths of two mutually perpendicular sides of the first data block, the size of the second data block is the target size, the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  49. The apparatus according to claim 48, wherein the size transformation operation comprises the scaling, and in the aspect of performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
  50. The apparatus according to claim 48, wherein the size transformation operation comprises at least one of the scaling and the transposing, and in the aspect of performing a size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain a second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
  51. The apparatus according to any one of claims 47 to 50, wherein the decoding unit is specifically configured to:
    decode the bitstream to obtain a target reference value;
    when the size transformation operation does not comprise the transposing, determine the target intra prediction mode according to the target reference value and the probability vector; and
    when the size transformation operation comprises the transposing, determine a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determine the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determine the target intra prediction mode according to the target reference value and the probability vector.
  52. A decoding apparatus, wherein the apparatus comprises:
    an obtaining unit, configured to obtain a bitstream corresponding to a current image block and surrounding image blocks of the current image block;
    a processing unit, configured to: splice or concatenate the surrounding image blocks to obtain a first data block; perform a size transformation operation on the first data block to obtain a second data block, wherein the size transformation operation comprises at least one of scaling and transposing, and the scaling comprises horizontal scaling, vertical scaling, or proportional scaling in both the horizontal and vertical directions; and input the second data block into a prediction mode probability model for processing to obtain a probability vector of the current image block, wherein a plurality of elements in the probability vector are in one-to-one correspondence with a plurality of intra prediction modes, and any element in the probability vector is used to represent a probability that the intra prediction mode corresponding to the element is used when the current image block is predicted; and
    a decoding unit, configured to determine a target intra prediction mode of the current image block according to the bitstream and the probability vector, and determine a prediction block of the current image block according to the target intra prediction mode.
  53. The apparatus according to claim 52, wherein in the aspect of performing a size transformation operation on the first data block to obtain a second data block, the processing unit is specifically configured to:
    when the size of the first data block is not a target size, perform the size transformation operation on the first data block to obtain the second data block, wherein the size of the second data block is equal to the target size.
  54. The apparatus according to claim 52, wherein in the aspect of performing a size transformation operation on the first data block to obtain a second data block, the processing unit is specifically configured to:
    when the size of the first data block is not a target size, perform the size transformation operation on the first data block according to the size relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, wherein the size of the second data block is equal to the target size.
  55. The apparatus according to claim 53 or 54, wherein the processing unit is further configured to:
    when the size of the first data block is equal to the target size, input the first data block into the prediction mode probability model for processing to obtain the probability vector of the current image block.
  56. The apparatus according to claim 54 or 55, wherein the size transformation operation comprises the scaling, and in the aspect of performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and a sum of an absolute value of the horizontal gradient and an absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is less than the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4 and 4N/M, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8 and 8N/M, respectively;
    wherein M and N are positive integers.
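  For illustration only (not part of the claimed subject matter), the side-length rule of claim 56 can be sketched as the short Python helper below. The function name, the use of integer division, and the treatment of the gradients as precomputed scalars are assumptions; the claim does not specify how the scaling itself is performed.

      def claim56_target_dims(M: int, N: int, grad_h: float, grad_v: float, threshold: float):
          # Returns (first_side, second_side) of the scaled second data block.
          # base is 4 when the gradient magnitude sum is below the preset threshold, 8 otherwise.
          base = 4 if abs(grad_h) + abs(grad_v) < threshold else 8
          if M >= N:
              # First side is the longer (or equal) one: keep the aspect ratio, second side becomes base.
              return base * M // N, base
          # M < N: first side becomes base, second side keeps the ratio.
          return base, base * N // M

  For the usual power-of-two block sizes, base*M is divisible by N (and base*N by M), so the integer division is exact.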
  57. The apparatus according to claim 54 or 55, wherein the size transformation operation comprises at least one of the scaling and the transposing, and in the aspect of performing the size transformation operation on the first data block according to the magnitude relationship between the first side length and the second side length of the first data block, and/or the horizontal gradient and the vertical gradient of the current image block, to obtain the second data block, the processing unit is specifically configured to:
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and a sum of an absolute value of the horizontal gradient and an absolute value of the vertical gradient is less than a preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4M/N and 4, respectively;
    when the first side length M of the first data block is greater than or equal to the second side length N of the first data block, and the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient is greater than or equal to the preset threshold, perform the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8M/N and 8, respectively;
    when the first side length M of the first data block is less than the second side length N of the first data block, perform the following operations on the first data block to obtain the second data block:
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is less than the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 4N/M and 4, respectively;
    when the sum of the absolute value of the horizontal gradient and the absolute value of the vertical gradient of the first data block is greater than or equal to the preset threshold, perform the transposing and the scaling on the first data block to obtain the second data block, wherein the first side length and the second side length of the second data block are 8N/M and 8, respectively;
    wherein M and N are positive integers.
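  Again purely as an illustrative sketch, claim 57 differs from claim 56 only in the M < N branch, where the block is transposed before scaling so that the longer side always ends up first; returning a transpose flag together with the target side lengths is an assumed convention, not something fixed by the claim.

      def claim57_transform_params(M: int, N: int, grad_h: float, grad_v: float, threshold: float):
          # Returns (transpose, first_side, second_side) for the second data block.
          base = 4 if abs(grad_h) + abs(grad_v) < threshold else 8
          if M >= N:
              return False, base * M // N, base   # scaling only
          # M < N: transpose first, then scale; the sides become (base*N/M, base).
          return True, base * N // M, base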
  58. The apparatus according to claims 52 to 57, wherein the decoding unit is specifically configured to:
    decode the bitstream to obtain a target reference value;
    when the size transformation operation does not include the transposing, determine the target intra prediction mode according to the target reference value and the probability vector;
    when the size transformation operation includes the transposing, determine a first identifier according to the target reference value and the probability vector; when the intra prediction mode corresponding to the first identifier is an angular prediction mode, determine the target intra prediction mode according to a preset constant and the first identifier; and when the intra prediction mode corresponding to the first identifier is a non-angular prediction mode, determine the target intra prediction mode according to the target reference value and the probability vector.
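  A minimal decode-side sketch of claim 58 is given below. It assumes that the target reference value selects a mode by falling into the cumulative-probability interval of the probability vector, that angular modes are remapped by subtracting the first identifier from the preset constant, and a VVC-like mode numbering; none of these specifics are fixed by the claim, and select_mode, PRESET_CONSTANT and ANGULAR_MODES are hypothetical names.

      # Hypothetical constants: the claim fixes neither the preset constant nor the mode numbering.
      PRESET_CONSTANT = 67
      ANGULAR_MODES = set(range(2, 67))   # assumed VVC-style: 0 = planar, 1 = DC, 2..66 angular

      def select_mode(target_ref_value: float, prob_vector: list) -> int:
          # Assumed selection rule: pick the mode whose cumulative-probability
          # interval contains the decoded target reference value.
          cumulative = 0.0
          for idx, p in enumerate(prob_vector):
              cumulative += p
              if target_ref_value < cumulative:
                  return idx
          return len(prob_vector) - 1

      def determine_target_mode(target_ref_value, prob_vector, used_transpose):
          first_id = select_mode(target_ref_value, prob_vector)
          if not used_transpose or first_id not in ANGULAR_MODES:
              # No transpose, or a non-angular mode: the selection above is already the target mode.
              return first_id
          # Angular mode on a transposed block: map back using the preset constant.
          return PRESET_CONSTANT - first_id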
  59. An encoder (20), comprising a processing circuit configured to perform the method according to any one of claims 1 to 15.
  60. A decoder (30), comprising a processing circuit configured to perform the method according to any one of claims 16 to 29.
  61. An encoder, comprising:
    one or more processors; and
    a non-transitory computer-readable storage medium coupled to the one or more processors and storing a program for execution by the one or more processors, wherein the program, when executed by the one or more processors, causes the encoder to perform the method according to any one of claims 1 to 15.
  62. A decoder, comprising:
    one or more processors; and
    a non-transitory computer-readable storage medium coupled to the one or more processors and storing a program for execution by the one or more processors, wherein the program, when executed by the one or more processors, causes the decoder to perform the method according to any one of claims 16 to 29.
  63. A computer program product comprising program code which, when executed on a computer or a processor, performs the method according to any one of claims 1 to 29.
  64. A non-transitory computer-readable storage medium comprising program code which, when executed by a computer device, performs the method according to any one of claims 1 to 29.
  65. A non-transitory storage medium comprising a bitstream encoded according to the method of any one of claims 1 to 15.
PCT/CN2021/128000 2020-11-30 2021-11-01 Intra prediction mode coding method, and apparatus WO2022111233A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011387076.6A CN114584776A (en) 2020-11-30 2020-11-30 Method and device for decoding intra-frame prediction mode
CN202011387076.6 2020-11-30

Publications (1)

Publication Number Publication Date
WO2022111233A1 true WO2022111233A1 (en) 2022-06-02

Family

ID=81755333

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128000 WO2022111233A1 (en) 2020-11-30 2021-11-01 Intra prediction mode coding method, and apparatus

Country Status (2)

Country Link
CN (1) CN114584776A (en)
WO (1) WO2022111233A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115941966B (en) * 2022-12-30 2023-08-22 深圳大学 Video compression method and electronic equipment


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130049604A (en) * 2011-11-04 2013-05-14 연세대학교 산학협력단 Mode Switching Method in Entropy Coding
US20160212446A1 (en) * 2013-09-27 2016-07-21 Hongbin Liu Residual coding for depth intra prediction modes
US20200244955A1 (en) * 2017-10-13 2020-07-30 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Intra-prediction mode concept for block-wise picture coding
WO2019117645A1 (en) * 2017-12-14 2019-06-20 한국전자통신연구원 Image encoding and decoding method and device using prediction network
CN111355956A (en) * 2020-03-09 2020-06-30 蔡晓刚 Rate distortion optimization fast decision making system and method based on deep learning in HEVC intra-frame coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
T. DUMAS (INTERDIGITAL), F. GALPIN (INTERDIGITAL), P. BORDES (INTERDIGITAL), F. LELEANNEC (INTERDIGITAL): "AHG11: Neural Network-based intra prediction with transform selection in VVC", 20. JVET MEETING; 20201007 - 20201016; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 13 October 2020 (2020-10-13), XP030289860 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117354529A (en) * 2023-11-28 2024-01-05 广东匠芯创科技有限公司 Image processing method based on video coding system, electronic equipment and medium
CN117354529B (en) * 2023-11-28 2024-03-12 广东匠芯创科技有限公司 Image processing method based on video coding system, electronic equipment and medium

Also Published As

Publication number Publication date
CN114584776A (en) 2022-06-03

Similar Documents

Publication Publication Date Title
JP7239711B2 (en) Chroma block prediction method and apparatus
WO2022068716A1 (en) Entropy encoding/decoding method and device
WO2020125595A1 (en) Video coder-decoder and corresponding method
WO2022063265A1 (en) Inter-frame prediction method and apparatus
WO2020119814A1 (en) Image reconstruction method and device
JP7277586B2 (en) Method and apparatus for mode and size dependent block level limiting
WO2020114394A1 (en) Video encoding/decoding method, video encoder, and video decoder
WO2022111233A1 (en) Intra prediction mode coding method, and apparatus
CN110858903B (en) Chroma block prediction method and device
CN114125446A (en) Image encoding method, decoding method and device
US20230388490A1 (en) Encoding method, decoding method, and device
WO2020253681A1 (en) Method and device for constructing merge candidate motion information list, and codec
US20230239500A1 (en) Intra Prediction Method and Apparatus
WO2023011420A1 (en) Encoding method and apparatus, and decoding method and apparatus
WO2023020320A1 (en) Entropy encoding and decoding method and device
WO2022166462A1 (en) Encoding/decoding method and related device
WO2020135615A1 (en) Video image decoding method and apparatus
CN110876061B (en) Chroma block prediction method and device
WO2020114393A1 (en) Transform method, inverse transform method, video encoder, and video decoder
RU2786022C1 (en) Device and method for limitations of block level depending on mode and size
WO2024012249A1 (en) Method and apparatus for coding image including text, and method and apparatus for decoding image including text
WO2020119742A1 (en) Block division method, video encoding and decoding method, and video codec
CN116134817A (en) Motion compensation using sparse optical flow representation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896734

Country of ref document: EP

Kind code of ref document: A1