CN110944177B - Video decoding method, video decoder, video encoding method and video encoder
- Publication number: CN110944177B (application number CN201811150819.0A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television)
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
- H04N19/124—Quantisation
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
Abstract
The invention discloses a video decoding method and a video decoder. The method comprises the following steps: parsing the received bitstream to obtain indication information of the transform matrix pair used for inverse transforming the current block and the quantized coefficients of the current block; performing inverse quantization on the quantized coefficients of the current block to obtain inverse-quantized coefficients of the current block; determining, from four candidate transform matrix pairs according to the indication information, the transform matrix pair for inverse transforming the current block, where the horizontal-direction transform matrix and the vertical-direction transform matrix included in the four candidate pairs are each one of two preset transform matrices: one of the two transform matrices is a DST4 matrix or a variant of the DST4 matrix, and the other is a DCT2' matrix or a variant of the DCT2' matrix; and obtaining a reconstructed block of the current block according to the transform matrix pair used for inverse transforming the current block. With the invention, the implementation of the transform/inverse transform can be simplified.
Description
Technical Field
Embodiments of the present application relate generally to the field of video coding and, more particularly, to video decoding methods and video decoders, and to video encoding methods and video encoders.
Background
Video coding (video encoding and decoding) is used in a wide range of digital video applications, for example broadcast digital television, video distribution over the internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVD and Blu-ray discs, video content acquisition and editing systems, and camcorders for security applications.
Since the development of the block-based hybrid video coding approach in the H.261 standard in 1990, new video coding techniques and tools have been developed, and they have formed the basis for new video coding standards. Further video coding standards include MPEG-1 video, ITU-T H.262/MPEG-2 video, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and ITU-T H.265/High Efficiency Video Coding (HEVC), as well as extensions of such standards, such as scalability and/or three-dimensional (3D) extensions. As the creation and use of video have become ever more widespread, video traffic has become the biggest load on communication networks and data storage. One of the goals of most video coding standards is therefore to reduce the bit rate compared with the previous standard without sacrificing picture quality. Even though the latest HEVC can compress video approximately twice as much as AVC without sacrificing picture quality, there is a need for new techniques to further compress video relative to HEVC.
Disclosure of Invention
The embodiments of the application provide a video decoding method, a video decoder, a video encoding method, and a video encoder, which can simplify the implementation of the transform/inverse transform.
The foregoing and other objects are achieved by the subject matter of the independent claims. Other implementations are apparent from the dependent claims, the description and the drawings.
In a first aspect, the present invention provides a video decoding method, including:
parsing the received bitstream to obtain indication information of the transform matrix pair used for inverse transforming the current block and the quantized coefficients of the current block, where the transform matrix pair includes a horizontal-direction transform matrix and a vertical-direction transform matrix;
performing inverse quantization on the quantized coefficients of the current block to obtain inverse-quantized coefficients of the current block;
determining, from the candidate transform matrix pairs according to the indication information, the transform matrix pair for inverse transforming the current block, where the horizontal-direction transform matrix and the vertical-direction transform matrix included in each candidate pair are each one of two preset transform matrices, one of the two transform matrices being a DST4 matrix or a variant of the DST4 matrix and the other being a DCT2' matrix or a variant of the DCT2' matrix, the DCT2' matrix being the transpose of the DCT2 matrix;
inverse transforming the inverse-quantized coefficients of the current block according to the transform matrix pair to obtain a reconstructed residual block of the current block; and
obtaining a reconstructed block of the current block according to the reconstructed residual block of the current block.
The horizontal-direction transform matrix and the vertical-direction transform matrix included in any one of the candidate transform matrix pairs may be the same or different.
The number of candidate transform matrix pairs may be 2, 3, or 4.
It can be seen that the implementation of the transform/inverse transform can be simplified, because the DCT2' matrix, or a variant of the DCT2' matrix, admits a butterfly fast algorithm for the transform/inverse transform. Moreover, transforms based on the DCT2' matrix or its variant and on the DST4 matrix or its variant can directly reuse the transform/inverse-transform circuit of the DCT2 matrix, so that the design of the implementation circuit of the transform/inverse-transform module can be simplified when the module is implemented in hardware.
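For illustration only, the following sketch shows how a decoder might select one of four candidate transform matrix pairs and apply the separable 2D inverse transform. The function and variable names (inverse_transform_block, pair_index, dst4, dct2t) are hypothetical, and the mapping from pair_index to candidate pairs assumes the ordering of the four-pair implementation described below.

```python
import numpy as np

def inverse_transform_block(dequant_coeffs, pair_index, dst4, dct2t):
    """dequant_coeffs: an N x N block of inverse-quantized coefficients.
    pair_index: 0..3, derived from the parsed indication information.
    dst4, dct2t: the N x N DST4 and DCT2' matrices (or their variants)."""
    candidates = [
        (dst4, dst4),    # pair 1: vertical DST4,  horizontal DST4
        (dst4, dct2t),   # pair 2: vertical DST4,  horizontal DCT2'
        (dct2t, dst4),   # pair 3: vertical DCT2', horizontal DST4
        (dct2t, dct2t),  # pair 4: vertical DCT2', horizontal DCT2'
    ]
    vertical, horizontal = candidates[pair_index]
    # Separable 2D inverse transform: if the forward transform is
    # C = V * R * H^T, the reconstructed residual is R = V^T * C * H.
    return vertical.T @ dequant_coeffs @ horizontal
```

In a real codec the matrices are integer approximations and the matrix products are followed by shifts and clipping; those details are omitted here.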
With reference to the first aspect, in a possible implementation manner, the variant of the DST4 matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DST4 matrix; or
the variant of the DCT2' matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DCT2' matrix.
With reference to the first aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the DST4 matrix and the other is the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DCT2' matrix.
With reference to the first aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the variant of the DST4 matrix and the other is the variant of the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix.
With reference to the first aspect, in a possible implementation manner, the indication information includes an identifier indicating the vertical-direction transform matrix of the transform matrix pair used for inverse transforming the current block and an identifier indicating the horizontal-direction transform matrix of that pair.
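As an illustration, if the indication information were signalled as two one-bit identifiers (an assumption; the text does not fix the syntax), they could be combined into an index into the candidate pairs:

```python
def parse_pair_index(vertical_flag, horizontal_flag):
    """Combine the identifier of the vertical-direction matrix and the
    identifier of the horizontal-direction matrix into a pair index:
    0: (DST4, DST4), 1: (DST4, DCT2'), 2: (DCT2', DST4), 3: (DCT2', DCT2')."""
    return (vertical_flag << 1) | horizontal_flag
```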
With reference to the first aspect, in a possible implementation manner, before the inverse transforming of the inverse-quantized coefficients of the current block according to the transform matrix pair, the method further includes: deriving the transform matrices included in the transform matrix pair used for inverse transforming the current block from the DCT2 matrix according to a preset algorithm.
With reference to the first aspect, in a possible implementation manner, the transform matrix pair used for inverse transforming the current block includes a DST4 matrix, and the size of the DCT2 matrix is 64. Deriving the transform matrices included in the transform matrix pair used for inverse transforming the current block from the DCT2 matrix according to the preset algorithm includes deriving the DST4 matrix from the DCT2 matrix according to the following formula:

transMatrix[(2×j+1)×2^(5-Log2(nTbS))][64-nTbS+i]×(-1)^j

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; 64-nTbS denotes the offset of the column; 2^(5-Log2(nTbS)) denotes the offset of the row; and (-1)^j denotes the sign transformation.
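Since the original formula image is not fully legible here, the sketch below implements the reconstruction given above (row index (2j+1)×2^(5-Log2(nTbS)), column offset 64-nTbS, sign change (-1)^j); the exact index convention is an assumption, not the normative derivation.

```python
import numpy as np

def derive_dst4(trans_matrix, n_tbs):
    """Derive an nTbS x nTbS DST4 matrix from the 64 x 64 DCT2 matrix
    trans_matrix; nTbS is a power of two not larger than 32."""
    log2_n = int(np.log2(n_tbs))
    dst4 = np.empty((n_tbs, n_tbs), dtype=trans_matrix.dtype)
    for i in range(n_tbs):
        for j in range(n_tbs):
            row = (2 * j + 1) * 2 ** (5 - log2_n)  # row offset 2^(5 - Log2(nTbS))
            col = 64 - n_tbs + i                   # column offset 64 - nTbS
            dst4[i, j] = trans_matrix[row, col] * (-1) ** j
    return dst4
```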
With reference to the first aspect, in a possible implementation manner, the transform matrix pair used for inverse transforming the current block includes a DCT2' matrix, and the size of the DCT2 matrix is 64. Deriving the transform matrices included in the transform matrix pair used for inverse transforming the current block from the DCT2 matrix according to the preset algorithm includes deriving the DCT2' matrix from the DCT2 matrix according to the following formula:

transMatrix[j][i×2^(6-Log2(nTbS))]

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
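The extraction of the DCT2' matrix follows the formula directly; a minimal sketch:

```python
import numpy as np

def derive_dct2_transposed(trans_matrix, n_tbs):
    """Derive the nTbS x nTbS DCT2' matrix (the transpose of the DCT2
    matrix) from the 64 x 64 DCT2 matrix, per
    transMatrix[j][i x 2^(6 - Log2(nTbS))]."""
    stride = 2 ** (6 - int(np.log2(n_tbs)))  # subsampling step between rows
    dct2t = np.empty((n_tbs, n_tbs), dtype=trans_matrix.dtype)
    for i in range(n_tbs):
        for j in range(n_tbs):
            dct2t[i, j] = trans_matrix[j, i * stride]
    return dct2t
```

Because both matrices are read out of the single stored 64×64 DCT2 matrix, no separate DST4 or DCT2' tables need to be stored, which is one way the DCT2 transform circuitry and tables can be reused.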
In a second aspect, the present invention provides a video encoding method, including:
determining indication information of the transform matrix pair used for transforming the current residual block, where the transform matrix pair includes a horizontal-direction transform matrix and a vertical-direction transform matrix, the transform matrix pair is one of candidate transform matrix pairs, and the horizontal-direction transform matrix and the vertical-direction transform matrix included in each candidate pair are each one of two preset transform matrices, one of the two transform matrices being a DST4 matrix or a variant of the DST4 matrix and the other being a DCT2' matrix or a variant of the DCT2' matrix, the DCT2' matrix being the transpose of the DCT2 matrix;
quantizing the transform coefficients obtained by transforming the current residual block with the transform matrix pair, to obtain quantized coefficients of the current residual block;
writing the indication information of the transform matrix pair into the bitstream; and
entropy encoding the quantized coefficients and writing the result into the bitstream.
The horizontal-direction transform matrix and the vertical-direction transform matrix included in any one of the candidate transform matrix pairs may be the same or different.
The number of candidate transform matrix pairs may be 2, 3, or 4.
It can be seen that the implementation of the transform/inverse transform can be simplified, because the DCT2' matrix, or a variant of the DCT2' matrix, admits a butterfly fast algorithm for the transform/inverse transform. Moreover, transforms based on the DCT2' matrix or its variant and on the DST4 matrix or its variant can directly reuse the transform/inverse-transform circuit of the DCT2 matrix, so that the design of the implementation circuit of the transform/inverse-transform module can be simplified when the module is implemented in hardware.
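For illustration, the sketch below (hypothetical; the application does not prescribe how the encoder selects the pair) tries every candidate pair on the residual block and returns the index to be signalled as the indication information:

```python
import numpy as np

def choose_transform_pair(residual, candidates, quantize, rd_cost):
    """candidates: list of (vertical, horizontal) matrix pairs.
    quantize and rd_cost are hypothetical helpers standing in for the
    encoder's quantizer and rate-distortion cost measure."""
    best = None
    for index, (vertical, horizontal) in enumerate(candidates):
        coeffs = vertical @ residual @ horizontal.T  # separable forward transform
        quantized = quantize(coeffs)
        cost = rd_cost(residual, quantized, index)
        if best is None or cost < best[0]:
            best = (cost, index, quantized)
    cost, best_index, best_quantized = best
    return best_index, best_quantized
```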
With reference to the second aspect, in a possible implementation manner, the variant of the DST4 matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DST4 matrix; or
the variant of the DCT2' matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DCT2' matrix.
With reference to the second aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the DST4 matrix and the other is the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DCT2' matrix.
With reference to the second aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the variant of the DST4 matrix and the other is the variant of the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix.
With reference to the second aspect, in a possible implementation manner, the method further includes: deriving the transform matrices included in the transform matrix pair from the DCT2 matrix according to a preset algorithm.
With reference to the second aspect, in a possible implementation manner, the transform matrix pair includes a DST4 matrix, and the size of the DCT2 matrix is 64. Deriving the transform matrices included in the transform matrix pair from the DCT2 matrix according to the preset algorithm includes deriving the DST4 matrix from the DCT2 matrix according to the following formula:

transMatrix[(2×j+1)×2^(5-Log2(nTbS))][64-nTbS+i]×(-1)^j

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; 64-nTbS denotes the offset of the column; 2^(5-Log2(nTbS)) denotes the offset of the row; and (-1)^j denotes the sign transformation.
With reference to the second aspect, in a possible implementation manner, the transform matrix pair includes a DCT2' matrix, and the size of the DCT2 matrix is 64. Deriving the transform matrices included in the transform matrix pair from the DCT2 matrix according to the preset algorithm includes deriving the DCT2' matrix from the DCT2 matrix according to the following formula:

transMatrix[j][i×2^(6-Log2(nTbS))]

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
In a third aspect, the present invention provides a video decoder, comprising:
an entropy decoding unit, configured to parse the received bitstream to obtain indication information of the transform matrix pair used for inverse transforming the current block and the quantized coefficients of the current block, where the transform matrix pair includes a horizontal-direction transform matrix and a vertical-direction transform matrix;
an inverse quantization unit, configured to perform inverse quantization on the quantized coefficients of the current block to obtain inverse-quantized coefficients of the current block;
an inverse transform processing unit, configured to determine, from four candidate transform matrix pairs according to the indication information, the transform matrix pair for inverse transforming the current block, where the horizontal-direction transform matrix and the vertical-direction transform matrix included in each candidate pair are each one of two preset transform matrices, one of the two transform matrices being a DST4 matrix or a variant of the DST4 matrix and the other being a DCT2' matrix or a variant of the DCT2' matrix, the DCT2' matrix being the transpose of the DCT2 matrix; and to inverse transform the inverse-quantized coefficients of the current block according to the transform matrix pair to obtain a reconstructed residual block of the current block;
and a reconstruction unit, configured to obtain a reconstructed block of the current block based on the reconstructed residual block of the current block.
The horizontal-direction transform matrix and the vertical-direction transform matrix included in any one of the candidate transform matrix pairs may be the same or different.
The number of candidate transform matrix pairs may be 2, 3, or 4.
With reference to the third aspect, in a possible implementation manner, the variant of the DST4 matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DST4 matrix; or
the variant of the DCT2' matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DCT2' matrix.
With reference to the third aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the DST4 matrix and the other is the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DCT2' matrix.
With reference to the third aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the variant of the DST4 matrix and the other is the variant of the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix.
With reference to the third aspect, in a possible implementation manner, the indication information includes an identifier indicating the vertical-direction transform matrix of the transform matrix pair used for inverse transforming the current block and an identifier indicating the horizontal-direction transform matrix of that pair.
With reference to the third aspect, in a possible implementation manner, the inverse transform processing unit is further configured to derive the transform matrices included in the transform matrix pair used for inverse transforming the current block from the DCT2 matrix according to a preset algorithm.
With reference to the third aspect, in a possible implementation manner, the transform matrix pair used for inverse transforming the current block includes a DST4 matrix, and the size of the DCT2 matrix is 64. The inverse transform processing unit is specifically configured to derive the DST4 matrix from the DCT2 matrix according to the following formula:

transMatrix[(2×j+1)×2^(5-Log2(nTbS))][64-nTbS+i]×(-1)^j

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; 64-nTbS denotes the offset of the column; 2^(5-Log2(nTbS)) denotes the offset of the row; and (-1)^j denotes the sign transformation.
With reference to the third aspect, in a possible implementation manner, the transform matrix pair used for inverse transforming the current block includes a DCT2' matrix, and the size of the DCT2 matrix is 64. The inverse transform processing unit is specifically configured to derive the DCT2' matrix from the DCT2 matrix according to the following formula:

transMatrix[j][i×2^(6-Log2(nTbS))]

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
In a fourth aspect, the present invention provides a video encoder, comprising:
a transform processing unit, configured to determine indication information of the transform matrix pair used for transforming the current residual block, where the transform matrix pair includes a horizontal-direction transform matrix and a vertical-direction transform matrix, the transform matrix pair is one of candidate transform matrix pairs, and the horizontal-direction transform matrix and the vertical-direction transform matrix included in each candidate pair are each one of two preset transform matrices, one of the two transform matrices being a DST4 matrix or a variant of the DST4 matrix and the other being a DCT2' matrix or a variant of the DCT2' matrix, the DCT2' matrix being the transpose of the DCT2 matrix;
a quantization unit, configured to quantize the transform coefficients obtained by transforming the current residual block with the transform matrix pair, to obtain quantized coefficients of the current residual block;
an entropy encoding unit, configured to entropy encode the quantized coefficients of the current residual block and the indication information;
and an output, configured to write the entropy-encoded indication information of the transform matrix pair and the entropy-encoded quantized coefficients of the current residual block into the bitstream.
The horizontal-direction transform matrix and the vertical-direction transform matrix included in any one of the candidate transform matrix pairs may be the same or different.
The number of candidate transform matrix pairs may be 2, 3, or 4.
With reference to the fourth aspect, in a possible implementation manner, the variant of the DST4 matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DST4 matrix; or
the variant of the DCT2' matrix is obtained by changing the signs of the coefficients of at least some rows or at least some columns of the DCT2' matrix.
With reference to the fourth aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the DST4 matrix and the other is the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the DST4 matrix and the horizontal-direction transform matrix is the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the DCT2' matrix and the horizontal-direction transform matrix is the DCT2' matrix.
With reference to the fourth aspect, in a possible implementation manner, the number of candidate transform matrix pairs is four. When one of the two transform matrices is the variant of the DST4 matrix and the other is the variant of the DCT2' matrix, in the first of the four candidate transform matrix pairs the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
in the second candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DST4 matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix;
in the third candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DST4 matrix;
and in the fourth candidate transform matrix pair, the vertical-direction transform matrix is the variant of the DCT2' matrix and the horizontal-direction transform matrix is the variant of the DCT2' matrix.
With reference to the fourth aspect, in a possible implementation manner, the transform processing unit is further configured to derive the transform matrices included in the transform matrix pair from the DCT2 matrix according to a preset algorithm.
With reference to the fourth aspect, in a possible implementation manner, the transform matrix pair includes a DST4 matrix, and the size of the DCT2 matrix is 64. The transform processing unit is specifically configured to derive the DST4 matrix from the DCT2 matrix according to the following formula:

transMatrix[(2×j+1)×2^(5-Log2(nTbS))][64-nTbS+i]×(-1)^j

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; 64-nTbS denotes the offset of the column; 2^(5-Log2(nTbS)) denotes the offset of the row; and (-1)^j denotes the sign transformation.
With reference to the fourth aspect, in a possible implementation manner, the transform matrix pair includes a DCT2' matrix, and the size of the DCT2 matrix is 64. The transform processing unit is specifically configured to derive the DCT2' matrix from the DCT2 matrix according to the following formula:

transMatrix[j][i×2^(6-Log2(nTbS))]

where transMatrix denotes the DCT2 matrix, nTbS denotes the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
In a fifth aspect, the present invention is directed to an apparatus for decoding a video stream, comprising a processor and a memory. The memory stores instructions that cause the processor to perform a method according to the first aspect or any possible embodiment of the first aspect.
In a sixth aspect, the present invention is directed to a device for video encoding, comprising a processor and a memory. The memory stores instructions that cause the processor to perform a method according to the second aspect or any possible embodiment of the second aspect.
In a seventh aspect, a computer-readable storage medium is presented having instructions stored thereon that, when executed, cause one or more processors to decode video data. The instructions cause the one or more processors to perform a method according to the first aspect or any possible embodiment of the first aspect.
In an eighth aspect, a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to encode video data is presented. The instructions cause the one or more processors to perform a method according to the second aspect or any possible embodiment of the second aspect.
In a ninth aspect, a video decoder is proposed, comprising execution circuitry for performing the method according to the first aspect or any possible embodiment of the first aspect.
In a tenth aspect, a video encoder is proposed, comprising execution circuitry for performing the method according to the second aspect or any possible embodiment of the second aspect.
In an eleventh aspect, the invention relates to a computer program comprising a program code which, when run on a computer, performs the method according to the first aspect or any possible embodiment of the first aspect.
In a twelfth aspect, the invention relates to a computer program comprising a program code which, when run on a computer, performs the method according to the second aspect or any possible embodiment of the second aspect.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
Drawings
To describe the technical solutions in the embodiments of the present application or in the background more clearly, the following briefly describes the accompanying drawings needed for describing the embodiments or the background of the present application.
FIG. 1 is a block diagram of an example video coding system for implementing an embodiment of the present invention;
FIG. 2 is a block diagram illustrating an example structure of a video encoder for implementing an embodiment of the present invention;
fig. 3 is a block diagram showing an example structure of a video decoder for implementing an embodiment of the present invention;
FIG. 4 is a block diagram of an example video coding system including the encoder 20 of FIG. 2 and the decoder 30 of FIG. 3;
FIG. 5 is a block diagram showing another example of an encoding device or decoding device;
FIG. 6 is a schematic diagram showing a butterfly fast-algorithm circuit implementation of the 16×16 DCT2 matrix in HEVC;
FIG. 7 is a schematic diagram illustrating a 32×32 inverse transform implementation according to an embodiment;
FIG. 8 is a schematic diagram illustrating an implementation circuit according to an embodiment;
FIG. 9 is a diagram illustrating an inverse transform architecture of an 8×8 DCT2 matrix according to one embodiment;
FIG. 10 is a flow chart illustrating a video decoding method according to an embodiment;
fig. 11 is a flowchart illustrating a video encoding method according to an embodiment.
In the following, like reference numerals refer to like or at least functionally equivalent features, unless specifically noted otherwise.
Detailed Description
In the following description, reference is made to the accompanying drawings which form a part hereof and which show by way of illustration specific aspects in which embodiments of the invention may be practiced. It is to be understood that embodiments of the invention may be used in other aspects and may include structural or logical changes not depicted in the drawings. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
For example, it should be understood that the disclosure in connection with the described methods may be equally applicable to a corresponding apparatus or system for performing the methods, and vice versa. For example, if one or more specific method steps are described, the corresponding apparatus may comprise one or more units, such as functional units, to perform the one or more described method steps (e.g., one unit performing one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the figures. On the other hand, if a specific apparatus is described based on one or more units such as a functional unit, for example, the corresponding method may include one step to perform the functionality of the one or more units (e.g., one step to perform the functionality of the one or more units, or multiple steps each to perform the functionality of one or more units, even if such one or more steps are not explicitly described or illustrated in the figures). Further, it is to be understood that features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless explicitly stated otherwise.
Video coding generally refers to the processing of a sequence of pictures that form a video or video sequence. In the field of video coding, the terms "picture", "frame", and "image" may be used as synonyms. Video coding as used in this application (or this disclosure) refers to video encoding or video decoding. Video encoding is performed on the source side, and typically includes processing (e.g., compressing) the original video picture to reduce the amount of data needed to represent the video picture (and thereby store and/or transmit it more efficiently). Video decoding is performed on the destination side, and typically involves processing that is the inverse of the encoder's in order to reconstruct the video pictures. "Coding" of video pictures (or of pictures in general, as will be explained below) in the embodiments shall be understood to refer to either "encoding" or "decoding" of a video sequence. The combination of the encoding part and the decoding part is also called a codec (encoding and decoding).
In the case of lossless video coding, the original video picture may be reconstructed, i.e., the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission). In the case of lossy video coding, the amount of data needed to represent a video picture is reduced by performing further compression, e.g. quantization, whereas the decoder side cannot reconstruct the video picture completely, i.e. the quality of the reconstructed video picture is lower or worse than the quality of the original video picture.
Several video coding standards since H.261 belong to the group of "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain). Each picture of a video sequence is typically partitioned into a set of non-overlapping blocks, and coding is typically performed at the block level. In other words, at the encoder the video is typically processed, i.e. encoded, at the block (video block) level, e.g. by generating a prediction block via spatial (intra-picture) prediction and temporal (inter-picture) prediction, subtracting the prediction block from the current block (the block currently being processed or to be processed) to obtain a residual block, and transforming and quantizing the residual block in the transform domain to reduce the amount of data to be transmitted (compressed), while at the decoder the inverse processing relative to the encoder is applied to the encoded or compressed block to reconstruct the current block for presentation. Furthermore, the encoder replicates the decoder processing loop, so that both generate identical predictions (e.g., intra- and inter-predictions) and/or reconstructions for processing, i.e. coding, subsequent blocks.
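The block-level loop just described can be summarized in a short sketch (illustrative only; predict, transform, quantize, dequantize, and inverse_transform are hypothetical helpers, and the blocks are numeric arrays):

```python
def encode_block(current_block, predict, transform, quantize,
                 dequantize, inverse_transform):
    prediction = predict(current_block)     # intra- or inter-picture prediction
    residual = current_block - prediction   # prediction error in the sample domain
    coeffs = quantize(transform(residual))  # 2D transform followed by quantization
    # Replicate the decoder's processing loop so that encoder and decoder
    # work from identical reconstructions when coding subsequent blocks.
    reconstruction = prediction + inverse_transform(dequantize(coeffs))
    return coeffs, reconstruction
```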
As used herein, the term "block" may be a portion of a picture or frame. For ease of description, embodiments of the present invention are described with reference to Versatile Video Coding (VVC) or High Efficiency Video Coding (HEVC) developed by the Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Motion Picture Experts Group (MPEG). Those of ordinary skill in the art will appreciate that embodiments of the present invention are not limited to HEVC or VVC. The term "block" may refer to a CU, a PU, or a TU. In HEVC, a CTU is split into CUs by using a quadtree structure denoted as a coding tree. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two, or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied, and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree for the CU. In a recent development of video compression technology, quadtree and binary tree (QTBT) partitioning is used to partition coding blocks. In the QTBT block structure, a CU can be square or rectangular. In VVC, a coding tree unit (CTU) is first partitioned by a quadtree structure, and the quadtree leaf nodes are further partitioned by a binary tree structure. The binary tree leaf nodes are called coding units (CUs), and that segmentation is used for prediction and transform processing without any further partitioning; this means that the block sizes of CU, PU, and TU are the same in the QTBT coding block structure. In parallel, the use of multiple partition types together with the QTBT block structure, such as ternary tree partitioning, has been proposed.
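As an illustration of QTBT partitioning, the toy sketch below recursively splits a CTU into CUs; decide_split is a hypothetical stand-in for the encoder's split decision, and once a binary split has occurred no further quadtree splits are allowed:

```python
def partition_ctu(x, y, w, h, in_binary, decide_split, cus):
    """decide_split(x, y, w, h, in_binary) returns 'quad', 'hor', 'ver', or None."""
    split = decide_split(x, y, w, h, in_binary)
    if split == "quad" and not in_binary:
        half_w, half_h = w // 2, h // 2
        for dx, dy in ((0, 0), (half_w, 0), (0, half_h), (half_w, half_h)):
            partition_ctu(x + dx, y + dy, half_w, half_h, False, decide_split, cus)
    elif split == "hor":  # horizontal binary split: top and bottom halves
        partition_ctu(x, y, w, h // 2, True, decide_split, cus)
        partition_ctu(x, y + h // 2, w, h // 2, True, decide_split, cus)
    elif split == "ver":  # vertical binary split: left and right halves
        partition_ctu(x, y, w // 2, h, True, decide_split, cus)
        partition_ctu(x + w // 2, y, w // 2, h, True, decide_split, cus)
    else:
        cus.append((x, y, w, h))  # leaf node: a coding unit
```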
Embodiments of the encoder 20, decoder 30 and codec systems 10, 40 are described below based on fig. 1-4 (before embodiments of the present invention are described in more detail based on fig. 10).
FIG. 1 is a conceptual or schematic block diagram illustrating an exemplary encoding system 10, e.g., a video encoding system 10 that may utilize the techniques of this application (this disclosure). Encoder 20 (e.g., video encoder 20) and decoder 30 (e.g., video decoder 30) of the video encoding system 10 represent examples of devices that may be used to perform techniques for the video encoding or video decoding methods according to the various examples described herein. As shown in FIG. 1, the encoding system 10 includes a source device 12 configured to provide encoded data 13, e.g., an encoded picture 13, to a destination device 14 which, for example, decodes the encoded data 13.
The source device 12 includes an encoder 20 and may additionally, i.e. optionally, include a picture source 16, a preprocessing unit 18 (e.g., a picture preprocessing unit), and a communication interface 22 (or communication unit).
The picture source 16 may comprise or be any type of picture capturing device for capturing, for example, a real-world picture, and/or any type of picture or image generating device, for example a computer graphics processor for generating computer-animated pictures, or any type of device for obtaining and/or providing a real-world picture or a computer-animated picture (e.g., screen content or virtual reality (VR) pictures), and/or any combination thereof (e.g., augmented reality (AR) pictures). For screen content coding, text on a screen is also considered a part of the picture or image to be encoded.
A (digital) picture is or can be regarded as a two-dimensional array or matrix of samples with luminance values. The samples in the array may also be referred to as pixels (short for picture elements) or pels. The number of samples of the array or picture in the horizontal and vertical directions (or axes) defines the size and/or resolution of the picture. For the representation of color, three color components are typically employed, i.e., the picture may be represented as or contain three sample arrays. In the RGB format or color space, a picture comprises corresponding red, green, and blue sample arrays. However, in video coding, each pixel is typically represented in a luminance/chrominance format or color space, e.g., YCbCr, which comprises a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by Cb and Cr. The luminance (luma) component Y represents brightness or grayscale intensity (e.g., as in a grayscale picture), while the two chrominance (chroma) components Cb and Cr represent the chrominance or color information components. Accordingly, a picture in the YCbCr format comprises a luma sample array of luma sample values (Y) and two chroma sample arrays of chroma values (Cb and Cr). Pictures in the RGB format may be converted or transformed into the YCbCr format and vice versa; this process is also known as color transformation or conversion. If a picture is monochrome, the picture may comprise only a luma sample array.
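For illustration, one common BT.601-style full-range RGB to YCbCr conversion is sketched below (there are several variants in use; the exact matrix and value range are fixed by the applicable video standard, not by this application):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range conversion; inputs and outputs are floats."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = -0.169 * r - 0.331 * g + 0.500 * b   # blue-difference chroma
    cr =  0.500 * r - 0.419 * g - 0.081 * b   # red-difference chroma
    return y, cb, cr
```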
Picture source 16 (e.g., a video source) may be, for example, a camera for capturing pictures, a memory such as a picture memory, a memory that includes or stores previously captured or generated pictures, and/or any type of (internal or external) interface that captures or receives pictures. The camera may be, for example, an integrated camera, either local or integrated in the source device, and the memory may be, for example, an integrated memory, either local or integrated in the source device. The interface may be, for example, an external interface that receives pictures from an external video source, such as an external picture capture device, like a camera, an external memory or an external picture generation device, such as an external computer graphics processor, a computer or a server. The interface may be any kind of interface according to any proprietary or standardized interface protocol, e.g. a wired or wireless interface, an optical interface. The interface to acquire the picture data 17 may be the same interface as the communication interface 22 or a part of the communication interface 22.
The picture or picture data 17 (e.g., video data) may also be referred to as the original picture or original picture data 17, to distinguish it from the preprocessed picture 19 and the processing performed by the preprocessing unit 18.
The preprocessing unit 18 is configured to receive the (original) picture data 17 and to perform preprocessing on the picture data 17 to obtain a preprocessed picture 19 or preprocessed picture data 19. The preprocessing performed by the preprocessing unit 18 may, for example, comprise trimming, color format conversion (e.g., from RGB to YCbCr), color correction, or denoising. It is understood that the preprocessing unit 18 may be an optional component.
Encoder 20, e.g., video encoder 20, is operative to receive preprocessed picture data 19 and provide encoded picture data 21 (details are described further below, e.g., based on fig. 2 or fig. 4).
The communication interface 22 of the source device 12 may be used to receive the encoded picture data 21 and transmit it to other devices, such as the destination device 14 or any other device, for storage or direct reconstruction, or for processing the encoded picture data 21 before storing the encoded data 13 and/or transmitting the encoded data 13 to the other devices, such as the destination device 14 or any other device for decoding or storage, respectively.
The destination device 14 includes a decoder 30 (e.g., a video decoder 30), and may additionally, i.e. optionally, include a communication interface 28 (or communication unit), a post-processing unit or post-processor 32, and a display device 34.
The communication interface 28 of the destination device 14 is for receiving the encoded picture data 21 or the encoded data 13, e.g. directly from the source device 12 or any other source, e.g. a storage device, e.g. an encoded picture data storage device.
Communication interface 22 and communication interface 28 may be used to transmit or receive encoded picture data 21 or encoded data 13 via a direct communication link between source device 12 and destination device 14, such as a direct wired or wireless connection, or via any type of network, such as a wired or wireless network or any combination thereof, or any type of private and public networks, or any combination thereof.
The communication interface 22 may, for example, be used to encapsulate the encoded picture data 21 into a suitable format, such as packets, for transmission over a communication link or communication network.
The communication interface 28, forming the counterpart of the communication interface 22, may, for example, be used to unpack the encoded data 13 to obtain the encoded picture data 21.
Both communication interface 22 and communication interface 28 may be configured as unidirectional communication interfaces, as indicated by the arrow from source device 12 to destination device 14 for encoded picture data 13 in fig. 1, or as bi-directional communication interfaces, and may be used, for example, to send and receive messages to establish connections, acknowledge and exchange any other information related to the communication link and/or data transmission, such as encoded picture data transmission.
Decoder 30 is used to receive encoded picture data 21 and provide decoded picture data 31 or decoded picture 31 (details will be described further below, e.g., based on fig. 3 or fig. 5).
The post-processor 32 of the destination device 14 is used to post-process the decoded picture data 31 (also referred to as reconstructed picture data), e.g., the decoded picture 31, to obtain post-processed picture data 33, e.g., a post-processed picture 33. Post-processing performed by post-processor 32 may include, for example, color format conversion (e.g., from YCbCr to RGB), color correction, trimming, or resampling, or any other processing for, e.g., preparing the decoded picture data 31 for display by display device 34.
The display device 34 of the destination device 14 is for receiving the post-processed picture data 33 to display the picture to, for example, a user or viewer. The display device 34 may be or include any type of display for presenting reconstructed pictures, for example, an integrated or external display or monitor. For example, the display may include a liquid crystal display (liquid crystal display, LCD), an organic light emitting diode (organic light emitting diode, OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (liquid crystal on silicon, LCoS), a digital light processor (digital light processor, DLP), or any other type of display.
Although fig. 1 depicts source device 12 and destination device 14 as separate devices, device embodiments may also include both devices or the functionality of both, i.e., source device 12 or corresponding functionality together with destination device 14 or corresponding functionality. In such embodiments, the source device 12 or corresponding functionality and the destination device 14 or corresponding functionality may be implemented using the same hardware and/or software, or using separate hardware and/or software, or any combination thereof.
It will be apparent to those skilled in the art from this description that the existence and (exact) division of the functionality of the different units, or of the functionality within the source device 12 and/or destination device 14 shown in fig. 1, may vary depending on the actual device and application.
Encoder 20 (e.g., video encoder 20) and decoder 30 (e.g., video decoder 30) may each be implemented as any of a variety of suitable circuits, such as one or more microprocessors, digital signal processors (digital signal processor, DSP), application-specific integrated circuits (application-specific integrated circuit, ASIC), field-programmable gate arrays (field-programmable gate array, FPGA), discrete logic, hardware, or any combination thereof. If the techniques are implemented partly in software, an apparatus may store instructions of the software in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered one or more processors. Each of video encoder 20 and video decoder 30 may be contained in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in the corresponding device.
Source device 12 may be referred to as a video encoding device or video encoding apparatus. Destination device 14 may be referred to as a video decoding device or video decoding apparatus. Source device 12 and destination device 14 may be examples of video coding devices or video coding apparatuses.
Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, mobile phone, smart phone, tablet or tablet computer, video camera, desktop computer, set-top box, television, display device, digital media player, video game console, video streaming device (e.g., content service server or content distribution server), broadcast receiver device, broadcast transmitter device, etc., and may use no operating system or any type of operating system.
In some cases, source device 12 and destination device 14 may be equipped for wireless communication. Thus, the source device 12 and the destination device 14 may be wireless communication devices.
The video encoding system 10 shown in fig. 1 is merely an example, and the techniques of this disclosure may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily involve any data communication between the encoding and decoding devices. In other examples, the data may be retrieved from local memory, streamed over a network, and the like. The video encoding device may encode data and store it to memory, and/or the video decoding device may retrieve data from memory and decode it. In some examples, encoding and decoding are performed by devices that do not communicate with each other, but simply encode data to memory and/or retrieve data from memory and decode it.
It should be appreciated that, for each of the examples described above with reference to video encoder 20, video decoder 30 may be used to perform the reverse process. Regarding signaling syntax elements, video decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly. In some examples, video encoder 20 may entropy encode one or more syntax elements defining … into an encoded video bitstream. In such examples, video decoder 30 may parse such syntax elements and decode the relevant video data accordingly.
Encoder & encoding method
Fig. 2 shows a schematic/conceptual block diagram of an example of a video encoder 20 for implementing the techniques of this application (disclosure). In the example of fig. 2, video encoder 20 includes residual calculation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, buffer 216, loop filter unit 220, decoded picture buffer (decoded picture buffer, DPB) 230, prediction processing unit 260, and entropy encoding unit 270. The prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262. The inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown). The video encoder 20 shown in fig. 2 may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
For example, the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260 and the entropy encoding unit 270 form a forward signal path of the encoder 20, whereas for example the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (decoded picture buffer, DPB) 230, the prediction processing unit 260 form a backward signal path of the encoder, wherein the backward signal path of the encoder corresponds to the signal path of the decoder (see decoder 30 in fig. 3).
Encoder 20 receives picture 201 or a block 203 of picture 201, e.g., a picture in a sequence of pictures forming a video or video sequence, through, e.g., input 202. The picture block 203 may also be referred to as a current picture block or a picture block to be encoded, and the picture 201 may be referred to as a current picture or a picture to be encoded (especially when distinguishing the current picture from other pictures in video encoding, such as previously encoded and/or decoded pictures in the same video sequence, i.e. a video sequence also comprising the current picture).
Segmentation
An embodiment of encoder 20 may comprise a partitioning unit (not shown in fig. 2) for partitioning the picture 201 into a plurality of blocks, e.g., blocks 203, typically into a plurality of non-overlapping blocks. The partitioning unit may be configured to use the same block size for all pictures of the video sequence and the corresponding grid defining the block size, or to change the block size between pictures or subsets or groups of pictures and partition each picture into the corresponding blocks.
In one example, prediction processing unit 260 of video encoder 20 may be configured to perform any combination of the above-described partitioning techniques.
Like picture 201, block 203 is also or may be regarded as a two-dimensional array or matrix of sampling points with luminance values (sampling values), albeit of smaller size than picture 201. In other words, block 203 may include, for example, one sampling array (e.g., a luminance array in the case of black-and-white picture 201) or three sampling arrays (e.g., one luminance array and two chrominance arrays in the case of color pictures) or any other number and/or class of arrays depending on the color format applied. The number of sampling points in the horizontal and vertical directions (or axes) of the block 203 defines the size of the block 203.
The encoder 20 as shown in fig. 2 is used to encode a picture 201 block by block, e.g. perform encoding and prediction on each block 203.
Residual calculation
The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture block 203 and the prediction block 265 (further details of the prediction block 265 are provided below), for example, by subtracting sample values of the prediction block 265 from sample values of the picture block 203 on a sample-by-sample (pixel-by-pixel) basis to obtain the residual block 205 in a sample domain.
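By way of illustration only, the following sketch shows this sample-by-sample subtraction (the function name and array types are our assumptions, not identifiers from the encoder):

```python
import numpy as np

def residual_block(picture_block: np.ndarray, prediction_block: np.ndarray) -> np.ndarray:
    """Sample-by-sample subtraction of the prediction block from the picture
    block, yielding the residual block in the sample domain."""
    assert picture_block.shape == prediction_block.shape
    # Use a signed type: residual sample values can be negative.
    return picture_block.astype(np.int16) - prediction_block.astype(np.int16)
```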
Transformation
The transform processing unit 206 is configured to apply a transform, such as a discrete cosine transform (discrete cosine transform, DCT) or a discrete sine transform (discrete sine transform, DST), on the sample values of the residual block 205 to obtain transform coefficients 207 in the transform domain. The transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
The transform processing unit 206 may be used to apply integer approximations of DCT/DST, such as the transforms specified for HEVC/H.265. Such integer approximations are typically scaled by some factor compared to the orthogonal DCT transform. To maintain the norm of the residual block processed by the forward and inverse transforms, an additional scaling factor is applied as part of the transform process. The scaling factor is typically selected based on certain constraints, e.g., as a tradeoff between being a power of 2 for shift operations, the bit depth of the transform coefficients, accuracy, and implementation cost. For example, a specific scaling factor is specified for the inverse transform by, for example, the inverse transform processing unit 312 on the decoder 30 side (and for the corresponding inverse transform by, for example, the inverse transform processing unit 212 on the encoder 20 side), and accordingly, a corresponding scaling factor may be specified for the forward transform by the transform processing unit 206 on the encoder 20 side.
Quantization
The quantization unit 208 is configured to quantize the transform coefficients 207, for example by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209. The quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209. The quantization process may reduce the bit depth associated with some or all of the transform coefficients 207. For example, n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m. The degree of quantization may be modified by adjusting the quantization parameter (quantization parameter, QP). For example, for scalar quantization, different scales may be applied to achieve finer or coarser quantization. Smaller quantization step sizes correspond to finer quantization, while larger quantization step sizes correspond to coarser quantization. The appropriate quantization step size may be indicated by the quantization parameter (quantization parameter, QP). For example, the quantization parameter may be an index into a predefined set of suitable quantization step sizes. For example, smaller quantization parameters may correspond to fine quantization (smaller quantization step sizes) and larger quantization parameters may correspond to coarse quantization (larger quantization step sizes), or vice versa. Quantization may involve division by a quantization step size, while the corresponding dequantization or inverse quantization, e.g., performed by the inverse quantization unit 210, may involve multiplication by the quantization step size. Embodiments according to some standards, such as HEVC, may use the quantization parameter to determine the quantization step size. In general, the quantization step size may be calculated from the quantization parameter using a fixed-point approximation of an equation that includes division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which may be modified due to the scale used in the fixed-point approximation of the equation for the quantization step size and the quantization parameter. In one example embodiment, the scales of the inverse transform and the dequantization may be combined. Alternatively, a custom quantization table may be used and signaled from the encoder to the decoder, e.g., in the bitstream. Quantization is a lossy operation, and the loss increases with increasing quantization step size.
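As a numerical illustration of the relationship between quantization parameter and quantization step size described above, the following sketch uses the HEVC-style rule in which the step size is approximately 2^((QP-4)/6), so that increasing QP by 6 doubles the step size; this is a simplified floating-point model, not the fixed-point scheme of any particular codec:

```python
import numpy as np

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Scalar quantization: divide by the step size and round.
    A larger QP means a larger step size and coarser quantization (more loss)."""
    qstep = 2.0 ** ((qp - 4) / 6.0)  # HEVC-style: QP + 6 doubles the step size
    return np.round(coeffs / qstep).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Inverse quantization: multiply the quantized levels by the step size."""
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return levels * qstep
```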
The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211, e.g., to apply, based on or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208. The dequantized coefficients 211, which may also be referred to as dequantized residual coefficients 211, correspond to the transform coefficients 207, although they are typically not identical to the transform coefficients due to the loss caused by quantization.
The inverse transform processing unit 212 is configured to apply an inverse transform of the transform applied by the transform processing unit 206, for example, an inverse discrete cosine transform (discrete cosine transform, DCT) or an inverse discrete sine transform (discrete sine transform, DST), to obtain an inverse transform block 213 in the sample domain. The inverse transform block 213 may also be referred to as an inverse transformed inverse quantized block 213 or an inverse transformed residual block 213.
A reconstruction unit 214 (e.g., a summer) is used to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, e.g., to add sample values of the reconstructed residual block 213 to sample values of the prediction block 265.
Optionally, a buffer unit 216, e.g. a line buffer 216 (or simply "buffer" 216), is used to buffer or store the reconstructed block 215 and the corresponding sample values for e.g. intra prediction. In other embodiments, the encoder may be configured to use the unfiltered reconstructed block and/or the corresponding sample values stored in the buffer unit 216 for any kind of estimation and/or prediction, such as intra prediction.
For example, embodiments of encoder 20 may be configured such that buffer unit 216 is used not only to store reconstructed blocks 215 for intra prediction 254 but also for loop filter unit 220 (not shown in fig. 2), and/or such that buffer unit 216 and decoded picture buffer unit 230 form one buffer. Other embodiments may use the filtered block 221 and/or blocks or samples from the decoded picture buffer 230 (neither shown in fig. 2) as an input or basis for the intra prediction 254.
The loop filter unit 220 (or simply "loop filter" 220) is used to filter the reconstructed block 215 to obtain a filtered block 221, so as to smooth pixel transitions or otherwise improve video quality. Loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (sample-adaptive offset, SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (adaptive loop filter, ALF), a sharpening or smoothing filter, or a collaborative filter. Although loop filter unit 220 is shown in fig. 2 as an in-loop filter, in other configurations loop filter unit 220 may be implemented as a post-loop filter. The filtered block 221 may also be referred to as a filtered reconstructed block 221. Decoded picture buffer 230 may store the reconstructed encoded block after loop filter unit 220 performs a filtering operation on it.
Embodiments of encoder 20 (and correspondingly loop filter unit 220) may be configured to output loop filter parameters (e.g., sample adaptive offset information), e.g., directly or after entropy encoding by entropy encoding unit 270 or any other entropy encoding unit, e.g., such that decoder 30 may receive and apply the same loop filter parameters for decoding.
Decoded picture buffer (decoded picture buffer, DPB) 230 may be a reference picture memory that stores reference picture data for use by video encoder 20 in encoding video data. DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (dynamic random access memory, DRAM) (including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM)) or other types of memory devices. DPB 230 and buffer 216 may be provided by the same memory device or by separate memory devices. In a certain example, the decoded picture buffer (decoded picture buffer, DPB) 230 is used to store the filtered block 221. The decoded picture buffer 230 may further be used to store other previously filtered blocks, e.g., previously reconstructed and filtered blocks 221, of the same current picture or of different pictures, e.g., previously reconstructed pictures, and may provide complete previously reconstructed, i.e., decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), e.g., for inter prediction. In a certain example, if the reconstructed block 215 is reconstructed without in-loop filtering, the decoded picture buffer (decoded picture buffer, DPB) 230 is used to store the reconstructed block 215.
The prediction processing unit 260, also referred to as block prediction processing unit 260, is adapted to receive or obtain block 203 (current block 203 of current picture 201) and reconstructed slice data, e.g. reference samples of the same (current) picture from buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from decoded picture buffer 230, and to process such data for prediction, i.e. to provide a prediction block 265 which may be an inter prediction block 245 or an intra prediction block 255.
The mode selection unit 262 may be used to select a prediction mode (e.g., intra or inter prediction mode) and/or a corresponding prediction block 245 or 255 used as the prediction block 265 to calculate the residual block 205 and reconstruct the reconstructed block 215.
Embodiments of mode selection unit 262 may be used to select the prediction mode (e.g., from those supported by prediction processing unit 260) that provides the best match or the minimum residual (minimum residual meaning better compression for transmission or storage), or that provides the minimum signaling overhead (minimum signaling overhead meaning better compression for transmission or storage), or that considers or balances both. The mode selection unit 262 may be configured to determine the prediction mode based on rate-distortion optimization (rate distortion optimization, RDO), i.e., to select the prediction mode that provides the minimum rate distortion, or to select a prediction mode whose associated rate distortion at least meets a prediction mode selection criterion.
The prediction processing performed by an instance of encoder 20 (e.g., by prediction processing unit 260) and the mode selection performed (e.g., by mode selection unit 262) will be explained in detail below.
As described above, the encoder 20 is configured to determine or select the best or optimal prediction mode from a (predetermined) set of prediction modes. The set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
The set of intra prediction modes may include 35 different intra prediction modes, for example, a non-directional mode such as a DC (or mean) mode and a planar mode, or a directional mode as defined in h.265, or 67 different intra prediction modes, for example, a non-directional mode such as a DC (or mean) mode and a planar mode, or a directional mode as defined in h.266 under development.
The set of (possible) inter prediction modes depends on the available reference pictures (i.e., at least partly decoded pictures stored in the DPB 230 as described above) and other inter prediction parameters, e.g., on whether the entire reference picture or only a part of it, such as a search window area around the area of the current block, is used to search for the best matching reference block, and/or on whether pixel interpolation, such as half-pixel and/or quarter-pixel interpolation, is applied.
In addition to the above prediction modes, a skip mode and/or a direct mode may also be applied.
The prediction processing unit 260 may be further configured to partition the block 203 into smaller block partitions or sub-blocks, for example, by iteratively using a quad-tree (QT) partition, a binary-tree (BT) partition, or a ternary-tree (TT) partition, or any combination thereof, and to perform prediction for each of the block partitions or sub-blocks, for example, wherein the mode selection includes selecting a tree structure of the partitioned block 203 and selecting a prediction mode applied to each of the block partitions or sub-blocks.
The inter prediction unit 244 may include a motion estimation (motion estimation, ME) unit (not shown in fig. 2) and a motion compensation (motion compensation, MC) unit (not shown in fig. 2). The motion estimation unit is used to receive or obtain the picture block 203 (current picture block 203 of the current picture 201) and a decoded picture 231, or at least one or more previously reconstructed blocks, e.g., reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation. For example, the video sequence may include the current picture and previously decoded pictures 231; in other words, the current picture and the previously decoded pictures 231 may be part of, or form, the sequence of pictures forming the video sequence.
For example, encoder 20 may be configured to select a reference block from a plurality of reference blocks of the same picture or of different pictures among a plurality of other pictures, and provide the reference picture (or reference picture index) and/or an offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in fig. 2) as inter prediction parameters. This offset is also called a motion vector (MV).
The motion compensation unit is used to obtain, for example, receive inter prediction parameters and perform inter prediction based on or using the inter prediction parameters to obtain the inter prediction block 245. The motion compensation performed by the motion compensation unit (not shown in fig. 2) may involve fetching or generating a prediction block based on motion/block vectors determined by motion estimation (possibly performing interpolation of sub-pixel accuracy). Interpolation filtering may generate additional pixel samples from known pixel samples, potentially increasing the number of candidate prediction blocks available for encoding a picture block. Upon receiving the motion vector for the PU of the current picture block, motion compensation unit 246 may locate the prediction block to which the motion vector points in a reference picture list. Motion compensation unit 246 may also generate syntax elements associated with the blocks and video slices for use by video decoder 30 in decoding the picture blocks of the video slices.
The intra prediction unit 254 is used to obtain, e.g., receive, the picture block 203 (current picture block) of the same picture and one or more previously reconstructed blocks, e.g., reconstructed neighboring blocks, for intra estimation. For example, encoder 20 may be configured to select an intra prediction mode from a plurality of (predetermined) intra prediction modes.
Embodiments of encoder 20 may be used to select an intra-prediction mode based on optimization criteria, such as based on a minimum residual (e.g., the intra-prediction mode that provides a prediction block 255 most similar to current picture block 203) or minimum rate distortion.
The intra prediction unit 254 is further adapted to determine an intra prediction block 255 based on intra prediction parameters like the selected intra prediction mode. In any case, after the intra-prediction mode for the block is selected, the intra-prediction unit 254 is also configured to provide the intra-prediction parameters, i.e., information indicating the selected intra-prediction mode for the block, to the entropy encoding unit 270. In one example, intra-prediction unit 254 may be used to perform any combination of the intra-prediction techniques described below.
The entropy encoding unit 270 is used to apply an entropy encoding algorithm or scheme (e.g., a variable length coding (variable length coding, VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (context adaptive binary arithmetic coding, CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (probability interval partitioning entropy, PIPE) coding, or another entropy encoding method or technique) to any or all of the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters (or to none of them), to obtain encoded picture data 21 that may be output by output 272, e.g., in the form of an encoded bitstream. The encoded bitstream may be transmitted to video decoder 30 or archived for later transmission or retrieval by video decoder 30. Entropy encoding unit 270 may also be used to entropy encode other syntax elements of the current video slice being encoded.
Other structural variations of video encoder 20 may be used to encode the video stream. For example, the non-transform based encoder 20 may directly quantize the residual signal without a transform processing unit 206 for certain blocks or frames. In another embodiment, encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
Fig. 3 illustrates an exemplary video decoder 30 for implementing the techniques of this application. Video decoder 30 is operative to receive encoded picture data (e.g., an encoded bitstream) 21, e.g., encoded by encoder 20, to obtain decoded picture 231. During the decoding process, video decoder 30 receives video data, such as an encoded video bitstream representing picture blocks of an encoded video slice and associated syntax elements, from video encoder 20.
In the example of fig. 3, decoder 30 includes entropy decoding unit 304, inverse quantization unit 310, inverse transform processing unit 312, reconstruction unit 314 (e.g., summer 314), buffer 316, loop filter 320, decoded picture buffer 330, and prediction processing unit 360. The prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362. In some examples, video decoder 30 may perform a decoding pass that is substantially reciprocal to the encoding pass described with reference to video encoder 20 of fig. 2.
Entropy decoding unit 304 is used to perform entropy decoding on encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded encoding parameters (not shown in fig. 3), e.g., any or all of inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements (decoded). Entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters, and/or other syntax elements to prediction processing unit 360. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
Inverse quantization unit 310 may be functionally identical to inverse quantization unit 210, inverse transform processing unit 312 may be functionally identical to inverse transform processing unit 212, reconstruction unit 314 may be functionally identical to reconstruction unit 214, buffer 316 may be functionally identical to buffer 216, loop filter 320 may be functionally identical to loop filter 220, and decoded picture buffer 330 may be functionally identical to decoded picture buffer 230.
The prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, where the inter prediction unit 344 may be similar in function to the inter prediction unit 244 and the intra prediction unit 354 may be similar in function to the intra prediction unit 254. The prediction processing unit 360 is typically used to perform block prediction and/or to obtain a prediction block 365 from the encoded image data 21, as well as to receive or obtain prediction related parameters and/or information about the selected prediction mode (explicitly or implicitly) from, for example, the entropy decoding unit 304.
When a video slice is encoded as an intra-coded (I) slice, the intra prediction unit 354 of the prediction processing unit 360 is used to generate a prediction block 365 for a picture block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When a video frame is encoded as an inter-coded (i.e., B or P) slice, the inter prediction unit 344 (e.g., a motion compensation unit) of prediction processing unit 360 is used to generate a prediction block 365 for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 304. For inter prediction, the prediction block may be generated from one of the reference pictures within one of the reference picture lists. Video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on the reference pictures stored in DPB 330.
The prediction processing unit 360 is configured to determine prediction information for a video block of a current video slice by parsing the motion vector and other syntax elements, and generate a prediction block for the current video block being decoded using the prediction information. For example, prediction processing unit 360 uses some syntax elements received to determine a prediction mode (e.g., intra or inter prediction) for encoding video blocks of a video slice, an inter prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists of the slice, motion vectors for each inter-encoded video block of the slice, inter prediction state for each inter-encoded video block of the slice, and other information to decode video blocks of the current video slice.
Inverse quantization unit 310 may be used to inverse quantize (i.e., dequantize) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 304. The inverse quantization process may include using a quantization parameter calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization that was applied and, likewise, the degree of inverse quantization that should be applied.
The inverse transform processing unit 312 is configured to apply an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to generate a residual block in the pixel domain.
A reconstruction unit 314 (e.g., a summer 314) is used to add the inverse transform block 313 (i.e., the reconstructed residual block 313) to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, e.g., by adding sample values of the reconstructed residual block 313 to sample values of the prediction block 365.
Loop filter unit 320 is used (in the decoding loop or after it) to filter the reconstructed block 315 to obtain a filtered block 321, so as to smooth pixel transitions or otherwise improve video quality. In one example, loop filter unit 320 may be used to perform any combination of the filtering techniques described below. Loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (sample-adaptive offset, SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (adaptive loop filter, ALF), a sharpening or smoothing filter, or a collaborative filter. Although loop filter unit 320 is shown in fig. 3 as an in-loop filter, in other configurations loop filter unit 320 may be implemented as a post-loop filter.
The decoded video blocks 321 in a given frame or picture are then stored in a decoded picture buffer 330 that stores reference pictures for subsequent motion compensation.
Decoder 30 is used to output the decoded picture 31, e.g., via output 332, for presentation to or viewing by a user.
Other variations of video decoder 30 may be used to decode the compressed bitstream. For example, decoder 30 may generate the output video stream without loop filter unit 320. For example, the non-transform based decoder 30 may directly inverse quantize the residual signal without an inverse transform processing unit 312 for certain blocks or frames. In another embodiment, the video decoder 30 may have an inverse quantization unit 310 and an inverse transform processing unit 312 combined into a single unit.
Fig. 4 is an illustration of an example of a video encoding system 40 including encoder 20 of fig. 2 and/or decoder 30 of fig. 3, according to an example embodiment. The system 40 may implement a combination of the various techniques of the present application. In the illustrated embodiment, video encoding system 40 may include an imaging device 41, a video encoder 20, a video decoder 30 (and/or a video encoder implemented by logic circuitry 47 of a processing unit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
As shown, imaging device 41, antenna 42, processing unit 46, logic 47, video encoder 20, video decoder 30, processor 43, memory 44, and/or display device 45 are capable of communicating with each other. As discussed, although video encoding system 40 is depicted with video encoder 20 and video decoder 30, in different examples, video encoding system 40 may include only video encoder 20 or only video decoder 30.
In some examples, as shown, video encoding system 40 may include an antenna 42. For example, the antenna 42 may be used to transmit or receive an encoded bitstream of video data. Additionally, in some examples, video encoding system 40 may include a display device 45. The display device 45 may be used to present video data. In some examples, as shown, logic circuitry 47 may be implemented by processing unit 46. The processing unit 46 may comprise application-specific integrated circuit (application-specific integrated circuit, ASIC) logic, a graphics processor, a general-purpose processor, or the like. The video encoding system 40 may also include an optional processor 43, which may similarly comprise application-specific integrated circuit (application-specific integrated circuit, ASIC) logic, a graphics processor, a general-purpose processor, or the like. In some examples, logic circuitry 47 may be implemented in hardware, such as dedicated video encoding hardware, while processor 43 may be implemented by general-purpose software, an operating system, or the like. In addition, the memory 44 may be any type of memory, such as volatile memory (e.g., static random access memory (static random access memory, SRAM), dynamic random access memory (dynamic random access memory, DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.). In a non-limiting example, the memory 44 may be implemented by a cache memory. In some examples, logic circuitry 47 may access memory 44 (e.g., for implementing an image buffer). In other examples, logic circuitry 47 and/or processing unit 46 may include memory (e.g., a cache, etc.) for implementing an image buffer, etc.
In some examples, video encoder 20 implemented by logic circuitry may include an image buffer (e.g., implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video encoder 20 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 2 and/or any other encoder system or subsystem described herein. Logic circuitry may be used to perform various operations discussed herein.
Video decoder 30 may be implemented in a similar manner by logic circuitry 47 to implement the various modules discussed with reference to decoder 30 of fig. 3 and/or any other decoder system or subsystem described herein. In some examples, video decoder 30 implemented by logic circuitry may include an image buffer (e.g., implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include video decoder 30 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 3 and/or any other decoder system or subsystem described herein.
In some examples, antenna 42 of video encoding system 40 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to the encoded video frame, indicators, index values, mode selection data, etc., discussed herein, such as data related to the encoded partitions (e.g., transform coefficients or quantized transform coefficients, optional indicators (as discussed), and/or data defining the encoded partitions). Video encoding system 40 may also include a video decoder 30 coupled to antenna 42 and used to decode the encoded bitstream. The display device 45 is used to present video frames.
Fig. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of the source device 12 and the destination device 14 in fig. 1, according to an example embodiment. Apparatus 500 may implement the techniques of this application, and apparatus 500 may take the form of a computing system comprising multiple computing devices, or a single computing device such as a mobile phone, tablet computer, laptop computer, notebook computer, desktop computer, or the like.
The processor 502 in the apparatus 500 may be a central processor. Alternatively, processor 502 may be any other type of device, or multiple devices, capable of manipulating or processing information, now existing or hereafter developed. Although the disclosed implementations may be practiced with a single processor as shown, such as processor 502, advantages in speed and efficiency may be achieved by using more than one processor.
In an embodiment, the memory 504 in the apparatus 500 may be a read only memory (read only memory, ROM) device or a random access memory (random access memory, RAM) device. Any other suitable type of storage device may be used as memory 504. Memory 504 may include code and data 506 accessed by processor 502 using bus 512. Memory 504 may further include an operating system 508 and an application 510, application 510 containing at least one program that permits processor 502 to perform the methods described herein. For example, application 510 may include applications 1 through N, which further include a video encoding application that performs the methods described herein. The apparatus 500 may also contain additional memory in the form of a secondary memory 514, which may be, for example, a memory card for use with a mobile computing device. Because a video communication session may contain a large amount of information, such information may be stored in whole or in part in the secondary memory 514 and loaded into memory 504 for processing as needed.
The apparatus 500 may also include one or more output devices, such as a display 518. In one example, display 518 may be a touch-sensitive display that combines the display and touch-sensitive elements operable to sense touch inputs. A display 518 may be coupled to the processor 502 by a bus 512. Other output devices may be provided in addition to the display 518 that permit a user to program or otherwise use the apparatus 500, or other output devices may be provided as alternatives to the display 518. When the output device is a display or comprises a display, the display may be implemented in different ways, including by a liquid crystal display (liquid crystal display, LCD), cathode-ray tube (CRT) display, plasma display or light emitting diode (light emitting diode, LED) display, such as an Organic LED (OLED) display.
The apparatus 500 may also include or be in communication with an image sensing device 520, the image sensing device 520 being, for example, a camera or any other image sensing device 520 now available or hereafter developed that can sense images, such as images of a user operating the apparatus 500. The image sensing device 520 may be placed directly facing the user running the apparatus 500. In an example, the position and optical axis of the image sensing device 520 may be configured such that its field of view includes an area proximate to the display 518 and the display 518 is visible from that area.
The apparatus 500 may also include or be in communication with a sound sensing device 522, such as a microphone or any other sound sensing device now available or later developed that may sense sound in the vicinity of the apparatus 500. The sound sensing device 522 may be placed directly facing the user operating the apparatus 500 and may be used to receive sounds, such as speech or other sounds, emitted by the user while operating the apparatus 500.
Although the processor 502 and the memory 504 of the apparatus 500 are depicted in fig. 5 as being integrated in a single unit, other configurations may also be used. The operations of processor 502 may be distributed among multiple machines that can be directly coupled, each having one or more processors, or distributed in a local area or other network. The memory 504 may be distributed across multiple machines, such as network-based memory or memory in multiple machines running the apparatus 500. Although depicted here as a single bus, the bus 512 of the apparatus 500 may be formed from multiple buses. Further, the secondary memory 514 may be coupled directly to other components of the apparatus 500 or may be accessible over a network, and may comprise a single integrated unit, such as one memory card, or multiple units, such as multiple memory cards. Thus, the apparatus 500 may be implemented in a variety of configurations.
In draft 2.0 of the versatile video coding (VVC) standard, two new transform kernels, discrete sine transform 7 (DST7) and discrete cosine transform 8 (DCT8), are introduced in addition to the conventional discrete cosine transform 2 (DCT2) transform kernel. As shown in Table 1, these transform kernels exhibit different distribution characteristics in their corresponding basis functions.
Table 1 DCT and DST transform basis functions
For the prediction residual, different prediction modes produce residuals with different characteristics. By adopting the multi-core transform technique, multiple transform selection (MTS: multiple transform selection), the characteristics of different transformation matrices can be fully utilized to better adapt to these residual characteristics, thereby improving the coding compression performance.
After the transform kernel is determined, the corresponding transformation matrix, such as a DCT2 matrix, a DST7 matrix, or a DCT8 matrix, may be obtained based on the basis functions corresponding to that transform kernel.
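By way of illustration, the following sketch generates floating-point DCT2, DST7, and DCT8 transformation matrices directly from the standard basis functions (the function names are ours; the integer matrices used in practice are scaled and rounded versions of these, e.g., scaled by roughly 64*sqrt(N) in HEVC-style designs):

```python
import numpy as np

def dct2_matrix(n: int) -> np.ndarray:
    """DCT2 basis: T[i][j] = w0 * sqrt(2/N) * cos(pi*i*(2j+1)/(2N)),
    with w0 = 1/sqrt(2) for i = 0 and w0 = 1 otherwise. Rows are basis vectors."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    t = np.sqrt(2.0 / n) * np.cos(np.pi * i * (2 * j + 1) / (2 * n))
    t[0, :] /= np.sqrt(2.0)
    return t

def dst7_matrix(n: int) -> np.ndarray:
    """DST7 basis: T[i][j] = sqrt(4/(2N+1)) * sin(pi*(2i+1)*(j+1)/(2N+1))."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sqrt(4.0 / (2 * n + 1)) * np.sin(np.pi * (2 * i + 1) * (j + 1) / (2 * n + 1))

def dct8_matrix(n: int) -> np.ndarray:
    """DCT8 basis: T[i][j] = sqrt(4/(2N+1)) * cos(pi*(2i+1)*(2j+1)/(4N+2))."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sqrt(4.0 / (2 * n + 1)) * np.cos(np.pi * (2 * i + 1) * (2 * j + 1) / (4 * n + 2))
```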
The MTS scheme in the VVC standard draft test model (VTM 2.0) JVET-K1002 is shown in Table 2:
table 2 an MTS scheme
In the table, A and B represent transformation matrices: A = DST7 matrix and B = DCT8 matrix. The transform sizes (transformation matrix sizes) include 4x4, 8x8, 16x16, and 32x32. The transformation matrices in the horizontal and vertical directions can be combined into 4 transformation matrix pairs, each corresponding to a different index. The index is written into the code stream to inform the decoder which transformation matrix pair to use.
Taking the MTS scheme shown in Table 2 as an example, when the encoding end performs transform processing, it traverses the 4 transformation matrix pairs in Table 2, performs horizontal and vertical transforms on the prediction residual block with each transformation matrix pair, selects the transformation matrix pair with the smallest rate-distortion cost, and writes the index of that transformation matrix pair into the code stream. After the corresponding transformation matrix pair is determined (assume the pair corresponds to index 1), the prediction residual block R is transformed (i.e., matrix multiplied) using the A matrix and the B matrix to obtain the transform coefficient block F:
F=B*R*A’
The coefficient block F is then entropy coded and written into the code stream.
When the decoding end performs inverse transform processing, it decodes the index of the transformation matrix pair from the code stream to determine which pair to use, and uses that pair to perform inverse transforms in the vertical and horizontal directions on the decoded coefficient block, obtaining the prediction residual block (reconstructed residual block). Specifically, the decoded transform coefficient block F is inversely transformed (i.e., matrix multiplied) using the A matrix and the B matrix to obtain the residual block R:
R=B’*F*A
where A' denotes the transpose of the A matrix and B' denotes the transpose of the B matrix; since both the A matrix and the B matrix are orthogonal matrices, the transpose is equivalent to the inverse.
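The forward/inverse round trip and the role of orthogonality can be illustrated numerically. The following sketch reuses the dst7_matrix and dct8_matrix generators from the sketch above, with floating-point matrices standing in for the codec's integer arithmetic:

```python
import numpy as np

n = 4
A = dst7_matrix(n)  # horizontal transformation matrix (A = DST7, as in Table 2)
B = dct8_matrix(n)  # vertical transformation matrix (B = DCT8)

rng = np.random.default_rng(0)
R = rng.integers(-64, 64, size=(n, n)).astype(float)  # mock prediction residual block

F = B @ R @ A.T      # forward transform:  F = B * R * A'
R_rec = B.T @ F @ A  # inverse transform:  R = B' * F * A

# Because A and B are orthogonal (A @ A.T is the identity), the round trip is
# lossless up to floating-point error; in a real codec the loss comes from
# quantizing F, not from the transform itself.
assert np.allclose(R, R_rec)
```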
When the matrix multiplication is implemented in a circuit, a butterfly fast algorithm (partial butterfly) is generally adopted, which exploits the symmetry of the matrix coefficients to reduce the number of required multiplications. However, for the DST7 and DCT8 transforms there is no butterfly fast algorithm like that of the DCT2 transform (partial butterfly); therefore, only matrix multiplication can be used, and the computational complexity (e.g., the number of multiplications) is high. Meanwhile, the coefficients of the DST7 and DCT8 transformation matrices need to be stored; considering the different sizes, the number of coefficients that need to be stored is 2 x (4x4 + 8x8 + 16x16 + 32x32) = 2720.
It should be noted that, in addition to the transformation matrices A and B described above, the VVC standard draft also uses the DCT2 matrix as a transformation matrix, with sizes ranging from 4x4 to 128x128.
In another implementation, a simplified scheme for MTS is presented, as shown in table 3:
Table 3 An alternative MTS scheme
Here, C = DST4 matrix and D = DCT4 matrix; that is, the DST7 matrix and the DCT8 matrix of the scheme described above are replaced with the DST4 matrix and the DCT4 matrix. DST4 and DCT4 have similar transform kernel basis function characteristics; the specific transform basis functions are shown in Table 4.
Table 4 DCT4 and DST4 transform basis functions
After determining that the transformation kernels are DST4 and DCT4, the corresponding transformation matrices, i.e., DST4 matrix and DCT4 matrix, may be obtained according to the basis functions corresponding to the transformation kernels.
Based on the basis functions of tables 1 and 4, the following transformation matrix examples can be obtained:
table 5 example 8x8 DCT2 transform matrix
Table 6 4x4 DCT2 transform matrix example
Table 7 4x4 DCT4 transform matrix example
Table 8 4x4 DST4 transform matrix example
As can be seen from the above transformation matrices, the coefficients of the 8x8 DCT2 matrix include all the coefficients of the 4x4 DCT2 matrix (e.g., the bold italic coefficients in Table 5 are the same as the coefficients of Table 6) and of the 4x4 DCT4 matrix (e.g., the underlined coefficients in Table 5 are the same as the coefficients of Table 7). As can further be seen from Tables 7 and 8, the DST4 matrix can be obtained by mirroring (FLIP) and sign-transforming the DCT4 matrix.
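This mirror-and-sign relation, DST4[i][j] = (-1)^i * DCT4[i][N-1-j], can be checked numerically. The following sketch assumes the standard floating-point DCT4/DST4 basis functions (the function names are ours):

```python
import numpy as np

def dct4_matrix(n: int) -> np.ndarray:
    """DCT4 basis: T[i][j] = sqrt(2/N) * cos(pi*(2i+1)*(2j+1)/(4N))."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * (2 * j + 1) / (4 * n))

def dst4_matrix(n: int) -> np.ndarray:
    """DST4 basis: T[i][j] = sqrt(2/N) * sin(pi*(2i+1)*(2j+1)/(4N))."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sqrt(2.0 / n) * np.sin(np.pi * (2 * i + 1) * (2 * j + 1) / (4 * n))

n = 4
dct4 = dct4_matrix(n)
# Mirror (FLIP) each row left-to-right, then negate every odd-index row.
signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)[:, None]
assert np.allclose(signs * dct4[:, ::-1], dst4_matrix(n))
```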
To compare the effect of the MTS algorithm described in Table 3 with that of the MTS algorithm described in Table 2, performance tests of the Table 3 algorithm were carried out on the VVC reference software VTM-2.0 platform; the test data obtained by comparison against the Table 2 algorithm are shown in Tables 9 and 10.
Table 9 data of MTS algorithm under AI test conditions described in table 3
Table 10 data of MTS algorithm under RA test conditions described in table 3
The values in the table represent the percentage of increase in coded bits at the same video image quality. Class X (A1, A2, B, C, or E) represents the test video sequence, Y, U/V represents the luminance and chrominance components of the video image, respectively, and EncT and DecT represent the encoding and decoding times, respectively. The test condition AI represents All Intra, and the test condition RA represents random access.
Fig. 6 illustrates a butterfly fast algorithm circuit implementation of the 16x16 DCT2 matrix in HEVC. As can be seen from fig. 6, the butterfly fast algorithm circuit of the 16x16 DCT2 matrix includes the implementation circuits of the 4x4 DCT2 matrix, the 8x8 DCT2 matrix, the 4x4 DCT4 matrix, and the 8x8 DCT4 matrix; that is, the implementations of these four matrices can directly multiplex this circuit. However, only the 4x4 DCT2 matrix and the 8x8 DCT2 matrix can multiplex the butterfly fast algorithm itself; although the implementations of the 4x4 DCT4 matrix and the 8x8 DCT4 matrix can multiplex the implementation circuit of the 16x16 DCT2 matrix, they do not use the butterfly fast algorithm.
To further reduce the implementation complexity of MTS while reducing performance loss, one embodiment of the present invention provides an MTS implementation as shown in Table 11.
Table 11 an MTS implementation
Here, C = DST4 matrix and E = DCT2' matrix, where the DCT2' matrix is the transposed matrix of the DCT2 matrix and the symbol ' denotes transposition. In fact, the transpose of the DCT2 matrix coincides with the DCT3 matrix.
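The statement that the transpose of the DCT2 matrix coincides with the DCT3 matrix can likewise be checked numerically; the sketch below reuses dct2_matrix from the earlier generator sketch, and dct3_matrix is our illustrative rendering of the standard DCT3 basis:

```python
import numpy as np

def dct3_matrix(n: int) -> np.ndarray:
    """DCT3 basis: T[i][j] = w(j) * sqrt(2/N) * cos(pi*(2i+1)*j/(2N)),
    with w(0) = 1/sqrt(2) and w(j) = 1 otherwise."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    t = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * j / (2 * n))
    t[:, 0] /= np.sqrt(2.0)
    return t

# DCT2' (the transpose of the DCT2 matrix) equals the DCT3 matrix.
assert np.allclose(dct2_matrix(4).T, dct3_matrix(4))
```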
One example implementation of the 4x4 DST4 matrix is shown in Table 8. An example implementation of the 4x4 DCT2' matrix is shown in Table 12.
Table 12 4x4 DCT2' transform matrix example
As can be seen by comparing Table 11 with Table 3, Table 11 replaces the DCT4 matrix of Table 3 with the DCT2' matrix. Considering that a butterfly fast algorithm exists for the transform/inverse transform of the DCT2' matrix, the implementation of the transform/inverse transform can be further simplified. Meanwhile, in a circuit implementation, the transform/inverse transform implementation circuit corresponding to the DCT2 matrix can be multiplexed. For the implementation of the DST4 matrix, as shown above, the DST4 matrix can multiplex the transform/inverse transform implementation circuit of the 2Nx2N DCT2 matrix through operations such as flipping (FLIP) and sign transformation.
To verify the effect of the MTS scheme of Table 11, the inventors carried out performance tests of the Table 11 scheme on the VVC reference software VTM-2.0.1 platform; the codec compression performance relative to the MTS scheme of Table 2 is shown in Tables 13 and 14.
TABLE 13
TABLE 14
As can be seen from Tables 13 and 14, compared with the MTS scheme described in Table 2, the average coding bit rate of the MTS scheme described in Table 11 increases very little (coding bits increase by 0.01% under AI test conditions and by 0.02% under RA test conditions); that is, the effect on coding compression performance is almost negligible. However, the scheme can use the butterfly fast algorithm, which simplifies the transform/inverse transform implementation; as shown in the tables, encoding time is reduced by 2%-4% and decoding time is also somewhat reduced. Meanwhile, the coefficients of the transformation matrices used can be derived simply and conveniently from the 2Nx2N DCT2 matrix, so no extra storage space is needed. In addition, the implementation circuit of the transformation matrices used can multiplex the transform/inverse transform implementation circuit corresponding to the 2Nx2N DCT2 matrix, which can simplify the design of the codec implementation circuit.
In order to clarify how the implementation circuit of the transformation matrix in the embodiment of the present invention can multiplex the transform/inverse transform implementation circuit corresponding to the 2Nx2N DCT2 matrix, the circuit multiplexing is specifically described below, using as an example the partial butterfly fast implementation method of the inverse transform circuit disclosed in the document Core Transform Design in the High Efficiency Video Coding (HEVC) Standard. The inverse transform implementation of the DCT2 matrix can be decomposed into three modules, EVEN, ODD, and ADDSUB, where EVEN denotes a column transform using the matrix composed of the odd-numbered row coefficients of the DCT2 matrix, ODD denotes a column transform using the matrix composed of the even-numbered row coefficients of the DCT2 matrix, and ADDSUB denotes an add-subtract module.
For example, fig. 7 depicts a 32x32 inverse transform implementation circuit, in which the Even4 module, the Odd4 module, and the Addsub4 module constitute the inverse transform implementation circuit 701 of the 4x4 matrix; the inverse transform implementation circuit 701 of the 4x4 matrix, the Odd8 module, and the Addsub8 module constitute the inverse transform implementation circuit 702 of the 8x8 matrix; the inverse transform implementation circuit 702 of the 8x8 matrix, the Odd16 module, and the Addsub16 module constitute the inverse transform implementation circuit 703 of the 16x16 matrix; and the inverse transform implementation circuit 703 of the 16x16 matrix, the Odd32 module, and the Addsub32 module constitute the inverse transform implementation circuit 704 of the 32x32 matrix.
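The nested Even/Odd/Addsub structure of figs. 6 and 7 can be expressed as a short recursion. The following sketch is an illustration of the decomposition, not the actual circuit; it works for any DCT2-structured matrix (float or HEVC-style integer) whose even-index basis rows are symmetric and odd-index rows antisymmetric, and it reuses dct2_matrix from the earlier generator sketch for the check:

```python
import numpy as np

def inverse_dct2_partial_butterfly(T: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Compute x = T' @ y via the Even/Odd/Addsub decomposition.

    T is an n x n DCT2 matrix (rows are basis vectors) and y holds n
    transform coefficients. The EVEN part recurses on the half-size DCT2
    matrix embedded in T (even-index rows, left half); the ODD part is a
    plain matrix multiplication; ADDSUB combines them with butterflies.
    """
    n = y.shape[0]
    if n == 1:
        return T[0, 0] * y
    half = n // 2
    T_even = T[0::2, :half]   # half-size DCT2 matrix (the EVEN module's matrix)
    T_odd = T[1::2, :half]    # the ODD module's matrix
    e = inverse_dct2_partial_butterfly(T_even, y[0::2])  # recursive EVEN module
    o = T_odd.T @ y[1::2]                                # ODD module
    x = np.empty(n)
    x[:half] = e + o          # ADDSUB module: top half
    x[half:] = (e - o)[::-1]  # ADDSUB module: bottom half, mirrored
    return x

# Check against direct matrix multiplication with a float DCT2 matrix:
T = dct2_matrix(8)
y = np.random.default_rng(0).standard_normal(8)
assert np.allclose(inverse_dct2_partial_butterfly(T, y), T.T @ y)
```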
Fig. 8 depicts an implementation circuit in which, as shown, the matrix multiplication circuits of the Even4 module and the Odd4 module may be shared between the transform circuit and the inverse transform circuit.
FIG. 9 depicts the inverse transform architecture of an 8×8 DCT2 matrix, where d_n denotes the coefficient in the n-th row and 0-th column of the 32×32 DCT2 matrix. Fig. 9 specifically shows the internal structure of the matrix multiplications implemented by the EVEN module and the ODD module; the EVEN8 and ODD8 matrices can be obtained from the 2Nx2N DCT2 matrix.
Specifically, the 8×8 DCT2 matrix is shown in table 15:
TABLE 15 8X8 DCT2 matrix
The 4x4 EVEN8 transformation matrix can be obtained as follows:
Extract the odd-numbered column coefficients from the left half of the 8x8 DCT2 matrix shown in table 15 (the framed coefficients in table 15); they form the 4x4 matrix shown in table 16:
table 16 4x4 matrix
The matrix of table 16 is transposed to obtain the 4×4 transform matrix EVEN8 shown in table 17. The EVEN8 matrix is in fact a DCT2' matrix.
Table 17 4x4 EVEN8 transform matrix
The 4x4 ODD8 transformation matrix may be obtained as follows:
Extract the even-numbered column coefficients from the right half of the 8x8 DCT2 matrix shown in table 15 (the underlined coefficients in table 15); they form the 4x4 matrix shown in table 18:
Table 18 4x4 matrix
The matrix in table 18 is transposed and sign-transformed to obtain the 4x4 transform matrix ODD8 shown in table 19. The ODD8 matrix is in fact a deformation of the DST4 matrix: for example, the DST4 matrix may be sign-transformed, specifically by inverting the signs of its odd-numbered column coefficients, to obtain the ODD8 matrix.
TABLE 19 4X4 ODD8 transformation matrix
As can be seen from the above description, an NxN transformation matrix can be derived from the 2Nx2N DCT2 matrix. Therefore, only the coefficients of one 64x64 DCT2 matrix need to be stored, and the 32x32, 16x16, 8x8, 4x4 and 2x2 matrix coefficients can all be derived from it; no additional storage space is required for those matrix coefficients.
As can be seen from comparing table 12 with table 16, table 12 is identical to table 16; that is, the 4×4 DCT2' transform matrix can be derived directly from the 8×8 DCT2 matrix. The implementation circuit of the 4×4 DCT2' transform matrix is therefore already contained in the transform/inverse-transform implementation circuit of the 2Nx2N DCT2 matrix, and the 4×4 DCT2' transform matrix can directly multiplex that circuit.
As can be seen from comparing table 8 with table 19, the transformation matrix shown in table 19 can be obtained from the matrix described in table 8 merely by a sign transformation (inverting the signs of the odd-numbered columns of coefficients); that is, the 4x4 DST4 matrix can be derived directly from the 8x8 DCT2 matrix. The implementation circuit of the 4x4 DST4 matrix is therefore already contained in the transform/inverse-transform implementation circuit of the 2Nx2N DCT2 matrix, and the 4x4 DST4 matrix can directly multiplex that circuit.
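Under the additional assumption that the 8x8 DCT2 matrix D8 is stored with its basis vectors as rows, the derivations discussed above can be sketched as follows in C++; the FLIP-plus-sign relation between the DCT4 and DST4 matrices is the one mentioned for the table 3 scheme, and the odd-column sign inversion is the one described for table 19.

#include <array>

using Mat4 = std::array<std::array<int, 4>, 4>;
using Mat8 = std::array<std::array<int, 8>, 8>;

// Illustrative derivation of the 4x4 DCT2' (EVEN8), DST4 and ODD8 matrices
// from an 8x8 DCT2 matrix stored with basis vectors as rows (an assumption).
void deriveFromDct2_8(const Mat8& D8, Mat4& dct2p, Mat4& dst4, Mat4& odd8) {
    Mat4 dct4;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            dct2p[j][i] = D8[2 * i][j];      // EVEN8 = transpose of even rows, left half
            dct4[i][j]  = D8[2 * i + 1][j];  // odd rows, left half: a 4x4 DCT4 matrix
        }
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            // DST4 from DCT4 by FLIP (column reversal) plus sign changes
            // on every other row.
            dst4[i][j] = ((i & 1) ? -1 : 1) * dct4[i][3 - j];
            // ODD8 = DST4 with the signs of its odd-numbered columns inverted.
            odd8[i][j] = ((j & 1) ? -1 : 1) * dst4[i][j];
        }
}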
To further reduce the implementation complexity of MTS while keeping the performance loss small, one embodiment of the present invention provides the MTS implementation shown in table 20.
Table 20 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix. The deformations mentioned above may be obtained by performing a sign transformation (e.g., a sign inversion) on some rows or columns of the matrix.
For the circuit implementations of different 2Nx2N DCT2 matrices, different deformations of the C matrix and the E matrix are required in order to multiplex the circuits. For example, in the MTS scheme described in table 3, the DST4 matrix must be turned into a DCT4 matrix by FLIP and sign-transformation processing in order to multiplex the implementation circuit of the 2Nx2N DCT2 matrix; in the MTS scheme described in table 11, the implementation circuit of the 2Nx2N DCT2 matrix already contains an ODD8 matrix, so the DST4 matrix can be changed into the ODD8 matrix by a sign transformation alone to achieve multiplexing. The various deformations of the C matrix and the E matrix can better adapt to the implementation circuits of different 2Nx2N DCT2 matrices, thereby simplifying the circuit multiplexing.
In addition, some deformations of the C matrix and the E matrix can be derived directly from the 2Nx2N DCT2 matrix, so the derivation of the coefficients of the C matrix and the E matrix can be simplified.
As an example, one deformation of the C matrix inverts the signs of the odd-numbered rows of the C matrix, yielding the coefficients shown in table 21:
Table 21 Example coefficients of a deformation of the C matrix
As can be seen from comparing table 21 with table 19, table 21 is identical to table 19 and can therefore be derived directly from the coefficients of the 8x8 DCT2 matrix without any additional operation, which further simplifies the derivation of the deformation of the C matrix. Meanwhile, because the deformation of the C matrix can be derived directly from the 2Nx2N DCT2 matrix, it can adapt to the circuit implementations of different 2Nx2N DCT2 matrices to simplify the circuit multiplexing, while having little influence on the coding compression performance.
To verify the effect of the MTS scheme of table 20, the inventors performed performance tests of the table 20 scheme on the VVC reference software VTM-2.0.1 platform; the codec compression performance relative to the MTS scheme of table 2 is shown in tables 22 and 23.
Table 22
Table 23
As can be seen from tables 22 and 23, compared with the MTS scheme described in table 2, the average coding bit rate of the MTS scheme described in table 20 increases very little (a 0.06% increase in coding bits under AI test conditions and a 0.08% increase under RA test conditions); that is, the effect on coding compression performance is almost negligible, while the scheme can use a butterfly fast algorithm to simplify the implementation of the transform/inverse transform. Meanwhile, the matrix coefficients of the transformation matrices used can be derived directly from the 2Nx2N DCT2 matrix, so no extra storage space is needed. In addition, the implementation circuit of the transformation matrices used can directly multiplex the transform/inverse-transform implementation circuit corresponding to the 2Nx2N DCT2 matrix, which simplifies the design of the codec implementation circuit.
Another embodiment of the present invention provides an MTS implementation as shown in table 24.
Table 24 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix. The deformations mentioned above may be obtained by performing a sign transformation (e.g., a sign inversion) on some rows or columns of the matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 25.
Table 25 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix. The deformations mentioned above may be obtained by performing a sign transformation (e.g., a sign inversion) on some rows or columns of the matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 26.
Table 26 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 27.
Table 27 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 28.
Table 28 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 29.
Table 29 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 30.
Table 30 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 31.
Table 31 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 32.
Table 32 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
Another embodiment of the present invention provides an MTS implementation as shown in table 33.
Table 33 an MTS implementation
Wherein C = DST4 matrix and E = DCT2' matrix.
From the above, in one embodiment, at least one of the DST4 matrix, the DCT2' matrix, the deformation of the DST4 matrix or the deformation of the DCT2' matrix described above may be obtained from the 8×8 DCT2 matrix. Since the encoder or decoder stores the 8×8 DCT2 matrix in any case, obtaining at least one of these matrices from the 8×8 DCT2 matrix reduces the number of transformation matrices that the encoder or decoder needs to store, and thus reduces the storage space that the transformation matrices occupy in the encoder or decoder.
In another embodiment, at least one of the DST4 matrix, the DCT2' matrix, the deformation of the DST4 matrix or the deformation of the DCT2' matrix described above may also be obtained directly from the 64×64 DCT2 matrix. Since the encoder or decoder stores the 64×64 DCT2 matrix in any case, obtaining at least one of these matrices from the 64×64 DCT2 matrix reduces the number of transformation matrices that the encoder or decoder needs to store, and thus reduces the storage space that the transformation matrices occupy in the encoder or decoder.
In one embodiment, the 64×64 DCT2 matrix described above may be represented by table 34 and table 35 (because the 64×64 DCT2 matrix is relatively large, it is represented by two tables: table 34 describes columns 0 to 15 of the matrix (denoted transMatrixCol0to15), and table 35 describes columns 16 to 31 of the matrix (denoted transMatrixCol16to31)).
Table 34 Columns 0-15 of the 64x64 DCT2 matrix
Table 35 Columns 16-31 of the 64x64 DCT2 matrix
The 64x64 DCT2 matrix transMatrix can be obtained from tables 34 and 35 by the following operations:
transMatrix[m][n] = transMatrixCol0to15[m][n], with m = 0..15, n = 0..63
transMatrix[m][n] = transMatrixCol16to31[m-16][n], with m = 16..31, n = 0..63
transMatrix[m][n] = (n & 1 ? -1 : 1) * transMatrixCol16to31[47-m][n], with m = 32..47, n = 0..63
transMatrix[m][n] = (n & 1 ? -1 : 1) * transMatrixCol0to15[63-m][n], with m = 48..63, n = 0..63
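These four rules transcribe directly into code; the following C++ sketch rebuilds the full 64x64 matrix from the two stored 16x64 column tables, applying the alternating sign (n & 1) exactly as written above.

// Rebuild the 64x64 DCT2 matrix transMatrix from the two stored halves,
// transMatrixCol0to15 and transMatrixCol16to31, per the four rules above.
void buildTransMatrix64(const int col0to15[16][64],
                        const int col16to31[16][64],
                        int transMatrix[64][64]) {
    for (int n = 0; n < 64; ++n) {
        const int sign = (n & 1) ? -1 : 1;
        for (int m = 0;  m < 16; ++m) transMatrix[m][n] = col0to15[m][n];
        for (int m = 16; m < 32; ++m) transMatrix[m][n] = col16to31[m - 16][n];
        for (int m = 32; m < 48; ++m) transMatrix[m][n] = sign * col16to31[47 - m][n];
        for (int m = 48; m < 64; ++m) transMatrix[m][n] = sign * col0to15[63 - m][n];
    }
}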
In one embodiment, the transformation kernel is indicated by trType, which indicates, for example, whether the transformation kernel is the DST4 matrix (or a deformation of it) or the DCT2' matrix (or a deformation of it). For example, trType = 1 may indicate that the transformation matrix is the DST4 matrix and trType = 2 that it is the DCT2' matrix; the opposite assignment may of course also be used, i.e., trType = 2 indicates the DST4 matrix and trType = 1 indicates the DCT2' matrix. It is understood that trType may also take other values to indicate the DST4 matrix and the DCT2' matrix. The embodiment of the invention does not limit the correspondence between trType and the transformation matrices; as long as each value of trType corresponds one-to-one to a transformation matrix, the implementation of the embodiment of the invention is not affected.
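As a sketch, one possible encoding of this correspondence (the concrete values are merely an example, as noted above):

// Hypothetical trType values; the text only requires that each value map
// one-to-one to a transformation matrix (or its deformation).
enum TrType {
    TR_DST4       = 1,  // DST4 matrix or a deformation of it
    TR_DCT2_PRIME = 2   // DCT2' matrix or a deformation of it
};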
For example, where the transformation matrix is a DST4 matrix, the DST4 matrix may be derived from a 64×64DCT2 matrix by the following equation (1):
wherein transMatrix represents the DCT2 matrix (the 64x64 DCT2 matrix), nTbS represents the size of the transformation matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; the offset 64-nTbS represents the offset of the column, i.e., the mapping onto the last nTbS columns of the 64x64 matrix; a second offset represents the offset of the row; and (-1)^j represents the sign transformation.
In the embodiment of the present invention, i represents the column coordinate of a coefficient in the transformation matrix, and j represents the row coordinate of a coefficient in the transformation matrix.
For example, when nTbS = 4, i.e., the size of the DST4 matrix is 4×4, the 4×4 DST4 matrix derived according to formula (1) is:
For example, when nTbS = 8, i.e., the size of the DST4 matrix is 8×8, the 8×8 DST4 matrix derived according to formula (1) is:
When the size of the DST4 matrix is 16 or 32, the DST4 matrix may likewise be derived using formula (1); the details are not repeated.
For example, where the transform matrix is a DCT2 'matrix, the DCT2' matrix may be derived from the 64×64DCT2 matrix by the following equation (2):
transMatrix[j][i × 2^(6-Log2(nTbS))]   (2)
wherein transMatrix represents the DCT2 matrix (the 64x64 DCT2 matrix), nTbS represents the size of the transformation matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
For example, when nTbS = 4, i.e., the size of the DCT2' matrix is 4×4, the 4×4 DCT2' matrix derived according to formula (2) is:
For example, when nTbS = 8, i.e., the size of the DCT2' matrix is 8×8, the 8×8 DCT2' matrix derived according to formula (2) is:
When the size of the DCT2' matrix is 16 or 32, it may likewise be derived using formula (2); the details are not repeated.
In one embodiment, the encoder or decoder may also derive a small-size DCT2 matrix from a stored large-size DCT2 matrix. For example, when the size of the large DCT2 matrix is 64, i.e., 64×64, a DCT2 matrix of size smaller than 64 can be derived according to the following formula (3):
transMatrix[i][j × 2^(6-Log2(nTbS))]   (3)
Wherein transMatrix represents the DCT2 matrix (the 64x64 DCT2 matrix), nTbS represents the size of the transformation matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
Comparing formulas (2) and (3), the only difference is that the positions of i and j are exchanged, which means that the matrices obtained by formulas (2) and (3) are transposes of each other.
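Both formulas amount to reading every (64/nTbS)-th coefficient out of the stored 64x64 matrix. A minimal C++ sketch, assuming transMatrix is indexed as in tables 34 and 35 and using i as the column coordinate and j as the row coordinate of the derived matrix:

#include <vector>

using Mat = std::vector<std::vector<int>>;

// Formula (2): derive the nTbS x nTbS DCT2' matrix from the 64x64 DCT2
// matrix. The stride 2^(6 - Log2(nTbS)) equals 64 / nTbS.
Mat deriveDct2Prime(const int transMatrix[64][64], int nTbS) {
    const int stride = 64 / nTbS;
    Mat m(nTbS, std::vector<int>(nTbS));
    for (int j = 0; j < nTbS; ++j)       // row coordinate
        for (int i = 0; i < nTbS; ++i)   // column coordinate
            m[j][i] = transMatrix[j][i * stride];
    return m;
}

// Formula (3): the same lookup with i and j exchanged, yielding the
// transpose of formula (2), i.e. the smaller DCT2 matrix itself.
Mat deriveDct2(const int transMatrix[64][64], int nTbS) {
    const int stride = 64 / nTbS;
    Mat m(nTbS, std::vector<int>(nTbS));
    for (int i = 0; i < nTbS; ++i)
        for (int j = 0; j < nTbS; ++j)
            m[i][j] = transMatrix[i][j * stride];
    return m;
}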
Fig. 10 depicts a flow of a video decoding method provided by an embodiment of the present invention, which may be performed, for example, by the video decoder shown in fig. 3, the method comprising:
1001. Parse the received code stream to obtain indication information of a transformation matrix pair for inverse transforming the current block and the quantized coefficients of the current block, where the transformation matrix pair includes a horizontal direction transformation matrix and a vertical direction transformation matrix.
1002. Perform inverse quantization processing on the quantized coefficients of the current block to obtain the inverse quantized coefficients of the current block.
1003. Determine, according to the indication information, the transformation matrix pair for performing inverse transformation processing on the current block from the candidate transformation matrix pairs, where the horizontal direction transformation matrix and the vertical direction transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is the DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is the DCT2' matrix or a deformation of the DCT2' matrix.
Wherein the number of candidate transformation matrix pairs may be 2, 3 or 4.
Wherein the horizontal direction transformation matrix and the vertical direction transformation matrix included in any one of the candidate transformation matrix pairs may be the same or different.
Wherein, in one embodiment, the deformation of the DST4 matrix is obtained by performing a sign transformation on coefficients of at least a part of rows or at least a part of columns in the DST4 matrix, for example, the sign transformation may be sign inversion.
In one embodiment, the transformation of the DCT2 'matrix is obtained by performing a sign transformation on coefficients of at least a portion of the rows or at least a portion of the columns in the DCT2' matrix, e.g., the sign transformation may be sign inversion.
For example, the candidate transformation matrix pairs may be as described in any of table 11, table 20, table 24, table 25, or table 26-table 33.
Wherein, in one embodiment, the number of candidate transformation matrix pairs is four, and the indication information of the transformation matrix pair for inverse transforming the current block is the index in table 11, table 20, table 24 or table 25. Taking table 11 as an example: if the index is 0, the vertical direction transformation matrix in the transformation matrix pair for inverse transforming the current block is the DST4 matrix, and the horizontal direction transformation matrix is the DST4 matrix; if the index is 1, the vertical direction transformation matrix is the DST4 matrix, and the horizontal direction transformation matrix is the DCT2' matrix; if the index is 2, the vertical direction transformation matrix is the DCT2' matrix, and the horizontal direction transformation matrix is the DST4 matrix; and if the index is 3, the vertical direction transformation matrix is the DCT2' matrix, and the horizontal direction transformation matrix is the DCT2' matrix. The handling of the indices of tables 20 and 24 to 33 is similar to that of table 11 and is not repeated.
In another embodiment, the indication information of the pair of transform matrices for inverse transforming the current block includes an identification of a vertical direction transform matrix in the pair of transform matrices for inverse transforming the current block, and an identification of a horizontal direction transform matrix in the pair of transform matrices for inverse transforming the current block. For example, one bit is used as an identification of the vertical direction transformation matrix, and another bit is used as an identification of the horizontal direction transformation matrix.
Taking table 11 as an example: if the bit for the vertical direction transformation matrix is 0, the vertical direction transformation matrix is the DST4 matrix; otherwise, it is the DCT2' matrix. Likewise, if the bit for the horizontal direction transformation matrix is 0, the horizontal direction transformation matrix is the DST4 matrix; otherwise, it is the DCT2' matrix.
Taking table 20 as an example: if the bit for the vertical direction transformation matrix is 0, the vertical direction transformation matrix is a deformation of the DST4 matrix; otherwise, it is a deformation of the DCT2' matrix. Likewise, if the bit for the horizontal direction transformation matrix is 0, the horizontal direction transformation matrix is a deformation of the DST4 matrix; otherwise, it is a deformation of the DCT2' matrix.
Taking table 24 as an example: if the bit for the vertical direction transformation matrix is 0, the vertical direction transformation matrix is a deformation of the DST4 matrix; otherwise, it is the DCT2' matrix. Likewise, if the bit for the horizontal direction transformation matrix is 0, the horizontal direction transformation matrix is a deformation of the DST4 matrix; otherwise, it is the DCT2' matrix.
Taking table 25 as an example: if the bit for the vertical direction transformation matrix is 0, the vertical direction transformation matrix is the DST4 matrix; otherwise, it is a deformation of the DCT2' matrix. Likewise, if the bit for the horizontal direction transformation matrix is 0, the horizontal direction transformation matrix is the DST4 matrix; otherwise, it is a deformation of the DCT2' matrix.
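The two-bit signalling described above can be sketched as follows in C++ (all names hypothetical; the bit layout matches the index semantics given for table 11, with indices 0 to 3 enumerating (vertical, horizontal) = (DST4, DST4), (DST4, DCT2'), (DCT2', DST4), (DCT2', DCT2')):

enum class Kernel { DST4, DCT2Prime };

struct TransformPair {
    Kernel vertical;
    Kernel horizontal;
};

// Maps a 2-bit MTS index (0..3) to a transform pair: the high bit selects
// the vertical matrix and the low bit the horizontal matrix.
TransformPair decodeMtsIndex(unsigned index) {
    return { (index & 2) ? Kernel::DCT2Prime : Kernel::DST4,
             (index & 1) ? Kernel::DCT2Prime : Kernel::DST4 };
}

For tables 20, 24 and 25 the same layout applies, with the corresponding matrix or deformation substituted for each kernel.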
1004. Perform inverse transformation processing on the inverse quantized coefficients of the current block according to the transformation matrix pair for inverse transforming the current block, to obtain a reconstructed residual block of the current block.
1005. Obtain a reconstructed block of the current block from the reconstructed residual block of the current block.
It can be seen that, because the DCT2' matrix or the deformation of the DCT2' matrix has a butterfly fast algorithm for the transform/inverse transform, the implementation of the transform/inverse transform can be simplified. Meanwhile, the DCT2' matrix or its deformation and the DST4 matrix or its deformation can directly multiplex the transform/inverse-transform implementation circuit corresponding to the DCT2 matrix, so that the design of the implementation circuit of the transform/inverse-transform module can be simplified when that module is implemented in hardware.
In one embodiment, before step 1004, the method may further comprise: deriving, according to a preset algorithm, the transformation matrices included in the transformation matrix pair for inverse transforming the current block from the DCT2 matrix.
For example, when the transform matrix pair of the current block subjected to the inverse transform process includes a DST4 matrix, and the size of the DCT2 matrix is 64, the deriving, according to a preset algorithm, from the DCT2 matrix, the transform matrix included in the transform matrix pair of the current block subjected to the inverse transform process may include: the DST4 matrix is derived according to the aforementioned equation (1).
For example, when the pair of transformation matrices for performing inverse transformation on the current block includes a DCT2' matrix, and the size of the DCT2 matrix is 64, the deriving, according to a preset algorithm, from the DCT2 matrix, the pair of transformation matrices for performing inverse transformation on the current block may include: the DCT2' matrix is derived according to equation (2) above.
It can be seen that the decoder only needs to store the DCT2 matrix to derive the matrix included in the pair of transform matrices, so that the number of transform matrices that the decoder needs to store can be reduced, thereby reducing the occupation of the memory space of the decoder by the transform matrices.
Fig. 11 depicts a flow of a video encoding method provided by an embodiment of the present invention, which may be performed, for example, by the video encoder shown in fig. 2, the method comprising:
1101. Determine indication information of a transformation matrix pair for performing transformation processing on the current residual block, where the transformation matrix pair includes a horizontal direction transformation matrix and a vertical direction transformation matrix; the transformation matrix pair is one of candidate transformation matrix pairs, and the horizontal direction transformation matrix and the vertical direction transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is the DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is the DCT2' matrix or a deformation of the DCT2' matrix.
Wherein the horizontal direction transformation matrix and the vertical direction transformation matrix included in any one of the candidate transformation matrix pairs may be the same or different.
Wherein the number of candidate transformation matrix pairs may be 2, 3 or 4.
Wherein, in one embodiment, the deformation of the DST4 matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DST4 matrix; for example, the sign transformation may be a sign inversion.
In one embodiment, the transformation of the DCT2 'matrix is obtained by performing a sign transformation on coefficients of at least a portion of the rows or at least a portion of the columns in the DCT2' matrix, e.g., the sign transformation may be sign inversion.
For example, the candidate transformation matrix pairs may be as described in any of table 11, table 20, table 24, table 25, or table 26-table 33.
Specifically, the encoder may perform the horizontal transformation and the vertical transformation on the residual block with each of the candidate transformation matrix pairs, select the transformation matrix pair with the smallest rate-distortion cost as the transformation matrix pair for transforming the current residual block, and determine the corresponding indication information from any one of table 11, table 20 or tables 24 to 33.
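Schematically (all names hypothetical; rdCost stands for the encoder's transform, quantization and rate estimate for one candidate pair), the selection loop can be sketched as:

#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

struct TransformPair { /* vertical and horizontal matrices */ };

// Returns the index of the candidate pair with the smallest RD cost; the
// chosen index is what gets entropy-coded as the indication information.
std::size_t selectTransformPair(
        const std::vector<TransformPair>& candidates,
        const std::function<double(const TransformPair&)>& rdCost) {
    std::size_t best = 0;
    double bestCost = std::numeric_limits<double>::max();
    for (std::size_t k = 0; k < candidates.size(); ++k) {
        const double cost = rdCost(candidates[k]);
        if (cost < bestCost) { bestCost = cost; best = k; }
    }
    return best;
}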
1102. Quantize the transform coefficients obtained by transforming the current residual block with the transformation matrix pair, to obtain the quantized coefficients of the current residual block.
1103. Entropy-encode the quantized coefficients of the current residual block and the indication information.
1104. Write the entropy-encoded indication information of the transformation matrix pair and the entropy-encoded quantized coefficients of the current residual block into the code stream.
It can be seen that, because the DCT2' matrix or the deformation of the DCT2' matrix has a butterfly fast algorithm for the transform/inverse transform, the implementation of the transform/inverse transform can be simplified. Meanwhile, the DCT2' matrix or its deformation and the DST4 matrix or its deformation can directly multiplex the transform/inverse-transform implementation circuit corresponding to the DCT2 matrix, so that the design of the implementation circuit of the transform/inverse-transform module can be simplified when that module is implemented in hardware.
Wherein, in one embodiment, the encoding method further comprises: and deducing the transformation matrix included in the transformation matrix pair from the DCT2 matrix according to a preset algorithm.
For example, when the transformation matrix included in the transformation matrix pair includes a DST4 matrix and the size of the DCT2 matrix is 64; the deriving the transformation matrix included in the transformation matrix pair from the DCT2 matrix according to a preset algorithm includes: the DST4 matrix is derived according to equation (1) above.
For example, when the transformation matrices included in the transformation matrix pair include the DCT2' matrix and the size of the DCT2 matrix is 64, the deriving of the transformation matrices included in the transformation matrix pair from the DCT2 matrix according to a preset algorithm includes: deriving the DCT2' matrix according to formula (2) above.
It can be seen that the encoder only needs to store the DCT2 matrix to derive the matrix included in the pair of transform matrices, so that the number of transform matrices that the encoder needs to store can be reduced, thereby reducing the occupation of the memory space of the encoder by the transform matrices.
One embodiment of the present invention provides a video decoder 30, as shown in fig. 3, comprising:
an entropy decoding unit 304, configured to parse the received code stream to obtain indication information of a transformation matrix pair for inverse transforming the current block and the quantized coefficients 309 of the current block, where the transformation matrix pair includes a horizontal direction transformation matrix and a vertical direction transformation matrix.
An inverse quantization unit 310, configured to perform inverse quantization processing on the quantized coefficient 309 of the current block to obtain an inverse quantized coefficient 311 of the current block.
An inverse transform processing unit 312 configured to determine a transform matrix pair for performing inverse transform processing on the current block from among candidate transform matrix pairs according to the instruction information; the candidate transformation matrix pair comprises a horizontal transformation matrix and a vertical transformation matrix which are one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is a DCT2 'matrix or a deformation of the DCT2' matrix; and carrying out inverse transformation on the inverse quantized coefficients of the current block according to the transformation matrix of the inverse transformation of the current block to obtain a reconstructed residual block 313 of the current block.
For specific processing, reference may be made to the processing of step 1003.
A reconstruction unit 314, configured to obtain a reconstructed block 315 of the current block based on the reconstructed residual block of the current block.
Wherein, in one embodiment, the inverse transform processing unit 312 may be further configured to: and deducing a transformation matrix included in the transformation matrix pair of the current block subjected to inverse transformation from the DCT2 matrix according to a preset algorithm.
For example, when the pair of transformation matrices for performing the inverse transformation on the current block includes a DST4 matrix, and the size of the DCT2 matrix is 64, the inverse transformation processing unit 312 may be specifically configured to derive the DST4 matrix according to the above formula (1).
For example, when the pair of transformation matrices for inverse transforming the current block includes a DCT2 'matrix, the size of the DCT2 matrix is 64, the inverse transformation processing unit 312 may be specifically configured to derive the DCT2' matrix according to the above formula (2).
One embodiment of the present invention provides a video encoder 20, as shown in fig. 2, comprising:
a transform processing unit 206 for determining indication information of a transform matrix pair for performing a transform process on the current residual block 205, the transform matrix pair including a horizontal direction transform matrix and a vertical direction transform matrix; the transformation matrix pair is one of candidate transformation matrix pairs, and the horizontal transformation matrix and the vertical transformation matrix included in the candidate transformation matrix pair are one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a variant of the DST4 matrix, and the other of the two transformation matrices is a DCT2 'matrix or a variant of the DCT2' matrix.
Specific implementations may refer to the processing of 1101.
A quantization unit 208 that quantizes a transform coefficient 207 obtained by transforming the current residual block by the transform matrix to obtain a quantized coefficient of the current residual block.
The transform coefficients 207 may specifically be obtained by the transform processing unit 206.
An entropy encoding unit 270, configured to perform entropy encoding processing on the quantized coefficient of the current residual block and the indication information;
and an output 272, configured to write the indication information of the transform matrix pair after entropy encoding processing and the quantized coefficient of the current residual block after entropy encoding processing into a code stream.
In an embodiment, the transformation processing unit 206 may be further configured to derive the transformation matrix included in the transformation matrix pair from the DCT2 matrix according to a preset algorithm.
For example, when the transform matrix pair includes a DST4 matrix, the DCT2 matrix has a size of 64, the transform processing unit 206 may be specifically configured to derive the DST4 matrix according to the above formula (1).
For example, when the transform matrix pair includes a DCT2 'matrix, the size of the DCT2 matrix is 64, the transform processing unit 206 may be specifically configured to derive the DCT2' matrix according to the above formula (2).
The embodiment of the invention also provides a video decoder comprising an execution circuit for performing any of the video decoding methods described above.
The embodiment of the invention also provides a video decoder, which comprises: at least one processor; and a non-volatile computer readable storage medium coupled to the at least one processor, the non-volatile computer readable storage medium storing a computer program executable by the at least one processor, which when executed by the at least one processor causes the video decoder to perform any of the video decoding methods described above.
The embodiment of the invention also provides a video encoder comprising an execution circuit for performing any of the video encoding methods described above.
The embodiment of the invention also provides a video encoder, which comprises: at least one processor; and a non-volatile computer readable storage medium coupled to the at least one processor, the non-volatile computer readable storage medium storing a computer program executable by the at least one processor, which when executed by the at least one processor causes the video encoder to perform any of the video encoding methods described above.
The embodiment of the invention also provides a computer-readable storage medium for storing a computer program executable by a processor; the computer program, when executed by the processor, performs any of the methods described above.
The embodiment of the invention also provides a computer program which, when executed, performs any of the methods described above.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, and executed by a hardware-based processing unit. A computer-readable medium may comprise a computer-readable storage medium corresponding to a tangible medium, such as a data storage medium or a communication medium, such as any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (digital subscriber line, DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are actually directed to non-transitory tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, digital versatile disc (digital versatile disc, DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more digital signal processors (digital signal processor, DSPs), general purpose microprocessors, application specific integrated circuits (application specific integrated circuit, ASICs), field programmable logic arrays (field programmable logic arrays, FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor" as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules for encoding and decoding, or incorporated in a synthetic codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a variety of devices or apparatuses including a wireless handset, an integrated circuit (integrated circuit, IC), or a collection of ICs (e.g., a chipset). The disclosure describes various components, modules, or units in order to emphasize functional aspects of the apparatus for performing the disclosed techniques, but does not necessarily require realization by different hardware units. In particular, as described above, the various units may be combined in a codec hardware unit in combination with suitable software and/or firmware, or provided by a collection of interoperable hardware units, including one or more processors as described above.
Claims (21)
1. A video decoding method, comprising:
analyzing the received code stream to obtain indication information of a transformation matrix pair of the current block for inverse transformation and a quantization coefficient of the current block, wherein the transformation matrix pair comprises a horizontal transformation matrix and a vertical transformation matrix; the indication information comprises an identification of a vertical direction transformation matrix in a transformation matrix pair for indicating the current block to perform inverse transformation, and an identification of a horizontal direction transformation matrix in the transformation matrix pair for indicating the current block to perform inverse transformation;
performing inverse quantization processing on the quantized coefficients of the current block to obtain inverse quantized coefficients of the current block;
determining a transformation matrix pair for performing inverse transformation processing on the current block from the candidate transformation matrix pairs according to the indication information; the horizontal direction transformation matrix and the vertical direction transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is a DCT2' matrix or a deformation of the DCT2' matrix, wherein the DCT2' matrix is the transposed matrix of the DCT2 matrix; the deformation of the DST4 matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DST4 matrix; and the deformation of the DCT2' matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DCT2' matrix;
performing inverse transformation processing on the inverse quantized coefficients of the current block according to the transformation matrix pair for inverse transforming the current block, to obtain a reconstructed residual block of the current block;
and obtaining a reconstructed block of the current block according to the reconstructed residual block of the current block.
2. The method of claim 1, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is a DST4 matrix and the other is a DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is a DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a DCT2' matrix.
3. The method of claim 1, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is the deformation of the DST4 matrix and the other is the deformation of the DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is the deformation of the DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is the deformation of the DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a deformation of the DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a deformation of the DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a deformation of the DCT2' matrix.
4. A method according to any one of claims 1 to 3, wherein before the performing of the inverse transformation processing on the inverse quantized coefficients of the current block according to the transformation matrix pair, the method further comprises:
and deducing a transformation matrix included in the transformation matrix pair of the current block subjected to inverse transformation from the DCT2 matrix according to a preset algorithm.
5. The method of claim 4, wherein the pair of transform matrices for which the current block is inverse transformed comprises a DST4 matrix; the size of the DCT2 matrix is 64;
the method for deriving the transformation matrix pair of the current block for inverse transformation from the DCT2 matrix according to the preset algorithm comprises the following steps:
deriving the DST4 matrix from the DCT2 matrix according to the following formula:
wherein transMatrix represents the DCT2 matrix, nTbS represents the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; the offset 64-nTbS represents the offset of the column; a second offset represents the offset of the row; and (-1)^j represents the sign transformation.
6. The method of claim 4, wherein the pair of transform matrices for which the current block is inverse transformed comprises a DCT2' matrix; the size of the DCT2 matrix is 64;
The method for deriving the transformation matrix pair of the current block for inverse transformation from the DCT2 matrix according to the preset algorithm comprises the following steps:
deriving the DCT2' matrix from the DCT2 matrix according to the following formula:
transMatrix[j][i × 2^(6-Log2(nTbS))];
wherein transMatrix represents the DCT2 matrix, nTbS represents the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
7. A method of encoding, comprising:
determining indication information of a transformation matrix pair for performing transformation processing on a current residual block, wherein the indication information comprises an identification of the vertical direction transformation matrix in the transformation matrix pair used to indicate the inverse transformation processing of the current residual block, and an identification of the horizontal direction transformation matrix in that transformation matrix pair; the transformation matrix pair comprises a horizontal direction transformation matrix and a vertical direction transformation matrix; the transformation matrix pair is one of candidate transformation matrix pairs, and the horizontal direction transformation matrix and the vertical direction transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is a DCT2' matrix or a deformation of the DCT2' matrix, wherein the DCT2' matrix is the transposed matrix of the DCT2 matrix; the deformation of the DST4 matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DST4 matrix; and the deformation of the DCT2' matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DCT2' matrix;
performing quantization processing on the transform coefficients obtained by transforming the current residual block with the transformation matrix pair, to obtain quantized coefficients of the current residual block;
performing entropy coding processing on the quantized coefficients of the current residual block and the indication information;
and writing the indication information of the transformation matrix pair after entropy coding and the quantization coefficient of the current residual block after entropy coding into a code stream.
8. The method of claim 7, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is a DST4 matrix and the other is a DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is a DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a DCT2' matrix.
9. The method of claim 7, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is the deformation of the DST4 matrix and the other is the deformation of the DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is the deformation of the DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is the deformation of the DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a deformation of the DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a deformation of the DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a deformation of the DCT2' matrix.
10. A video decoder, comprising:
the entropy decoding unit is used for analyzing the received code stream to obtain indication information of a transformation matrix pair of the current block for carrying out inverse transformation and a quantization coefficient of the current block, wherein the transformation matrix pair comprises a horizontal transformation matrix and a vertical transformation matrix; the indication information comprises an identification of a vertical direction transformation matrix in a transformation matrix pair for indicating the current block to perform inverse transformation, and an identification of a horizontal direction transformation matrix in the transformation matrix pair for indicating the current block to perform inverse transformation;
an inverse quantization unit, configured to perform inverse quantization processing on the quantized coefficient of the current block to obtain an inverse quantized coefficient of the current block;
an inverse transformation processing unit, configured to determine, according to the indication information, the transformation matrix pair for performing inverse transformation processing on the current block from the candidate transformation matrix pairs; the horizontal direction transformation matrix and the vertical direction transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a deformation of the DST4 matrix, and the other of the two transformation matrices is a DCT2' matrix or a deformation of the DCT2' matrix, wherein the DCT2' matrix is the transposed matrix of the DCT2 matrix; the deformation of the DST4 matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DST4 matrix; the deformation of the DCT2' matrix is obtained by performing a sign transformation on the coefficients of at least a part of the rows or at least a part of the columns of the DCT2' matrix; and the unit is further configured to perform inverse transformation processing on the inverse quantized coefficients of the current block according to the transformation matrix pair for inverse transforming the current block, to obtain a reconstructed residual block of the current block;
And a reconstruction unit, configured to obtain a reconstructed block of the current block based on the reconstructed residual block of the current block.
11. The video decoder of claim 10, characterized in that the number of candidate transform matrix pairs is four; when one of the two transformation matrices is a DST4 matrix and the other is a DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is a DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a DCT2' matrix.
12. The video decoder of claim 10, characterized in that the number of candidate transform matrix pairs is four; when one of the two transformation matrices is the deformation of the DST4 matrix and the other is the deformation of the DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is the deformation of the DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is the deformation of the DST4 matrix;
the vertical direction transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DST4 matrix, and the horizontal direction transformation matrix included in the second transformation matrix pair is a deformation of the DCT2' matrix;
the vertical direction transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the third transformation matrix pair is a deformation of the DST4 matrix;
and the vertical direction transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a deformation of the DCT2' matrix, and the horizontal direction transformation matrix included in the fourth transformation matrix pair is a deformation of the DCT2' matrix.
13. The video decoder according to any of claims 10 to 12, wherein the inverse transform processing unit is further configured to: and deducing a transformation matrix included in the transformation matrix pair of the current block subjected to inverse transformation from the DCT2 matrix according to a preset algorithm.
14. The video decoder of claim 13, characterized in that the pair of transform matrices for which the current block is inverse transformed comprises a DST4 matrix; the size of the DCT2 matrix is 64;
the inverse transformation processing unit is specifically configured to:
deriving the DST4 matrix from the DCT2 matrix according to the following formula:
wherein transMatrix represents the DCT2 matrix, nTbS represents the size of the DST4 matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1; the offset 64-nTbS represents the offset of the column; a second offset represents the offset of the row; and (-1)^j represents the sign transformation.
15. The video decoder of claim 13, characterized in that the pair of transform matrices for which the current block is inverse transformed comprises a DCT2' matrix; the size of the DCT2 matrix is 64;
the inverse transformation processing unit is specifically configured to:
deriving the DCT2' matrix from the DCT2 matrix according to the following formula:
transMatrix[j][i × 2^(6-Log2(nTbS))];
Wherein transMatrix represents the DCT2 matrix, nTbS represents the size of the DCT2' matrix, 0 ≤ i ≤ nTbS-1, and 0 ≤ j ≤ nTbS-1.
16. A video encoder, comprising:
a transformation processing unit, configured to determine indication information of a transformation matrix pair used for transforming a current residual block, wherein the indication information comprises an identifier of the vertical transformation matrix in the transformation matrix pair used for inverse transforming the current residual block and an identifier of the horizontal transformation matrix in that pair; the transformation matrix pair comprises a horizontal transformation matrix and a vertical transformation matrix; the transformation matrix pair is one of candidate transformation matrix pairs, and the horizontal transformation matrix and the vertical transformation matrix included in a candidate transformation matrix pair are each one of two preset transformation matrices; one of the two transformation matrices is a DST4 matrix or a variant of the DST4 matrix, and the other is a DCT2' matrix or a variant of the DCT2' matrix, wherein the DCT2' matrix is the transposed matrix of the DCT2 matrix; the variant of the DST4 matrix is obtained by performing a sign transformation on coefficients of at least a part of the rows or at least a part of the columns of the DST4 matrix; and the variant of the DCT2' matrix is obtained by performing a sign transformation on coefficients of at least a part of the rows or at least a part of the columns of the DCT2' matrix;
a quantization unit, configured to quantize transform coefficients obtained by transforming the current residual block with the transformation matrix pair, to obtain quantized coefficients of the current residual block;
an entropy coding unit, configured to perform entropy coding processing on the quantized coefficients of the current residual block and on the indication information;
and an output, configured to write the entropy-coded indication information of the transformation matrix pair and the entropy-coded quantized coefficients of the current residual block into a bitstream.
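The three units of claim 16 compose into a short encoder path: pick a pair, transform, quantize, and hand the identifiers plus quantized coefficients to the entropy coder. The sketch below strings these steps together; the separable-transform expression, the flat scalar quantizer, and every name in it are assumptions of the illustration rather than the codec's actual design:

```python
import numpy as np

def encode_residual_block(residual, vertical, horizontal,
                          vertical_id, horizontal_id, q_step=16.0):
    """Toy encoder path for claim 16: transform, quantize, and gather what the
    entropy coder writes to the bitstream (identifiers + quantized coefficients)."""
    # Separable 2-D transform: the vertical matrix acts on the columns of the
    # residual, the horizontal matrix on its rows.
    coeffs = vertical @ residual @ horizontal.T
    quantized = np.round(coeffs / q_step).astype(int)
    return {"vertical_id": vertical_id, "horizontal_id": horizontal_id,
            "coeffs": quantized}

# Example: the first candidate pair (DST4, DST4) on an 8x8 residual, using the
# orthonormal analytic DST4 basis as a stand-in for the integer matrix.
n = 8
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
dst4 = np.sin(np.pi * (2 * i + 1) * (2 * j + 1) / (4 * n)) * np.sqrt(2 / n)
packet = encode_residual_block(np.random.randn(n, n), dst4, dst4, 0, 0)
```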
17. The video encoder of claim 16, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is a DST4 matrix and the other is a DCT2' matrix, the vertical transformation matrix included in the first transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and the horizontal transformation matrix included in the first transformation matrix pair is a DST4 matrix;
a vertical transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a DST4 matrix, and a horizontal transformation matrix included in the second transformation matrix pair is a DCT2' matrix;
a vertical transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and a horizontal transformation matrix included in the third transformation matrix pair is a DST4 matrix;
and a vertical transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a DCT2' matrix, and a horizontal transformation matrix included in the fourth transformation matrix pair is a DCT2' matrix.
18. The video encoder of claim 16, wherein the number of candidate transform matrix pairs is four; when one of the two transformation matrices is a variant of the DST4 matrix and the other is a variant of the DCT2' matrix, a vertical transformation matrix included in a first transformation matrix pair of the four candidate transformation matrix pairs is a variant of the DST4 matrix, and a horizontal transformation matrix included in the first transformation matrix pair is a variant of the DST4 matrix;
a vertical transformation matrix included in a second transformation matrix pair of the four candidate transformation matrix pairs is a variant of the DST4 matrix, and a horizontal transformation matrix included in the second transformation matrix pair is a variant of the DCT2' matrix;
a vertical transformation matrix included in a third transformation matrix pair of the four candidate transformation matrix pairs is a variant of the DCT2' matrix, and a horizontal transformation matrix included in the third transformation matrix pair is a variant of the DST4 matrix;
and a vertical transformation matrix included in a fourth transformation matrix pair of the four candidate transformation matrix pairs is a variant of the DCT2' matrix, and a horizontal transformation matrix included in the fourth transformation matrix pair is a variant of the DCT2' matrix.
19. A video decoder comprising execution circuitry for performing the method of any of claims 1 to 6.
20. A video decoder, comprising:
at least one processor; and
a non-transitory computer readable storage medium coupled to the at least one processor, the non-transitory computer readable storage medium storing a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, causing the video decoder to perform the method of any of claims 1 to 6.
21. A computer readable storage medium storing a computer program executable by at least one processor, wherein the computer program, when executed by the at least one processor, performs the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/106383 WO2020057537A1 (en) | 2018-09-21 | 2019-09-18 | Video decoding method and video decoder, and video encoding method and video encoder |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018111078652 | 2018-09-21 | ||
CN201811107865 | 2018-09-21 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110944177A (en) | 2020-03-31
CN110944177B (en) | 2024-03-01
Family
ID=69904878
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811150819.0A Active CN110944177B (en) | 2018-09-21 | 2018-09-29 | Video decoding method, video decoder, video encoding method and video encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110944177B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10972733B2 (en) * | 2016-07-15 | 2021-04-06 | Qualcomm Incorporated | Look-up table for enhanced multiple transform |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5168375A (en) * | 1991-09-18 | 1992-12-01 | Polaroid Corporation | Image reconstruction by use of discrete cosine and related transforms |
JPH08185389A (en) * | 1994-05-10 | 1996-07-16 | Matsushita Electric Ind Co Ltd | Orthogonal transformation processor |
EP1294198A2 (en) * | 2001-09-18 | 2003-03-19 | Microsoft Corporation | Improved block transform and quantization for image and video coding |
CN102045560A (en) * | 2009-10-23 | 2011-05-04 | 华为技术有限公司 | Video encoding and decoding method and video encoding and decoding equipment |
CN103098473A (en) * | 2010-09-08 | 2013-05-08 | 三星电子株式会社 | Low complexity transform coding using adaptive DCT/DST for intra-prediction |
WO2012077347A1 (en) * | 2010-12-09 | 2012-06-14 | パナソニック株式会社 | Decoding method |
JP2013047538A (en) * | 2011-08-29 | 2013-03-07 | Aisin Ai Co Ltd | Vehicle power transmission control device |
CN104221378A (en) * | 2012-04-16 | 2014-12-17 | 高通股份有限公司 | Uniform granularity for quantization matrix in video coding |
CN103796015A (en) * | 2012-10-31 | 2014-05-14 | 朱洪波 | Quantization coefficient differential coding adapted to the number of coefficients |
CN107211144A (en) * | 2015-01-26 | 2017-09-26 | 高通股份有限公司 | Enhanced multiple transform for prediction residual |
WO2017171370A1 (en) * | 2016-03-28 | 2017-10-05 | 주식회사 케이티 | Method and apparatus for processing video signal |
WO2018049549A1 (en) * | 2016-09-13 | 2018-03-22 | Mediatek Inc. | Method of multiple quantization matrix sets for video coding |
WO2018166429A1 (en) * | 2017-03-16 | 2018-09-20 | Mediatek Inc. | Method and apparatus of enhanced multiple transforms and non-separable secondary transform for video coding |
Non-Patent Citations (1)
Title |
---|
Amir Said et al.; Complexity Reduction for Adaptive Multiple Transforms (AMTs) using Adjustment Stages; Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, document JVET-J0066-v3; abstract and Section 2 of JVET-J0066-v2 *
Also Published As
Publication number | Publication date |
---|---|
CN110944177A (en) | 2020-03-31 |
Similar Documents
Publication | Title |
---|---|
CN111107356B (en) | Image prediction method and device |
CN111919444B (en) | Prediction method and device of chrominance block |
CN110881129B (en) | Video decoding method and video decoder |
CN111225206B (en) | Video decoding method and video decoder |
CN112040247B (en) | Video decoding method, video decoder, and computer-readable storage medium |
CN111277828B (en) | Video encoding and decoding method, video encoder and video decoder |
JP7317973B2 (en) | Image prediction method, device and system, apparatus and storage medium |
JP2024124416A (en) | Video processing method, video processing apparatus, encoder, decoder, medium, and computer program |
EP4346212A2 (en) | Video picture decoding and encoding method and apparatus |
CN112055200A (en) | MPM list construction method, and chroma block intra-frame prediction mode acquisition method and device |
CN115426494A (en) | Encoder, decoder and corresponding methods using compressed MV storage |
JP7337157B2 (en) | Video encoder, video decoder and method for encoding or decoding pictures |
CN113366850B (en) | Video encoder, video decoder and corresponding methods |
CN114913249A (en) | Encoding method, decoding method and related devices |
CN112118447B (en) | Construction method, device and coder-decoder for fusion candidate motion information list |
CN116684591A (en) | Video encoder, video decoder, and corresponding methods |
CA3110477A1 (en) | Picture partitioning method and apparatus |
CN111294603B (en) | Video encoding and decoding method and device |
CN112135149B (en) | Entropy encoding/decoding method and device of syntax element and codec |
CN111277840B (en) | Transform method, inverse transform method, video encoder and video decoder |
CN110876061B (en) | Chroma block prediction method and device |
CN110944177B (en) | Video decoding method, video decoder, video encoding method and video encoder |
CN112637590A (en) | Video encoder, video decoder and corresponding methods |
WO2020069632A1 (en) | A video encoder, a video decoder and corresponding methods |
CN113316939A (en) | Context modeling method and device for zone bit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||