WO2020253829A1 - Encoding and decoding method, apparatus, and storage medium - Google Patents
Encoding and decoding method, apparatus, and storage medium
- Publication number
- WO2020253829A1 · PCT/CN2020/097144 · CN2020097144W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current block
- prediction mode
- indication information
- block
- decoding
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- This application relates to the field of image processing technology, and in particular to an encoding and decoding method, device and storage medium.
- syntax elements can be various kinds of indication information, such as first ISP indication information or second ISP indication information;
- the first ISP indication information is used to indicate whether to enable the intra sub-block prediction (ISP) mode, and the second ISP indication information is used to indicate the sub-block partition manner of the intra sub-block prediction mode.
- the embodiments of the present application provide an encoding and decoding method, device, and storage medium, which can be used to solve the problems of a large number of context models and high memory overhead in the encoding and decoding process in the related art.
- the technical solution is as follows:
- an encoding and decoding method includes:
- the first ISP indication information is used to indicate whether to enable the intra sub-block prediction mode;
- the second ISP indication information is used to indicate the sub-block partition manner of the intra sub-block prediction mode.
- an encoding and decoding method includes:
- when it is determined to encode or decode the first ISP indication information, perform bypass-based binary arithmetic encoding or decoding on the first ISP indication information, where the first ISP indication information is used to indicate whether to enable the intra sub-block prediction mode;
- when it is determined to encode or decode the second ISP indication information, perform bypass-based binary arithmetic encoding or decoding on the second ISP indication information, where the second ISP indication information is used to indicate the sub-block partition manner of the intra sub-block prediction mode.
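The memory saving behind bypass coding can be illustrated with a toy sketch. This is hypothetical and not the standard's arithmetic engine: a context-coded bin carries an adaptive probability state that must be stored, while a bypass bin assumes a fixed 1/2 probability and stores nothing.

```python
class ContextModel:
    """Adaptive probability state for one context-coded bin."""

    def __init__(self, p_one=0.5, rate=1 / 16):
        self.p_one = p_one  # estimated probability that the bin is 1
        self.rate = rate    # adaptation speed

    def update(self, bin_val):
        # Move the estimate toward the observed bin value.
        self.p_one += self.rate * ((1.0 if bin_val else 0.0) - self.p_one)


def code_context_bin(ctx, bin_val):
    # A real CABAC engine would split the coding interval by ctx.p_one;
    # here we only model the state update whose storage costs memory.
    ctx.update(bin_val)
    return bin_val


def code_bypass_bin(bin_val):
    # Bypass bins use a fixed 1/2 split: no stored state, no update.
    return bin_val


# Coding the two ISP flags in bypass mode requires no ContextModel
# instances at all, which is the memory saving the method targets.
context_pool = {}  # stays empty for bypass-coded ISP flags
isp_bits = [code_bypass_bin(b) for b in (1, 0)]
```

The design choice is a trade-off: bypass bins lose the compression gain of an adapted probability but eliminate per-syntax-element context storage.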
- an encoding and decoding method includes:
- the current block does not support the multi-line prediction mode.
- a coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, where the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode, and the method includes:
- a coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 4, the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, where the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode, and the method includes:
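The bit budgets above are consistent with a truncated-unary binarization, sketched here under that assumption (the text does not name the binarization): with 3 candidates the longest code word is 2 bits, with 4 candidates it is 3 bits.

```python
def truncated_unary(index, num_candidates):
    """Truncated-unary code word for a reference-line index.

    Code words are 0, 10, 110, ...; the last index drops the
    terminating 0, so the longest word has num_candidates - 1 bits.
    """
    max_index = num_candidates - 1
    if index < max_index:
        return "1" * index + "0"
    return "1" * max_index


# 3 candidates: "0", "10", "11"         -> at most 2 bits
# 4 candidates: "0", "10", "110", "111" -> at most 3 bits
```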
- a coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the candidate reference line whose index information is 0 is line 0, where line 0 is the line adjacent to the boundary of the current block; the candidate reference line whose index information is 1 is line 1, where line 1 is the line second adjacent to the boundary of the current block; and the candidate reference line whose index information is 2 is line 2, where line 2 is the line adjacent to line 1; the method includes:
- the target reference line is determined according to the reference line indication information:
- the target reference line is line 0; or
- the target reference line is line 1; or
- the target reference line is line 2.
- a coding and decoding method is provided. If it is determined that the current block enables the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 4, the candidate reference line whose index information is 0 is line 0, where line 0 is the line adjacent to the boundary of the current block; the candidate reference line whose index information is 1 is line 1, where line 1 is the line second adjacent to the boundary of the current block; the candidate reference line whose index information is 2 is line 2, where line 2 is the line adjacent to line 1; and the candidate reference line whose index information is 3 is line 3, where line 3 is the line adjacent to line 2; the method includes:
- the target reference line is determined according to the reference line indication information:
- the target reference line is line 0; or
- the target reference line is line 2; or
- the target reference line is line 3.
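Assuming the same truncated-unary form, decoding reverses the mapping: count leading 1-bins up to the maximum index, then select the line whose position matches the index (index 0 is the line adjacent to the block boundary). This is an illustrative sketch, not the standard's parser.

```python
def decode_reference_line_index(bins, num_candidates):
    """Decode a truncated-unary reference-line index from decoded bins.

    Returns (index, bins_consumed). Index k selects line k, counted
    outward from the line adjacent to the current block boundary.
    """
    max_index = num_candidates - 1
    index = 0
    consumed = 0
    while index < max_index and bins[consumed] == 1:
        index += 1
        consumed += 1
    if index < max_index:
        consumed += 1  # the terminating 0-bin
    return index, consumed
```

Note how the candidate count bounds the read: with 3 candidates no more than 2 bins are ever consumed, matching the stated bit budget.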
- a decoding method is provided. If the current block supports the multi-line prediction mode, the method includes:
- before predicting the current block according to the multi-line prediction mode, decode line number indication information, where the line number indication information is used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode;
- determine the target reference line according to the number of candidate reference lines corresponding to the multi-line prediction mode and reference line indication information, where the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode;
- the line number indication information exists at the sequence parameter set level, the picture parameter level, the slice level, or the tile level.
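The decoding order implied above can be sketched as follows (container and key names are illustrative, not from the source): the candidate-line count is read once from a high-level parameter set, and each block's reference-line index is then decoded against that count.

```python
def decode_block_reference_line(sequence_params, block_bins):
    """Sketch: per-block index decode driven by a parameter-set count.

    The truncated-unary read is bounded by the signalled candidate
    count, so the high-level syntax directly limits per-block bits.
    """
    num_candidates = sequence_params["num_candidate_reference_lines"]
    index = 0
    consumed = 0
    while index < num_candidates - 1 and block_bins[consumed] == 1:
        index += 1
        consumed += 1
    return index


sps = {"num_candidate_reference_lines": 3}  # read once per sequence
target_line = decode_block_reference_line(sps, [1, 0])  # selects line 1
```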
- an encoding and decoding method includes:
- if the current block enables the affine prediction mode or enables a prediction mode other than the affine prediction mode, when motion vector difference encoding or decoding is performed on the current block and the current block supports the adaptive motion vector resolution (AMVR) mode:
- the first AMVR indication information is used to indicate whether to enable the AMVR mode;
- the second AMVR indication information is used to indicate the index information of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode, and the first context model and the second context model are different.
- an encoding and decoding method is provided, wherein the method includes:
- when the first AMVR indication information is encoded or decoded, perform context-based adaptive binary arithmetic encoding or decoding on the first AMVR indication information based on the first context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic encoding or decoding on the second AMVR indication information; or,
- when the first AMVR indication information is encoded or decoded, perform context-based adaptive binary arithmetic encoding or decoding on the first AMVR indication information based on the second context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic encoding or decoding on the second AMVR indication information;
- the first context model and the second context model are different; the first AMVR indication information is used to indicate whether to enable the AMVR mode, and the second AMVR indication information is used to indicate the index information of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode.
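One way to read the two alternatives is as a mode-dependent choice of context model for the AMVR-enable flag, with the precision index always bypass-coded. A sketch under that reading; the function and argument names are illustrative:

```python
def code_amvr_flags(is_affine, amvr_enabled_bin, precision_index_bins,
                    first_context_model, second_context_model):
    """Choose the context model for the first AMVR flag by prediction
    mode; the precision-index bins carry no context model at all.
    """
    ctx = second_context_model if is_affine else first_context_model
    coded = [("context", ctx, amvr_enabled_bin)]
    if amvr_enabled_bin == 1:  # AMVR enabled: index follows in bypass
        coded += [("bypass", None, b) for b in precision_index_bins]
    return coded
```

Keeping the precision index in bypass means only the single enable flag consumes context memory, whichever prediction mode is active.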
- an encoding and decoding method includes:
- if the current block enables the affine prediction mode or enables a prediction mode other than the affine prediction mode, when motion vector difference encoding or decoding is performed on the current block and the current block supports the adaptive motion vector resolution (AMVR) mode:
- the first AMVR indication information is used to indicate whether to enable the AMVR mode;
- if the first AMVR indication information indicates that the current block enables the AMVR mode, the second AMVR indication information is used to indicate the index information of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode.
- a decoding method includes:
- if the target prediction mode of the intra sub-block prediction exists in the most probable intra prediction mode (MPM) list and the current block is a luma block, then when the current block is predicted according to the intra sub-block prediction:
- decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list;
- when the current block is predicted according to the regular intra prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic decoding, where the second context model and the first context model are the same context model;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- a decoding method includes:
- if the target prediction mode of the intra sub-block prediction exists in the MPM list and the current block is a luma block, then when the current block is predicted according to the intra sub-block prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list;
- when the current block is predicted according to the regular intra prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
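Both variants land on the same index after binarization. A truncated-unary sketch (assuming a typical MPM list size of 6, which the text does not state) shows that the context-versus-bypass choice affects only how each bin is entropy-coded, not the bin-to-index mapping:

```python
def read_mpm_index(bins, mpm_size=6):
    """Decode a truncated-unary MPM index from already-decoded bins.

    Whether bin 0 was context-coded and the rest bypass-coded, or all
    bins were bypass-coded, the mapping to the MPM index is identical.
    """
    index = 0
    for b in bins:
        if b == 0:
            break
        index += 1
        if index == mpm_size - 1:
            break
    return index
```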
- a decoding method includes:
- when the current block is predicted according to the intra sub-block prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list;
- if the current block enables the regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- a decoding method includes:
- when the current block is predicted according to the intra sub-block prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list;
- if the current block enables the regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
- according to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- a decoding method includes:
- decode planar indication information, where the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by performing context-based adaptive binary arithmetic decoding based on the first context model;
- if the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode;
- if the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode and predict the current block according to the target prediction mode;
- if the current block enables the regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when the current block is predicted according to the regular intra prediction:
- decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by performing context-based adaptive binary arithmetic decoding based on the second context model, where the first context model is the same as the second context model;
- determine, according to the prediction mode index information, the target prediction mode enabled by the current block from the MPM list, and predict the current block according to the target prediction mode.
- a decoding method includes:
- if the target prediction mode of the intra sub-block prediction exists in the MPM list and the current block is a luma block, then when the current block is predicted based on the intra sub-block prediction, decode planar indication information, where the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
- if the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode;
- if the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode and predict the current block according to the target prediction mode;
- if the current block enables the regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when the current block is predicted according to the regular intra prediction:
- decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
- determine, according to the prediction mode index information, the target prediction mode enabled by the current block from the MPM list, and predict the current block according to the target prediction mode.
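The two branches share one control flow; only the coding of the planar flag differs (a shared context model in one variant, bypass in the other). A hypothetical sketch of that flow, with illustrative mode names:

```python
PLANAR = "planar"


def select_intra_mode(planar_flag, mpm_list, mpm_index=None):
    """If the planar flag is set, the target mode is planar; otherwise
    the target mode is looked up in the MPM list by its decoded index.
    """
    if planar_flag:
        return PLANAR
    return mpm_list[mpm_index]


mpm_list = ["dc", "vertical", "horizontal"]  # illustrative list only
mode_a = select_intra_mode(1, mpm_list)      # planar flag set
mode_b = select_intra_mode(0, mpm_list, 2)   # fall back to MPM lookup
```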
- a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
- decode chroma prediction mode index information, where the chroma prediction mode index information is used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
- the first bit of the chroma prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on the first context model, and the second bit of the chroma prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on the second context model, where the first context model is different from the second context model; the third bit and the fourth bit of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
- decode chroma prediction mode index information, where the chroma prediction mode index information is used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
- the first bit of the chroma prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on a context model, and the second, third, and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
- a coding and decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
- decode chroma prediction mode index information, where the chroma prediction mode index information is used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is a planar prediction mode
- the target prediction mode is a vertical prediction mode
- the target prediction mode is a horizontal prediction mode
- the target prediction mode is the DC prediction mode
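The enumerated outcomes suggest a candidate list in which the decoded chroma prediction mode index selects among cross-component and regular modes. A sketch under that assumption; the list order follows the enumeration above, and the repeated "second cross-component prediction mode" entry is kept verbatim because the source text does not disambiguate it:

```python
# Candidate prediction mode list as enumerated in the text.
CHROMA_CANDIDATES = [
    "first cross-component prediction mode",
    "second cross-component prediction mode",
    "second cross-component prediction mode",  # duplicate kept as written
    "planar prediction mode",
    "vertical prediction mode",
    "horizontal prediction mode",
    "DC prediction mode",
]


def chroma_mode_from_index(index):
    """Map a decoded chroma prediction mode index to its target mode."""
    return CHROMA_CANDIDATES[index]
```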
- a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
- decode chroma prediction mode index information, where the chroma prediction mode index information is used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the planar prediction mode
- the target prediction mode is a vertical prediction mode
- the target prediction mode is a horizontal prediction mode
- the target prediction mode is the DC prediction mode
- an encoding and decoding method includes:
- if the luma and chroma of the current block share one partition tree, the width-and-height size of the luma block corresponding to the current block is 64*64, and the size of the chroma block corresponding to the current block is 32*32, then the current block does not support the cross-component prediction mode.
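The restriction reads as a simple support check when one partition tree is shared. A sketch with illustrative parameter names; cases other than the excluded one are assumed supported, which the text does not state:

```python
def supports_cross_component(shared_tree, luma_size, chroma_size):
    """Return False for the excluded case in the text: a shared luma
    and chroma partition tree with a 64x64 luma block and a 32x32
    chroma block. Other cases default to True here (an assumption).
    """
    if shared_tree and luma_size == (64, 64) and chroma_size == (32, 32):
        return False
    return True
```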
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- the target context model is a context model selected, according to whether the upper block of the current block enables ALF and whether the left block of the current block enables ALF, from three different context models included in the second context model set, where the three context models included in the second context model set are different from the three context models included in the first context model set.
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- the ALF indication information is used to indicate whether the current block enables ALF;
- the target context model is a context model selected, according to whether the upper block of the current block enables ALF and whether the left block of the current block enables ALF, from three different context models included in the first context model set; or,
- the target context model is a context model selected, according to whether the upper block of the current block enables ALF and whether the left block of the current block enables ALF, from three different context models included in the second context model set, where the three context models included in the second context model set are the same as the three context models included in the first context model set.
- a decoding method includes:
- if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model,
- where the ALF indication information is used to indicate whether ALF is enabled for the current block; or,
- if the current block supports ALF and the current block is a chrominance block,
- before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the second context model is different from the first context model.
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- the ALF indication information is used to indicate whether ALF is enabled for the current block
- the target context model is a context model selected, according to whether the upper block of the current block enables ALF and whether the left block of the current block enables ALF, from the three different context models included in the first context model set; or,
- if the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model; or,
- if the current block supports ALF and the current block is a CR chroma block,
- before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the context models included in the first context model set, the first context model, and the second context model are different context models.
- a decoding method characterized in that the method includes:
- if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model,
- where the ALF indication information is used to indicate whether ALF is enabled for the current block; or,
- if the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model; or,
- if the current block supports ALF and the current block is a CR chroma block,
- a decoding method includes:
- if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model,
- where the ALF indication information is used to indicate whether ALF is enabled for the current block; or,
- if the current block supports ALF and the current block is a chrominance block,
- before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the second context model and the first context model are the same context model.
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- or the current block supports ALF and the current block is a chrominance block
- before performing filtering processing on the current block according to the ALF mode, perform bypass-based binary arithmetic decoding on the ALF indication information.
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- the indication information is used to indicate whether ALF is enabled for the current block; or,
- if the current block supports ALF, ALF is enabled for the current block, and the current block is a chrominance block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode.
- a decoding method includes:
- the current block supports ALF and the current block is a luma block
- or the current block supports ALF and the current block is a chrominance block
- before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model.
- an encoding and decoding method includes:
- the current block does not support the matrix-based intra prediction mode.
- a decoding method: if the current block supports a matrix-based intra prediction mode, the method includes:
- the target context model is a context model selected from three different context models according to whether the upper block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode
- the matrix-based intra prediction mode is used to predict the current block.
- a decoding method: if the current block supports a matrix-based intra prediction mode, the method includes:
- the target context model is a context model selected from two different context models according to whether the current block meets a preset size condition
- the matrix-based intra prediction mode is used to predict the current block.
- a decoding method: if the current block supports a matrix-based intra prediction mode, the method includes:
- the MIP indication information is used to indicate whether the matrix-based intra prediction mode is enabled for the current block
- the matrix-based intra prediction mode is used to predict the current block.
- a decoding method: if the current block supports a matrix-based intra prediction mode, the method includes:
- the MIP indication information is used to indicate whether the matrix-based intra prediction mode is enabled for the current block.
- the matrix-based intra prediction mode is used to predict the current block.
- a decoding method includes:
- the current processing unit is decoded.
- the first BDPCM indication information exists in a sequence parameter set, an image parameter level, a slice level or a tile level.
- an encoding and decoding method includes:
- Second BDPCM indication information is used to indicate the size range of the processing unit supporting the BDPCM mode
- based on the second BDPCM indication information and the size of the current block, it is determined whether BDPCM encoding or decoding can be performed on the current block.
- the second BDPCM indication information exists in a sequence parameter set, an image parameter level, a slice level, or a tile level.
- a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
- the third BDPCM indication information indicates that the BDPCM mode is enabled for the current block
- the fourth BDPCM indication information is used to indicate the index information of the prediction direction of the BDPCM mode
- a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
- the third BDPCM indication information indicates that the BDPCM mode is enabled for the current block
- the fourth BDPCM indication information is used to indicate the index information of the prediction direction of the BDPCM mode
- an encoding and decoding method includes:
- if the current block enables intra sub-block prediction, when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model;
- the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected from the two different context models included in the first context model set according to whether the previous transform block of the current block has non-zero transform coefficients; or,
- if the current block enables regular intra prediction or enables the BDPCM mode,
- when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model,
- where the target context model is a context model selected from the two different context models included in the second context model set according to the division depth of the transform block of the current block, and the two context models included in the second context model set are different from the two context models included in the first context model set.
- an encoding and decoding method includes:
- if the current block enables intra-frame sub-block prediction, or enables regular intra-frame prediction, or enables the BDPCM mode,
- when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected from two different context models according to the division depth of the transform block of the current block.
- an encoding and decoding method includes:
- if the current block enables intra-frame sub-block prediction or enables regular intra-frame prediction,
- when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected from the two different context models included in the context model set according to the division depth of the transform block of the current block; or,
- the target context model is a context model in the first set of context models.
- a decoding method includes:
- decode the JCCR indication information, where the JCCR indication information is used to indicate whether the current processing unit supports the JCCR mode
- if it is determined according to the JCCR indication information that the current block supports the JCCR mode and the JCCR mode is enabled for the current block, jointly decode the blue chrominance (CB) component and the red chrominance (CR) component of the current block to obtain the chrominance residual coefficients of the current block.
- the JCCR indication information exists in a sequence parameter set, an image parameter level, a slice level or a tile level.
- a coding and decoding device, characterized in that the device includes:
- a memory for storing processor-executable instructions
- a processor configured to execute any one of the foregoing encoding and decoding methods or decoding methods.
- a computer-readable storage medium is provided, and instructions are stored on the computer-readable storage medium, and when the instructions are executed by a processor, any one of the foregoing encoding and decoding methods or decoding methods is implemented.
- a computer program product containing instructions which, when run on a computer, cause the computer to execute any one of the above-mentioned encoding and decoding methods or decoding methods.
- a context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding is performed on the first ISP indication information based on a context model.
- for the second ISP indication information, perform bypass-based binary arithmetic encoding or decoding on the second ISP indication information. In this way, the number of context models required in the encoding and decoding process can be reduced, lowering the complexity of the encoding and decoding process and reducing memory overhead.
- FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application
- FIG. 2 is a schematic diagram of a coding and decoding process provided by an embodiment of the present application
- FIG. 3 is an exemplary direction corresponding to an intra prediction mode provided by an embodiment of the present application.
- FIG. 4 is an exemplary direction corresponding to an angle mode provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of image block division according to an embodiment of the present application.
- FIG. 6 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 8 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 10 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 13 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 15 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 16 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 37 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 39 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 40 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 41 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 42 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 43 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 44 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 45 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 46 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 47 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 48 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 49 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 50 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 51 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 52 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 53 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 54 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 55 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 56 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 57 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 58 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 59 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 60 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 61 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 62 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 63 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 64 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 65 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 66 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 67 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 68 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 69 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 70 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
- FIG. 71 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 72 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 73 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 74 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 75 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 76 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 77 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 78 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 79 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 80 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 81 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 82 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 83 is a flowchart of an encoding method provided by an embodiment of the present application.
- FIG. 84 is a flowchart of a decoding method provided by an embodiment of the present application.
- FIG. 85 is a flowchart of an encoding mode provided by an embodiment of the present application.
- FIG. 86 is a flowchart of an encoding mode provided by an embodiment of the present application.
- FIG. 87 is a schematic structural diagram of an encoding end provided by an embodiment of the present application.
- FIG. 88 is a schematic structural diagram of a decoding end provided by an embodiment of the present application.
- FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application.
- the codec system includes an encoder 01, a decoder 02, a storage device 03 and a link 04.
- the encoder 01 can communicate with the storage device 03, and the encoder 01 can also communicate with the decoder 02 through the link 04.
- the decoder 02 can also communicate with the storage device 03.
- the encoder 01 is used to obtain a data source, encode the data source, and transmit the encoded code stream to the storage device 03 for storage, or directly transmit it to the decoder 02 via the link 04.
- the decoder 02 can obtain the code stream from the storage device 03 and decode it to obtain the data source, or decode it after receiving the code stream transmitted by the encoder 01 via the link 04 to obtain the data source.
- the data source can be a captured image or a captured video.
- Both the encoder 01 and the decoder 02 can be used as an electronic device alone.
- the storage device 03 may include any of a variety of distributed or locally accessible data storage media, such as hard drives, Blu-ray discs, read-only discs, flash memory, or other suitable digital storage media for storing encoded data.
- the link 04 may include at least one communication medium, and the at least one communication medium may include a wireless and/or wired communication medium, such as an RF (Radio Frequency) spectrum or one or more physical transmission lines.
- FIG. 2 is a schematic diagram of a coding and decoding process according to an exemplary embodiment.
- the coding process includes prediction, transformation, quantization, and entropy coding.
- the decoding process includes entropy decoding, inverse quantization, inverse transformation, and prediction.
- binary arithmetic coding and decoding techniques are usually used to code and decode current syntax elements.
- Prediction in encoding and decoding generally includes intra-frame prediction, multi-reference-line prediction, cross-component prediction, matrix-based intra-frame prediction, etc.
- Technologies such as the intra-frame luminance candidate list, adaptive loop filtering, adaptive motion vector precision coding, and BDPCM (Block-based quantized residual domain Differential Pulse Code Modulation) coding are also used in encoding and decoding.
- Binary arithmetic coding refers to performing arithmetic coding on each bin (bit) after binarization of the current syntax element according to its probability model parameters to obtain the final code stream. It includes two coding methods: context-based adaptive arithmetic coding and bypass-based binary arithmetic coding.
- CABAC (Context-based Adaptive Binary Arithmetic Coding): the encoding of each symbol is related to the result of previous encoding, and codewords are adaptively allocated to each symbol according to the statistical characteristics of the symbol stream; especially for symbols with unequal probabilities of occurrence, this can further compress the bit rate.
- Each bit of the syntax element enters the context modeler in order, and the encoder assigns an appropriate probability model to each input bit according to the previously encoded syntax element or bit. This process is called context modeling.
- the bits and the probability model assigned to it are sent to the binary arithmetic encoder for encoding.
- the encoder needs to update the context model according to the bit value, which is the adaptation of the encoding.
- Bypass-based Binary Arithmetic Coding is a binary arithmetic coding mode based on equal probability (also called the bypass coding mode). Compared with CABAC, bypass coding omits the probability update process: there is no need to adaptively update the probability state; instead, a fixed probability of 50% for both 0 and 1 is used for coding. This coding method is simpler, has low coding complexity and low memory consumption, and is suitable for symbols with near-equal probability.
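As a rough illustration of the trade-off described above, the following sketch compares the ideal code length of an adapting probability model against bypass coding on a skewed bin stream. This is a hypothetical simplification: a real CABAC engine uses interval subdivision and LPS/MPS state tables, not floating-point entropy; the `rate` update is a toy stand-in for context-model adaptation.

```python
import math

def ideal_bits_adaptive(bins, p_one=0.5, rate=0.05):
    """Ideal code length in bits when the estimated probability of '1'
    adapts after every coded bin (toy stand-in for a context model)."""
    total = 0.0
    for b in bins:
        p = p_one if b == 1 else 1.0 - p_one
        total += -math.log2(p)          # cost of coding this bin
        p_one += rate * (b - p_one)     # adapt toward the observed value
    return total

def ideal_bits_bypass(bins):
    """Bypass coding uses a fixed 50/50 probability: exactly 1 bit per bin."""
    return float(len(bins))

bins = [1] * 18 + [0] * 2               # skewed source: mostly ones
adaptive = ideal_bits_adaptive(bins)
bypass = ideal_bits_bypass(bins)        # 20.0 bits
```

For a skewed stream the adaptive model needs fewer bits than bypass; for near-equal-probability symbols the two converge, which is why bypass is preferred there: it saves the model update at no compression cost.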
- Intra-frame prediction refers to using the correlation of the image space domain to predict the pixels of the current image block by using the pixels of the neighboring blocks that have been coded and reconstructed around the current image block, so as to achieve the purpose of removing the image space redundancy.
- a variety of intra prediction modes are specified in intra prediction, and each intra prediction mode corresponds to a texture direction (except for the DC mode). For example, if the texture of the image is arranged horizontally, then selecting the horizontal prediction mode can better predict the image information.
- For the luminance component in HEVC (High Efficiency Video Coding), each size of prediction unit corresponds to 35 intra prediction modes, comprising the Planar mode, the DC mode, and 33 angle modes, as shown in Table 1.
- Planar mode is suitable for areas where the pixel value changes slowly.
- two linear filters in the horizontal and vertical directions can be used for filtering, and the average of the two is used as the predicted value of the current image block.
- the DC mode is suitable for a large flat area, and the average pixel value of the neighboring blocks that have been coded and reconstructed around the current image block can be used as the predicted value of the current image block.
- the Planar mode and the DC mode may also be called non-angle modes.
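The DC mode described above can be sketched in a few lines. The helper name and the rounded integer mean are illustrative; real codecs also handle unavailable neighbours and bit-depth clipping.

```python
def dc_predict(top, left):
    """DC mode: every pixel of the current block is predicted as the
    rounded mean of the reconstructed neighbours above and to the left."""
    neighbours = list(top) + list(left)
    return (sum(neighbours) + len(neighbours) // 2) // len(neighbours)

top = [100, 102, 98, 100]     # reconstructed row above a 4x4 block
left = [101, 99, 103, 97]     # reconstructed column to the left
pred = dc_predict(top, left)  # every predicted sample takes this value
```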
- the intra prediction modes corresponding to the mode number 26 and the mode number 10 respectively indicate the vertical direction and the horizontal direction.
- the intra prediction modes corresponding to mode numbers adjacent to mode number 26 are collectively referred to as vertical prediction modes, and the intra prediction modes corresponding to mode numbers adjacent to mode number 10 are collectively referred to as horizontal prediction modes.
- the horizontal prediction modes may include mode number 2 to mode number 18, and the vertical prediction modes may include mode number 19 to mode number 34.
- VVC (Versatile Video Coding)
- the method used in conventional intra prediction is to use surrounding pixels to predict the current block, which removes spatial redundancy.
- the target prediction mode used can be from the MPM (Most Probable Mode, the most probable intra prediction mode) list or the non-MPM list.
- ISP Intra Sub-block-Partitions, intra sub-block prediction
- the intra prediction method adopted in the ISP technology is to divide the image block into multiple sub-blocks for prediction.
- the supported division methods include horizontal division and vertical division.
- at the decoder, when the current block enables the ISP mode, if the size of the current block supports only one division method by default, the current block is divided according to the default division direction and then subjected to prediction, inverse transformation, inverse quantization, and other processing;
- if the size of the current block supports two division methods, the division direction must be further parsed, and the current block is then divided according to the determined division direction and subjected to prediction, inverse transformation, inverse quantization, and other processing.
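The sub-block division itself can be sketched as follows. This is an illustrative helper: in VVC the number of sub-partitions (2 or 4) is derived from the block size rather than passed in.

```python
def isp_partitions(width, height, direction, num_parts):
    """Split a (width x height) block into equal sub-blocks along the
    given division direction; each entry is (x, y, w, h)."""
    if direction == "horizontal":
        h = height // num_parts
        return [(0, i * h, width, h) for i in range(num_parts)]
    w = width // num_parts                     # vertical division
    return [(i * w, 0, w, height) for i in range(num_parts)]

parts = isp_partitions(16, 16, "horizontal", 4)  # four 16x4 stripes
```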
- the method adopted in the MRL technology is to predict based on the reference pixels of the current block, and the reference pixels can come from adjacent rows of the current block.
- the reference pixels may come from Reference line 0 (line 0), Reference line 1 (line 1), Reference line 2 (line 2) and Reference line 3 (line 3) as shown in FIG. 5.
- Reference line 0 is the line adjacent to the current block boundary
- Reference line 1 is the second line from the current block boundary
- Reference line 2 is the line adjacent to Reference line 1
- Reference line 3 is the second line from Reference line 1.
- in MRL, the reference pixels come from Reference line 0, Reference line 1, and Reference line 3; Reference line 2 is not used.
- the line may be a line on the upper side of the current block, or a column on the left side of the current block.
- the number of MPMs in HEVC is 3, and the number of MPMs in current VVC is 6.
- for MRL, the intra-frame prediction mode must come from the MPM list, while for conventional intra-frame prediction, the intra-frame prediction mode may come from the MPM list or the non-MPM list.
- CCLM Cross-component Linear Model Prediction, cross-component prediction
- the method adopted in the CCLM technology is to use a linear prediction model to reconstruct the pixel value through the luminance component and use a linear equation to obtain the predicted pixel value of the chrominance component, which can remove the redundancy between the image components and further improve the coding performance.
- MDLM-L is a cross-component prediction mode that derives the linear model parameters using only the left template information;
- MDLM-T is a cross-component prediction mode that derives the linear model parameters using only the upper template information.
- DM uses the same prediction mode for chrominance as is used for luminance.
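The linear mapping underlying CCLM can be sketched as below. The derivation here (fitting from the minimum/maximum-luma template pair, floating-point arithmetic) is an illustrative simplification; the standard uses integer arithmetic and down-sampled luma samples.

```python
def cclm_params(luma_tmpl, chroma_tmpl):
    """Fit chroma ~ a * luma + b from template (neighbouring) samples,
    using the samples with minimum and maximum luma."""
    lo = luma_tmpl.index(min(luma_tmpl))
    hi = luma_tmpl.index(max(luma_tmpl))
    if luma_tmpl[hi] == luma_tmpl[lo]:
        return 0.0, float(chroma_tmpl[lo])   # flat template: constant model
    a = (chroma_tmpl[hi] - chroma_tmpl[lo]) / (luma_tmpl[hi] - luma_tmpl[lo])
    b = chroma_tmpl[lo] - a * luma_tmpl[lo]
    return a, b

a, b = cclm_params([40, 60, 80, 100], [20, 30, 40, 50])
pred_chroma = a * 70 + b   # predict chroma from a reconstructed luma sample
```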
- The adaptive loop filter (ALF) can select a filter from a set of fixed filters according to its own gradient direction for filtering, and a CTU-level flag can indicate whether ALF filtering is enabled for the block. Chrominance and luminance can be controlled separately.
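The neighbour-based context selection for the ALF flag described in the decoding methods above reduces to picking one of three context models from the ALF state of the above and left blocks. The index layout below is illustrative:

```python
def alf_ctx_index(above_enabled, left_enabled):
    """Pick one of three context models for the ALF on/off flag:
    0 if neither neighbour enables ALF, 1 if exactly one does,
    2 if both do."""
    return int(bool(above_enabled)) + int(bool(left_enabled))
```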
- AMVR (Adaptive Motion Vector Resolution)
- AMVR is used to indicate that different precisions can be used when coding the motion vector difference.
- the precision used can be integer-pixel precision, such as 4-pixel precision, or non-integer-pixel precision, such as 1/16-pixel precision.
- this technology can be applied to motion vector data coding in conventional inter-frame prediction, and can also be used for motion vector data coding in the affine prediction mode.
- the matrix-based intra prediction technology refers to determining the predicted pixel value of the current block by taking the upper and left adjacent pixels of the current block as reference pixels, sending them to the matrix-vector multiplier and adding an offset value.
- BDPCM means that, in the prediction process, the pixel value of the corresponding reference pixel is directly copied in the vertical direction, or copied in the horizontal direction, similar to vertical prediction and horizontal prediction. The residual values between the predicted pixels and the original pixels are then quantized, and the quantized residuals are differentially coded.
- r i,j (0 ≤ i ≤ M-1, 0 ≤ j ≤ N-1) represents the prediction residual
- Q(r i,j ) (0 ≤ i ≤ M-1, 0 ≤ j ≤ N-1) represents the quantized residual obtained by quantizing the prediction residual r i,j . Differential coding is then performed on the quantized residual Q(r i,j ) to obtain the differential coding result
- the inverse accumulation process is used to obtain the quantized residual data.
- the quantized residual is dequantized and added to the predicted value to obtain the reconstructed pixel value.
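The differential coding and inverse accumulation steps described above can be sketched as follows for the vertical direction; the function names are illustrative, and quantization/dequantization are omitted (the sketch operates directly on already-quantized residuals Q(r)).

```python
def bdpcm_diff_vertical(q):
    """Differentially code a quantized residual block (list of rows) in
    the vertical direction: each row stores the difference to the row
    above, and the first row is sent as-is."""
    M, N = len(q), len(q[0])
    return [[q[i][j] if i == 0 else q[i][j] - q[i - 1][j] for j in range(N)]
            for i in range(M)]

def bdpcm_inverse_vertical(d):
    """Inverse accumulation at the decoder: column-wise prefix sums
    recover the quantized residual."""
    M, N = len(d), len(d[0])
    q = [[0] * N for _ in range(M)]
    for j in range(N):
        acc = 0
        for i in range(M):
            acc += d[i][j]
            q[i][j] = acc
    return q
```

Because the two functions are exact inverses, the decoder recovers Q(r i,j ) losslessly before dequantization and reconstruction.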
- JCCR (Joint Coding of Chrominance Residuals)
- JCCR is a joint coding method for the CB (blue chroma) and CR (red chroma) components. By observing the distribution of chroma residuals, it is not difficult to find that CB and CR tend to be negatively correlated, so JCCR exploits this phenomenon to code CB and CR jointly. For example, only (CB-CR)/2, the mean of the CB component and the negated CR component, needs to be coded.
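A minimal sketch of the joint coding idea, assuming the single mode mentioned above in which only (CB-CR)/2 is coded and the decoder reconstructs CR as the negated CB residual; real JCCR defines several modes and weightings, which are omitted here, and the function names are illustrative.

```python
def jccr_encode(res_cb, res_cr):
    """Form a single joint chroma residual, exploiting the typical
    negative correlation between Cb and Cr residuals: joint = (Cb - Cr) / 2."""
    return [(cb - cr) // 2 for cb, cr in zip(res_cb, res_cr)]

def jccr_decode(joint):
    """Reconstruct both chroma residuals from the joint residual,
    assuming Cr is approximately the negated Cb residual."""
    return list(joint), [-v for v in joint]
```

When the residuals are exactly anti-correlated the round trip is lossless; otherwise the joint mode trades a small distortion for halving the coded chroma residual data.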
- the encoding end needs to transmit different syntax elements to the decoding end, more context models are required to code these syntax elements, the encoding and decoding process is complex, and the memory overhead is large.
- the present application provides a coding and decoding method that can reduce the number of required context models, thereby reducing the complexity of the coding and decoding process and the memory overhead.
- the encoding and decoding methods of the embodiments of the present application will be introduced respectively with respect to the foregoing prediction modes and encoding and decoding technologies.
- the syntax elements that need to be transmitted between the decoding end and the encoding end may include the first ISP indication information and the second ISP indication information.
- the first ISP indication information is used to indicate whether to start the intra-frame sub-block prediction mode.
- the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
- the first indication information is intra_subpartitions_mode_flag
- the second indication information is intra_subpartitions_split_flag.
- Fig. 6 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 6, the method includes:
- Step 601 When it is determined to encode the first ISP indication information, perform context-based adaptive binary arithmetic coding on the first ISP indication information based on a context model, and the first ISP indication information is used to indicate whether to start intra sub-block prediction mode.
- the current block can try to use the sub-block division technology, and the encoder can finally decide whether to use the sub-block division technology through RDO (Rate Distortion Optimization), and then encodes the first ISP indication information, which is used to indicate whether the current block starts the intra sub-block prediction mode.
- the conditions for supporting the sub-block division technology include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
- the conditions for supporting the sub-block division technology are not limited to the above three conditions, and may also include other conditions.
- the first ISP indication information is intra_subpartitions_mode_flag
- intra_subpartitions_mode_flag is a flag bit indicating whether the current block starts the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
- Step 602 When it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding on the second ISP indication information.
- the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
- the sub-block division method includes a horizontal division direction and a vertical division direction.
- the final division direction needs to be determined, and based on the division direction used, the encoding of the second ISP indication information is continued.
- if the current block supports only one division direction, there is no need to continue coding the second ISP indication information.
- the second ISP indication information may be intra_subpartitions_split_flag, and intra_subpartitions_split_flag is a flag bit indicating the sub-block division mode of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, it means that the sub-block division method of the ISP mode of the current block is horizontal division; when intra_subpartitions_split_flag is 1, it means that the sub-block division mode of the ISP mode of the current block is vertical division.
- in this embodiment, the coding mode of the second ISP indication information in the related technology is modified, and the bypass coding mode replaces the complex CABAC coding mode.
- in this way, the memory overhead and the coding complexity are reduced, while, in terms of coding performance, the performance remains basically unchanged.
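The bin-level signalling of this embodiment can be sketched as follows; the 'context'/'bypass' labels only record which path of the arithmetic coder each bin would take (no real CABAC engine is implemented), and the function name is illustrative.

```python
# Sketch of the FIG. 6 signalling: intra_subpartitions_mode_flag goes
# through the context-coded path, intra_subpartitions_split_flag through
# the bypass path, and the split flag is only written when the current
# block starts ISP and supports both division directions.
def encode_isp_flags(use_isp, split_vertical, supports_both_directions):
    bins = [('context', int(use_isp))]                # intra_subpartitions_mode_flag
    if use_isp and supports_both_directions:
        bins.append(('bypass', int(split_vertical)))  # intra_subpartitions_split_flag
    return bins
```

In the later FIG. 8 embodiment, both flags would instead take the bypass path, removing the remaining context model as well.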
- FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method provided in the embodiment of FIG. 6. As shown in FIG. 7, the method includes:
- Step 701 When it is determined to decode the first ISP indication information, perform context-based adaptive binary arithmetic decoding on the first ISP indication information based on a context model, and the first ISP indication information is used to indicate whether to start intra sub-block prediction mode.
- the coded stream of the current block may be received first, and if the current block meets the parsing condition, the first ISP indication information in the coded stream is decoded to analyze whether the current block starts the intra sub-block prediction mode.
- the analysis conditions include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
- the analysis conditions are not limited to the above three conditions, and may also include other conditions.
- the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
- Step 702 When it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding on the second ISP indication information.
- the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
- if the first ISP indication information indicates that the current block starts the intra sub-block prediction mode, and the current block supports two division directions, both flags are parsed, for example:
- intra_subpartitions_mode_flag = 1
- intra_subpartitions_split_flag = 1
- otherwise, only the first flag is parsed, for example:
- intra_subpartitions_mode_flag = 0, or intra_subpartitions_mode_flag = 1 when only one division direction is supported
- if the current block only supports a certain fixed division direction, there is no need to parse the flag bit indicating the division direction.
- the decoder can determine whether the current block starts the ISP mode and the corresponding division direction, thereby predicting the current block based on the determined division direction, and obtain the predicted value of the current block for the subsequent reconstruction process.
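The parsing order of Step 701 and Step 702 can be sketched as below; read_context_bin and read_bypass_bin are illustrative stand-ins for the arithmetic decoder's context-coded and bypass bin readers.

```python
def parse_isp(read_context_bin, read_bypass_bin, supports_both_directions):
    # Step 701: context-based decoding of intra_subpartitions_mode_flag
    use_isp = read_context_bin()
    split = None
    # Step 702: bypass decoding of intra_subpartitions_split_flag,
    # present only when ISP starts and two directions are possible
    if use_isp and supports_both_directions:
        split = 'vertical' if read_bypass_bin() else 'horizontal'
    return use_isp, split
```

The returned division direction then drives the sub-block prediction used for the reconstruction process.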
- Fig. 8 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 8, the method includes:
- Step 801 When it is determined to encode the first ISP indication information, perform bypass-based binary arithmetic coding on the first ISP indication information.
- the first ISP indication information is used to indicate whether to start the intra sub-block prediction mode.
- the current block can try to use the sub-block division technology, and the encoder can finally decide whether to use the sub-block division technology through RDO, and then encodes the first ISP indication information.
- the first ISP indication information is used to indicate whether the current block starts the intra sub-block prediction mode.
- the conditions for supporting the sub-block division technology include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
- the conditions for supporting the sub-block division technology are not limited to the above three conditions, and may also include other conditions.
- the first ISP indication information is intra_subpartitions_mode_flag
- intra_subpartitions_mode_flag is a flag bit indicating whether the current block starts the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
- Step 802 When it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding on the second ISP indication information, and the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
- the sub-block division method includes a horizontal division direction and a vertical division direction.
- the final division direction needs to be determined, and based on the division direction used, the encoding of the second ISP indication information is continued.
- if the current block supports only one division direction, there is no need to continue coding the second ISP indication information.
- the second ISP indication information may be intra_subpartitions_split_flag, and intra_subpartitions_split_flag is a flag bit indicating the sub-block division mode of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, it means that the sub-block division method of the ISP mode of the current block is horizontal division; when intra_subpartitions_split_flag is 1, it means that the sub-block division mode of the ISP mode of the current block is vertical division.
- in this embodiment, the encoding methods of the intra_subpartitions_mode_flag flag bit and the intra_subpartitions_split_flag flag bit in the related technology are modified, and the bypass encoding mode replaces the original complex CABAC encoding mode.
- in this way, the memory overhead and the coding complexity are further reduced, while, in terms of coding performance, the performance remains basically unchanged.
- FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method provided in the embodiment of FIG. 8. As shown in FIG. 9, the method includes:
- Step 901 When it is determined to decode the first ISP indication information, perform bypass-based binary arithmetic decoding on the first ISP indication information.
- the first ISP indication information is used to indicate whether to start the intra sub-block prediction mode.
- the coded stream of the current block may be received first, and if the current block meets the parsing condition, the first ISP indication information in the coded stream is decoded to analyze whether the current block starts the intra sub-block prediction mode.
- the analysis conditions include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
- the analysis conditions are not limited to the above three conditions, and may also include other conditions.
- the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
- Step 902 When it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding on the second ISP indication information.
- the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
- if the first ISP indication information indicates that the current block starts the intra sub-block prediction mode, and the current block supports two division directions, both flags are parsed, for example:
- intra_subpartitions_mode_flag = 1
- intra_subpartitions_split_flag = 1
- otherwise, only the first flag is parsed, for example:
- intra_subpartitions_mode_flag = 0, or intra_subpartitions_mode_flag = 1 when only one division direction is supported
- if the current block only supports a certain fixed division direction, there is no need to parse the flag bit indicating the division direction.
- the decoder can determine whether the current block starts the ISP mode and the corresponding division direction, thereby predicting the current block based on the determined division direction, and obtain the predicted value of the current block for the subsequent reconstruction process.
- FIG. 10 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 10, the method includes the following steps:
- Step 1001: If the width and height of the current block are M*N, where M is less than 64 and N is less than 64, the current block does not support the multi-line prediction mode.
- the syntax elements that need to be transmitted between the decoding end and the encoding end may include reference line indication information, which is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode.
- the reference row indication information is intra_luma_ref_idx.
- the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, and these 2 bits need to be coded and decoded using two different context models, as shown in Table 5 and Table 6 below:
- First bin (bit position): MultiRefLineIdx(0), the first context model
- Second bin (bit position): MultiRefLineIdx(1), the second context model
- the first bin refers to the first bit of the reference row indication information, which needs to be coded and decoded based on the first context model
- the second bin refers to the second bit of the reference row indication information, which needs to be coded and decoded based on the second context model, and the first context model is different from the second context model.
- if the index information indicated by the reference row indication information is 0, the target reference row is row 0; if the index information is 1, the target reference row is row 1; if the index information is 2, the target reference row is row 3.
- the row described in the embodiment of the present application may be the row above the current block or the column on the left side of the current block.
- FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 11, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference rows corresponding to the multi-line prediction mode is 3, and the reference row indication information corresponding to the multi-row prediction mode occupies at most 2 bits, the method includes:
- Step 1101 Based on a context model, perform context-based adaptive binary arithmetic coding on the first bit of the reference row indication information.
- it can first be determined whether the current block meets the conditions for supporting the multi-line prediction technology, and if it does, the current block can try to use each reference line for encoding.
- the encoding end can determine the source of the final reference pixel through RDO, and encode the reference row index information into the encoding stream.
- the conditions for supporting the multi-line prediction technology include: the current block is a luma intra-frame block, the size of the current block meets certain restriction conditions, and the current block is not located in the first line of the coding tree unit.
- the conditions for supporting the multi-line prediction technology are not limited to the above three conditions, and other conditions may also be included.
- the reference line indication information can be coded according to specific conditions.
- the reference row indication information may be intra_luma_ref_idx.
- the row described in the embodiment of the present application may be the row above the current block or the column on the left side of the current block.
- Step 1102 When it is necessary to encode the second bit of the reference line indication information, perform bypass-based binary arithmetic coding on the second bit of the reference line indication information.
- the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits
- the first of these 2 bits can be coded using a context model, and the second bit can be coded based on the bypass coding mode. In this way, only one context model is needed to code all the bits of the reference row indication information, which reduces the number of context models used, thereby reducing coding complexity and memory consumption, while the coding performance changes little.
- the context model used by the reference row indication information can be shown in Table 8 and Table 9 below:
- First bin: MultiRefLineIdx(0), the first context model; Second bin: no context model, bypass coding is used
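Under the assumption that intra_luma_ref_idx is binarized with a truncated unary code (index 0 → '0', 1 → '10', 2 → '11'), the scheme of this embodiment can be sketched as follows, pairing each bin with the coding path it takes; the function name is illustrative.

```python
def binarize_ref_idx(idx, num_candidates=3):
    """Truncated unary binarization of intra_luma_ref_idx paired with
    the coding path of each bin under the modified scheme: the first
    bin uses a context model, any further bins are bypass-coded."""
    assert 0 <= idx < num_candidates
    bins = [1] * idx                  # one '1' per candidate skipped
    if idx < num_candidates - 1:
        bins.append(0)                # terminating '0' (absent for the last index)
    return [('context' if i == 0 else 'bypass', b) for i, b in enumerate(bins)]
```

Only the first bin ever consults a context model, so a single model suffices regardless of the index value.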
- FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 11. As shown in FIG. 12, if it is determined that the current block supports the multi-row prediction mode, the number of candidate reference rows corresponding to the multi-row prediction mode is 3, and the reference row indication information corresponding to the multi-row prediction mode occupies at most 2 bits, the method includes:
- Step 1201 Based on a context model, perform context-based adaptive binary arithmetic decoding on the first bit of the reference row indication information.
- Step 1202 When the second bit of the reference line indication information needs to be decoded, perform bypass-based binary arithmetic decoding on the second bit of the reference line indication information.
- the target reference line used when predicting the current block based on the multi-line prediction mode can be determined based on the reference line indication information, and then the target reference line is used to predict the current block.
- the coded stream of the current block may be received first, and if the current block meets the parsing condition, the reference line indication information is decoded to determine the source of the reference pixels of the current block.
- the analysis conditions include: the current block is a luma intra-frame block, the size of the current block meets certain conditions, and the current block is not the first row of the coding tree unit.
- the analysis conditions are not limited to the above three conditions, and may also include other conditions.
- intra_luma_ref_idx needs to be parsed, so as to determine the reference pixels of the current block according to the value of intra_luma_ref_idx, so as to obtain the predicted value of the current block for the subsequent reconstruction process.
- the reference row indication information corresponding to the multi-row prediction mode occupies at most 3 bits, and these 3 bits need to be coded and decoded using 3 different context models, as shown in Table 10 and Table 11 below:
- the first bin refers to the first bit of the reference row indication information, which needs to be coded and decoded based on the first context model
- the second bin refers to the second bit of the reference row indication information, which needs to be coded and decoded based on the second context model.
- the third bin refers to the third bit of the reference row indication information, which needs to be coded and decoded based on the third context model, and these three context models are all different.
- index information of the target reference row and the row number of the corresponding target reference row are shown in Table 12:
- if the index information indicated by the reference line indication information is 0, the target reference line is line 0; if the index information is 1, the target reference line is line 1; if the index information is 2, the target reference line is line 2; if the index information is 3, the target reference line is line 3.
- FIG. 13 is a flowchart of an encoding method provided by an embodiment of the application, which is applied to the encoding end. As shown in FIG. 13, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 4, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, the method includes:
- Step 1301 Based on a context model, perform context-based adaptive binary arithmetic coding on the first bit of the reference row indication information.
- it can first be determined whether the current block meets the conditions for supporting the multi-line prediction technology, and if it does, the current block can try to use each reference line for encoding.
- the encoding end can determine the source of the final reference pixel through RDO, and encode the reference row index information into the encoding stream.
- the conditions for supporting the multi-line prediction technology include: the current block is a luma intra-frame block, and the size of the current block meets certain restriction conditions, and the current block is not the first line of the coding tree unit.
- the conditions for supporting the multi-line prediction technology are not limited to the above three conditions, and may also include other conditions.
- the reference line indication information can be coded according to specific conditions.
- the reference row indication information may be intra_luma_ref_idx.
- Step 1302 When the second bit of the reference line indication information needs to be coded, perform bypass-based binary arithmetic coding on the second bit of the reference line indication information.
- Step 1303 When the third bit of the reference line indication information needs to be coded, perform bypass-based binary arithmetic coding on the third bit of the reference line indication information.
- the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits
- the first of these 3 bits can be coded using a context model, and the second and third bits can be coded based on the bypass coding mode. In this way, only one context model is needed to code all the bits of the reference line indication information, which reduces the number of context models used, thereby reducing coding complexity and memory consumption, while the coding performance changes little.
- the context model used by the reference row indication information can be as shown in Table 13 and Table 14 below:
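Assuming the truncated unary binarization extended to four candidates (index 0 → '0', 1 → '10', 2 → '110', 3 → '111'), decoding under the modified scheme, in which only the first bin is context-decoded, can be sketched as below; read_context_bin and read_bypass_bin are illustrative stand-ins for the arithmetic decoder.

```python
def decode_ref_idx(read_context_bin, read_bypass_bin, num_candidates=4):
    """Truncated-unary decode of intra_luma_ref_idx with up to
    num_candidates reference lines: the first bin is context-decoded,
    all remaining bins are bypass-decoded."""
    idx = 0
    for i in range(num_candidates - 1):
        b = read_context_bin() if i == 0 else read_bypass_bin()
        if b == 0:
            break
        idx += 1
    return idx
```

Parsing stops at the first 0 bin, or after num_candidates - 1 bins for the last index, matching the at-most-3-bit budget above.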
- FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application. It is applied to the decoding end and is the decoding method corresponding to the encoding method described in the embodiment of FIG. 13. As shown in FIG. 14, if it is determined that the current block supports the multi-row prediction mode, the number of candidate reference rows corresponding to the multi-row prediction mode is 4, and the reference row indication information corresponding to the multi-row prediction mode occupies at most 3 bits, the method includes:
- Step 1401 Based on a context model, perform context-based adaptive binary arithmetic decoding on the first bit of the reference row indication information.
- Step 1402 When it is necessary to decode the second bit of the reference line indication information, perform bypass-based binary arithmetic decoding on the second bit of the reference line indication information.
- Step 1403 When the third bit of the reference line indication information needs to be decoded, perform bypass-based binary arithmetic decoding on the third bit of the reference line indication information.
- the coded stream of the current block may be received first, and if the current block meets the parsing condition, the reference line indication information is decoded to determine the source of the reference pixels of the current block.
- the analysis conditions include: the current block is a luma intra-frame block, the size of the current block meets certain conditions, and the current block is not the first row of the coding tree unit.
- the analysis conditions are not limited to the above three conditions, and may also include other conditions.
- intra_luma_ref_idx needs to be parsed, so as to determine the reference pixels of the current block according to the value of intra_luma_ref_idx, so as to obtain the predicted value of the current block for the subsequent reconstruction process.
- FIG. 15 is a flowchart of a coding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 15, if it is determined that the current block supports the multi-line prediction mode, and the number of candidate reference rows corresponding to the multi-line prediction mode is 3, where the candidate reference row with index information 0 is row 0, the candidate reference row with index information 1 is row 1, and the candidate reference row with index information 2 is row 2, the method includes:
- Step 1501 When predicting the current block according to the multi-line prediction mode, predict the current block according to the target reference line, and the target reference line is determined according to the reference line indication information.
- if the index information indicated by the reference row indication information is 0, the target reference row is row 0; if the index information is 1, the target reference row is row 1; if the index information is 2, the target reference row is row 2.
- index information indicated by the reference row indication information and the corresponding target reference row may be as shown in Table 15 below:
- the three nearest rows and three columns may be selected as candidates for the target reference row. That is, the target reference row is a row selected from the candidate reference rows, where the number of candidate reference rows corresponding to the multi-row prediction mode is 3, and the three rows and three columns closest to the current block boundary are used as candidate reference rows.
- FIG. 16 is a flowchart of a coding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 16, if it is determined that the current block supports the multi-line prediction mode, and the number of candidate reference rows corresponding to the multi-line prediction mode is 3, where the candidate reference row with index information 0 is row 0, the candidate reference row with index information 1 is row 2, and the candidate reference row with index information 2 is row 3, the method includes:
- Step 1601 When predicting the current block according to the multi-line prediction mode, predict the current block according to the target reference line, and the target reference line is determined according to the reference line indication information.
- if the index information indicated by the reference row indication information is 0, the target reference row is row 0; if the index information is 1, the target reference row is row 2; if the index information is 2, the target reference row is row 3.
- index information indicated by the reference row indication information and the corresponding target reference row may be as shown in Table 16 below:
- row 0, row 2 and row 3 can be selected as candidates for the target reference row.
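The two candidate-set variants (the three nearest rows in the FIG. 15 embodiment, versus rows 0, 2 and 3 in the FIG. 16 embodiment) amount to two index-to-row lookup tables; the sketch below is illustrative, and the constant names are invented.

```python
# Index-to-line lookup tables for the two embodiments:
CANDIDATES_NEAREST = {0: 0, 1: 1, 2: 2}   # FIG. 15: the three nearest lines
CANDIDATES_SKIP_1  = {0: 0, 1: 2, 2: 3}   # FIG. 16: lines 0, 2 and 3

def target_line(idx, candidates):
    """Map the parsed reference line index to the physical line used
    for prediction."""
    return candidates[idx]
```

The parsed index is therefore decoupled from the physical line, so either candidate set can be used without changing the index coding.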
- FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 17, if it is determined that the current block supports the multi-line prediction mode, the method includes:
- Step 1701: Before predicting the current block according to the multi-line prediction mode, encode the line number indication information according to the number of candidate reference lines corresponding to the multi-line prediction mode.
- the line number indication information is used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode.
- Step 1702: Encode the reference line indication information based on the target reference line used in the prediction of the current block based on the multi-line prediction mode.
- the reference line indication information is used to indicate the index information of the target reference line used in the prediction of the current block based on the multi-line prediction mode.
- Step 1703 Predict the current block according to the target reference line.
- line number indication information that can indicate the number of candidate reference lines corresponding to the multi-line prediction mode is added, so that the number of reference lines used by the multi-line prediction mode can be selected.
- the line number indication information may exist in the sequence parameter set (SPS), the picture parameter level, the slice level, or the tile level.
- the line number indication information exists in the sequence parameter set, that is, a syntax for indicating the number of candidate reference lines corresponding to the multi-line prediction mode can be added at the SPS level.
- FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. As shown in FIG. 18, if it is determined that the current block supports the multi-line prediction mode, the method includes:
- Step 1801 Before predicting the current block according to the multi-line prediction mode, decode the line number indication information, where the line number indication information is used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode.
- Step 1802 Determine the number of candidate reference lines corresponding to the multi-line prediction mode according to the line number indication information.
- Step 1803 Determine the target reference line according to the number of candidate reference lines corresponding to the multi-line prediction mode and the reference line indication information.
- the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode.
- Step 1804 Predict the current block according to the target reference line.
- line number indication information that can indicate the number of candidate reference lines corresponding to the multi-line prediction mode is added, so that the number of reference lines used by the multi-line prediction mode can be selected.
- the line number indication information may exist in the sequence parameter set (SPS), the picture parameter level, the slice level, or the tile level.
- the line number indication information exists in the sequence parameter set, that is, a syntax for indicating the number of candidate reference lines corresponding to the multi-line prediction mode can be added at the SPS level.
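Steps 1801 to 1804 can be sketched as follows. The bitstream-reading helpers below are illustrative stand-ins (assumed names, not a real codec API); the point is only the order of the decoding steps.

```python
class FakeBitstream:
    """Minimal stand-in for a bitstream parser (illustrative only)."""

    def __init__(self, line_number_indication, reference_line_indication):
        self._num = line_number_indication
        self._idx = reference_line_indication

    def read_line_number_indication(self):
        return self._num

    def read_reference_line_indication(self, num_candidates):
        if not 0 <= self._idx < num_candidates:
            raise ValueError("reference line index exceeds candidate count")
        return self._idx


def decode_target_reference_line(bs):
    # Steps 1801-1802: decode the line number indication information and
    # determine the number of candidate reference lines.
    num_candidates = bs.read_line_number_indication()
    # Step 1803: decode the reference line indication information; with
    # candidate lines 0..n-1, index i selects candidate line i.
    return bs.read_reference_line_indication(num_candidates)
```

Step 1804 (prediction from the returned target reference line) is omitted here, since it does not affect the signalling flow being illustrated.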
- the syntax elements that need to be transmitted between the decoding end and the encoding end may include the first AMVR indication information and the second AMVR indication information.
- the first AMVR indication information is used to indicate whether to enable the AMVR mode;
- the second AMVR indication information is used to indicate the index information of the pixel precision used when encoding or decoding the motion vector difference in the AMVR mode.
- the first AMVR indication information is amvr_flag
- the second AMVR indication information is amvr_precision_flag.
- the non-affine prediction mode refers to the prediction modes other than the affine prediction mode.
- the first AMVR indication information and the second AMVR indication information require a total of 4 context models for encoding and decoding, as shown in Tables 17 and 18 below:
- when the current block enables the non-affine prediction mode, context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding needs to be performed on the first AMVR indication information based on the third context model;
- when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding needs to be performed on the second AMVR indication information based on the fourth context model.
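The four-context-model arrangement described above can be sketched as a selection function. The model numbering below is an illustrative labeling assumed for this sketch (first/second for affine, third/fourth for non-affine), not taken from any codec implementation.

```python
def amvr_context_model_baseline(is_affine, is_second_flag):
    """Select one of four context models for the AMVR indication bins.

    The first AMVR indication (amvr_flag) and the second AMVR indication
    (amvr_precision_flag) each use a distinct context model in the affine
    and non-affine cases, so four models are needed in total.
    """
    if is_affine:
        return 2 if is_second_flag else 1  # first / second context model
    return 4 if is_second_flag else 3      # third / fourth context model
```

Enumerating the four (prediction mode, flag) combinations yields four distinct models, which is the memory cost the later embodiments reduce.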
- FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 19, the method includes:
- Step 1901 If the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
- Step 1902 When the first AMVR indication information indicates that the AMVR mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the second AMVR indication information based on the second context model.
- the first context model and the second context model are different.
- the current block can try multiple motion vector precisions for encoding.
- the encoding end can decide whether to enable AMVR and which motion vector precision to adopt through RDO, and encode the corresponding syntax information into the coded stream.
- the conditions for using adaptive motion vector precision include: the current block is an inter prediction block, and the motion information of the current block includes a non-zero motion vector difference.
- the conditions for using adaptive motion vector precision are not limited to the above conditions, and other conditions may also be included.
- FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 19. As shown in FIG. 20, the method includes:
- Step 2001 If the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
- the decoding end may first receive the coded stream of the current block, and if it is determined that the current block meets the parsing conditions, parse the first AMVR indication information to determine whether the AMVR mode is enabled for the current block, that is, whether the adaptive motion vector precision technique is used.
- the parsing conditions include: the current block is an inter prediction block, and the motion information of the current block includes a non-zero motion vector difference.
- the parsing conditions are not limited to the above conditions, and other conditions may also be included.
- Step 2002 When the first AMVR indication information indicates that the AMVR mode is activated for the current block, perform context-based adaptive binary arithmetic decoding on the second AMVR indication information based on the second context model.
- the first context model and the second context model are different.
- the second AMVR indication information needs to be further parsed to determine the precision used.
- the decoding end can uniquely determine the motion vector precision of the motion information of the current block, thereby obtaining the prediction value of the current block for use in the subsequent reconstruction process.
- the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in Table 19 below:
- the AMVR indication information can share context models between the affine prediction mode and the non-affine prediction mode. In this way, the number of context models required for AMVR can be reduced to two, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
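The sharing described above can be sketched as follows, under the assumption that amvr_flag always uses the first context model and amvr_precision_flag always uses the second, regardless of affine or non-affine prediction. The model numbers are illustrative labels only.

```python
def amvr_context_model_shared(is_affine, is_second_flag):
    """FIG. 19/20 arrangement: affine and non-affine share the same models.

    The first AMVR indication always uses the first context model and the
    second AMVR indication always uses the second, so only two context
    models are needed in total (the is_affine input no longer matters).
    """
    return 2 if is_second_flag else 1
```

Enumerating all four combinations now yields only two distinct models, versus four in the baseline arrangement.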
- FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 21, the method includes:
- Step 2101 If the current block enables the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model; and when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform bypass-based binary arithmetic coding on the second AMVR indication information.
- Step 2102 If the current block enables a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the second context model; and when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform bypass-based binary arithmetic coding on the second AMVR indication information.
- FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 21. As shown in FIG. 22, the method includes:
- Step 2201 If the current block enables the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model; and when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
- Step 2202 If the current block enables a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the second context model; and when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
- the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in Table 20 below:
- the second AMVR indication information is changed to bypass-based binary arithmetic coding or decoding. In this way, the number of context models required for AMVR is reduced to two, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 23, the method includes:
- Step 2301 If the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the adaptive motion vector precision (AMVR) mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
- Step 2302 When the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic coding on the second AMVR indication information.
- FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 23. As shown in FIG. 24, the method includes:
- Step 2401 If the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the adaptive motion vector precision (AMVR) mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
- Step 2402 When the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
- the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in the following Table 21:
- the first AMVR indication information shares one context model between the affine prediction mode and the non-affine prediction mode, and the second AMVR indication information is changed to bypass-based binary arithmetic coding or decoding. In this way, the number of context models required for AMVR can be reduced to one, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
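This single-model arrangement can be sketched as follows. `None` marks a bypass-coded bin that needs no context model; the model number is an illustrative label assumed for this sketch.

```python
def amvr_coding_decision(is_affine, is_second_flag):
    """FIG. 23/24 arrangement: one shared context model plus bypass coding.

    The first AMVR indication uses the same context model in both the
    affine and non-affine cases; the second AMVR indication is
    bypass-coded, so it needs no context model at all (returned as None).
    """
    return None if is_second_flag else 1
```

Counting the distinct non-`None` results over all combinations confirms that only one context model remains.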
- an encoding method is also provided, which is applied to the encoding end, and the method includes:
- Step 1 If the current block enables the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
- when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform context-based adaptive binary arithmetic coding on the second AMVR indication information based on the second context model.
- Step 2 If the current block enables a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the third context model.
- when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform context-based adaptive binary arithmetic coding on the second AMVR indication information based on the second context model.
- the first context model, the second context model, and the third context model are different.
- a decoding method is also provided, which is applied to the decoding end, and the method is a decoding method corresponding to the foregoing encoding method, and the method includes the following steps:
- Step 1 If the current block enables the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
- when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform context-based adaptive binary arithmetic decoding on the second AMVR indication information based on the second context model.
- Step 2 If the current block enables a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the third context model.
- when the first AMVR indication information indicates that the AMVR mode is enabled for the current block, perform context-based adaptive binary arithmetic decoding on the second AMVR indication information based on the second context model.
- the first context model, the second context model, and the third context model are different.
- the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in the following Table 22:
- the second AMVR indication information can share one context model between the affine prediction mode and the non-affine prediction mode. In this way, the number of context models required in the AMVR mode can be reduced to three, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
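The three-model arrangement (Table 22) can be sketched as follows. The numbering (first and third for amvr_flag, second shared for amvr_precision_flag) is an illustrative labeling assumed for this sketch.

```python
def amvr_context_model_three(is_affine, is_second_flag):
    """Table 22 arrangement: three context models in total.

    The first AMVR indication uses different models in the affine and
    non-affine cases (first and third context models), while the second
    AMVR indication shares one model (the second) across both cases.
    """
    if is_second_flag:
        return 2                      # shared model for amvr_precision_flag
    return 1 if is_affine else 3      # separate models for amvr_flag
```

Enumerating all combinations yields three distinct models, sitting between the four-model baseline and the two-model sharing scheme.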
- when the current block is a luma block, prediction mode index information needs to be transmitted between the encoding end and the decoding end.
- the prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the MPM list.
- the encoding end and the decoding end store the most probable mode (MPM) list for intra prediction, and the conventional intra prediction mode, the intra sub-block prediction mode, and the multi-line prediction mode can share the MPM list.
- when the reference line of the target prediction mode of the current block is an adjacent line of the current block, two different context models are required, and context-based adaptive binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information; which context model is used depends on whether the intra sub-block prediction mode is enabled for the current block.
- for the prediction mode index information intra_luma_mpm_idx: when intra_luma_ref_idx is equal to 0, the reference line of the target prediction mode of the current block is an adjacent line of the current block, that is, the multi-line prediction mode is not enabled for the current block; when intra_luma_ref_idx is not equal to 0, the reference line of the target prediction mode of the current block is not an adjacent line of the current block, that is, the multi-line prediction mode is enabled for the current block.
- when intra_luma_ref_idx is equal to 0, a context model needs to be selected from two different context models for encoding and decoding the first bit of intra_luma_mpm_idx, according to whether the intra sub-block prediction mode is enabled for the current block.
- when intra_luma_ref_idx is not equal to 0, the multi-line prediction mode is enabled for the current block, and the target prediction mode of the multi-line prediction mode enabled for the current block also comes from the MPM list.
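The selection rule above can be sketched as a small function. The model numbers are illustrative labels assumed for this sketch; the multi-line case returns `None` here only because its handling is outside the scope of the sketch.

```python
def mpm_idx_first_bin_context(intra_luma_ref_idx, isp_enabled):
    """Select the context model for the first bin of intra_luma_mpm_idx.

    When intra_luma_ref_idx == 0 (multi-line prediction not enabled), one
    of two context models is chosen according to whether the intra
    sub-block prediction mode is enabled for the current block.
    """
    if intra_luma_ref_idx != 0:
        # Multi-line prediction enabled; the target mode also comes from
        # the MPM list, but its bin coding is not modeled in this sketch.
        return None
    return 1 if isp_enabled else 2
```

This is the two-model baseline that the later embodiments reduce to one shared model or to bypass coding.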
- FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 25, the method includes the following steps:
- Step 2501 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction enabled for the current block.
- Step 2502 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled for the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic coding.
- Step 2503 If the current block enables conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the conventional intra prediction, if the target prediction mode enabled for the current block comes from the MPM list, determine the index information of that target prediction mode in the MPM list.
- Step 2504 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode enabled for the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic coding.
- a flag bit is needed to indicate whether the target prediction mode of the conventional prediction enabled for the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is then determined; if the target prediction mode does not come from the MPM list, there is no need to determine its index information in the MPM list.
- the encoder can construct an MPM list, and the intra sub-block prediction mode, the multi-line prediction mode, and the regular intra prediction can share the MPM list.
- the encoding end can determine the final prediction mode to be used, that is, the target prediction mode, through RDO. If the target prediction mode is the intra sub-block prediction mode or the multi-line prediction mode, the target prediction mode must be a prediction mode selected from the MPM list, and the prediction mode index information (intra_luma_mpm_idx) needs to be encoded to inform the decoding end which prediction mode is selected. If the target prediction mode is conventional intra prediction, a flag bit also needs to be encoded to indicate whether the target prediction mode of the conventional prediction enabled for the current block comes from the MPM list.
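The signalling decision described above can be sketched as follows. All function and field names are illustrative assumptions for this sketch, not syntax elements from any standard.

```python
def encode_luma_mode_signalling(target_mode, mpm_list, uses_mpm_only):
    """Sketch of the encoder-side mode signalling decision.

    For the intra sub-block prediction mode and the multi-line prediction
    mode the target mode must come from the shared MPM list, so only its
    index is signalled. For conventional intra prediction, a flag first
    indicates whether the mode is in the MPM list; the index is signalled
    only when it is.
    """
    if uses_mpm_only:  # intra sub-block or multi-line prediction
        return {"mpm_idx": mpm_list.index(target_mode)}
    in_mpm = target_mode in mpm_list
    signalling = {"mpm_flag": in_mpm}
    if in_mpm:
        signalling["mpm_idx"] = mpm_list.index(target_mode)
    return signalling
```

Note that in the `uses_mpm_only` branch no flag is emitted at all, which is exactly why the three mode families can share one MPM list without extra signalling.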
- if the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is then determined; if the target prediction mode does not come from the MPM list, there is no need to determine its index information in the MPM list.
- FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 25. As shown in FIG. 26, the method includes the following steps:
- Step 2601 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block based on the intra sub-block prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
- Step 2602 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate index information of the target prediction mode in the MPM list.
- Step 2603 Predict the current block according to the target prediction mode.
- Step 2604 If the current block enables conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the conventional intra prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
- optionally, the second context model and the first context model are the same context model.
- Step 2605 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
- the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- Step 2606 Predict the current block according to the target prediction mode.
- the decoding end may first receive the encoded stream. On the premise that the same MPM list is constructed for the conventional intra prediction, the intra sub-block prediction mode, and the multi-line prediction mode, if the current block enables the intra sub-block prediction mode or the multi-line prediction mode, the target prediction mode it adopts must come from the MPM list, and its index value in the list is parsed to obtain the final target prediction mode. If the current block enables conventional intra prediction, a flag bit needs to be parsed to determine whether the target prediction mode comes from the MPM list, and if it does, its index value in the MPM list is then parsed.
- the decoding end can uniquely determine the prediction mode of the current block, thereby obtaining the prediction value of the current block for use in the subsequent reconstruction process.
- when encoding or decoding the first bit of the prediction mode index information, instead of selecting a context model from two different context models according to whether the intra sub-block prediction mode is enabled for the current block, the same context model can be used under both conditions, namely when the intra sub-block prediction mode is enabled for the current block and when it is not, to perform context-based adaptive binary arithmetic coding or decoding on the first bit of the prediction mode index information. In this way, the number of context models required can be reduced to one, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- the context model used by the prediction mode index information is shown in Table 24 below:
- the first bit of intra_luma_mpm_idx can be subjected to context-based adaptive binary arithmetic coding or decoding based on the same context model, both when the intra sub-block prediction mode is enabled for the current block and when it is not.
- FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 27, the method includes the following steps:
- Step 2701 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction enabled for the current block.
- Step 2702 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled for the current block, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
- Step 2703 If the current block enables conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the conventional intra prediction, if the target prediction mode enabled for the current block is in the MPM list, determine the index information of that target prediction mode in the MPM list.
- Step 2704 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode enabled for the current block, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
- a flag bit is also required to indicate whether the target prediction mode of the conventional prediction enabled for the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine its index information in the MPM list.
- FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 27. As shown in FIG. 28, the method includes the following step:
- Step 2801 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block based on the intra sub-block prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 2802 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- Step 2803 Predict the current block according to the target prediction mode.
- Step 2804 If the current block enables conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the conventional intra prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 2805 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
- the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
- Step 2806 Predict the current block according to the target prediction mode.
- when the reference line of the target prediction mode of the current block is an adjacent line of the current block, that is, when intra_luma_ref_idx is equal to 0, when encoding or decoding the first bit of the prediction mode index information, bypass-based binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information. In this way, the first bit of the prediction mode index information does not need a context model, and the number of context models it requires is reduced to zero, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
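The three arrangements discussed for the first bin of intra_luma_mpm_idx (two models chosen by the intra sub-block flag, one shared model, and pure bypass coding) can be compared in one sketch. Scheme names and model numbers are illustrative assumptions; `None` marks a bypass-coded bin.

```python
def first_bin_context(scheme, isp_enabled):
    """Context model for the first bin of intra_luma_mpm_idx.

    - "two_models": a model is picked by whether intra sub-block
      prediction is enabled (two context models needed);
    - "shared": the same model is used in both cases (one model);
    - "bypass": the first bin is bypass-coded (zero models).
    Returns the model index, or None for bypass coding.
    """
    if scheme == "two_models":
        return 1 if isp_enabled else 2
    if scheme == "shared":
        return 1
    if scheme == "bypass":
        return None
    raise ValueError("unknown scheme")


def models_needed(scheme):
    """Count the distinct context models the scheme requires."""
    models = {first_bin_context(scheme, e) for e in (True, False)}
    models.discard(None)  # bypass bins need no context model
    return len(models)
```

Counting the models per scheme reproduces the 2, 1, 0 progression that motivates the successive embodiments.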
- the context model used by the prediction mode index information is shown in Table 25 below:
- the first bit of intra_luma_mpm_idx can be subjected to bypass-based binary arithmetic coding or decoding, both when the intra sub-block prediction mode is enabled for the current block and when it is not.
- FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 29, the method includes the following steps:
- Step 2901 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction enabled for the current block.
- Step 2902 Code the prediction mode index information according to the index information of the target prediction mode in the MPM list of the intra sub-block prediction initiated by the current block, where the first bit of the prediction mode index information is based on a context model.
- the context is obtained by adaptive binary arithmetic coding, and other bits are obtained based on bypassed binary arithmetic coding.
- Step 2903 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, if the target prediction mode of the conventional intra prediction enabled by the current block is in the MPM list, determine the index information of that target prediction mode in the MPM list.
- Step 2904 Encode the prediction mode index information according to the index information, in the MPM list, of the target prediction mode enabled by the current block, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
- a flag bit is needed to indicate whether the target prediction mode of the conventional intra prediction enabled by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
- FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method shown in FIG. 29. As shown in FIG. 30, the method includes the following steps:
- Step 3001 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block based on intra sub-block prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
- Step 3002 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- Step 3003 Predict the current block according to the target prediction mode.
- Step 3004 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 3005 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
- the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
- Step 3006 Predict the current block according to the target prediction mode.
- when the reference line of the target prediction mode of the current block is the line adjacent to the current block, that is, when intra_luma_ref_idx is equal to 0, the prediction mode index information is encoded or decoded as follows: if the current block enables the intra sub-block prediction mode, context-based adaptive binary arithmetic encoding or decoding is performed on the first bit of the prediction mode index information based on one context model; if the current block does not enable the intra sub-block prediction mode, bypass-based binary arithmetic encoding or decoding is performed on the first bit. In this way, only one context model is needed to encode and decode the prediction mode index information, the number of context models it requires is reduced to 1, and the complexity and memory overhead of encoding and decoding are reduced.
- the context model used by the prediction mode index information is shown in Table 26 below:
- when intra_luma_ref_idx is equal to 0 and the current block enables ISP mode, one context model is used to encode and decode the first bit of intra_luma_mpm_idx; when the current block does not enable ISP mode, bypass is used to encode and decode the first bit of intra_luma_mpm_idx.
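- The context selection of Table 26 can be sketched as follows (an illustrative model, again assuming a truncated-unary binarization with maximum index 4; `"ctx"` stands for the single context model):

```python
def truncated_unary_bins(value: int, max_value: int) -> list:
    """Truncated-unary binarization of the MPM index (assumed form)."""
    bins = [1] * value
    if value < max_value:
        bins.append(0)
    return bins

def mpm_idx_coding_plan(mpm_idx: int, isp_enabled: bool, max_idx: int = 4) -> list:
    """Table 26 scheme: the first bin of intra_luma_mpm_idx uses the single
    context model only when ISP mode is enabled for the current block;
    all other bins, and the first bin without ISP, are bypass-coded."""
    plan = []
    for i, b in enumerate(truncated_unary_bins(mpm_idx, max_idx)):
        mode = "ctx" if (i == 0 and isp_enabled) else "bypass"
        plan.append((b, mode))
    return plan
```

Only the first-bin decision consults the ISP state, which is why a single context model suffices.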
- FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 31, the method includes the following steps:
- Step 3101 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to intra sub-block prediction, determine the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction enabled by the current block.
- Step 3102 Encode the prediction mode index information according to the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction enabled by the current block, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
- Step 3103 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, if the target prediction mode enabled by the current block is in the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
- Step 3104 Encode the prediction mode index information according to the index information, in the MPM list, of the target prediction mode enabled by the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on one context model, and the other bits are obtained by bypass-based binary arithmetic coding.
- a flag bit is also required to indicate whether the target prediction mode of the conventional intra prediction enabled by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
- FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method shown in FIG. 31. As shown in FIG. 32, the method includes the following steps:
- Step 3201 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block based on intra sub-block prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 3202 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- Step 3203 Predict the current block according to the target prediction mode.
- Step 3204 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
- Step 3205 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
- the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
- Step 3206 Predict the current block according to the target prediction mode.
- when the reference line of the target prediction mode of the current block is the line adjacent to the current block, that is, when intra_luma_ref_idx is equal to 0, the prediction mode index information is encoded or decoded as follows: if the current block enables the intra sub-block prediction mode, bypass-based binary arithmetic encoding or decoding is performed on the first bit of the prediction mode index information; if the current block does not enable the intra sub-block prediction mode, context-based adaptive binary arithmetic encoding or decoding is performed on the first bit based on one context model. In this way, only one context model is needed to encode and decode the prediction mode index information, the number of context models it requires is reduced to 1, and the complexity and memory overhead of encoding and decoding are reduced.
- the context model used by the prediction mode index information is shown in Table 27 below:
- when intra_luma_ref_idx is equal to 0 and the current block does not enable ISP mode, one context model is used to encode and decode the first bit of intra_luma_mpm_idx; when the current block enables ISP mode, bypass is used to encode and decode the first bit of intra_luma_mpm_idx.
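- The inverted selection of Table 27 reduces to a single decision on the first bin; a sketch (illustrative names, covering only the adjacent-reference-line case described above):

```python
def first_bin_coding_mode(isp_enabled: bool, intra_luma_ref_idx: int = 0) -> str:
    """Table 27 scheme for the first bin of intra_luma_mpm_idx when
    intra_luma_ref_idx == 0: context-coded only when ISP mode is NOT
    enabled; bypass-coded when ISP mode is enabled."""
    if intra_luma_ref_idx != 0:
        raise ValueError("this scheme is described for the adjacent reference line")
    return "bypass" if isp_enabled else "ctx"
```

This is the mirror image of the Table 26 selection, so it likewise needs only one context model.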
- the syntax element transmitted between the encoding end and the decoding end may also include planar indication information.
- the planar indication information is used to indicate whether the target prediction mode of the current block is the planar prediction mode, and the planar indication information occupies one bit.
- the planar indication information is intra_luma_not_planar_flag.
- the planar indication information intra_luma_not_planar_flag uses context-based adaptive binary arithmetic coding, and the context selection depends on whether the current block enables the intra sub-block prediction mode; that is, encoding and decoding the planar indication information requires 2 different context models.
- FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 33, the method includes the following steps:
- Step 3301 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to intra sub-block prediction, encode the planar indication information according to whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic coding based on the first context model.
- Step 3302 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, encode the planar indication information according to whether the target prediction mode of the conventional intra prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, the planar indication information is obtained by context-based adaptive binary arithmetic coding based on the second context model, and the first context model is the same as the second context model.
- FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method shown in FIG. 33. As shown in FIG. 34, the method includes the following steps:
- Step 3401 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to intra sub-block prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the first context model.
- Step 3402 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction initiated by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3403 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- Step 3404 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the second context model; the first context model and the second context model are the same.
- Step 3405 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3406 When it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- the planar indication information intra_luma_not_planar_flag still uses context-based adaptive binary arithmetic encoding and decoding, but the context selection no longer depends on whether the current block enables the intra sub-block prediction mode: a fixed context model is used for encoding and decoding both when the current block enables the intra sub-block prediction mode and when it uses conventional intra prediction.
- FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 35, the method includes the following steps:
- Step 3501 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to intra sub-block prediction, encode the planar indication information according to whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic coding.
- Step 3502 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, encode the planar indication information according to whether the target prediction mode of the conventional intra prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic coding.
- FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method shown in FIG. 35. As shown in FIG. 36, the method includes the following steps:
- Step 3601 If the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to intra sub-block prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding.
- Step 3602 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction initiated by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3603 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- Step 3604 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding.
- Step 3605 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3606 When it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- the coding and decoding modes of the planar indication information are shown in Table 30:
- the planar indication information intra_luma_not_planar_flag no longer uses context-based adaptive binary arithmetic encoding and decoding; instead, bypass-based binary arithmetic encoding or decoding is used both when the current block enables the intra sub-block prediction mode and when it uses conventional intra prediction.
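- The three variants of coding intra_luma_not_planar_flag discussed above differ only in how the context is chosen; a compact sketch (context names such as `ctx0`/`ctx1` are illustrative, not the standard's variable names):

```python
def planar_flag_context(scheme: str, isp_enabled: bool) -> str:
    """Return which context model (or bypass) codes intra_luma_not_planar_flag
    under each scheme: the 2-context baseline, the shared single context
    of FIGS. 33-34, and the bypass variant of FIGS. 35-36."""
    if scheme == "two_contexts":   # baseline: selection depends on ISP mode
        return "ctx0" if isp_enabled else "ctx1"
    if scheme == "one_context":    # FIGS. 33-34: one fixed, shared context
        return "ctx0"
    if scheme == "bypass":         # FIGS. 35-36: no context model at all
        return "bypass"
    raise ValueError(f"unknown scheme: {scheme}")
```

The context-model count drops from 2 to 1 to 0 across the three schemes.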
- an encoding method is also provided, the encoding method is applied to the encoding end, and the encoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, if the target prediction mode enabled by the current block comes from the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
- Step 2 Encode the prediction mode index information according to the index information, in the MPM list, of the target prediction mode enabled by the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on one context model, and the other bits are obtained by bypass-based binary arithmetic coding.
- a flag bit is needed to indicate whether the target prediction mode of the conventional intra prediction enabled by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
- a decoding method is also provided.
- the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
- the decoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the other bits are obtained by bypass-based binary arithmetic decoding, and the second context model and the first context model are the same context model.
- Step 2 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
- the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
- Step 3 Predict the current block according to the target prediction mode.
- an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, if the target prediction mode enabled by the current block is in the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
- Step 2 According to the index information of the target prediction mode activated by the current block in the MPM list, the prediction mode index information is coded, where all the bits of the prediction mode index information are obtained based on bypassed binary arithmetic coding.
- a flag bit is also required to indicate whether the target prediction mode of the conventional intra prediction enabled by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
- a decoding method is also provided.
- the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
- the decoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 2 According to the prediction mode index information, determine the target prediction mode started by the current block from the MPM list.
- the prediction mode index information is used to indicate the index information of the target prediction mode started by the current block in the MPM list.
- Step 3 Predict the current block according to the target prediction mode.
- an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, encode the planar indication information according to whether the target prediction mode of the conventional intra prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic coding based on one context model.
- a decoding method is also provided.
- the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
- the decoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on one context model.
- Step 2 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3 When it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, encode the planar indication information according to whether the target prediction mode of the conventional intra prediction enabled by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic coding.
- a decoding method is also provided.
- the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
- the decoding method includes the following steps:
- Step 1 If the current block enables conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to conventional intra prediction, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding.
- Step 2 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
- Step 3 When it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
- the syntax element transmitted between the encoding end and the decoding end also includes chroma prediction mode index information, and the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the chroma prediction mode index information and its corresponding prediction mode are shown in Table 31 below.
- the chroma prediction mode index information occupies a maximum of 4 bits; if the current block supports the cross-component prediction mode and the current block does not enable the cross-component prediction mode, the chroma prediction mode index information occupies a maximum of 5 bits.
- the coding and decoding mode of the chroma prediction mode index information is shown in Table 32:
- the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model.
- the second bit of chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the second context model.
- the third bit and the fourth bit of the chroma prediction mode index information are obtained by context-based adaptive binary arithmetic decoding based on the third context model, and the three context models are all different. That is, the chroma prediction mode index information requires three context models, so the memory overhead is relatively large.
- FIG. 37 is a flowchart of an encoding method provided by an embodiment of the application. The method is applied to the encoding end. As shown in FIG. 37, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chroma block. The method includes:
- Step 3701 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the encoding end can decide the final target prediction mode through the rate-distortion cost, and then encode the index information to inform the decoding end which prediction mode has been selected.
- Step 3702 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the first context model, the second bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the second context model, and the first context model is different from the second context model; the third and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic coding.
- the encoding end stores a chroma prediction mode candidate list; the encoding end can decide the final target prediction mode through RDO, and then encode the index value to inform the decoding end which prediction mode is selected, that is, encode the chroma prediction mode index information.
- the chroma prediction mode includes the same prediction mode as the luminance and the cross-component prediction mode.
- the cross-component prediction modes include the mode in which the linear model coefficients are derived from the templates on both sides, the mode in which the linear model coefficients are derived from the upper template, and the mode in which the linear model coefficients are derived from the left template; the candidate list further includes the planar prediction mode, the DC prediction mode, the vertical prediction mode, and the horizontal prediction mode.
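As a rough illustration of the bin-to-engine assignment described above, the sketch below (in Python, with purely illustrative names; no real arithmetic-coding engine is modelled) maps each bin position of the chroma prediction mode index information to either one of the two context models or the bypass engine:

```python
def chroma_index_bin_coding(bin_idx):
    """Return ('context', model_id) or ('bypass', None) for a bin position."""
    if bin_idx == 0:
        return ('context', 1)   # first bin: first context model
    if bin_idx == 1:
        return ('context', 2)   # second bin: second context model
    return ('bypass', None)     # third and fourth bins: bypass coded

def required_context_models(num_bins=4):
    """Count the distinct context models the scheme needs."""
    models = {chroma_index_bin_coding(b)[1] for b in range(num_bins)} - {None}
    return len(models)
```

Here `required_context_models()` evaluates to 2, matching the reduction from three context models in Table 32 to two in this scheme.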
- FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 37. As shown in FIG. 38, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chrominance block. The method includes:
- Step 3801 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
- the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the second bit is obtained by context-based adaptive binary arithmetic decoding based on the second context model; the first context model is different from the second context model, and the third and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 3802 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
- Step 3803 Predict the current block according to the target prediction mode.
- the decoder can receive the coded stream, and then parse the chroma prediction mode related syntax from it.
- the coding bit overhead required for each prediction mode is different.
- the decoding end uniquely determines the chroma prediction mode of the current block by parsing the chroma prediction mode index information, thereby obtaining the prediction value of the current block for use in the subsequent reconstruction process.
- the current block supports the cross-component prediction mode, and when the current block starts the cross-component prediction mode, the third and fourth bits of the chroma prediction mode index information can be obtained by bypass-based binary arithmetic decoding; in this way, the number of context models required for the chroma prediction mode index information can be reduced to 2, which reduces the complexity of encoding and decoding and reduces the memory overhead.
- the coding and decoding modes of the chroma prediction mode index information are shown in Table 33 and Table 34 below:
- FIG. 39 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 39, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chroma block. The method includes:
- Step 3901 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the encoding end can decide the final target prediction mode through the rate-distortion cost, and then encode the index information to inform the decoding end which prediction mode has been selected.
- Step 3902 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on a context model.
- the second, third, and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic coding.
- FIG. 40 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 39.
- the current block supports the cross-component prediction mode
- the current block starts the cross-component prediction mode
- the current block is a chrominance block
- the method includes:
- Step 4001 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
- the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the second, third, and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding.
- Step 4002 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
- Step 4003 Predict the current block according to the target prediction mode.
- the current block supports the cross-component prediction mode, and when the current block starts the cross-component prediction mode, the first bit of the chroma prediction mode index information uses one context model, and the second, third, and fourth bits all adopt a bypass-based binary arithmetic coding and decoding method.
- in this way, the number of context models required for the chroma prediction mode index information can be reduced to 1, reducing the complexity of encoding and decoding and reducing the memory overhead.
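The context-model budgets of the three schemes discussed so far can be summarised in a small sketch (Python; the scheme names and per-bin labels are illustrative, not codec syntax):

```python
# Per-bin assignment for the 4 bins of the chroma prediction mode index:
# Table 32 baseline uses 3 context models, the FIG. 37 scheme uses 2,
# and the FIG. 39 scheme uses 1 with the remaining bins bypass-coded.
SCHEMES = {
    'table_32': ['ctx1', 'ctx2', 'ctx3', 'ctx3'],
    'fig_37':   ['ctx1', 'ctx2', 'bypass', 'bypass'],
    'fig_39':   ['ctx1', 'bypass', 'bypass', 'bypass'],
}

def context_model_count(scheme):
    """Number of distinct context models a scheme needs."""
    return len({b for b in SCHEMES[scheme] if b != 'bypass'})
```

Evaluating `context_model_count` on the three schemes gives 3, 2, and 1 respectively, which is the progressive reduction the text describes.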
- the coding and decoding modes of the chroma prediction mode index information are shown in Table 35 and Table 36 below:
- FIG. 41 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 41, the current block supports the cross-component prediction mode, and when the current block is a chrominance block, the method includes:
- Step 4101 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the encoding end can decide the final target prediction mode through the rate-distortion cost, and then encode the index information to inform the decoding end which prediction mode has been selected.
- Step 4102 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the third cross-component prediction mode
- the target prediction mode is the planar prediction mode
- the target prediction mode is the vertical prediction mode
- the target prediction mode is the horizontal prediction mode
- the target prediction mode is the DC prediction mode.
- the chroma prediction mode index information and its corresponding prediction mode are shown in Table 37 below:
- the chroma prediction mode index information indicates the cross-component prediction mode; in this case, the chroma prediction mode index information occupies at most 3 bits, reducing bit overhead and thereby reducing memory overhead.
- the chroma prediction mode index information indicates conventional intra prediction; in this case, the chroma prediction mode index information occupies at most 6 bits.
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the third cross-component prediction mode
- the target prediction mode is the planar prediction mode
- the target prediction mode is the vertical prediction mode
- the target prediction mode is the horizontal prediction mode
- the target prediction mode is the DC prediction mode.
- Step 4103 Predict the current block according to the target prediction mode.
- the chroma prediction mode index information and its corresponding prediction mode are shown in Table 38 below:
- the chroma prediction mode index information indicates the cross-component prediction mode; in this case, the chroma prediction mode index information occupies at most 3 bits, reducing bit overhead and thereby reducing memory overhead.
- the chroma prediction mode index information indicates conventional intra prediction; in this case, the chroma prediction mode index information occupies a maximum of 7 bits.
- FIG. 42 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 41. As shown in FIG. 42, the current block supports the cross-component prediction mode, and when the current block is a chrominance block, the method includes:
- Step 4201 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
- Step 4202 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the third cross-component prediction mode
- the target prediction mode is the planar prediction mode
- the target prediction mode is the vertical prediction mode
- the target prediction mode is the horizontal prediction mode
- the target prediction mode is the DC prediction mode.
- the target prediction mode is the first cross-component prediction mode
- the target prediction mode is the second cross-component prediction mode
- the target prediction mode is the third cross-component prediction mode
- the target prediction mode is the planar prediction mode
- the target prediction mode is the vertical prediction mode
- the target prediction mode is the horizontal prediction mode
- the target prediction mode is the DC prediction mode.
- Step 4203 Predict the current block according to the target prediction mode.
- the bit overhead of the chroma prediction mode index information can be reduced, thereby reducing the memory overhead.
- Fig. 43 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in Fig. 43, the method includes:
- Step 4301 When the luminance and chrominance of the current block share a division tree, if the size of the luminance block corresponding to the current block is 64*64 and the size of the chrominance block corresponding to the current block is 32*32, the current block does not support the cross-component prediction mode.
- the embodiment of the present application can reduce the dependence between luminance and chrominance in the CCLM mode, and avoid the chrominance block having to wait for the reconstruction values of a 64*64 luminance block.
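A minimal sketch of this restriction, assuming 4:2:0 sampling so that a 64*64 luminance block corresponds to a 32*32 chrominance block (the function name and parameters are illustrative, not patent terminology):

```python
def supports_cross_component(shared_tree, luma_w, luma_h, chroma_w, chroma_h):
    """Return whether the current block may use the cross-component prediction mode.

    Under a shared luma/chroma division tree, the 64*64-luma / 32*32-chroma
    pairing is excluded so the chroma block never has to wait for the full
    64*64 luma reconstruction.
    """
    if shared_tree and (luma_w, luma_h) == (64, 64) and (chroma_w, chroma_h) == (32, 32):
        return False
    return True
```

For example, `supports_cross_component(True, 64, 64, 32, 32)` is `False`, while smaller blocks or separate trees are unaffected by this rule.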
- the syntax elements transmitted between the encoding end and the decoding end also include ALF indication information, and the ALF indication information is used to indicate whether ALF is enabled for the current block.
- the ALF indication information is alf_ctb_flag.
- the ALF indication information is subjected to context-based adaptive binary arithmetic coding or decoding based on the target context model.
- the target context model is a context model selected from 3 different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- if the current block supports ALF and the current block is a CB chrominance block, context-based adaptive binary arithmetic coding or decoding is performed on the ALF indication information based on the target context model.
- the target context model is a context model selected from 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- if the current block supports ALF and the current block is a CR chrominance block, context-based adaptive binary arithmetic coding or decoding is performed on the ALF indication information based on the target context model.
- the target context model is a context model selected from 3 different context models included in the third context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. The above nine context models are all different.
- FIG. 44 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 44, the method includes the following steps:
- Step 4401 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
- Step 4402 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from 3 different context models included in the second context model set according to whether ALF is enabled on the upper block of the current block and whether ALF is enabled on the left block of the current block; the three context models included in the second context model set are different from the three context models included in the first context model set.
- chroma blocks include CB chroma blocks and CR chroma blocks.
- the second context model set includes a fourth context model, a fifth context model, and a sixth context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the fourth context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the fifth context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the sixth context model.
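The neighbour-based selection rule above can be sketched as follows (Python; the model numbering 1-3 for the luminance set and 4-6 for the chrominance set follows the text, while the function name is illustrative):

```python
def select_alf_context(above_alf, left_alf, is_luma):
    """Pick the target context model for the ALF indication information.

    The context depends only on how many of the above/left neighbour
    blocks enable ALF: both -> first model of the set, exactly one ->
    second model, neither -> third model.
    """
    enabled = int(above_alf) + int(left_alf)
    offset = {2: 0, 1: 1, 0: 2}[enabled]
    base = 1 if is_luma else 4   # luma set: models 1-3, chroma set: models 4-6
    return base + offset
```

For instance, a luminance block with both neighbours ALF-enabled uses model 1, and a chrominance block with neither neighbour enabled uses model 6.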
- the encoding end can decide through RDO whether to enable ALF for the current block, that is, whether to use adaptive loop filtering, and encode the ALF indication information in the code stream to inform the decoding end whether ALF is enabled, thereby informing the decoding end whether to perform adaptive loop filtering. Moreover, if ALF is enabled, ALF-related syntax elements need to be encoded, and the encoding end also performs filtering.
- FIG. 45 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 44. As shown in FIG. 45, the method includes the following steps :
- Step 4501 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- Step 4502 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from 3 different context models included in the second context model set according to whether ALF is enabled on the upper block of the current block and whether ALF is enabled on the left block of the current block; the 3 context models included in the second context model set are different from the 3 context models included in the first context model set.
- chroma blocks include CB chroma blocks and CR chroma blocks.
- the decoding end can decode the ALF indication information to determine whether the current block enables adaptive loop filtering. If the ALF indication information indicates that ALF is enabled for the current block, the decoding end can also continue to decode ALF-related syntax elements to perform adaptive loop filtering on the current block to obtain filtered reconstructed pixels.
- the encoding and decoding mode of ALF indication information is shown in Table 40:
- the CB chrominance block and the CR chrominance block can share 3 different context models. In this way, the number of context models used for the ALF indication information is 6, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
- FIG. 46 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 46, the method includes the following steps:
- Step 4601 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
- Step 4602 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from 3 different context models included in the second context model set according to whether ALF is enabled on the upper block of the current block and whether ALF is enabled on the left block of the current block; the three context models included in the second context model set are the same as the three context models included in the first context model set.
- chroma blocks include CB chroma blocks and CR chroma blocks.
- the second context model set includes a fourth context model, a fifth context model, and a sixth context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the fourth context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the fifth context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the sixth context model.
- FIG. 47 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 46. As shown in FIG. 47, the method includes the following steps :
- Step 4701 If the current block supports adaptive loop filter ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- Step 4702 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from 3 different context models included in the second context model set according to whether ALF is enabled on the upper block of the current block and whether ALF is enabled on the left block of the current block; the 3 context models included in the second context model set are the same as the 3 context models included in the first context model set.
- chroma blocks include CB chroma blocks and CR chroma blocks.
- the luminance block, the CB chrominance block, and the CR chrominance block can all share 3 different context models. In this way, the number of context models used for the ALF indication information is 3, reducing the complexity of encoding and decoding and reducing the memory overhead.
- FIG. 48 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 48, the method includes the following steps:
- Step 4801 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
- Step 4802 If the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
- Step 4803 If the current block supports ALF and the current block is a CR chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the third context model.
- the first context model, the second context model, and the third context model are different context models.
- FIG. 49 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 48. As shown in FIG. 49, the method includes the following steps :
- Step 4901 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, where the ALF indication information is used to indicate whether the current block starts ALF.
- Step 4902 If the current block supports ALF and the current block is a CB chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
- Step 4903 If the current block supports ALF and the current block is a CR chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the third context model.
- the first context model, the second context model, and the third context model are different context models.
- luminance blocks share one context model
- CB chrominance blocks share one context model
- CR chrominance blocks share one context model.
- FIG. 50 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 50, the method includes the following steps:
- Step 5001 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
- Step 5002 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
- chroma blocks include CB chroma blocks and CR chroma blocks.
- FIG. 51 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 50. As shown in FIG. 51, the method includes the following steps :
- Step 5101 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, where the ALF indication information is used to indicate whether the current block starts ALF.
- Step 5102 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
- the first context model is different from the second context model.
- luminance blocks share one context model
- CB chrominance blocks and CR chrominance blocks share the same context model.
- the number of context models used by the ALF indication information can be reduced to 2.
- the complexity of encoding and decoding is reduced, and the memory overhead is reduced.
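The two-model scheme of FIG. 50 and FIG. 51 amounts to a simple component-to-model mapping, sketched below (Python; the component labels and function names are illustrative):

```python
def alf_flag_context(component):
    """Map a colour component to the context model of its ALF indication flag."""
    if component == 'luma':
        return 1              # first context model, used by all luminance blocks
    if component in ('cb', 'cr'):
        return 2              # second context model, shared by CB and CR blocks
    raise ValueError(f'unknown component: {component}')

def model_count():
    """Total distinct context models the ALF indication information needs."""
    return len({alf_flag_context(c) for c in ('luma', 'cb', 'cr')})
```

Since CB and CR resolve to the same model, `model_count()` evaluates to 2, matching the reduction the text describes.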
- FIG. 52 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 52, the method includes the following steps:
- Step 5201 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- The first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
- Step 5202 If the current block supports ALF and the current block is a CB chroma block, before performing filtering processing on the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
- Step 5203 If the current block supports ALF and the current block is a CR chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
- The context models included in the first context model set, the first context model, and the second context model are all different context models.
- FIG. 53 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 52. As shown in FIG. 53, the method includes the following steps:
- Step 5301 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
- The first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
- Step 5302 If the current block supports ALF and the current block is a CB chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model.
- Step 5303 If the current block supports ALF and the current block is a CR chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
- The context models included in the first context model set, the first context model, and the second context model are all different context models.
- the coding and decoding mode of the ALF indication information is shown in Table 44:
- the coding and decoding mode of the ALF indication information is shown in Table 45:
- In this way, the luma block needs three different context models, the CB chroma blocks share one context model, and the CR chroma blocks share another context model, so the number of context models used by the ALF indication information will be 5, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
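The five-model selection above (three luma models chosen by the neighbouring blocks' ALF flags, plus one model each for CB and CR) can be sketched as follows. The model indices are illustrative assumptions; only the selection structure reflects the description.

```python
def alf_flag_context(component: str, above_alf_on: bool, left_alf_on: bool) -> int:
    """Select one of 5 context model indices for the ALF indication flag."""
    if component == 'Y':
        # Both neighbours start ALF -> first model (0), exactly one
        # neighbour -> second model (1), neither -> third model (2).
        return 2 - (int(above_alf_on) + int(left_alf_on))
    if component == 'CB':
        return 3  # CB chroma blocks share their own model
    if component == 'CR':
        return 4  # CR chroma blocks share a different model
    raise ValueError(f"unknown component: {component}")
```

Note that the neighbour flags matter only for luma; for chroma the component alone determines the model.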
- FIG. 54 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 54, the method includes the following steps:
- Step 5401 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
- Step 5402 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
- The second context model and the first context model are the same context model.
- FIG. 55 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 54. As shown in FIG. 55, the method includes the following steps:
- Step 5501 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, and the ALF indication information Used to indicate whether the current block starts ALF.
- Step 5502 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
- The second context model and the first context model are the same context model.
- the luminance block, the CB chrominance block, and the CR chrominance block share one context model.
- In this way, the number of context models used for the ALF indication information will be 1, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
- FIG. 56 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 56, the method includes the following steps:
- Step 5601 If the current block supports ALF and the current block is a luminance block, perform bypass-based binary arithmetic coding on the ALF indication information before performing filtering processing on the current block according to the ALF mode.
- Step 5602 If the current block supports ALF and the current block is a chrominance block, perform bypass-based binary arithmetic coding on the ALF indication information before filtering the current block according to the ALF mode.
- FIG. 57 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 56. As shown in FIG. 57, the method includes the following steps:
- Step 5701 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode, where the ALF indication information is used to indicate whether the current block starts ALF.
- Step 5702 If the current block supports ALF and the current block is a chrominance block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode.
- In this way, a bypass-based binary arithmetic coding and decoding method is used to encode or decode the ALF indication information, so the number of context models used by the ALF indication information can be reduced to 0, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- FIG. 58 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 58, the method includes the following steps:
- Step 5801 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on a context model, where the ALF indication information is used to indicate whether ALF is enabled for the current block.
- Step 5802 If the current block supports ALF, the current block enables the adaptive loop filter ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform bypass-based binary arithmetic coding on the ALF indication information.
- The chroma blocks include CB chroma blocks and CR chroma blocks.
- FIG. 59 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 58. As shown in FIG. 59, the method includes the following steps:
- Step 5901 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model, where the ALF indication information is used to indicate whether ALF is enabled for the current block.
- Step 5902 If the current block supports ALF, the current block enables the adaptive loop filter ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform bypass-based binary arithmetic decoding on the ALF indication information.
- The chroma blocks include CB chroma blocks and CR chroma blocks.
- In this way, the luma block uses one context model, and the CB chroma block and the CR chroma block both use the bypass-based binary arithmetic coding and decoding method for encoding or decoding, so that the number of context models used by the ALF indication information is reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
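The hybrid scheme above can be sketched as a dispatch between context-coded and bypass-coded bins. `encode_context_bin` and `encode_bypass_bin` are hypothetical arithmetic-coder primitives, shown here only to illustrate the selection logic, not the API of any real codec.

```python
def encode_alf_flag(component: str, flag: bool, coder) -> None:
    """Encode the ALF indication flag: luma uses a single context model,
    while CB/CR chroma use bypass coding (no context model at all)."""
    if component == 'Y':
        coder.encode_context_bin(flag, ctx=0)  # the one shared luma model
    elif component in ('CB', 'CR'):
        coder.encode_bypass_bin(flag)  # equiprobable bin, no model update
    else:
        raise ValueError(f"unknown component: {component}")
```

Bypass bins carry a fixed 1/2 probability and update no state, which is why only the single luma model contributes to memory overhead.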
- FIG. 60 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 60, the method includes the following steps:
- Step 6001 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic coding on the ALF indication information before filtering the current block according to the ALF mode, where the ALF indication information is used to indicate whether the current block starts ALF.
- Step 6002 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on a context model.
- The chroma blocks include CB chroma blocks and CR chroma blocks.
- FIG. 61 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 60. As shown in FIG. 61, the method includes the following steps:
- Step 6101 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode, where the ALF indication information is used to indicate whether the current block starts ALF.
- Step 6102 If the current block supports ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model.
- The chroma blocks include CB chroma blocks and CR chroma blocks.
- In this way, the luminance block adopts a bypass-based binary arithmetic coding and decoding method for encoding or decoding, and the CB chrominance block and the CR chrominance block share one context model, so the number of context models used by the ALF indication information can be reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- the syntax elements transmitted between the encoding end and the decoding end also include MIP indication information.
- the MIP indication information is used to indicate whether the current block starts the matrix-based intra prediction mode.
- the MIP indication information is Intra_MIP_flag.
- Context-based adaptive binary arithmetic decoding can be performed on the MIP indication information based on the target context model.
- The target context model is a context model selected from 4 different context models according to whether the upper block of the current block enables the matrix-based intra prediction mode, whether the left block of the current block enables the matrix-based intra prediction mode, and whether the current block meets the preset size condition.
- the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
- the preset size condition may also be other conditions, which are not limited in the embodiment of the present application.
- The above four different context models include a first context model, a second context model, a third context model, and a fourth context model. If the upper block of the current block enables the matrix-based intra prediction mode, the left block of the current block enables the matrix-based intra prediction mode, and the current block does not meet the preset size condition, the target context model is the first context model; if exactly one of the upper block and the left block of the current block enables the matrix-based intra prediction mode and the current block does not meet the preset size condition, the target context model is the second context model; if neither the upper block nor the left block of the current block enables the matrix-based intra prediction mode and the current block does not meet the preset size condition, the target context model is the third context model; if the current block meets the preset size condition, the target context model is the fourth context model.
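The four-case selection can be sketched as below. Two points are assumptions taken from the surrounding description rather than a normative source: the preset size condition is taken as width > 2×height or height > 2×width, and the fourth context model is taken to apply whenever the size condition is met. Model indices are illustrative.

```python
def mip_flag_context(above_mip_on: bool, left_mip_on: bool,
                     width: int, height: int) -> int:
    """Select one of 4 context model indices for the MIP indication flag."""
    if width > 2 * height or height > 2 * width:
        return 3  # fourth model (assumed: size condition met)
    # Size condition not met: both neighbours use MIP -> first model (0),
    # exactly one neighbour -> second model (1), neither -> third model (2).
    return 2 - (int(above_mip_on) + int(left_mip_on))
```

This is the baseline scheme whose 4-model memory cost the later embodiments reduce.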
- the MIP indication information needs to use 4 different context models, and the memory overhead is relatively large.
- FIG. 62 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 62, the method includes:
- Step 6201 If the width and height dimensions of the current block are 32*32, the current block does not support the matrix-based intra prediction mode.
- the current block is a luminance block or a chrominance block.
- If the current block is a luminance block and the width and height dimensions of the current block are 32*32, the current block does not support the matrix-based intra prediction mode.
- When the current block is a large-size block, the current block does not support the matrix-based intra prediction mode, that is, the matrix-based intra prediction mode cannot be enabled for the current block. In this way, the computational complexity can be reduced.
- FIG. 63 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 63, if the current block supports a matrix-based intra prediction mode, the method includes:
- Step 6301 According to whether the matrix-based intra prediction mode is enabled for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the target context model, where the target context model is a context model selected from 3 different context models according to whether the upper block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode.
- The above three different context models include a first context model, a second context model, and a third context model. If the upper block of the current block enables the matrix-based intra prediction mode and the left block of the current block enables the matrix-based intra prediction mode, the target context model is the first context model; if the upper block of the current block enables the matrix-based intra prediction mode and the left block of the current block does not, or if the upper block of the current block does not enable the matrix-based intra prediction mode and the left block of the current block does, the target context model is the second context model; if neither the upper block nor the left block of the current block enables the matrix-based intra prediction mode, the target context model is the third context model.
- After the encoder determines that the current block meets the conditions of matrix-based intra prediction, it can use RDO to decide whether the current block enables the MIP mode, that is, whether to use the matrix-based intra prediction method, and encode the MIP indication information in the encoded stream to tell the decoder whether to enable the MIP mode.
- The above-mentioned MIP indication information is encoded according to the specific situation, and if the MIP mode is enabled for the current block, other syntax elements related to MIP also need to be encoded.
- FIG. 64 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 63. As shown in FIG. 64, if the current block supports the matrix-based intra prediction mode, the method includes:
- Step 6401 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the target context model, where the target context model is a context model selected from 3 different context models according to whether the upper block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode.
- The above three different context models include a first context model, a second context model, and a third context model. If the upper block of the current block enables the matrix-based intra prediction mode and the left block of the current block enables the matrix-based intra prediction mode, the target context model is the first context model; if the upper block of the current block enables the matrix-based intra prediction mode and the left block of the current block does not, or if the upper block of the current block does not enable the matrix-based intra prediction mode and the left block of the current block does, the target context model is the second context model; if neither the upper block nor the left block of the current block enables the matrix-based intra prediction mode, the target context model is the third context model.
- Step 6402 If it is determined according to the MIP indication information that the current block starts the matrix-based intra prediction mode, then the matrix-based intra prediction mode is used to predict the current block.
- the decoding end receives the encoded stream, and if it is determined that the current block meets the parsing condition, the MIP indication information can be parsed to determine whether the current block starts the MIP mode.
- The parsing conditions include: the current block is a luminance block, and the size of the current block meets certain conditions. Of course, the parsing conditions are not limited to the above conditions, and may also include other conditions.
- After parsing the MIP indication information, the decoder can determine whether the prediction mode of the current block is the matrix-based intra prediction mode. If it is the matrix-based intra prediction mode, the decoder can continue to parse other syntax elements related to this mode to obtain its prediction mode information, and then obtain the predicted value.
- The size condition of the current block may not be considered, and the target context model is a context model selected from 3 different context models only according to whether the upper block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode. In this way, the number of context models required for the MIP indication information can be reduced to 3, thereby reducing the complexity of coding and decoding and reducing the memory overhead.
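The simplified selection above, which drops the size condition and keeps only the two neighbour flags, can be sketched as follows (indices illustrative):

```python
def mip_flag_context(above_mip_on: bool, left_mip_on: bool) -> int:
    """Select one of 3 context models by the neighbours' MIP flags:
    both neighbours enable MIP -> first model (0), exactly one -> second
    model (1), neither -> third model (2)."""
    return 2 - (int(above_mip_on) + int(left_mip_on))
```

Compared with the four-model baseline, the size-condition branch disappears entirely, which is what brings the model count down to 3.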
- FIG. 65 is a flowchart of an encoding and decoding method according to an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 65, if the current block supports a matrix-based intra prediction mode, the method includes:
- Step 6501 According to whether the matrix-based intra prediction mode is enabled for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the target context model, where the target context model is one of 2 different context models, selected according to whether the current block meets the preset size condition.
- the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
- the above two different context models include a first context model and a second context model. If the size of the current block meets the preset size condition, the target context model is the first context model, and if the size of the current block does not meet the preset size condition, the target context model is the second context model.
- FIG. 66 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 65. As shown in FIG. 66, if the current block supports the matrix-based intra prediction mode, the method includes:
- Step 6601 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the target context model, where the target context model is one of 2 different context models, selected according to whether the current block meets the preset size condition.
- the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
- the above two different context models include a first context model and a second context model. If the size of the current block meets the preset size condition, the target context model is the first context model, and if the size of the current block does not meet the preset size condition, the target context model is the second context model.
- Step 6602 If it is determined that the current block starts the matrix-based intra prediction mode according to the MIP indication information, then the matrix-based intra prediction mode is used to predict the current block.
- In this way, the context model is selected only according to the size condition, so the number of context models required for the MIP indication information can be reduced to 2, thereby reducing the complexity of coding and decoding and reducing the memory overhead.
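The two-model, size-only selection can be sketched as below; the preset size condition is taken from the description (width greater than twice the height, or height greater than twice the width), and the indices are illustrative.

```python
def mip_flag_context(width: int, height: int) -> int:
    """Select between 2 context models using only the preset size
    condition: first model (0) if met, otherwise second model (1)."""
    size_condition = width > 2 * height or height > 2 * width
    return 0 if size_condition else 1
```

Neighbour MIP flags are deliberately ignored here, which removes the need to store or fetch any neighbouring-block state when coding this flag.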
- FIG. 67 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 67, if the current block supports the matrix-based intra prediction mode, the method includes:
- Step 6701 According to whether the matrix-based intra prediction mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the same context model.
- FIG. 68 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 67. As shown in FIG. 68, if the current block supports the matrix-based intra prediction mode, the method includes:
- Step 6801 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the same context model.
- Step 6802 If it is determined that the current block starts the matrix-based intra prediction mode according to the MIP indication information, then the matrix-based intra prediction mode is used to predict the current block.
- In this embodiment, for the selection of the context model of the MIP indication information, whether the upper block of the current block enables the matrix-based intra prediction mode, whether the left block of the current block enables the matrix-based intra prediction mode, and the size conditions are not considered. The MIP indication information is subjected to context-based adaptive binary arithmetic coding or decoding based on the same context model. In this way, the number of context models required by the MIP indication information can be reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- FIG. 69 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 69, if the current block supports a matrix-based intra prediction mode, the method includes:
- Step 6901 According to whether the matrix-based intra prediction mode is activated for the current block, perform bypass-based binary arithmetic coding on the MIP indication information.
- FIG. 70 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 69. As shown in FIG. 70, if the current block supports the matrix-based intra prediction mode, the method includes:
- Step 7001 Before predicting the current block according to the matrix-based intra prediction mode, perform bypass-based binary arithmetic decoding on the MIP indication information.
- Step 7002 If it is determined according to the MIP indication information that the current block starts the matrix-based intra prediction mode, then the matrix-based intra prediction mode is used to predict the current block.
- In this embodiment, whether the upper block of the current block enables the matrix-based intra prediction mode, whether the left block of the current block enables the matrix-based intra prediction mode, and the size conditions are not considered. Bypass-based binary arithmetic coding or decoding is performed on the MIP indication information, that is, context-based adaptive binary arithmetic coding or decoding is not used. In this way, the number of context models required by the MIP indication information is reduced to 0, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
- The BDPCM technology lacks an SPS-level syntax to turn the BDPCM mode on or off, and also lacks an SPS-level syntax to control the maximum size of a coding block for which the BDPCM mode can be enabled, which makes it less flexible.
- FIG. 71 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 71, if the current block supports the BDPCM mode, the method includes the following steps:
- Step 7101 Before performing BDPCM encoding on the current block, encode the first BDPCM indication information, where the first BDPCM indication information is used to indicate whether the current processing unit supports the BDPCM mode.
- the first BDPCM indication information may exist in a sequence parameter set, an image parameter level, a slice level, or a tile level.
- the first BDPCM indication information exists in the sequence parameter set, that is, the first BDPCM indication information is an SPS-level syntax.
- the encoding end may also encode range indication information, where the range indication information is used to indicate the range of the processing unit supporting the BDPCM mode.
- the range indication information can exist in the sequence parameter set, image parameter level, slice level or tile level.
- FIG. 72 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 71. As shown in FIG. 72, if the current block supports the BDPCM mode, the method includes the following steps:
- Step 7201 Before performing BDPCM decoding on the current block, decode the first BDPCM indication information, where the first BDPCM indication information is used to indicate whether the current processing unit supports the BDPCM mode.
- Step 7202 According to the first BDPCM instruction information, decode the current processing unit.
- if the first BDPCM indication information indicates that the current processing unit supports the BDPCM mode, the current processing unit is processed based on the BDPCM mode.
- the first BDPCM indication information may exist in a sequence parameter set, image parameter level, slice level, or tile level.
- the first BDPCM indication information exists in the sequence parameter set, that is, the first BDPCM indication information is an SPS-level syntax.
- the decoding end may also decode the range indication information, where the range indication information is used to indicate the range of the processing unit supporting the BDPCM mode.
- the range indication information can exist in the sequence parameter set, image parameter level, slice level or tile level.
- a syntax is added to enable or disable the BDPCM mode, which improves the flexibility of the coding and decoding process.
- a syntax is added to indicate the range of processing units that support the BDPCM mode.
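The SPS-level gating described above can be sketched as follows. This is an illustrative sketch, not the normative syntax: the flag names `sps_bdpcm_enabled_flag` and `intra_bdpcm_flag` and the dictionary-based "bitstream" are assumptions for illustration only.

```python
def parse_block(bitstream_flags):
    """Illustrative sketch: the decoder parses block-level BDPCM syntax
    only when the sequence-level indication says the mode is supported."""
    # First BDPCM indication information (assumed SPS-level flag).
    sps_bdpcm_enabled = bitstream_flags.get("sps_bdpcm_enabled_flag", 0)
    if sps_bdpcm_enabled:
        # Only now is the per-block enable flag present in the stream.
        block_uses_bdpcm = bitstream_flags.get("intra_bdpcm_flag", 0)
    else:
        # Flag is absent; the mode is inferred to be off for all blocks.
        block_uses_bdpcm = 0
    return "BDPCM" if block_uses_bdpcm else "regular"
```

The point of the design is that one sequence-level flag removes all block-level BDPCM signaling at once, which is the flexibility improvement the embodiment describes.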
- Fig. 73 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 73, if the current block supports the BDPCM mode, the method includes the following steps:
- Step 7301 Before performing BDPCM processing on the current block, encode the second BDPCM indication information, where the second BDPCM indication information is used to indicate the size range of the processing unit that supports the BDPCM mode.
- the current processing unit may be at the sequence level, the image parameter level, or the block level.
- the current processing unit may be the current image block.
- the size range may be a size range smaller than 32*32.
- the second BDPCM indication information is used to indicate the maximum size of a processing unit that can support the BDPCM mode, that is, the maximum size of a processing unit that can use the BDPCM mode.
- the maximum size is 32*32.
- the second BDPCM indication information may exist in a sequence parameter set (SPS), an image parameter level, a slice level, or a tile level.
- SPS sequence parameter set
- the second BDPCM indication information exists in the sequence parameter set, that is, the second BDPCM indication information is a syntax added at the SPS level.
- FIG. 74 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 73. As shown in FIG. 74, if the current block supports the BDPCM mode, the method includes the following steps:
- Step 7401 Before performing BDPCM processing on the current block, decode the second BDPCM indication information, where the second BDPCM indication information is used to indicate the size range of the processing unit supporting the BDPCM mode.
- Step 7402 Based on the second BDPCM indication information and the size of the current block, determine whether the current block can perform BDPCM processing.
- if the size of the current block is within the size range, indicated by the second BDPCM indication information, of processing units that support the BDPCM mode, it is determined that the current block can perform BDPCM processing. If the size of the current block is not within that size range, it is determined that the current block cannot be processed by BDPCM.
- the second BDPCM indication information is used to indicate the maximum size of a processing unit that can support the BDPCM mode. If the size of the current block is less than or equal to the maximum size indicated by the second BDPCM indication information, it is determined that the current block can perform BDPCM processing; if the size of the current block is greater than the maximum size indicated by the second BDPCM indication information, it is determined that the current block cannot be processed by BDPCM.
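A minimal sketch of the size-eligibility check described above, assuming the second BDPCM indication information conveys a maximum width/height (e.g. 32, the example value given earlier):

```python
def bdpcm_allowed(block_w, block_h, max_size=32):
    """Return True if the block may be processed in BDPCM mode,
    i.e. both dimensions are within the signalled maximum size."""
    return block_w <= max_size and block_h <= max_size
```

For example, with the default maximum of 32, a 16*16 block is eligible while a 64*32 block is not.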
- the second BDPCM indication information may exist in a sequence parameter set (SPS), an image parameter level, a slice level, or a tile level.
- SPS sequence parameter set
- the second BDPCM indication information exists in the sequence parameter set, that is, the second BDPCM indication information is a syntax added at the SPS level.
- a syntax is added to control the size range in which the BDPCM mode can be used, which improves the flexibility of the coding and decoding process.
- the syntax elements transmitted between the encoding end and the decoding end may also include third BDPCM indication information and fourth BDPCM indication information.
- the third BDPCM indication information is used to indicate whether the current processing unit starts the BDPCM mode
- the fourth BDPCM indication information is used to indicate index information of the prediction direction of the BDPCM mode.
- the third BDPCM indication information is Intra_bdpcm_flag
- the fourth BDPCM indication information is Intra_bdpcm_dir_flag.
- the current block supports the BDPCM mode
- when it is determined to encode or decode the third BDPCM indication information, context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding is performed on the third BDPCM indication information based on one context model; when it is determined to encode or decode the fourth BDPCM indication information, context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding is performed on the fourth BDPCM indication information based on a different context model. That is, two context models are needed to encode and decode the third BDPCM indication information and the fourth BDPCM indication information, as shown in Table 46 below.
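The two-context arrangement described above (one dedicated model per syntax element) can be sketched with a toy adaptive model. The probability-update rule below is a deliberate simplification of a real CABAC state machine, used only to show why the two flags keep separate statistics:

```python
class ToyContext:
    """Very simplified adaptive context: tracks the probability of bin value 1."""
    def __init__(self, p_one=0.5):
        self.p_one = p_one

    def update(self, bin_val, rate=0.1):
        # Move the estimate toward the observed bin value.
        self.p_one += rate * (bin_val - self.p_one)

# One dedicated context model per BDPCM syntax element.
contexts = {
    "intra_bdpcm_flag": ToyContext(),      # third BDPCM indication information
    "intra_bdpcm_dir_flag": ToyContext(),  # fourth BDPCM indication information
}

# Feeding each model different statistics leaves them with different states,
# which is the benefit of not sharing one context between the two flags.
for _ in range(5):
    contexts["intra_bdpcm_flag"].update(1)
    contexts["intra_bdpcm_dir_flag"].update(0)
```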
- FIG. 75 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 75, if the current block supports the BDPCM mode, the method includes the following steps:
- Step 7501 Before performing BDPCM coding on the current block, according to whether the BDPCM mode is activated for the current block, based on a context model, perform context-based adaptive binary arithmetic coding on the third BDPCM indication information.
- RDO (rate-distortion optimization) can be used to decide whether to enable the BDPCM mode, that is, whether to use the differential PCM coding method of the quantized residual, and the third BDPCM indication information is encoded into the bitstream to indicate whether the current block enables the BDPCM mode.
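The RDO decision mentioned above can be sketched as a plain cost comparison. The additive D + lambda*R cost form is the standard rate-distortion shape and is assumed here for illustration; it is not claimed to be the encoder's exact metric:

```python
def choose_bdpcm(dist_bdpcm, bits_bdpcm, dist_regular, bits_regular, lam=1.0):
    """Pick BDPCM iff its rate-distortion cost D + lambda*R is lower
    than the cost of the competing regular coding mode."""
    cost_bdpcm = dist_bdpcm + lam * bits_bdpcm
    cost_regular = dist_regular + lam * bits_regular
    return cost_bdpcm < cost_regular
```

Whichever mode wins, the third BDPCM indication information recording that decision is then written into the bitstream.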
Description
Mode number | Intra prediction mode |
0 | Intra_Planar |
1 | Intra_DC |
2...34 | Intra_angular2…Intra_angular34 |
First bin | Second bin |
MultiRefLineIdx(0), i.e. the first context model | MultiRefLineIdx(1), i.e. the second context model |
First bin | Second bin |
MultiRefLineIdx(0), i.e. the first context model | No context model; coded in bypass mode |
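The two tables above describe the bins of MultiRefLineIdx over three candidate reference lines (indices 0, 1, 2): the first bin is always context-coded, while the second bin is either coded with a second context model or in bypass mode. A sketch of the underlying truncated-unary binarization, with the exact bin strings assumed for illustration:

```python
def binarize_ref_line_idx(idx):
    """Truncated-unary binarization for 3 candidate reference lines:
    index 0 -> '0', index 1 -> '10', index 2 -> '11' (at most 2 bins)."""
    if idx == 0:
        return "0"
    # A leading '1' means "not line 0"; the second bin picks line 1 or 2.
    return "1" + ("0" if idx == 1 else "1")
```

Because the string never exceeds two bins, at most one context-coded bin and one second bin (context-coded or bypass) are needed per index.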
Claims (33)
- A coding/decoding method, characterized in that the method comprises: when it is determined to encode or decode first ISP indication information, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the first ISP indication information based on one context model, the first ISP indication information being used to indicate whether the intra sub-partition prediction mode is enabled; and when it is determined to encode or decode second ISP indication information, performing bypass-based binary arithmetic coding or decoding on the second ISP indication information, the second ISP indication information being used to indicate the sub-block partitioning manner of the intra sub-partition prediction mode.
- A coding/decoding method, characterized in that the method comprises: if the width and height of the current block are M*N, with M less than 64 and N less than 64, the current block does not support the multiple reference line prediction mode.
- The method according to claim 2, characterized in that if the width and height of the current block are 4*4, the current block does not support the multiple reference line prediction mode.
- A coding/decoding method, characterized in that, if the current block supports the multiple reference line prediction mode and the number of candidate reference lines corresponding to the multiple reference line prediction mode is 3, the reference line indication information corresponding to the multiple reference line prediction mode occupies at most 2 bits, the reference line indication information being used to indicate index information of the target reference line used when predicting the current block based on the multiple reference line prediction mode, the method comprises: performing context-based adaptive binary arithmetic coding or decoding on the 1st bin of the reference line indication information based on one context model; and when the 2nd bin of the reference line indication information needs to be encoded or decoded, performing bypass-based binary arithmetic coding or decoding on the 2nd bin of the reference line indication information.
- A coding/decoding method, characterized in that, if the current block supports the multiple reference line prediction mode and the number of candidate reference lines corresponding to the multiple reference line prediction mode is 3, wherein the candidate reference line with index information 0 is line 0, line 0 being the line adjacent to the boundary of the current block; the candidate reference line with index information 1 is line 1, line 1 being the line second closest to the boundary of the current block; and the candidate reference line with index information 2 is line 2, line 2 being the line adjacent to line 1; the method comprises: when predicting the current block according to the multiple reference line prediction mode, predicting the current block according to a target reference line, wherein the target reference line is determined according to reference line indication information; if the index information indicated by the reference line indication information is 0, the target reference line is line 0; if the index information indicated by the reference line indication information is 1, the target reference line is line 1; and if the index information indicated by the reference line indication information is 2, the target reference line is line 2.
- A decoding method, characterized in that, if the current block supports the multiple reference line prediction mode, the method comprises: before predicting the current block according to the multiple reference line prediction mode, decoding line number indication information, the line number indication information being used to indicate the number of candidate reference lines corresponding to the multiple reference line prediction mode; determining the number of candidate reference lines corresponding to the multiple reference line prediction mode according to the line number indication information; determining a target reference line according to the number of candidate reference lines corresponding to the multiple reference line prediction mode and reference line indication information, the reference line indication information being used to indicate index information of the target reference line used when predicting the current block based on the multiple reference line prediction mode; and predicting the current block according to the target reference line.
- The method according to claim 6, characterized in that the line number indication information exists in a sequence parameter set.
- A coding/decoding method, characterized in that the method comprises: if the current block enables the affine prediction mode or enables a prediction mode other than the affine prediction mode, when encoding or decoding the motion vector difference of the current block, if the current block supports the AMVR mode, then when it is determined to encode or decode first AMVR indication information, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the first AMVR indication information based on a first context model, the first AMVR indication information being used to indicate whether the AMVR mode is enabled; and when the first AMVR indication information indicates that the current block enables the AMVR mode, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on second AMVR indication information based on a second context model, the second AMVR indication information being used to indicate index information of the pixel precision used when encoding or decoding the motion vector difference in the AMVR mode, the first context model and the second context model being different.
- A decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction, the target prediction mode of the intra sub-partition prediction exists in the most probable mode (MPM) list, and the current block is a luma block, then when predicting the current block according to the intra sub-partition prediction, decoding prediction mode index information, wherein the 1st bin of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, and the other bins are obtained by bypass-based binary arithmetic decoding; determining, according to the prediction mode index information, the target prediction mode of the intra sub-partition prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list; and predicting the current block according to the target prediction mode; or, if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, wherein the 1st bin of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the other bins are obtained by bypass-based binary arithmetic decoding, and the second context model is the same context model as the first context model; determining, according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list; and predicting the current block according to the target prediction mode.
- A decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction, the target prediction mode of the intra sub-partition prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-partition prediction, decoding prediction mode index information, wherein all bins of the prediction mode index information are obtained by bypass-based binary arithmetic decoding; determining, according to the prediction mode index information, the target prediction mode of the intra sub-partition prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list; and predicting the current block according to the target prediction mode; or, if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, wherein all bins of the prediction mode index information are obtained by bypass-based binary arithmetic decoding; determining, according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list; and predicting the current block according to the target prediction mode.
- A decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction, the target prediction mode of the intra sub-partition prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-partition prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode of the intra sub-partition prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a first context model; when it is determined based on the planar indication information that the target prediction mode of the intra sub-partition prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode; and when it is determined based on the planar indication information that the target prediction mode of the intra sub-partition prediction enabled by the current block is not the planar prediction mode, determining the target prediction mode of the intra sub-partition prediction enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode; or, if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the first context model being the same as the second context model; when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode; and when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
- A decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction, the target prediction mode of the intra sub-partition prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-partition prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode of the intra sub-partition prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding; when it is determined based on the planar indication information that the target prediction mode of the intra sub-partition prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode; and when it is determined based on the planar indication information that the target prediction mode of the intra sub-partition prediction enabled by the current block is not the planar prediction mode, determining the target prediction mode of the intra sub-partition prediction enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode; or, if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding; when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode; and when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
- A decoding method, characterized in that, if the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method comprises: when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; wherein the 1st bin of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, the 2nd bin of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the first context model being different from the second context model, and the 3rd and 4th bins of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding; determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information; and predicting the current block according to the target prediction mode.
- A decoding method, characterized in that, if the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method comprises: when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; wherein the 1st bin of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the 2nd, 3rd, and 4th bins of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding; determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information; and predicting the current block according to the target prediction mode.
- A coding/decoding method, characterized in that the method comprises: when the luma and chroma of the current block share one partition tree, if the width and height of the luma block corresponding to the current block are 64*64 and the size of the chroma block corresponding to the current block is 32*32, the current block does not support the cross-component prediction mode.
- A decoding method, characterized in that the method comprises: if the current block supports ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; wherein the ALF indication information is used to indicate whether the current block enables ALF, and the target context model is one context model selected from 3 different context models included in a first context model set according to whether the block above the current block enables ALF and whether the block to the left of the current block enables ALF; or, if the current block supports ALF and the current block is a CB chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model; or, if the current block supports ALF and the current block is a CR chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model, the context models included in the first context model set, the first context model, and the second context model being different context models.
- A decoding method, characterized in that the method comprises: if the current block supports ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on one context model, the ALF indication information being used to indicate whether the current block enables ALF; or, if the current block supports ALF, the current block enables the adaptive loop filter (ALF), and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on the ALF indication information.
- A coding/decoding method, characterized in that the method comprises: if the width and height of the current block are 32*32, the current block does not support the matrix-based intra prediction mode.
- A decoding method, characterized in that, if the current block supports the matrix-based intra prediction mode, the method comprises: before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; wherein the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 3 different context models according to whether the block above the current block enables the matrix-based intra prediction mode and whether the block to the left of the current block enables the matrix-based intra prediction mode; and if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
- A decoding method, characterized in that, if the current block supports the matrix-based intra prediction mode, the method comprises: before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; wherein the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 2 different context models according to whether the current block satisfies a preset size condition; and if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
- A decoding method, characterized in that, if the current block supports the matrix-based intra prediction mode, the method comprises: before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on one same context model, the MIP indication information being used to indicate whether the current block enables the matrix-based intra prediction mode; and if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
- A decoding method, characterized in that the method comprises: decoding first BDPCM indication information, the first BDPCM indication information being used to indicate whether the current processing unit supports the BDPCM mode; and decoding the current processing unit according to the first BDPCM indication information.
- The method according to claim 22, characterized in that the first BDPCM indication information exists in a sequence parameter set.
- A coding/decoding method, characterized in that the method comprises: encoding or decoding second BDPCM indication information, the second BDPCM indication information being used to indicate the size range of processing units that support the BDPCM mode; and determining, based on the second BDPCM indication information and the size of the current block, whether the current block can be encoded or decoded in the BDPCM mode.
- The method according to claim 24, characterized in that the second BDPCM indication information exists in a sequence parameter set.
- A decoding method, characterized in that, if the current block supports the BDPCM mode, the method comprises: performing context-based adaptive binary arithmetic decoding on third BDPCM indication information based on one context model, the third BDPCM indication information being used to indicate whether the current block enables the BDPCM mode; when the third BDPCM indication information indicates that the current block enables the BDPCM mode, performing bypass-based binary arithmetic decoding on fourth BDPCM indication information, the fourth BDPCM indication information being used to indicate index information of the prediction direction of the BDPCM mode; and performing BDPCM processing on the current block according to the prediction direction indicated by the fourth BDPCM indication information.
- A coding/decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction, regular intra prediction, or the BDPCM mode, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the CBF indication information is used to indicate whether a transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models according to the partition depth of the transform block of the current block.
- A coding/decoding method, characterized in that the method comprises: if the current block enables intra sub-partition prediction or regular intra prediction, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the CBF indication information is used to indicate whether a transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected, according to the partition depth of the transform block of the current block, from 2 different context models included in a first context model set; or, if the current block enables the BDPCM mode, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the target context model is one context model in the first context model set.
- A decoding method, characterized in that the method comprises: decoding JCCR indication information, the JCCR indication information being used to indicate whether the current processing unit supports the JCCR mode; and if it is determined according to the JCCR indication information that the current block supports the JCCR mode and the current block enables the JCCR mode, decoding the current block according to the correlation between the blue chroma (CB) component and the red chroma (CR) component of the current block to obtain the chroma residual coefficients of the current block.
- The method according to claim 29, characterized in that the JCCR indication information exists in a sequence parameter set.
- A coding/decoding apparatus, characterized in that the apparatus comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the coding/decoding method or decoding method according to any one of claims 1-30.
- A computer-readable storage medium having instructions stored thereon, characterized in that, when the instructions are executed by a processor, the coding/decoding method or decoding method according to any one of claims 1-30 is implemented.
- A computer program product containing instructions, characterized in that, when run on a computer, it causes the computer to perform the coding/decoding method or decoding method according to any one of claims 1-30.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20825445.8A EP3979647A4 (en) | 2019-06-21 | 2020-06-19 | CODING/DECODING METHOD AND DEVICE AND STORAGE MEDIUM |
JP2021576392A JP7325553B2 (ja) | 2019-06-21 | 2020-06-19 | 符号化・復号化の方法、装置、および記憶媒体 |
US17/621,644 US20220360800A1 (en) | 2019-06-21 | 2020-06-19 | Coding/decoding method and device, and storage medium |
KR1020217043437A KR20220016232A (ko) | 2019-06-21 | 2020-06-19 | 코딩 및 디코딩 방법, 장치 및 저장 매체 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910545251.0A CN112118448B (zh) | 2019-06-21 | 2019-06-21 | 一种编解码方法、装置及存储介质 |
CN201910545251.0 | 2019-06-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020253829A1 (zh) | 2020-12-24 |
Family
ID=69102758
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/097144 WO2020253829A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
PCT/CN2020/097130 WO2020253828A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
PCT/CN2020/097088 WO2020253823A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
PCT/CN2020/097148 WO2020253831A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/097130 WO2020253828A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
PCT/CN2020/097088 WO2020253823A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
PCT/CN2020/097148 WO2020253831A1 (zh) | 2019-06-21 | 2020-06-19 | 一种编解码方法、装置及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220360800A1 (zh) |
EP (1) | EP3979647A4 (zh) |
JP (2) | JP7325553B2 (zh) |
KR (1) | KR20220016232A (zh) |
CN (12) | CN113382251B (zh) |
WO (4) | WO2020253829A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11265581B2 (en) * | 2019-08-23 | 2022-03-01 | Tencent America LLC | Method and apparatus for video coding |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11991350B2 (en) * | 2019-05-27 | 2024-05-21 | Sk Telecom Co., Ltd. | Method and device for deriving intra-prediction mode |
KR20200145749A (ko) * | 2019-06-19 | 2020-12-30 | 한국전자통신연구원 | 화면 내 예측 모드 및 엔트로피 부호화/복호화 방법 및 장치 |
CN113382251B (zh) * | 2019-06-21 | 2022-04-08 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置、设备及存储介质 |
CN114827609B (zh) * | 2019-06-25 | 2023-09-12 | 北京大学 | 视频图像编码和解码方法、设备及介质 |
JP2022539768A (ja) * | 2019-07-07 | 2022-09-13 | オッポ広東移動通信有限公司 | 画像予測方法、エンコーダ、デコーダ及び記憶媒体 |
CN113497936A (zh) * | 2020-04-08 | 2021-10-12 | Oppo广东移动通信有限公司 | 编码方法、解码方法、编码器、解码器以及存储介质 |
KR20230004797A (ko) * | 2020-05-01 | 2023-01-06 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | 파티션 신택스를 위한 엔트로피 코딩 |
WO2022141278A1 (zh) * | 2020-12-30 | 2022-07-07 | 深圳市大疆创新科技有限公司 | 视频处理方法和编码装置 |
WO2022266971A1 (zh) * | 2021-06-24 | 2022-12-29 | Oppo广东移动通信有限公司 | 编解码方法、编码器、解码器以及计算机存储介质 |
US20230008488A1 (en) * | 2021-07-07 | 2023-01-12 | Tencent America LLC | Entropy coding for intra prediction modes |
WO2023194193A1 (en) * | 2022-04-08 | 2023-10-12 | Interdigital Ce Patent Holdings, Sas | Sign and direction prediction in transform skip and bdpcm |
WO2023224289A1 (ko) * | 2022-05-16 | 2023-11-23 | 현대자동차주식회사 | 가상의 참조라인을 사용하는 비디오 코딩을 위한 방법 및 장치 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102857763A (zh) * | 2011-06-30 | 2013-01-02 | 华为技术有限公司 | 一种基于帧内预测的解码方法和解码装置 |
CN102986213A (zh) * | 2010-04-16 | 2013-03-20 | Sk电信有限公司 | 视频编码/解码设备和方法 |
CN103621099A (zh) * | 2011-04-01 | 2014-03-05 | Lg电子株式会社 | 熵解码方法和使用其的解码装置 |
CN109314783A (zh) * | 2016-06-01 | 2019-02-05 | 三星电子株式会社 | 用于根据编码顺序对视频进行编码和解码的方法和设备 |
CN110677663A (zh) * | 2019-06-21 | 2020-01-10 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置及存储介质 |
Family Cites Families (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4549304A (en) * | 1983-11-28 | 1985-10-22 | Northern Telecom Limited | ADPCM Encoder/decoder with signalling bit insertion |
JP3542572B2 (ja) * | 2001-06-14 | 2004-07-14 | キヤノン株式会社 | 画像復号方法及び装置 |
EP1363458A3 (en) * | 2002-05-14 | 2004-12-15 | Broadcom Corporation | Video bitstream preprocessing method |
EP1365592A3 (en) * | 2002-05-20 | 2005-02-09 | Broadcom Corporation | System, method, and apparatus for decoding flexibly ordered macroblocks |
CN101106721A (zh) * | 2006-07-10 | 2008-01-16 | 华为技术有限公司 | 一种编解码装置及相关编码器 |
US8344917B2 (en) * | 2010-09-30 | 2013-01-01 | Sharp Laboratories Of America, Inc. | Methods and systems for context initialization in video coding and decoding |
CN103069805B (zh) * | 2011-06-27 | 2017-05-31 | 太阳专利托管公司 | 图像编码方法、图像解码方法、图像编码装置、图像解码装置及图像编码解码装置 |
WO2013047805A1 (ja) * | 2011-09-29 | 2013-04-04 | シャープ株式会社 | 画像復号装置、画像復号方法および画像符号化装置 |
US9088796B2 (en) * | 2011-11-07 | 2015-07-21 | Sharp Kabushiki Kaisha | Video decoder with enhanced CABAC decoding |
KR20130058524A (ko) * | 2011-11-25 | 2013-06-04 | 오수미 | 색차 인트라 예측 블록 생성 방법 |
US9843809B2 (en) * | 2012-07-02 | 2017-12-12 | Electronics And Telecommunications Research | Method and apparatus for coding/decoding image |
US9313500B2 (en) * | 2012-09-30 | 2016-04-12 | Microsoft Technology Licensing, Llc | Conditional signalling of reference picture list modification information |
CN103024384B (zh) * | 2012-12-14 | 2015-10-21 | 深圳百科信息技术有限公司 | 一种视频编码、解码方法及装置 |
CN103024389B (zh) * | 2012-12-24 | 2015-08-12 | 芯原微电子(北京)有限公司 | 一种用于hevc的解码装置和方法 |
KR101726572B1 (ko) * | 2013-05-22 | 2017-04-13 | 세종대학교산학협력단 | 무손실 이미지 압축 및 복원 방법과 이를 수행하는 장치 |
FR3012004A1 (fr) * | 2013-10-15 | 2015-04-17 | Orange | Procede de codage et de decodage d'images, dispositif de codage et de decodage d'images et programmes d'ordinateur correspondants |
BR112016015080A2 (pt) * | 2014-01-03 | 2017-08-08 | Microsoft Technology Licensing Llc | Predição de vetor de bloco em codificação / decodificação de vídeo e imagem |
US9948933B2 (en) * | 2014-03-14 | 2018-04-17 | Qualcomm Incorporated | Block adaptive color-space conversion coding |
WO2015188297A1 (zh) * | 2014-06-08 | 2015-12-17 | 北京大学深圳研究生院 | 加权跳过模式的视频图像块压缩算术编解码方法及装置 |
CN106797471B (zh) * | 2014-09-03 | 2020-03-10 | 联发科技股份有限公司 | 一种对图像内区块使用调色板预测模式的颜色索引图解码方法 |
RU2562414C1 (ru) * | 2014-09-24 | 2015-09-10 | Закрытое акционерное общество "Элекард наноДевайсез" | Способ быстрого выбора режима пространственного предсказания в системе кодирования hevc |
US10212445B2 (en) * | 2014-10-09 | 2019-02-19 | Qualcomm Incorporated | Intra block copy prediction restrictions for parallel processing |
CN107113444A (zh) * | 2014-11-04 | 2017-08-29 | 三星电子株式会社 | 使用帧内预测对视频进行编码/解码的方法和装置 |
US10148977B2 (en) * | 2015-06-16 | 2018-12-04 | Futurewei Technologies, Inc. | Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions |
WO2017087751A1 (en) * | 2015-11-20 | 2017-05-26 | Mediatek Inc. | Method and apparatus for global motion compensation in video coding system |
US10659812B2 (en) * | 2015-11-24 | 2020-05-19 | Samsung Electronics Co., Ltd. | Method and device for video decoding and method and device for video encoding |
US10390021B2 (en) * | 2016-03-18 | 2019-08-20 | Mediatek Inc. | Method and apparatus of video coding |
WO2017173593A1 (en) * | 2016-04-06 | 2017-10-12 | Mediatek Singapore Pte. Ltd. | Separate coding secondary transform syntax elements for different color components |
CN109076241B (zh) * | 2016-05-04 | 2023-06-23 | 微软技术许可有限责任公司 | 利用样本值的非相邻参考线进行帧内图片预测 |
WO2017203882A1 (en) * | 2016-05-24 | 2017-11-30 | Sharp Kabushiki Kaisha | Systems and methods for intra prediction coding |
ES2724568B2 (es) * | 2016-06-24 | 2021-05-19 | Kt Corp | Método y aparato para tratar una señal de vídeo |
EP3972256B1 (en) * | 2016-06-24 | 2024-01-03 | KT Corporation | Adaptive reference sample filtering for intra prediction using distant pixel lines |
US11368681B2 (en) * | 2016-07-18 | 2022-06-21 | Electronics And Telecommunications Research Institute | Image encoding/decoding method and device, and recording medium in which bitstream is stored |
CN109565591B (zh) * | 2016-08-03 | 2023-07-18 | 株式会社Kt | 用于对视频进行编码和解码的方法和装置 |
WO2018045332A1 (en) * | 2016-09-02 | 2018-03-08 | Vid Scale, Inc. | Methods and apparatus for coded block flag coding in quad-tree plus binary-tree block partitioning |
WO2018062950A1 (ko) * | 2016-09-30 | 2018-04-05 | 엘지전자(주) | 영상 처리 방법 및 이를 위한 장치 |
CN109891892A (zh) * | 2016-10-11 | 2019-06-14 | Lg 电子株式会社 | 依赖于图像编译系统中的帧内预测的图像解码方法和装置 |
JPWO2018070267A1 (ja) * | 2016-10-14 | 2019-08-15 | ソニー株式会社 | 画像処理装置および画像処理方法 |
US10742975B2 (en) * | 2017-05-09 | 2020-08-11 | Futurewei Technologies, Inc. | Intra-prediction with multiple reference lines |
RU2020109859A (ru) * | 2017-09-15 | 2021-09-07 | Сони Корпорейшн | Устройство и способ обработки изображения |
CN108093264B (zh) * | 2017-12-29 | 2019-03-08 | 东北石油大学 | 基于分块压缩感知的岩心图像压缩、解压方法和系统 |
CN109743576B (zh) * | 2018-12-28 | 2020-05-12 | 杭州海康威视数字技术股份有限公司 | 编码方法、解码方法及装置 |
CN109788285B (zh) * | 2019-02-27 | 2020-07-28 | 北京大学深圳研究生院 | 一种量化系数结束标志位的上下文模型选取方法及装置 |
US11451826B2 (en) * | 2019-04-15 | 2022-09-20 | Tencent America LLC | Lossless coding mode and switchable residual coding |
WO2020216375A1 (en) * | 2019-04-26 | 2020-10-29 | Huawei Technologies Co., Ltd. | Method and apparatus for signaling of mapping function of chroma quantization parameter |
JP2022537275A (ja) * | 2019-06-20 | 2022-08-25 | インターデジタル ブイシー ホールディングス フランス,エスアーエス | 多用途ビデオコーディングのためのロスレスモード |
-
2019
- 2019-06-21 CN CN202110688028.9A patent/CN113382251B/zh active Active
- 2019-06-21 CN CN202110688048.6A patent/CN113382254B/zh active Active
- 2019-06-21 CN CN202110688047.1A patent/CN113382253B/zh active Active
- 2019-06-21 CN CN202110688049.0A patent/CN113382255B/zh active Active
- 2019-06-21 CN CN201910545251.0A patent/CN112118448B/zh active Active
- 2019-06-21 CN CN201911061873.2A patent/CN110677655B/zh active Active
- 2019-06-21 CN CN202110686665.2A patent/CN113347427A/zh not_active Withdrawn
- 2019-06-21 CN CN201911061275.5A patent/CN110677663B/zh active Active
- 2019-06-21 CN CN202110686662.9A patent/CN113347426A/zh not_active Withdrawn
- 2019-06-21 CN CN201911090138.4A patent/CN110784712B/zh active Active
- 2019-06-21 CN CN202110688030.6A patent/CN113382252B/zh active Active
- 2019-06-21 CN CN202110688052.2A patent/CN113382256B/zh active Active
-
2020
- 2020-06-19 WO PCT/CN2020/097144 patent/WO2020253829A1/zh unknown
- 2020-06-19 WO PCT/CN2020/097130 patent/WO2020253828A1/zh active Application Filing
- 2020-06-19 EP EP20825445.8A patent/EP3979647A4/en active Pending
- 2020-06-19 WO PCT/CN2020/097088 patent/WO2020253823A1/zh active Application Filing
- 2020-06-19 WO PCT/CN2020/097148 patent/WO2020253831A1/zh active Application Filing
- 2020-06-19 KR KR1020217043437A patent/KR20220016232A/ko active Search and Examination
- 2020-06-19 US US17/621,644 patent/US20220360800A1/en active Pending
- 2020-06-19 JP JP2021576392A patent/JP7325553B2/ja active Active
-
2023
- 2023-05-22 JP JP2023084090A patent/JP2023096190A/ja active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102986213A (zh) * | 2010-04-16 | 2013-03-20 | Sk电信有限公司 | 视频编码/解码设备和方法 |
CN103621099A (zh) * | 2011-04-01 | 2014-03-05 | Lg电子株式会社 | 熵解码方法和使用其的解码装置 |
CN102857763A (zh) * | 2011-06-30 | 2013-01-02 | 华为技术有限公司 | 一种基于帧内预测的解码方法和解码装置 |
CN109314783A (zh) * | 2016-06-01 | 2019-02-05 | 三星电子株式会社 | 用于根据编码顺序对视频进行编码和解码的方法和设备 |
CN110677663A (zh) * | 2019-06-21 | 2020-01-10 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置及存储介质 |
CN110677655A (zh) * | 2019-06-21 | 2020-01-10 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置及存储介质 |
CN110784712A (zh) * | 2019-06-21 | 2020-02-11 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置及存储介质 |
Non-Patent Citations (1)
Title |
---|
SANTIAGO DE LUXÁN HERNÁNDEZ ET AL.: "CE3: Line-based intra coding mode (Tests 2.1.1 and 2.1.2)", 12. JVET MEETING; 20181003 - 20181012; MACAO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, 30 September 2018 (2018-09-30), pages 1 - 9, XP030194061 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11265581B2 (en) * | 2019-08-23 | 2022-03-01 | Tencent America LLC | Method and apparatus for video coding |
US20220150547A1 (en) * | 2019-08-23 | 2022-05-12 | Tencent America LLC | Method and apparatus for video coding |
US11632573B2 (en) * | 2019-08-23 | 2023-04-18 | Tencent America LLC | Method and apparatus for video coding |
Also Published As
Publication number | Publication date |
---|---|
EP3979647A4 (en) | 2023-03-22 |
CN113347426A (zh) | 2021-09-03 |
CN113382254A (zh) | 2021-09-10 |
CN110677655A (zh) | 2020-01-10 |
WO2020253831A1 (zh) | 2020-12-24 |
JP2023096190A (ja) | 2023-07-06 |
CN110677663A (zh) | 2020-01-10 |
CN113382255A (zh) | 2021-09-10 |
EP3979647A1 (en) | 2022-04-06 |
US20220360800A1 (en) | 2022-11-10 |
CN113382254B (zh) | 2022-05-17 |
CN110677663B (zh) | 2021-05-14 |
JP7325553B2 (ja) | 2023-08-14 |
CN113382255B (zh) | 2022-05-20 |
WO2020253823A1 (zh) | 2020-12-24 |
CN113382251A (zh) | 2021-09-10 |
CN112118448B (zh) | 2022-09-16 |
CN113382256B (zh) | 2022-05-20 |
CN110784712B (zh) | 2021-05-11 |
JP2022537220A (ja) | 2022-08-24 |
KR20220016232A (ko) | 2022-02-08 |
CN113347427A (zh) | 2021-09-03 |
CN113382253B (zh) | 2022-05-20 |
CN110784712A (zh) | 2020-02-11 |
CN113382256A (zh) | 2021-09-10 |
CN113382252A (zh) | 2021-09-10 |
WO2020253828A1 (zh) | 2020-12-24 |
CN113382251B (zh) | 2022-04-08 |
CN113382252B (zh) | 2022-04-05 |
CN113382253A (zh) | 2021-09-10 |
CN110677655B (zh) | 2022-08-16 |
CN112118448A (zh) | 2020-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020253829A1 (zh) | 一种编解码方法、装置及存储介质 | |
CN110024392B (zh) | 用于视频译码的低复杂度符号预测 | |
CN108605127B (zh) | 滤波视频数据的经解码块的方法和装置及存储介质 | |
US9167269B2 (en) | Determining boundary strength values for deblocking filtering for video coding | |
CN103563380B (zh) | 减少用于视频处理的行缓冲的方法及装置 | |
TW201742458A (zh) | 二值化二次轉換指數 | |
TW201352004A (zh) | 轉換係數寫碼 | |
US20230239464A1 (en) | Video processing method with partial picture replacement | |
JP7286783B2 (ja) | 符号化方法、復号化方法、デコーダ、エンコーダー及び記憶媒体 | |
TWI832661B (zh) | 圖像編解碼的方法、裝置及存儲介質 | |
WO2022191947A1 (en) | State based dependent quantization and residual coding in video coding | |
WO2021211576A1 (en) | Methods and systems for combined lossless and lossy coding | |
CN117203960A (zh) | 视频编码中的旁路对齐 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20825445 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021576392 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217043437 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2020825445 Country of ref document: EP Effective date: 20211230 |