WO2020253829A1 - Encoding and decoding method, apparatus, and storage medium - Google Patents

Encoding and decoding method, apparatus, and storage medium

Info

Publication number
WO2020253829A1
Authority
WO
WIPO (PCT)
Prior art keywords
current block
prediction mode
indication information
block
decoding
Prior art date
Application number
PCT/CN2020/097144
Other languages
English (en)
French (fr)
Inventor
徐丽英 (Xu Liying)
Original Assignee
Hangzhou Hikvision Digital Technology Co., Ltd. (杭州海康威视数字技术股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co., Ltd.
Priority to EP20825445.8A, published as EP3979647A4
Priority to JP2021576392A, published as JP7325553B2
Priority to US 17/621,644, published as US20220360800A1
Priority to KR10-2021-7043437, published as KR20220016232A
Publication of WO2020253829A1

Classifications

    • All classifications fall under H04N (pictorial communication, e.g. television), H04N19/00 (methods or arrangements for coding, decoding, compressing or decompressing digital video signals):
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
    • H04N19/18: Adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
    • H04N19/593: Predictive coding involving spatial prediction techniques
    • H04N19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • This application relates to the field of image processing technology, and in particular to an encoding and decoding method, apparatus, and storage medium.
  • Syntax elements can be various kinds of indication information, such as first ISP indication information or second ISP indication information.
  • The first ISP indication information indicates whether to enable the intra sub-block prediction (ISP) mode, and the second ISP indication information indicates the sub-block division method of the intra sub-block prediction mode.
  • The embodiments of the present application provide an encoding and decoding method, apparatus, and storage medium, which can address the large number of context models, and the resulting memory overhead, of the encoding and decoding process in related technologies.
  • the technical solution is as follows:
  • an encoding and decoding method includes:
  • The first ISP indication information indicates whether to enable the intra sub-block prediction mode;
  • The second ISP indication information indicates the sub-block division method of the intra sub-block prediction mode.
  • an encoding and decoding method includes:
  • When it is determined to encode or decode the first ISP indication information, perform bypass-based binary arithmetic coding or decoding on it, where the first ISP indication information indicates whether to enable the intra sub-block prediction mode;
  • When it is determined to encode or decode the second ISP indication information, perform bypass-based binary arithmetic coding or decoding on it, where the second ISP indication information indicates the sub-block division method of the intra sub-block prediction mode.
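The bypass/context distinction that the ISP aspects above rely on can be sketched as follows. This is a toy illustration, not the arithmetic coding engine of any standard: a bypass-coded bin costs exactly one bit and touches no model state, while a context-coded bin consults and updates an adaptive probability estimate, which is why moving syntax elements to bypass coding reduces context-model memory.

```python
class ContextModel:
    """Toy adaptive model: tracks an estimate of P(bin == 1)."""
    def __init__(self):
        self.ones = 1
        self.total = 2  # Laplace-style initialisation

    def p_one(self):
        return self.ones / self.total

    def update(self, bin_val):
        self.ones += bin_val
        self.total += 1

def encode_bypass(bins):
    # Bypass coding: each bin costs exactly one bit and touches no model state.
    return list(bins)

def encode_context(bins, model):
    # Context coding: we only track the model updates here; a real arithmetic
    # coder would use model.p_one() to narrow the coding interval.
    out = []
    for b in bins:
        out.append((b, round(model.p_one(), 3)))
        model.update(b)
    return out
```

The first element of each `encode_context` pair is the bin, the second is the probability estimate the coder would have used for it.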
  • an encoding and decoding method includes:
  • The current block does not support the multi-line prediction mode.
  • A coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits. The reference line indication information indicates the index of the target reference line used when predicting the current block based on the multi-line prediction mode. The method includes:
  • A coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 4, the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits. The reference line indication information indicates the index of the target reference line used when predicting the current block based on the multi-line prediction mode. The method includes:
  • A coding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3: the candidate reference line with index 0 is line 0, the line adjacent to the boundary of the current block; the candidate reference line with index 1 is line 1, the second line from the boundary of the current block; the candidate reference line with index 2 is line 2, the line adjacent to line 1. The method includes:
  • The target reference line is determined according to the reference line indication information:
  • the target reference line is line 0;
  • the target reference line is line 1;
  • the target reference line is line 2.
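The "at most 2 bits for 3 candidate lines / at most 3 bits for 4 candidate lines" budgets above are exactly what a truncated-unary binarization of the reference line index yields. The application does not name the binarization explicitly, so the sketch below is a hedged reading assuming truncated unary:

```python
def tu_binarize(idx, num_candidates):
    """Truncated-unary binarization: idx '1'-bins followed by a terminating
    '0', except that the largest index drops the terminator. This gives at
    most 2 bits for 3 candidates and at most 3 bits for 4 candidates."""
    max_idx = num_candidates - 1
    if idx < max_idx:
        return "1" * idx + "0"
    return "1" * max_idx

def tu_parse(bits, num_candidates):
    """Inverse: count leading '1's until a '0' or the truncation bound."""
    max_idx = num_candidates - 1
    idx = 0
    for b in bits:
        if idx == max_idx or b == "0":
            break
        idx += 1
    return idx
```

For example, with 3 candidates the codewords are "0", "10", "11"; with 4 candidates they are "0", "10", "110", "111".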
  • A coding and decoding method is provided. If it is determined that the current block enables the multi-line prediction mode, and the candidate reference lines corresponding to the multi-line prediction mode are as follows: the candidate reference line with index 0 is line 0, the line adjacent to the boundary of the current block; the candidate reference line with index 1 is line 1, the second line from the boundary of the current block; the candidate reference line with index 2 is line 2, the line adjacent to line 1; the candidate reference line with index 3 is line 3, the line adjacent to line 2. The method includes:
  • The target reference line is determined according to the reference line indication information:
  • the target reference line is line 0;
  • the target reference line is line 2;
  • the target reference line is line 3.
  • A decoding method is provided. If the current block supports the multi-line prediction mode, the method includes:
  • Before predicting the current block according to the multi-line prediction mode, decode the line number indication information, where the line number indication information indicates the number of candidate reference lines corresponding to the multi-line prediction mode;
  • The target reference line is determined according to the number of candidate reference lines corresponding to the multi-line prediction mode and the reference line indication information, where the reference line indication information indicates the index of the target reference line used when predicting the current block based on the multi-line prediction mode;
  • The line number indication information is carried at the sequence parameter set level, the picture parameter level, the slice level, or the tile level.
  • an encoding and decoding method includes:
  • The current block enables the affine prediction mode, or enables a prediction mode other than the affine prediction mode;
  • When motion vector difference coding or decoding is performed on the current block, if the current block supports the adaptive motion vector resolution (AMVR) mode:
  • The first AMVR indication information indicates whether to enable the AMVR mode;
  • The second AMVR indication information indicates the index of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode, and the first context model and the second context model are different.
  • An encoding and decoding method is provided, which includes:
  • When encoding or decoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding or decoding on it based on the first context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic coding or decoding on the second AMVR indication information; or,
  • When encoding or decoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding or decoding on it based on the second context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic coding or decoding on the second AMVR indication information;
  • The first context model and the second context model are different. The first AMVR indication information indicates whether to enable the AMVR mode, and the second AMVR indication information indicates the index of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode.
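A minimal sketch of the bin layout these AMVR aspects describe. It assumes, consistent with the affine/non-affine distinction drawn in the surrounding aspects, that the prediction mode of the block selects between the two context models for the first flag; the model indices are illustrative, not taken from the application:

```python
# Illustrative context-model indices (assumptions, not normative values).
CTX_AMVR_AFFINE = 0   # first context model, used for affine blocks
CTX_AMVR_REGULAR = 1  # second, different context model, for other blocks

def amvr_bin_coding_plan(is_affine, amvr_enabled):
    """Return (context index for the first AMVR flag, coding engine of the
    precision index). The first flag is context-coded with a model chosen
    by prediction mode; the precision index is bypass-coded and only
    present when the first flag says AMVR is enabled."""
    ctx = CTX_AMVR_AFFINE if is_affine else CTX_AMVR_REGULAR
    precision_coding = "bypass" if amvr_enabled else None
    return ctx, precision_coding
```

So an affine block with AMVR on uses model 0 plus bypass bins, while a non-affine block with AMVR off uses model 1 and sends no precision index at all.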
  • an encoding and decoding method includes:
  • If the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, when motion vector difference coding or decoding is performed on the current block and the current block supports the adaptive motion vector resolution (AMVR) mode:
  • The first AMVR indication information indicates whether to enable the AMVR mode;
  • The first AMVR indication information indicates that the current block enables the AMVR mode;
  • The second AMVR indication information indicates the index of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode.
  • a decoding method includes:
  • If the target prediction mode of the intra sub-block prediction exists in the most probable mode (MPM) list and the current block is a luma block, then when the current block is predicted according to the intra sub-block prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • When the current block is predicted according to the regular intra prediction, the prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
  • The second context model and the first context model are the same context model;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
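The bin layout claimed above for the MPM index (first bin context-coded, remaining bins bypass-coded) can be sketched as a decoder loop over a truncated-unary code. The MPM list size of 6 is an assumption, and `next_bin` stands in for the arithmetic decoding engine:

```python
def decode_mpm_index(next_bin, mpm_size=6):
    """Decode an MPM index whose binarization is truncated unary.

    `next_bin` is a callable returning the next decoded bin (0 or 1).
    In the real decoder the first bin would be context-coded (with the
    ISP and regular-intra paths sharing one model in this aspect) and
    the remaining bins bypass-coded; here we only model the bin logic.
    """
    if next_bin() == 0:       # first bin: context-coded in the real decoder
        return 0
    idx = 1                   # remaining bins: bypass-coded truncated unary
    while idx < mpm_size - 1 and next_bin() == 1:
        idx += 1
    return idx
```

For example, the bin string 1,1,0 decodes to index 2, and five 1-bins (the truncation bound for a 6-entry list) decode to index 5.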
  • a decoding method includes:
  • If the target prediction mode of the intra sub-block prediction exists in the MPM list and the current block is a luma block, then when the current block is predicted according to the intra sub-block prediction, the prediction mode index information is decoded, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • When the current block is predicted according to the regular intra prediction, the prediction mode index information is decoded, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • a decoding method includes:
  • The prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • The current block enables the regular intra prediction;
  • The target prediction mode of the regular intra prediction comes from the MPM list;
  • The current block is a luma block;
  • The prediction mode index information is decoded, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • a decoding method includes:
  • The prediction mode index information is decoded, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the intra sub-block prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • The current block enables the regular intra prediction;
  • The target prediction mode of the regular intra prediction comes from the MPM list;
  • The current block is a luma block;
  • The prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
  • According to the prediction mode index information, determine from the MPM list the target prediction mode of the regular intra prediction enabled by the current block; the prediction mode index information indicates the index of the target prediction mode in the MPM list;
  • a decoding method includes:
  • The planar indication information is decoded, where the planar indication information indicates whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the first context model;
  • If the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode;
  • If the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, predict the current block according to the determined target prediction mode;
  • The current block enables the regular intra prediction;
  • The target prediction mode of the regular intra prediction comes from the MPM list;
  • The current block is a luma block;
  • The current block is predicted according to the regular intra prediction;
  • The planar indication information is decoded, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the second context model; the first context model and the second context model are the same;
  • The target prediction mode enabled by the current block is determined from the MPM list according to the prediction mode index information, and the current block is predicted according to the target prediction mode.
  • a decoding method includes:
  • If the target prediction mode of the intra sub-block prediction exists in the MPM list and the current block is a luma block, then when the current block is predicted according to the intra sub-block prediction, the planar indication information is decoded, where the planar indication information indicates whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
  • If the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode;
  • If the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, predict the current block according to the determined target prediction mode;
  • The current block enables the regular intra prediction;
  • The target prediction mode of the regular intra prediction comes from the MPM list;
  • The current block is a luma block;
  • The current block is predicted according to the regular intra prediction;
  • The planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
  • The target prediction mode enabled by the current block is determined from the MPM list according to the prediction mode index information, and the current block is predicted according to the target prediction mode.
  • A decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
  • The chroma prediction mode index information is decoded, where the chroma prediction mode index information indicates the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, the second bit is obtained by context-based adaptive binary arithmetic decoding based on the second context model, the first context model is different from the second context model, and the third bit and the fourth bit of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
  • A decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
  • The chroma prediction mode index information is decoded, where the chroma prediction mode index information indicates the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the second, third, and fourth bits are obtained by bypass-based binary arithmetic decoding;
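The difference between the two chroma-index aspects above is only which engine codes each of the (at most four) bins of the index's binarization; a small table makes this concrete. The `ctx0`/`ctx1` labels are illustrative names, not identifiers from the application:

```python
# Per-bin coding engines for the chroma prediction mode index, one entry
# per bin position. The bin values themselves are the same in both aspects.
LAYOUT_TWO_CONTEXTS = ["ctx0", "ctx1", "bypass", "bypass"]    # first aspect
LAYOUT_ONE_CONTEXT = ["ctx0", "bypass", "bypass", "bypass"]   # second aspect

def bins_coding(bin_string, layout):
    """Pair each bin of the index's binarization with its coding engine."""
    return list(zip(bin_string, layout))
```

For a two-bin codeword such as "10", the first aspect uses two distinct context models, while the second aspect context-codes only the first bin.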
  • A coding and decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
  • The chroma prediction mode index information is decoded, where the chroma prediction mode index information indicates the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
  • the target prediction mode is the first cross-component prediction mode;
  • the target prediction mode is the second cross-component prediction mode;
  • the target prediction mode is the third cross-component prediction mode;
  • the target prediction mode is the planar prediction mode;
  • the target prediction mode is the vertical prediction mode;
  • the target prediction mode is the horizontal prediction mode;
  • the target prediction mode is the DC prediction mode;
  • A decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
  • The chroma prediction mode index information is decoded, where the chroma prediction mode index information indicates the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
  • the target prediction mode is the first cross-component prediction mode;
  • the target prediction mode is the second cross-component prediction mode;
  • the target prediction mode is the third cross-component prediction mode;
  • the target prediction mode is the planar prediction mode;
  • the target prediction mode is the vertical prediction mode;
  • the target prediction mode is the horizontal prediction mode;
  • the target prediction mode is the DC prediction mode;
  • an encoding and decoding method includes:
  • If the luma and chroma of the current block share one partition tree, the width and height of the luma block corresponding to the current block are 64*64, and the size of the chroma block corresponding to the current block is 32*32, then the current block does not support the cross-component prediction mode.
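The shared-tree restriction above reduces to a simple predicate. A hedged sketch, assuming 4:2:0 sampling (which is what makes a 64*64 luma block correspond to a 32*32 chroma block):

```python
def cclm_allowed(shared_tree, luma_w, luma_h, chroma_w, chroma_h):
    """Sketch of the restriction in this aspect: when luma and chroma share
    one partition tree and the luma block is 64x64 (so the chroma block is
    32x32 under 4:2:0 sampling), the cross-component prediction mode is
    disabled for the current block."""
    if shared_tree and (luma_w, luma_h) == (64, 64) \
            and (chroma_w, chroma_h) == (32, 32):
        return False
    return True
```

With separate trees, or any other block size, the predicate does not disable the mode.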
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the target context model is a context model selected, according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF, from three different context models included in the second context model set, and the three context models included in the second context model set are different from the three context models included in the first context model set.
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the ALF indication information is used to indicate whether the current block starts ALF
  • the target context model is a context model selected, according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF, from three different context models included in the first context model set; or,
  • the target context model is a context model selected, according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF, from three different context models included in the second context model set, and the three context models included in the second context model set are the same as the three context models included in the first context model set.
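The neighbour-based context selection in the claims above can be sketched as follows. Indexing the 3-model set by the number of neighbours that enable ALF is an assumption chosen for illustration (such sets are commonly indexed this way), not the normative mapping.

```python
def alf_context_index(above_enables_alf: bool, left_enables_alf: bool) -> int:
    """Sketch: select one of 3 context models according to whether the upper
    and left neighbouring blocks enable ALF. Index 0..2 is the count of
    neighbours with ALF on -- an illustrative assumption."""
    return int(above_enables_alf) + int(left_enables_alf)
```

The returned index would then select a model inside the first or second context model set, depending on which embodiment applies.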
  • a decoding method includes:
  • if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model.
  • the ALF indication information is used to indicate whether the current block starts ALF; or,
  • the current block supports ALF and the current block is a chrominance block
  • before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the second context model is different from the first context model.
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the ALF indication information is used to indicate whether the current block starts ALF
  • the target context model is a context model selected, according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF, from three different context models included in the first context model set; or,
  • if the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model; or,
  • the current block supports ALF, and the current block is a CR chroma block
  • before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the context models included in the first context model set, the first context model, and the second context model are different context models.
  • a decoding method characterized in that the method includes:
  • if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model.
  • the ALF indication information is used to indicate whether the current block starts ALF; or,
  • if the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model; or,
  • the current block supports ALF and the current block is a CR chroma block
  • a decoding method includes:
  • if the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model.
  • the ALF indication information is used to indicate whether the current block starts ALF; or,
  • the current block supports ALF and the current block is a chrominance block
  • before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model, where the second context model and the first context model are the same context model.
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the current block supports ALF and the current block is a chrominance block
  • before performing filtering processing on the current block according to the ALF mode, perform bypass-based binary arithmetic decoding on the ALF indication information.
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the indication information is used to indicate whether the current block starts ALF; or,
  • if the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform bypass-based binary arithmetic decoding on the ALF indication information.
  • a decoding method includes:
  • the current block supports ALF and the current block is a luma block
  • the current block supports ALF and the current block is a chrominance block
  • before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model.
  • an encoding and decoding method includes:
  • the current block does not support the matrix-based intra prediction mode.
  • a decoding method is provided. If the current block supports a matrix-based intra prediction mode, the method includes:
  • the target context model is a context model selected, according to whether the upper block of the current block starts the matrix-based intra prediction mode and whether the left block of the current block starts the matrix-based intra prediction mode, from three different context models;
  • the matrix-based intra prediction mode is used to predict the current block.
  • a decoding method is provided. If the current block supports a matrix-based intra prediction mode, the method includes:
  • the target context model is a context model selected from two different context models according to whether the current block meets a preset size condition;
  • the matrix-based intra prediction mode is used to predict the current block.
  • a decoding method is provided. If the current block supports a matrix-based intra prediction mode, the method includes:
  • the MIP indication information is used to indicate whether the current block starts the matrix-based intra prediction mode
  • the matrix-based intra prediction mode is used to predict the current block.
  • a decoding method is provided. If the current block supports a matrix-based intra prediction mode, the method includes:
  • the MIP indication information is used to indicate whether the current block starts the matrix-based intra prediction mode.
  • the matrix-based intra prediction mode is used to predict the current block.
  • a decoding method includes:
  • the current processing unit is decoded.
  • the first BDPCM indication information exists in a sequence parameter set, an image parameter level, a slice level or a tile level.
  • an encoding and decoding method includes:
  • Second BDPCM indication information is used to indicate the size range of the processing unit supporting the BDPCM mode
  • based on the second BDPCM indication information and the size of the current block, it is determined whether the current block can perform BDPCM encoding or decoding.
  • the second BDPCM indication information exists in a sequence parameter set, an image parameter level, a slice level, or a tile level.
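The availability decision described above — second BDPCM indication information plus the current block size — can be sketched as follows. Modelling the indication information as an enable flag and a maximum size, and the particular comparison, are assumptions for illustration.

```python
def can_use_bdpcm(bdpcm_enabled: bool, max_size: int,
                  width: int, height: int) -> bool:
    """Sketch: decide whether the current block can perform BDPCM encoding or
    decoding from an enable flag, a signalled size range (modelled here as a
    single maximum), and the block's own dimensions."""
    return bdpcm_enabled and width <= max_size and height <= max_size
```

A 16*16 block under a 32-sample limit would pass the check; a 64-wide block would not.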
  • a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
  • the third BDPCM indication information indicates that the current block starts the BDPCM mode
  • the fourth BDPCM indication information is used to indicate the index of the prediction direction of the BDPCM mode;
  • a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
  • the third BDPCM indication information indicates that the current block starts the BDPCM mode
  • the fourth BDPCM indication information is used to indicate the index of the prediction direction of the BDPCM mode;
  • an encoding and decoding method includes:
  • if the current block starts intra sub-block prediction, when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model;
  • the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected, according to whether the previous transform block of the current block has non-zero transform coefficients, from two different context models included in the first context model set; or,
  • the current block starts the regular intra prediction or starts the BDPCM mode
  • when determining to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model;
  • the target context model is a context model selected, according to the division depth of the transform block of the current block, from two different context models included in the second context model set, and the two context models included in the second context model set are different from the two context models included in the first context model set.
  • an encoding and decoding method includes:
  • the current block starts intra-frame sub-block prediction, or starts regular intra-frame prediction, or starts BDPCM mode
  • when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected, according to the division depth of the transform block of the current block, from two different context models.
  • an encoding and decoding method includes:
  • the current block starts intra-frame sub-block prediction or starts regular intra-frame prediction
  • when it is determined to encode or decode the CBF indication information, perform context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding on the CBF indication information based on the target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is a context model selected, according to the division depth of the transform block of the current block, from two different context models included in the first context model set; or,
  • the target context model is a context model in the first set of context models.
  • a decoding method includes:
  • JCCR indication information is decoded, where the JCCR indication information is used to indicate whether the current processing unit supports the JCCR mode;
  • if it is determined according to the JCCR indication information that the current block supports the JCCR mode, and the current block activates the JCCR mode, the blue chrominance (CB) component and the red chrominance (CR) component of the current block are jointly decoded to obtain the chrominance residual coefficients of the current block.
  • the JCCR indication information exists in a sequence parameter set, an image parameter level, a slice level or a tile level.
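The joint decoding of the CB and CR residuals above can be illustrated by a small sketch: one coded residual is mapped to both chroma components. The three weight combinations below follow the commonly described JCCR design with a negative cross-sign; treat them as an illustrative assumption, not this patent's normative table.

```python
def derive_chroma_residuals(joint_res, mode: int):
    """Sketch of joint coding of chroma residuals (JCCR): one coded residual
    yields both the CB and CR residuals. Weights are illustrative."""
    if mode == 1:                       # CB carries the residual, CR = CB / 2
        cb = list(joint_res)
        cr = [x // 2 for x in joint_res]
    elif mode == 2:                     # CB and CR share it with opposite sign
        cb = list(joint_res)
        cr = [-x for x in joint_res]
    else:                               # CR carries the residual, CB = CR / 2
        cr = list(joint_res)
        cb = [x // 2 for x in joint_res]
    return cb, cr
```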
  • a coding and decoding device which is characterized in that the device includes:
  • a memory for storing processor executable instructions
  • the processor is configured to execute any one of the foregoing encoding and decoding methods or decoding methods.
  • a computer-readable storage medium is provided, and instructions are stored on the computer-readable storage medium, and when the instructions are executed by a processor, any one of the foregoing encoding and decoding methods or decoding methods is implemented.
  • a computer program product containing instructions which when running on a computer, causes the computer to execute any of the above-mentioned encoding and decoding methods or decoding methods.
  • a context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding is performed on the first ISP indication information based on a context model.
  • bypass-based binary arithmetic encoding or decoding is performed on the second ISP indication information. In this way, the number of context models required in the encoding and decoding process can be reduced, which lowers the complexity of the encoding and decoding process and reduces memory overhead.
  • FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a coding and decoding process provided by an embodiment of the present application
  • FIG. 3 is an exemplary direction corresponding to an intra prediction mode provided by an embodiment of the present application.
  • FIG. 4 is an exemplary direction corresponding to an angle mode provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of image block division according to an embodiment of the present application.
  • Fig. 6 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 8 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 10 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 13 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 15 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 16 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 37 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 39 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 40 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 41 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 42 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 43 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 44 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 45 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 46 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 47 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 48 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 49 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 50 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 51 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 52 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 53 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 54 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 55 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 56 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 57 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 58 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 59 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 60 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 61 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 62 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 63 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 64 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 65 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 66 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 67 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 68 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 69 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • FIG. 70 is a flowchart of an encoding and decoding method provided by an embodiment of the present application.
  • Figure 71 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 72 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 73 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 74 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 75 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 76 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 77 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 78 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 79 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 80 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 81 is a flowchart of an encoding method provided by an embodiment of the present application.
  • Figure 82 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 83 is a flowchart of an encoding method provided by an embodiment of the present application.
  • FIG. 84 is a flowchart of a decoding method provided by an embodiment of the present application.
  • FIG. 85 is a flowchart of an encoding mode provided by an embodiment of the present application.
  • FIG. 86 is a flowchart of an encoding mode provided by an embodiment of the present application.
  • FIG. 87 is a schematic structural diagram of an encoding end provided by an embodiment of the present application.
  • FIG. 88 is a schematic structural diagram of a decoding end provided by an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application.
  • the codec system includes an encoder 01, a decoder 02, a storage device 03 and a link 04.
  • the encoder 01 can communicate with the storage device 03, and the encoder 01 can also communicate with the decoder 02 through the link 04.
  • the decoder 02 can also communicate with the storage device 03.
  • the encoder 01 is used to obtain a data source, encode the data source, and transmit the encoded code stream to the storage device 03 for storage, or directly transmit it to the decoder 02 via the link 04.
  • the decoder 02 can obtain the code stream from the storage device 03 and decode it to obtain the data source, or decode it after receiving the code stream transmitted by the encoder 01 via the link 04 to obtain the data source.
  • the data source can be a captured image or a captured video.
  • Both the encoder 01 and the decoder 02 can be used as an electronic device alone.
  • the storage device 03 may include any of a variety of distributed or locally accessed data storage media, such as hard drives, Blu-ray discs, read-only optical discs, flash memory, or other suitable digital storage media for storing encoded data.
  • the link 04 may include at least one communication medium, and the at least one communication medium may include a wireless and/or wired communication medium, such as an RF (Radio Frequency) spectrum or one or more physical transmission lines.
  • FIG. 2 is a schematic diagram of a coding and decoding process according to an exemplary embodiment.
  • the coding process includes prediction, transformation, quantization, and entropy coding, and the decoding process includes entropy decoding, inverse quantization, inverse transformation, and prediction.
  • binary arithmetic coding and decoding techniques are usually used to code and decode current syntax elements.
  • Prediction in encoding and decoding generally includes intra-frame prediction, multi-line prediction, cross-component prediction and matrix-based intra-frame prediction, etc.
  • the intra-frame luminance candidate list, adaptive loop filtering, adaptive motion vector precision coding and decoding technology, and BDPCM (Block-based quantized residual domain Differential Pulse Code Modulation) coding and decoding technology, etc. will also be used in encoding and decoding.
  • Binary arithmetic coding refers to performing arithmetic coding on each bin (bit) after binarization of the current syntax element according to its probability model parameters to obtain the final code stream. It includes two coding methods: context-based adaptive arithmetic coding and bypass-based binary arithmetic coding.
  • CABAC: Context-based Adaptive Binary Arithmetic Coding.
  • the encoding of each symbol is related to the result of previous encoding, and codewords are adaptively allocated to each symbol according to the statistical characteristics of the symbol stream; especially for symbols with unequal probabilities of occurrence, this can further compress the bit rate.
  • Each bit of the syntax element enters the context modeler in order, and the encoder assigns an appropriate probability model to each input bit according to the previously encoded syntax element or bit. This process is called context modeling.
  • the bits and the probability model assigned to it are sent to the binary arithmetic encoder for encoding.
  • the encoder needs to update the context model according to the bit value, which is the adaptation of the encoding.
  • Bypass-based binary arithmetic coding is a binary arithmetic coding mode based on equal probability (also called the bypass coding mode). Compared with CABAC, bypass coding omits the probability update process: there is no need to adaptively update the probability state; instead, a fixed probability of 50% for both 0 and 1 is used for coding. This coding method is simpler, has low coding complexity and low memory consumption, and is suitable for symbols with equal probability.
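The adaptation step that CABAC performs and bypass coding skips can be illustrated with a toy probability model. The update rule below (simple exponential smoothing) is an illustrative assumption, not the normative CABAC state machine.

```python
class ToyContextModel:
    """Toy illustration of context adaptation: after each coded bin, the
    estimate of P(bin == 1) moves toward the observed value. Bypass coding,
    by contrast, uses a fixed P(1) = 0.5 and performs no update at all."""
    def __init__(self, p_one: float = 0.5, rate: float = 0.05):
        self.p_one = p_one   # current estimate of P(bin == 1)
        self.rate = rate     # adaptation speed (illustrative constant)

    def update(self, bin_value: int) -> None:
        # Move the estimate a small step toward the observed bin value.
        self.p_one += self.rate * (bin_value - self.p_one)
```

After a long run of 1-bins the estimate rises well above 0.5, which is what lets CABAC assign shorter codewords to the likely symbol; bypass coding forgoes this in exchange for lower complexity.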
  • Intra-frame prediction refers to using the correlation of the image space domain to predict the pixels of the current image block by using the pixels of the neighboring blocks that have been coded and reconstructed around the current image block, so as to achieve the purpose of removing the image space redundancy.
  • a variety of intra prediction modes are specified in intra prediction, and each intra prediction mode corresponds to a texture direction (except for the DC mode). For example, if the texture of the image is arranged horizontally, then selecting the horizontal prediction mode can better predict the image information.
  • for the luminance component in HEVC (High Efficiency Video Coding), each size of prediction unit corresponds to 35 intra prediction modes, containing the Planar mode, the DC mode, and 33 angle modes, as shown in Table 1.
  • Planar mode is suitable for areas where the pixel value changes slowly.
  • two linear filters in the horizontal and vertical directions can be used for filtering, and the average of the two is used as the predicted value of the current image block.
  • the DC mode is suitable for a large flat area, and the average pixel value of the neighboring blocks that have been coded and reconstructed around the current image block can be used as the predicted value of the current image block.
  • the Planar mode and the DC mode may also be called non-angle modes.
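The DC mode described above can be sketched directly: every pixel of the block is predicted by the average of the reconstructed neighbouring pixels. The rounding convention below is an assumption for this sketch.

```python
def dc_predict(top_neighbors, left_neighbors):
    """Sketch of DC intra prediction: the predicted value of every pixel in
    the current block is the rounded average of the reconstructed pixels of
    the neighbouring blocks above and to the left."""
    samples = list(top_neighbors) + list(left_neighbors)
    dc = (sum(samples) + len(samples) // 2) // len(samples)  # rounded average
    return dc
```

For a flat area, all neighbours are similar and the single DC value is a good prediction for the whole block.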
  • the intra prediction modes corresponding to the mode number 26 and the mode number 10 respectively indicate the vertical direction and the horizontal direction.
  • the intra prediction modes corresponding to mode numbers adjacent to mode number 26 are collectively referred to as vertical prediction modes, and the intra prediction modes corresponding to mode numbers adjacent to mode number 10 are collectively referred to as horizontal prediction modes; for example, the vertical prediction modes may include mode number 2 to mode number 18, and the horizontal prediction modes may include mode number 19 to mode number 34.
  • VVC: Versatile Video Coding.
  • the method used in conventional intra prediction is to use surrounding pixels to predict the current block, which removes spatial redundancy.
  • the target prediction mode used can be from the MPM (Most Probable Mode, the most probable intra prediction mode) list or the non-MPM list.
  • ISP: Intra Sub-Partitions, intra sub-block prediction.
  • the intra prediction method adopted in the ISP technology is to divide the image block into multiple sub-blocks for prediction.
  • the supported division methods include horizontal division and vertical division.
  • for the decoder, when the current block starts the ISP mode, if the size of the current block supports only one division method by default, the current block is divided according to the default division direction, and processing such as prediction, inverse transformation, and inverse quantization is performed on it; if the size of the current block supports two division methods, the division direction needs to be further parsed, then the current block is divided according to the determined division direction, and processing such as prediction, inverse transformation, and inverse quantization is performed on it.
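The ISP division step above can be sketched as a small helper that splits a block into equal horizontal strips or vertical columns. The fixed part count of 4 is an illustrative assumption (small blocks may use fewer sub-parts).

```python
def isp_sub_blocks(width: int, height: int, direction: str, num_parts: int = 4):
    """Sketch of ISP division: split the current block into equal sub-blocks
    along the chosen direction and return their (width, height) sizes."""
    if direction == "horizontal":                      # full-width strips
        return [(width, height // num_parts)] * num_parts
    return [(width // num_parts, height)] * num_parts  # side-by-side columns
```

Each returned sub-block would then be predicted, inverse-transformed, and inverse-quantized in turn, with earlier sub-blocks providing reference pixels for later ones.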
  • the method adopted in the MRL technology is to predict based on the reference pixels of the current block, and the reference pixels can come from adjacent rows of the current block.
  • the reference pixels may come from Reference line 0 (line 0), Reference line 1 (line 1), Reference line 2 (line 2) and Reference line 3 (line 3) as shown in FIG. 5.
  • the 0th line is the line adjacent to the current block boundary
  • the 1st line is the second adjacent line to the current block boundary
  • the 2nd line is the line adjacent to the 1st line, and the 3rd line is the second line from the 1st line.
  • reference pixels come from Reference line 0, Reference line 1 and Reference line 3, while Reference line 2 is not used.
  • the line may be a line on the upper side of the current block, or a column on the left side of the current block.
  • the number of MPMs in HEVC is 3, and the number of MPMs in current VVC is 6.
• when the multi-line prediction mode is used, the intra prediction mode must come from the MPM list; for conventional intra prediction, the intra prediction mode may come from the MPM list or the non-MPM list.
• CCLM (Cross-Component Linear Model): cross-component prediction
• the method adopted in the CCLM technology is to use a linear prediction model: the predicted pixel values of the chrominance component are obtained from the reconstructed pixel values of the luminance component through a linear equation, which removes redundancy between image components and further improves coding performance.
• MDLM-L is a cross-component prediction mode that derives the linear model parameters using only the left template information;
• MDLM-T is a cross-component prediction mode that derives the linear model parameters using only the upper template information.
• DM uses the same prediction mode for chrominance as is used for luminance.
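The CCLM/MDLM family above boils down to fitting chroma = a * luma + b from template samples and applying it to the reconstructed luma. A floating-point sketch (real codecs use an integer max/min derivation; the function names here are illustrative):

```python
def derive_linear_params(tmpl_luma, tmpl_chroma):
    """Fit a line through the template samples with the smallest and
    largest luma values, in the spirit of the VVC derivation but
    without its fixed-point arithmetic."""
    lo = min(range(len(tmpl_luma)), key=lambda i: tmpl_luma[i])
    hi = max(range(len(tmpl_luma)), key=lambda i: tmpl_luma[i])
    if tmpl_luma[hi] == tmpl_luma[lo]:
        return 0.0, sum(tmpl_chroma) / len(tmpl_chroma)
    a = (tmpl_chroma[hi] - tmpl_chroma[lo]) / (tmpl_luma[hi] - tmpl_luma[lo])
    return a, tmpl_chroma[lo] - a * tmpl_luma[lo]

def cclm_predict(rec_luma, a, b):
    """Predict chroma samples from co-located reconstructed luma."""
    return [a * l + b for l in rec_luma]
```

MDLM-L and MDLM-T differ only in which template (left or upper) feeds `derive_linear_params`.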
• the adaptive loop filter (ALF) can select a filter from a fixed filter set according to the block's own gradient direction for filtering, and a CTU-level flag can indicate whether ALF filtering is enabled for the block; chrominance and luminance can be controlled separately.
• AMVR (Adaptive Motion Vector Resolution)
  • AMVR is used to indicate that different precisions can be used when performing motion vector difference coding.
  • the precision used can be integer pixel precision, such as 4 pixel precision, or non-integer pixel precision, such as 1/16 pixel precision.
• This technology can be applied to motion vector difference coding in conventional inter prediction, and can also be used for motion vector difference coding in affine prediction mode.
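The precision choice above amounts to rounding the MVD, stored in 1/16-pel units, to the selected grid before coding. A sketch (the exact rounding rule is fixed by the standard; round-half-away-from-zero is assumed here):

```python
def round_mvd(mvd_sixteenth, precision_shift):
    """Round an MVD component stored in 1/16-pel units to the selected
    AMVR precision: shift 0 -> 1/16 pel, 2 -> 1/4 pel,
    4 -> integer pel, 6 -> 4 pel."""
    if precision_shift == 0:
        return mvd_sixteenth
    offset = 1 << (precision_shift - 1)
    mag = abs(mvd_sixteenth)
    rounded = ((mag + offset) >> precision_shift) << precision_shift
    return rounded if mvd_sixteenth >= 0 else -rounded
```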
• the matrix-based intra prediction technology determines the predicted pixel values of the current block by taking the upper and left adjacent pixels of the current block as reference pixels, feeding them into a matrix-vector multiplication and adding an offset value.
• BDPCM means that during prediction, the pixel value of the corresponding reference pixel is directly copied in the vertical direction, or copied in the horizontal direction, similar to vertical and horizontal prediction; the residuals between the predicted pixels and the original pixels are then quantized, and the quantized residuals are differentially coded.
• r(i,j), 0≤i≤M-1, 0≤j≤N-1, denotes the prediction residual
• Q(r(i,j)), 0≤i≤M-1, 0≤j≤N-1, denotes the quantized residual obtained by quantizing the prediction residual r(i,j); differential coding is then performed on the quantized residuals Q(r(i,j)) to obtain the differential coding result
  • the inverse accumulation process is used to obtain the quantized residual data.
  • the quantized residual is dequantized and added to the predicted value to obtain the reconstructed pixel value.
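The differential coding and inverse accumulation described above can be sketched directly on the quantized residual array Q(r(i,j)) (the function names are illustrative):

```python
def bdpcm_encode(q, vertical):
    """Differentially code quantized residuals Q(r[i][j]): in vertical
    mode each row is predicted from the row above, in horizontal mode
    each column from the column to the left."""
    M, N = len(q), len(q[0])
    d = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if vertical:
                d[i][j] = q[i][j] - (q[i - 1][j] if i > 0 else 0)
            else:
                d[i][j] = q[i][j] - (q[i][j - 1] if j > 0 else 0)
    return d

def bdpcm_decode(d, vertical):
    """Inverse accumulation: running sums recover Q(r[i][j])."""
    M, N = len(d), len(d[0])
    q = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if vertical:
                q[i][j] = d[i][j] + (q[i - 1][j] if i > 0 else 0)
            else:
                q[i][j] = d[i][j] + (q[i][j - 1] if j > 0 else 0)
    return q
```

The decoded Q(r(i,j)) is then dequantized and added to the prediction, as stated above.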
• JCCR (Joint Coding of Chrominance Residuals)
• JCCR is a joint coding method for the CB (blue chroma) and CR (red chroma) components. Observing the distribution of chroma residuals, it is not difficult to find that CB and CR tend to be negatively correlated, so JCCR exploits this phenomenon to jointly code CB and CR: for example, only (CB-CR)/2, the mean of the CB and negated CR components, needs to be coded.
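For the joint mode described above, a minimal sketch: one joint residual is coded, and both chroma residuals are reconstructed from it under the assumed CR ≈ -CB relation (the names are illustrative; VVC defines several joint modes, only this one is shown):

```python
def jccr_encode(res_cb, res_cr):
    """Code a single joint residual (CB - CR) / 2 per sample."""
    return [(cb - cr) / 2 for cb, cr in zip(res_cb, res_cr)]

def jccr_decode(joint):
    """Reconstruct under the negative-correlation assumption:
    CB = joint, CR = -joint."""
    return list(joint), [-j for j in joint]
```

When the residuals are exactly negatively correlated, the reconstruction is lossless; otherwise JCCR trades a small distortion for coding only one residual block.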
• different syntax elements need to be transmitted between the encoding end and the decoding end, more context models are required to transmit these syntax elements, the encoding and decoding process is complex, and the memory overhead is large.
  • the present application provides a coding and decoding method that can reduce the number of required context models, thereby reducing the complexity of the coding and decoding process and the memory overhead.
  • the encoding and decoding methods of the embodiments of the present application will be introduced respectively with respect to the foregoing prediction modes and encoding and decoding technologies.
  • the syntax elements that need to be transmitted between the decoding end and the encoding end may include the first ISP indication information and the second ISP indication information.
  • the first ISP indication information is used to indicate whether to start the intra-frame sub-block prediction mode.
• the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
• for example, the first ISP indication information is intra_subpartitions_mode_flag
• and the second ISP indication information is intra_subpartitions_split_flag.
• Fig. 6 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 6, the method includes:
  • Step 601 When it is determined to encode the first ISP indication information, perform context-based adaptive binary arithmetic coding on the first ISP indication information based on a context model, and the first ISP indication information is used to indicate whether to start intra sub-block prediction mode.
• the current block can try to use the sub-block division technology, and the encoder can finally decide whether to use it through RDO (Rate Distortion Optimization), and then encode the first ISP indication information, which is used to indicate whether the current block starts the intra sub-block prediction mode.
  • the conditions for supporting the sub-block division technology include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
  • the conditions for supporting the sub-block division technology are not limited to the above three conditions, and may also include other conditions.
  • the first ISP indication information is intra_subpartitions_mode_flag
  • intra_subpartitions_mode_flag is a flag bit indicating whether the current block starts the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
  • Step 602 When it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding on the second ISP indication information.
  • the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
  • the sub-block division method includes a horizontal division direction and a vertical division direction.
  • the final division direction needs to be determined, and based on the division direction used, the encoding of the second ISP indication information is continued.
  • the current block supports only one division direction, there is no need to continue coding the second ISP indication information.
  • the second ISP indication information may be intra_subpartitions_split_flag, and intra_subpartitions_split_flag is a flag bit indicating the sub-block division mode of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, it means that the sub-block division method of the ISP mode of the current block is horizontal division; when intra_subpartitions_split_flag is 1, it means that the sub-block division mode of the ISP mode of the current block is vertical division.
• the coding mode of the second ISP indication information in the related technology is modified: the bypass coding mode replaces the complex CABAC coding mode.
• in this way, the memory overhead and the coding complexity are reduced, while in terms of coding performance, the performance remains basically unchanged.
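The split in steps 601-602 — one context-coded flag, one bypass-coded flag — can be sketched with a stub coder that only records the coding mode of each bin (the class and method names are illustrative, not a real CABAC API):

```python
class RecordingCoder:
    """Stands in for a CABAC engine; records each bin's coding mode."""
    def __init__(self):
        self.calls = []

    def encode_context_bin(self, ctx, value):
        self.calls.append(("context", ctx, value))

    def encode_bypass_bin(self, value):
        self.calls.append(("bypass", value))

def encode_isp_flags(coder, use_isp, supports_both_dirs, split_vertical):
    # intra_subpartitions_mode_flag: coded with one context model.
    coder.encode_context_bin("isp_mode_flag_ctx", int(use_isp))
    # intra_subpartitions_split_flag: bypass-coded, and only when
    # ISP is on and the block supports both division directions.
    if use_isp and supports_both_dirs:
        coder.encode_bypass_bin(int(split_vertical))
```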
  • FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method provided in the embodiment of FIG. 6. As shown in FIG. 7, the method includes:
  • Step 701 When it is determined to decode the first ISP indication information, perform context-based adaptive binary arithmetic decoding on the first ISP indication information based on a context model, and the first ISP indication information is used to indicate whether to start intra sub-block prediction mode.
  • the coded stream of the current block may be received first, and if the current block meets the parsing condition, the first ISP indication information in the coded stream is decoded to analyze whether the current block starts the intra sub-block prediction mode.
  • the analysis conditions include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
  • the analysis conditions are not limited to the above three conditions, and may also include other conditions.
  • the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
  • Step 702 When it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding on the second ISP indication information.
  • the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
• when the first ISP indication information indicates that the current block starts the intra sub-block prediction mode (intra_subpartitions_mode_flag is 1) and the current block supports two division directions, intra_subpartitions_split_flag is further parsed to determine the division direction;
• when intra_subpartitions_mode_flag is 0, the intra sub-block prediction mode is not started and intra_subpartitions_split_flag is not parsed;
• when intra_subpartitions_mode_flag is 1 but the current block only supports a certain fixed division direction, there is no need to parse the flag bit indicating the division direction.
  • the decoder can determine whether the current block starts the ISP mode and the corresponding division direction, thereby predicting the current block based on the determined division direction, and obtain the predicted value of the current block for the subsequent reconstruction process.
  • Fig. 8 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 8, the method includes:
  • Step 801 When it is determined to encode the first ISP indication information, perform bypass-based binary arithmetic coding on the first ISP indication information.
  • the first ISP indication information is used to indicate whether to start the intra sub-block prediction mode.
• the current block can try to use the sub-block division technology, and the encoder can finally decide whether to use it through RDO, and perform the encoding of the first ISP indication information.
  • the first ISP indication information is used to indicate whether the current block starts the intra sub-block prediction mode.
  • the conditions for supporting the sub-block division technology include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
  • the conditions for supporting the sub-block division technology are not limited to the above three conditions, and may also include other conditions.
  • the first ISP indication information is intra_subpartitions_mode_flag
  • intra_subpartitions_mode_flag is a flag bit indicating whether the current block starts the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
  • Step 802 When it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding on the second ISP indication information, and the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
  • the sub-block division method includes a horizontal division direction and a vertical division direction.
  • the final division direction needs to be determined, and based on the division direction used, the encoding of the second ISP indication information is continued.
  • the current block supports only one division direction, there is no need to continue coding the second ISP indication information.
  • the second ISP indication information may be intra_subpartitions_split_flag, and intra_subpartitions_split_flag is a flag bit indicating the sub-block division mode of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, it means that the sub-block division method of the ISP mode of the current block is horizontal division; when intra_subpartitions_split_flag is 1, it means that the sub-block division mode of the ISP mode of the current block is vertical division.
• the encoding modes of the intra_subpartitions_mode_flag flag bit and the intra_subpartitions_split_flag flag bit in the related technology are modified: the bypass encoding mode replaces the original complex CABAC encoding mode.
• in this way, the memory overhead and the coding complexity can be further reduced, while in terms of coding performance, the performance remains basically unchanged.
  • FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method provided in the embodiment of FIG. 8. As shown in FIG. 9, the method includes:
  • Step 901 When it is determined to decode the first ISP indication information, perform bypass-based binary arithmetic decoding on the first ISP indication information.
  • the first ISP indication information is used to indicate whether to start the intra sub-block prediction mode.
  • the coded stream of the current block may be received first, and if the current block meets the parsing condition, the first ISP indication information in the coded stream is decoded to analyze whether the current block starts the intra sub-block prediction mode.
  • the analysis conditions include: the current block is a luminance block, the current block has not activated the multi-line prediction mode, and the size of the current block meets certain restriction conditions.
  • the analysis conditions are not limited to the above three conditions, and may also include other conditions.
  • the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, it means that the intra sub-block prediction mode is not activated for the current block, and if intra_subpartitions_mode_flag is 1, it means that the intra sub-block prediction mode is activated for the current block.
  • Step 902 When it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding on the second ISP indication information.
  • the second ISP indication information is used to indicate the sub-block division mode of the intra-frame sub-block prediction mode.
• when the first ISP indication information indicates that the current block starts the intra sub-block prediction mode (intra_subpartitions_mode_flag is 1) and the current block supports two division directions, intra_subpartitions_split_flag is further parsed to determine the division direction;
• when intra_subpartitions_mode_flag is 0, the intra sub-block prediction mode is not started and intra_subpartitions_split_flag is not parsed;
• when intra_subpartitions_mode_flag is 1 but the current block only supports a certain fixed division direction, there is no need to parse the flag bit indicating the division direction.
  • the decoder can determine whether the current block starts the ISP mode and the corresponding division direction, thereby predicting the current block based on the determined division direction, and obtain the predicted value of the current block for the subsequent reconstruction process.
  • FIG. 10 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 10, the method includes the following steps:
• Step 1001 If the width and height of the current block are M*N, with M less than 64 and N less than 64, the current block does not support the multi-line prediction mode.
• the syntax elements that need to be transmitted between the decoding end and the encoding end may include reference line indication information, which is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode.
  • the reference row indication information is intra_luma_ref_idx.
• the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, and these 2 bits need to be encoded and decoded using two different context models, as shown in Table 5 and Table 6 below:
• First bin (first bit position): MultiRefLineIdx(0), the first context model
• Second bin (second bit position): MultiRefLineIdx(1), the second context model
• the first bin refers to the first bit of the reference line indication information, which needs to be encoded and decoded based on the first context model;
• the second bin refers to the second bit of the reference line indication information, which needs to be encoded and decoded based on the second context model; the first context model is different from the second context model.
• if the index information indicated by the reference line indication information is 0, the target reference line is line 0; if it is 1, the target reference line is line 1; if it is 2, the target reference line is line 3.
  • the row described in the embodiment of the present application may be the row above the current block or the column on the left side of the current block.
• FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 11, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, then the method includes:
  • Step 1101 Based on a context model, perform context-based adaptive binary arithmetic coding on the first bit of the reference row indication information.
• it can be determined whether the current block meets the conditions for supporting the multi-line prediction technology; if the current block meets these conditions, it is determined that the current block can try to use each reference line for encoding.
  • the encoding end can determine the source of the final reference pixel through RDO, and encode the reference row index information into the encoding stream.
  • the conditions for supporting the multi-line prediction technology include: the current block is a luma intra-frame block, and the size of the current block meets certain restriction conditions, and the current block does not include the first line of the coding tree unit.
  • the conditions for supporting the multi-line prediction technology are not limited to the above three conditions, and other conditions may also be included.
  • the reference line indication information can be coded according to specific conditions.
  • the reference row indication information may be intra_luma_ref_idx.
  • the row described in the embodiment of the present application may be the row above the current block or the column on the left side of the current block.
  • Step 1102 When it is necessary to encode the second bit of the reference line indication information, perform bypass-based binary arithmetic coding on the second bit of the reference line indication information.
• if the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits,
• then the first of these 2 bits can be coded using a context model, and the second bit can be coded based on the bypass coding mode. In this way, only one context model is needed to code all the bits of the reference line indication information, which reduces the number of context models used, thereby reducing coding complexity and memory consumption, while the coding performance changes little.
  • the context model used by the reference row indication information can be shown in Table 8 and Table 9 below:
• First bin: MultiRefLineIdx(0), the first context model; Second bin: no context model, bypass coding is used
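The scheme of Tables 8 and 9 (and its 3-bit extension in the 4-candidate case) is a truncated-unary binarization in which only the first bin is context-coded. A sketch with a recording stub (class and method names are illustrative):

```python
class BinRecorder:
    """Stands in for a CABAC engine; records each bin as emitted."""
    def __init__(self):
        self.bins = []

    def context_bin(self, ctx, value):
        self.bins.append((ctx, value))

    def bypass_bin(self, value):
        self.bins.append(("bypass", value))

def encode_ref_line_idx(coder, idx, num_candidates):
    """Truncated-unary coding of the reference line index: the first
    bin uses the single context model MultiRefLineIdx(0); any further
    bins are bypass-coded. With 3 candidates the index takes at most
    2 bins, with 4 candidates at most 3 bins."""
    max_bins = num_candidates - 1
    for pos in range(max_bins):
        b = 1 if idx > pos else 0
        if pos == 0:
            coder.context_bin("MultiRefLineIdx(0)", b)
        else:
            coder.bypass_bin(b)
        if b == 0:
            break
```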
• FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is the decoding method corresponding to the encoding method shown in FIG. 11. As shown in FIG. 12, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, then the method includes:
  • Step 1201 Based on a context model, perform context-based adaptive binary arithmetic decoding on the first bit of the reference row indication information.
  • Step 1202 When the second bit of the reference line indication information needs to be decoded, perform bypass-based binary arithmetic decoding on the second bit of the reference line indication information.
  • the target reference line used when predicting the current block based on the multi-line prediction mode can be determined based on the reference line indication information, and then the target reference line is used to predict the current block.
  • the coded stream of the current block may be received first, and if the current block meets the parsing condition, the reference line indication information is decoded to determine the source of the reference pixels of the current block.
  • the analysis conditions include: the current block is a luma intra-frame block, the size of the current block meets certain conditions, and the current block is not the first row of the coding tree unit.
  • the analysis conditions are not limited to the above three conditions, and may also include other conditions.
  • intra_luma_ref_idx needs to be parsed, so as to determine the reference pixels of the current block according to the value of intra_luma_ref_idx, so as to obtain the predicted value of the current block for the subsequent reconstruction process.
• the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, and these 3 bits need to be encoded and decoded using 3 different context models, as shown in Table 10 and Table 11 below:
  • the first bin refers to the first bit of the reference row indication information, which needs to be coded and decoded based on the first context model
• the second bin refers to the second bit of the reference line indication information, which needs to be encoded and decoded based on the second context model;
• the third bin refers to the third bit of the reference line indication information, which needs to be encoded and decoded based on the third context model; these three context models are all different.
  • index information of the target reference row and the row number of the corresponding target reference row are shown in Table 12:
• if the index information indicated by the reference line indication information is 0, the target reference line is line 0; if it is 1, the target reference line is line 1; if it is 2, the target reference line is line 2; if it is 3, the target reference line is line 3.
• FIG. 13 is a flowchart of an encoding method provided by an embodiment of the application, which is applied to the encoding end. As shown in FIG. 13, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 4, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, then the method includes:
  • Step 1301 Based on a context model, perform context-based adaptive binary arithmetic coding on the first bit of the reference row indication information.
• it can be determined whether the current block meets the conditions for supporting the multi-line prediction technology; if the current block meets these conditions, it is determined that the current block can try to use each reference line for encoding.
  • the encoding end can determine the source of the final reference pixel through RDO, and encode the reference row index information into the encoding stream.
  • the conditions for supporting the multi-line prediction technology include: the current block is a luma intra-frame block, and the size of the current block meets certain restriction conditions, and the current block is not the first line of the coding tree unit.
  • the conditions for supporting the multi-line prediction technology are not limited to the above three conditions, and may also include other conditions.
  • the reference line indication information can be coded according to specific conditions.
  • the reference row indication information may be intra_luma_ref_idx.
  • Step 1302 When the second bit of the reference line indication information needs to be coded, perform bypass-based binary arithmetic coding on the second bit of the reference line indication information.
  • Step 1303 When the third bit of the reference line indication information needs to be coded, perform bypass-based binary arithmetic coding on the third bit of the reference line indication information.
• if the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits,
• then the first of these 3 bits can be coded using a context model, and the second and third bits can be coded based on the bypass coding mode. In this way, only one context model is needed to code all the bits of the reference line indication information, which reduces the number of context models used, thereby reducing coding complexity and memory consumption, while the coding performance changes little.
• the context model used by the reference line indication information can be as shown in Table 13 and Table 14 below:
• First bin: MultiRefLineIdx(0), the first context model; Second bin: no context model, bypass coding is used; Third bin: no context model, bypass coding is used
• FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application. It is applied to the decoding end and is the decoding method corresponding to the encoding method described in the embodiment of FIG. 13. As shown in FIG. 14, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 4, and the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, then the method includes:
  • Step 1401 Based on a context model, perform context-based adaptive binary arithmetic decoding on the first bit of the reference row indication information.
  • Step 1402 When it is necessary to decode the second bit of the reference line indication information, perform bypass-based binary arithmetic decoding on the second bit of the reference line indication information.
  • Step 1403 When the third bit of the reference line indication information needs to be decoded, perform bypass-based binary arithmetic decoding on the third bit of the reference line indication information.
  • the coded stream of the current block may be received first, and if the current block meets the parsing condition, the reference line indication information is decoded to determine the source of the reference pixels of the current block.
  • the analysis conditions include: the current block is a luma intra-frame block, the size of the current block meets certain conditions, and the current block is not the first row of the coding tree unit.
  • the analysis conditions are not limited to the above three conditions, and may also include other conditions.
  • intra_luma_ref_idx needs to be parsed, so as to determine the reference pixels of the current block according to the value of intra_luma_ref_idx, so as to obtain the predicted value of the current block for the subsequent reconstruction process.
• FIG. 15 is a flowchart of a coding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 15, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the candidate reference line with index information 0 is line 0, the candidate reference line with index information 1 is line 1, and the candidate reference line with index information 2 is line 2, then the method includes:
  • Step 1501 When predicting the current block according to the multi-line prediction mode, predict the current block according to the target reference line, and the target reference line is determined according to the reference line indication information.
• if the index information indicated by the reference line indication information is 0, the target reference line is line 0; if it is 1, the target reference line is line 1; if it is 2, the target reference line is line 2.
  • index information indicated by the reference row indication information and the corresponding target reference row may be as shown in Table 15 below:
  • the three nearest rows and three columns may be selected as candidates for the target reference row. That is, the target reference row is a row selected from the candidate reference rows, where the number of candidate reference rows corresponding to the multi-row prediction mode is 3, and the three rows and three columns closest to the current block boundary are used as candidate reference rows.
• FIG. 16 is a flowchart of a coding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 16, if it is determined that the current block supports the multi-line prediction mode, the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the candidate reference line with index information 0 is line 0, the candidate reference line with index information 1 is line 2, and the candidate reference line with index information 2 is line 3, then the method includes:
  • Step 1601 When predicting the current block according to the multi-line prediction mode, predict the current block according to the target reference line, and the target reference line is determined according to the reference line indication information.
  • When the index information indicated by the reference line indication information is 0, the target reference line is line 0; when it is 1, the target reference line is line 2; when it is 2, the target reference line is line 3.
  • The correspondence between the index information indicated by the reference line indication information and the target reference line may be as shown in Table 16 below:
  • In this embodiment, line 0, line 2, and line 3 are selected as the candidate reference lines for the target reference line.
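  • The two candidate sets above can be expressed as simple lookup tables. The following is a hypothetical sketch (names are illustrative, not syntax from this application) of resolving the reference line indication information to a target reference line, where line 0 is the line adjacent to the boundary of the current block:

```python
# FIG. 15 scheme: the three lines nearest the current block are candidates.
NEAREST_THREE = {0: 0, 1: 1, 2: 2}

# FIG. 16 scheme: lines 0, 2 and 3 are the candidate reference lines.
LINES_0_2_3 = {0: 0, 1: 2, 2: 3}

def target_reference_line(index_info, mapping):
    """Resolve reference line indication information to the target line."""
    if index_info not in mapping:
        raise ValueError("index information out of range for this candidate set")
    return mapping[index_info]
```

  • For example, index information 1 resolves to line 1 under the first scheme but to line 2 under the second, which is why the number of candidate lines and their mapping must be agreed between the encoding end and the decoding end.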
  • FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 17, if it is determined that the current block supports the multi-line prediction mode, the method includes:
  • Step 1701 Before predicting the current block according to the multi-line prediction mode, encode the line number indication information according to the number of candidate reference lines corresponding to the multi-line prediction mode.
  • The line number indication information is used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode.
  • Step 1702 Encode the reference line indication information according to the target reference line used when predicting the current block based on the multi-line prediction mode.
  • The reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode.
  • Step 1703 Predict the current block according to the target reference line.
  • In this way, line number indication information that can indicate the number of candidate reference lines corresponding to the multi-line prediction mode is added, so that the number of candidate reference lines used by the multi-line prediction mode can be selected.
  • The line number indication information may exist at the sequence parameter set (SPS) level, the picture parameter level, the slice level, or the tile level.
  • For example, the line number indication information exists in the sequence parameter set; that is, a syntax element for indicating the number of candidate reference lines corresponding to the multi-line prediction mode can be added at the SPS level.
  • FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. As shown in FIG. 18, if it is determined that the current block supports the multi-line prediction mode, the method includes:
  • Step 1801 Before predicting the current block according to the multi-line prediction mode, decode the line number indication information, where the line number indication information is used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode.
  • Step 1802 Determine the number of candidate reference rows corresponding to the multi-row prediction mode according to the row number indication information.
  • Step 1803 Determine the target reference line according to the number of candidate reference lines corresponding to the multi-line prediction mode and the reference line indication information.
  • The reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode.
  • Step 1804 Predict the current block according to the target reference line.
  • In this way, line number indication information that can indicate the number of candidate reference lines corresponding to the multi-line prediction mode is added, so that the number of candidate reference lines used by the multi-line prediction mode can be selected.
  • The line number indication information may exist at the sequence parameter set (SPS) level, the picture parameter level, the slice level, or the tile level.
  • For example, the line number indication information exists in the sequence parameter set; that is, a syntax element for indicating the number of candidate reference lines corresponding to the multi-line prediction mode can be added at the SPS level.
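  • The signalling of FIG. 17 and FIG. 18 can be sketched end to end as follows. This is a hypothetical illustration, with function and variable names that are assumptions rather than syntax from this application: the encoding end derives the line number indication information (e.g. carried at the SPS level) and the per-block reference line indication information, and the decoding end uses both to determine the target reference line.

```python
def encode_reference_line_syntax(candidates, target_line):
    """Return (line number indication, reference line indication)."""
    num_candidates = len(candidates)            # line number indication info
    index_info = candidates.index(target_line)  # reference line indication info
    return num_candidates, index_info

def decode_reference_line_syntax(num_candidates, index_info, candidates):
    """Recover the target reference line on the decoding end."""
    assert len(candidates) == num_candidates
    return candidates[index_info]
```

  • For example, with the candidate set of lines 0, 2, and 3, encoding target line 2 produces index information 1, which the decoding end maps back to line 2.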
  • the syntax elements that need to be transmitted between the decoding end and the encoding end may include the first AMVR indication information and the second AMVR indication information.
  • The first AMVR indication information is used to indicate whether to start the AMVR mode, and the second AMVR indication information is used to indicate the index information of the pixel precision used when performing motion vector difference encoding or decoding in the AMVR mode.
  • For example, the first AMVR indication information is amvr_flag, and the second AMVR indication information is amvr_precision_flag.
  • the non-affine prediction mode refers to the prediction modes other than the affine prediction mode.
  • In some schemes, the first AMVR indication information and the second AMVR indication information require a total of 4 context models for encoding and decoding, as shown in Tables 17 and 18 below:
  • When the current block starts the non-affine prediction mode, the first AMVR indication information needs to be subjected to context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding based on the third context model, and when the first AMVR indication information indicates that the current block starts the AMVR mode, the second AMVR indication information needs to be subjected to context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding based on the fourth context model. Correspondingly, when the current block starts the affine prediction mode, the first context model and the second context model are used for the two pieces of AMVR indication information.
  • FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 19, the method includes:
  • Step 1901 If the current block starts the affine prediction mode or starts other prediction modes except the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the AMVR mode, then when coding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
  • Step 1902 When the first AMVR indication information indicates that the AMVR mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the second AMVR indication information based on the second context model.
  • the first context model and the second context model are different.
  • That is, the current block can try multiple motion vector precisions for encoding.
  • During implementation, the encoding end can decide whether to start AMVR and the adopted motion vector precision through RDO, and encode the corresponding syntax information into the encoded stream.
  • the conditions for using the adaptive motion vector accuracy include: the current block is an inter-frame prediction block, and the current block motion information includes a non-zero motion vector difference.
  • the conditions for using the adaptive motion vector accuracy are not limited to the above conditions, and other conditions may also be included.
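  • The stated conditions can be expressed as a minimal check. This sketch covers only the two conditions named above (inter-frame prediction block, non-zero motion vector difference); as noted, other conditions may also apply and are omitted here, and the names are illustrative.

```python
def amvr_condition_met(is_inter_block, mvd):
    """mvd is the (horizontal, vertical) motion vector difference."""
    mvd_x, mvd_y = mvd
    return is_inter_block and (mvd_x != 0 or mvd_y != 0)
```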
  • FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 19. As shown in FIG. 20, the method includes:
  • Step 2001 If the current block starts the affine prediction mode or starts other prediction modes except the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the AMVR mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
  • During implementation, the decoding end may first receive the encoded stream of the current block, and if it is determined that the current block meets the parsing conditions, parse the first AMVR indication information to determine whether the current block starts AMVR, that is, whether the adaptive motion vector precision technology is used.
  • The parsing conditions include: the current block is an inter-frame prediction block, and the motion information of the current block includes a non-zero motion vector difference.
  • The parsing conditions are not limited to the above conditions, and other conditions may also be included.
  • Step 2002 When the first AMVR indication information indicates that the AMVR mode is activated for the current block, perform context-based adaptive binary arithmetic decoding on the second AMVR indication information based on the second context model.
  • the first context model and the second context model are different.
  • In this case, the second AMVR indication information needs to be further parsed to determine the motion vector precision used.
  • the decoding end can uniquely determine the motion vector accuracy of the motion information of the current block, thereby obtaining the prediction value of the current block, which is used in the subsequent reconstruction process.
  • the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in Table 19 below:
  • That is, the AMVR indication information can share the context models between the affine prediction mode and the non-affine prediction mode. In this way, the context models required under AMVR can be reduced to two, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
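  • The sharing scheme of FIG. 19 and FIG. 20 (Table 19) can be sketched as follows, with illustrative names: the affine and non-affine cases share one context model per AMVR indication information, so only two models are required instead of four.

```python
def shared_amvr_context(is_affine, is_first_flag):
    # The prediction mode no longer selects the model; only the flag does.
    return 0 if is_first_flag else 1

MODELS_REQUIRED = len({shared_amvr_context(a, f)
                       for a in (True, False) for f in (True, False)})
```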
  • FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 21, the method includes:
  • Step 2101 If the current block starts the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the AMVR mode, then when coding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model, and when the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic coding on the second AMVR indication information.
  • Step 2102 If the current block starts other prediction modes except the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the AMVR mode, then when coding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the second context model, and when the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic coding on the second AMVR indication information.
  • FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 21. As shown in FIG. 22, the method includes:
  • Step 2201 If the current block starts the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the AMVR mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model, and when the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
  • Step 2202 If the current block starts other prediction modes except the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the AMVR mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the second context model, and when the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
  • the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in Table 20 below:
  • In this embodiment, the second AMVR indication information is modified to perform bypass-based binary arithmetic coding or decoding. In this way, the context models required under AMVR are reduced to two, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
  • FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 23, the method includes:
  • Step 2301 If the current block starts the affine prediction mode or starts other prediction modes except the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the adaptive motion vector precision (AMVR) mode, then when encoding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
  • Step 2302 When the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic coding on the second AMVR indication information.
  • FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 23. As shown in FIG. 24, the method includes:
  • Step 2401 If the current block starts the affine prediction mode or starts other prediction modes except the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the adaptive motion vector precision (AMVR) mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
  • Step 2402 When the first AMVR indication information indicates that the current block starts the AMVR mode, perform bypass-based binary arithmetic decoding on the second AMVR indication information.
  • the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in the following Table 21:
  • In this embodiment, the first AMVR indication information shares one context model between the affine prediction mode and the non-affine prediction mode, and the second AMVR indication information is modified to perform bypass-based binary arithmetic coding or decoding. In this way, the context models required under AMVR can be reduced to one, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
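  • The scheme of FIG. 23 and FIG. 24 (Table 21) can be sketched as follows, with illustrative names: the first AMVR indication information uses a single context model shared across the affine and non-affine cases, while the second AMVR indication information is bypass coded with no context model, leaving one model in total.

```python
def amvr_coding_mode(is_affine, is_first_flag):
    """Return ('context', model_index) or ('bypass', None)."""
    if is_first_flag:
        return ('context', 0)   # one shared context model
    return ('bypass', None)     # equal-probability coding, no model needed

MODELS_REQUIRED = len({mode[1] for mode in
                       (amvr_coding_mode(a, f)
                        for a in (True, False) for f in (True, False))
                       if mode[0] == 'context'})
```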
  • an encoding method is also provided, which is applied to the encoding end, and the method includes:
  • Step 1 If the current block starts the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the AMVR mode, then when coding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the first context model.
  • When the first AMVR indication information indicates that the current block starts the AMVR mode, the second AMVR indication information is subjected to context-based adaptive binary arithmetic coding based on the second context model.
  • Step 2 If the current block starts other prediction modes except the affine prediction mode, when performing the motion vector difference coding of the current block, if the current block supports the AMVR mode, then when coding the first AMVR indication information, perform context-based adaptive binary arithmetic coding on the first AMVR indication information based on the third context model.
  • When the first AMVR indication information indicates that the current block starts the AMVR mode, the second AMVR indication information is subjected to context-based adaptive binary arithmetic coding based on the second context model.
  • the first context model, the second context model, and the third context model are different.
  • a decoding method is also provided, which is applied to the decoding end, and the method is a decoding method corresponding to the foregoing encoding method, and the method includes the following steps:
  • Step 1 If the current block starts the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the AMVR mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the first context model.
  • When the first AMVR indication information indicates that the current block starts the AMVR mode, the second AMVR indication information is subjected to context-based adaptive binary arithmetic decoding based on the second context model.
  • Step 2 If the current block starts other prediction modes except the affine prediction mode, when performing the motion vector difference decoding of the current block, if the current block supports the AMVR mode, then when decoding the first AMVR indication information, perform context-based adaptive binary arithmetic decoding on the first AMVR indication information based on the third context model.
  • When the first AMVR indication information indicates that the current block starts the AMVR mode, the second AMVR indication information is subjected to context-based adaptive binary arithmetic decoding based on the second context model.
  • the first context model, the second context model, and the third context model are different.
  • the context models used by the first AMVR indication information and the second AMVR indication information may be as shown in the following Table 22:
  • In this embodiment, the second AMVR indication information can share one context model between the affine prediction mode and the non-affine prediction mode.
  • In this way, the context models required in the AMVR mode can be reduced to three, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
  • When the current block is a luminance block, prediction mode index information needs to be transmitted between the encoding end and the decoding end.
  • the prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the MPM list.
  • The encoding end and the decoding end store a most probable mode (MPM) list of intra prediction modes, and the conventional intra prediction mode, the intra sub-block prediction mode, and the multi-line prediction mode can share the MPM list.
  • In some schemes, when the reference line of the target prediction mode of the current block is an adjacent line of the current block, two different context models are required, and context-based adaptive binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information, where the specific context model used depends on whether the current block starts the intra sub-block prediction mode.
  • The prediction mode index information is intra_luma_mpm_idx. When intra_luma_ref_idx is equal to 0, it indicates that the reference line of the target prediction mode of the current block is an adjacent line of the current block, that is, the current block does not start the multi-line prediction mode; when intra_luma_ref_idx is not equal to 0, it indicates that the reference line of the target prediction mode of the current block is not an adjacent line of the current block, that is, the current block has started the multi-line prediction mode.
  • When intra_luma_ref_idx is equal to 0, the first bit of intra_luma_mpm_idx needs to select a context model from two different context models for encoding and decoding according to whether the intra sub-block prediction mode is started for the current block.
  • When intra_luma_ref_idx is not equal to 0, it means that the current block starts the multi-line prediction mode, and the target prediction mode of the multi-line prediction mode started by the current block also comes from the MPM list.
  • FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 25, the method includes the following steps:
  • Step 2501 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction started by the current block.
  • Step 2502 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction started by the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic coding.
  • Step 2503 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, if the target prediction mode started by the current block comes from the MPM list, determine the index information of the target prediction mode started by the current block in the MPM list.
  • Step 2504 Encode the prediction mode index information according to the index information of the target prediction mode started by the current block in the MPM list, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic coding.
  • In addition, a flag bit is needed to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • the encoder can construct an MPM list, and the intra sub-block prediction mode, the multi-line prediction mode, and the regular intra prediction can share the MPM list.
  • During implementation, the encoding end can use RDO to determine the finally used prediction mode, that is, the target prediction mode. If the target prediction mode is the intra sub-block prediction mode or the multi-line prediction mode, the target prediction mode must be a prediction mode selected from the MPM list, and the prediction mode index information (intra_luma_mpm_idx) needs to be encoded to inform the decoding end which prediction mode is selected. If the target prediction mode is conventional intra prediction, a flag bit also needs to be encoded to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list.
  • If the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined next; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 25. As shown in FIG. 26, the method includes the following steps:
  • Step 2601 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block based on the intra sub-block prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
  • Step 2602 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate index information of the target prediction mode in the MPM list.
  • Step 2603 Predict the current block according to the target prediction mode.
  • Step 2604 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, decode the prediction mode index information, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the second context model, and the other bits are obtained by bypass-based binary arithmetic decoding.
  • The second context model and the first context model are the same context model.
  • Step 2605 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
  • the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
  • Step 2606 Predict the current block according to the target prediction mode.
  • During implementation, the decoding end may first receive the encoded stream. On the premise that the conventional intra prediction, the intra sub-block prediction mode, and the multi-line prediction mode share the same MPM list, if the current block starts the intra sub-block prediction mode or the multi-line prediction mode, the target prediction mode it adopts must come from the MPM list, and its index value in the list is parsed to obtain the final target prediction mode. If the current block starts conventional intra prediction, a flag bit needs to be parsed to determine whether the target prediction mode comes from the MPM list, and if it does, its index value in the MPM list is parsed.
  • the decoding end can uniquely determine the prediction mode of the current block, thereby obtaining the prediction value of the current block, which is used in the subsequent reconstruction process.
  • That is, when encoding or decoding the first bit of the prediction mode index information, a context model is not selected from two different context models according to whether the current block starts the intra sub-block prediction mode; instead, the same context model is used under the two different conditions that the current block starts the intra sub-block prediction mode and the current block does not start the intra sub-block prediction mode, and context-based adaptive binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information. In this way, the number of context models required can be reduced to one, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
  • the context model used by the prediction mode index information is shown in Table 24 below:
  • That is, the first bit of intra_luma_mpm_idx can be subjected to context-based adaptive binary arithmetic coding or decoding based on the same context model regardless of whether the current block starts the intra sub-block prediction mode.
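  • The bit-level treatment described above can be sketched as follows. This is a hypothetical illustration assuming a truncated-unary style binarization and an MPM list size of 6, neither of which is specified here: only the first bit of the prediction mode index information uses a (single, shared) context model, and every remaining bit is bypass coded.

```python
def mpm_idx_bins(idx, mpm_size=6):
    """Return [(bit, coding_method), ...] for a prediction mode index."""
    bins = []
    for i in range(min(idx, mpm_size - 1)):
        bins.append((1, 'context' if i == 0 else 'bypass'))
    if idx < mpm_size - 1:          # terminating 0 unless at the maximum
        bins.append((0, 'context' if idx == 0 else 'bypass'))
    return bins
```

  • For example, index 2 produces three bits, of which only the first is context coded; all subsequent bits use equal-probability bypass coding and need no context model.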
  • FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 27, the method includes the following steps:
  • Step 2701 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction started by the current block.
  • Step 2702 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction started by the current block, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
  • Step 2703 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, if the target prediction mode started by the current block is in the MPM list, determine the index information of the target prediction mode started by the current block in the MPM list.
  • Step 2704 Encode the prediction mode index information according to the index information of the target prediction mode started by the current block in the MPM list, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic coding.
  • In addition, a flag bit is also required to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if the target prediction mode does not come from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 27. As shown in FIG. 28, the method includes the following steps:
  • Step 2801 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block based on the intra sub-block prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
  • Step 2802 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
  • Step 2803 Predict the current block according to the target prediction mode.
  • Step 2804 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, decode the prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding.
  • Step 2805 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
  • the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
  • Step 2806 Predict the current block according to the target prediction mode.
  • When the reference line of the target prediction mode of the current block is the line adjacent to the current block, that is, when intra_luma_ref_idx is equal to 0, bypass-based binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information. In this way, the first bit of the prediction mode index information does not need to use a context model, the number of context models it requires is reduced to 0, and both the complexity of encoding and decoding and the memory overhead are reduced.
  • the context model used by the prediction mode index information is shown in Table 25 below:
  • That is, when intra_luma_ref_idx is equal to 0, the first bit of intra_luma_mpm_idx is encoded and decoded using Bypass, regardless of whether the intra sub-block prediction mode is started or not started for the current block.
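The all-bypass scheme described above can be sketched as follows. This is an illustrative sketch only, not part of any codec syntax: it assumes a truncated unary binarization of intra_luma_mpm_idx (at most `max_idx` bins), and the function names and the `("bypass", bin)` representation are hypothetical.

```python
def binarize_mpm_idx(idx, max_idx=4):
    """Truncated unary binarization of intra_luma_mpm_idx.

    idx = 0 -> [0]; idx = 1 -> [1, 0]; ...; idx = max_idx -> [1] * max_idx.
    """
    bins = [1] * idx
    if idx < max_idx:
        bins.append(0)  # terminating zero, omitted only for the last index
    return bins


def code_mpm_idx_all_bypass(idx):
    """Scheme of FIG. 27/28: every bin is bypass-coded, so no context
    model is consulted for any bin of the prediction mode index."""
    return [("bypass", b) for b in binarize_mpm_idx(idx)]
```

Because every bin is bypass-coded, the coding path is identical for the intra sub-block case and the conventional intra case, which is exactly why the context-model count drops to 0.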
  • FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 29, the method includes the following steps:
  • Step 2901 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when the current block is predicted according to the intra sub-block prediction, the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction started by the current block is determined.
  • Step 2902 Encode the prediction mode index information according to the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction started by the current block, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on a context model, and the other bits are obtained based on bypassed binary arithmetic coding.
  • Step 2903 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, if the target prediction mode of the conventional intra prediction started by the current block is in the MPM list, the index information of that target prediction mode in the MPM list is determined.
  • Step 2904 Encode the prediction mode index information according to the index information in the MPM list of the target prediction mode activated by the current block, where all bits of the prediction mode index information are obtained based on bypassed binary arithmetic coding.
  • a flag bit is needed to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined. If the target prediction mode is not from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
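The flag-bit logic described above can be sketched as follows; a minimal encoder-side illustration, assuming hypothetical mode names and a plain dictionary of syntax elements. The remainder coding used when the mode is outside the MPM list is deliberately omitted:

```python
def encode_luma_mode(target_mode, mpm_list):
    """Sketch of the encoder-side decision: a flag indicates whether the
    target mode is in the MPM list; the MPM index is signalled only when
    the flag is 1."""
    if target_mode in mpm_list:
        return {"intra_luma_mpm_flag": 1,
                "intra_luma_mpm_idx": mpm_list.index(target_mode)}
    # Mode outside the list: only the flag is signalled here; the mode
    # itself would be coded as a remainder among non-MPM modes (omitted).
    return {"intra_luma_mpm_flag": 0}
```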
  • FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 29. As shown in FIG. 30, the method includes the following steps:
  • Step 3001 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, the prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained based on bypassed binary arithmetic decoding.
  • Step 3002 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction initiated by the current block from the MPM list, and the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
  • Step 3003 Predict the current block according to the target prediction mode.
  • Step 3004 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the prediction mode index information is decoded, where all bits of the prediction mode index information are obtained based on bypassed binary arithmetic decoding.
  • Step 3005 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
  • the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
  • Step 3006 Predict the current block according to the target prediction mode.
  • When the reference line of the target prediction mode of the current block is the line adjacent to the current block, that is, when intra_luma_ref_idx is equal to 0, the prediction mode index information is encoded or decoded as follows: if the current block starts the intra sub-block prediction mode, context-based adaptive binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information based on a context model; if the intra sub-block prediction mode is not started for the current block, bypass-based binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information. In this way, only one context model is needed for the encoding and decoding of the prediction mode index information, the number of context models it requires is reduced to 1, and both the complexity of encoding and decoding and the memory overhead are reduced.
  • the context model used by the prediction mode index information is shown in Table 26 below:
  • That is, when intra_luma_ref_idx is equal to 0 and the current block starts ISP mode, a context model is used to encode and decode the first bit of intra_luma_mpm_idx; when the current block does not start ISP mode, Bypass is used to encode and decode the first bit of intra_luma_mpm_idx.
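The selection rule of Table 26 can be sketched as follows. An illustrative helper only: the string return values ("context_0", "bypass") are hypothetical labels for the two coding modes of the first bit, and the `None` return for non-zero reference-line indices simply marks cases this rule does not cover:

```python
def first_bin_coding_table26(intra_luma_ref_idx, isp_started):
    """Table 26 scheme (FIG. 29/30): for reference line 0, the first bin
    of intra_luma_mpm_idx uses one context model when ISP is started,
    and Bypass otherwise."""
    if intra_luma_ref_idx != 0:
        return None  # the rule above applies only when intra_luma_ref_idx == 0
    return "context_0" if isp_started else "bypass"
```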
  • FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 31, the method includes the following steps:
  • Step 3101 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when the current block is predicted according to the intra sub-block prediction, the index information, in the MPM list, of the target prediction mode of the intra sub-block prediction started by the current block is determined.
  • Step 3102 Encode the prediction mode index information according to the index information of the target prediction mode in the MPM list of the intra sub-block prediction initiated by the current block, where all bits of the prediction mode index information are obtained based on bypassed binary arithmetic coding.
  • Step 3103 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, if the target prediction mode started by the current block is in the MPM list, the index information of the target prediction mode started by the current block in the MPM list is determined.
  • Step 3104 Encode the prediction mode index information according to the index information of the target prediction mode started by the current block in the MPM list, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on a context model, and the other bits are obtained based on bypassed binary arithmetic coding.
  • a flag bit is also required to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined. If the target prediction mode is not from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 31. As shown in FIG. 32, the method includes the following steps:
  • Step 3201 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, the prediction mode index information is decoded, where all bits of the prediction mode index information are obtained based on bypassed binary arithmetic decoding.
  • Step 3202 According to the prediction mode index information, determine the target prediction mode of the intra sub-block prediction started by the current block from the MPM list, where the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
  • Step 3203 Predict the current block according to the target prediction mode.
  • Step 3204 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained based on bypassed binary arithmetic decoding.
  • Step 3205 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
  • the prediction mode index information is used to indicate index information of the target prediction mode started by the current block in the MPM list.
  • Step 3206 Predict the current block according to the target prediction mode.
  • When the reference line of the target prediction mode of the current block is the line adjacent to the current block, that is, when intra_luma_ref_idx is equal to 0, the prediction mode index information is encoded or decoded as follows: if the current block starts the intra sub-block prediction mode, bypass-based binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information; if the intra sub-block prediction mode is not started for the current block, context-based adaptive binary arithmetic coding or decoding is performed on the first bit of the prediction mode index information based on a context model. In this way, only one context model is needed for the encoding and decoding of the prediction mode index information, the number of context models it requires is reduced to 1, and both the complexity of encoding and decoding and the memory overhead are reduced.
  • the context model used by the prediction mode index information is shown in Table 27 below:
  • That is, when intra_luma_ref_idx is equal to 0 and the current block does not start ISP mode, a context model is used to encode and decode the first bit of intra_luma_mpm_idx; when the current block starts ISP mode, Bypass is used to encode and decode the first bit of intra_luma_mpm_idx.
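The Table 27 rule can be sketched in the same illustrative style; note the assignment is inverted relative to Table 26 (Bypass for ISP, one context model for conventional intra). The labels and `None` case are hypothetical, as before:

```python
def first_bin_coding_table27(intra_luma_ref_idx, isp_started):
    """Table 27 scheme (FIG. 31/32): for reference line 0, the first bin
    of intra_luma_mpm_idx is bypass-coded when ISP is started, and uses
    one context model when ISP is not started."""
    if intra_luma_ref_idx != 0:
        return None  # the rule above applies only when intra_luma_ref_idx == 0
    return "bypass" if isp_started else "context_0"
```

Either assignment needs just one context model in total, which is the memory saving both schemes claim.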
  • the syntax element transmitted between the encoding end and the decoding end may also include planar indication information.
  • the planar indication information is used to indicate whether the target prediction mode of the current block is the planar prediction mode, and the planar indication information occupies one bit.
  • the planar indication information is intra_luma_not_planar_flag.
  • the planar indication information intra_luma_not_planar_flag uses context-based adaptive binary arithmetic coding, and the context selection depends on whether the current block starts the intra sub-block prediction mode; that is, the encoding and decoding of the planar indication information require 2 different context models.
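The two-model selection described above can be sketched as follows; the context indices 0 and 1 are hypothetical labels for the two models, chosen only to show that the ISP and non-ISP cases consult different models:

```python
def planar_flag_ctx_two_models(isp_started):
    """Context index for intra_luma_not_planar_flag when the context
    selection depends on whether the current block starts ISP mode:
    two distinct context models must be maintained."""
    return 0 if isp_started else 1
```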
  • FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 33, the method includes the following steps:
  • Step 3301 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when the current block is predicted according to the intra sub-block prediction, the planar indication information is encoded according to whether the target prediction mode of the intra sub-block prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic coding based on the first context model.
  • Step 3302 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the planar indication information is encoded according to whether the target prediction mode of the conventional intra prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic coding based on the second context model, and the first context model is the same as the second context model.
  • FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 33. As shown in FIG. 34, the method includes the following steps:
  • Step 3401 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, the planar indication information is decoded, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the first context model.
  • Step 3402 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction initiated by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
  • Step 3403 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction started by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction started by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
  • Step 3404 If the current block starts the conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, the planar indication information is decoded, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on the second context model, and the first context model is the same as the second context model.
  • Step 3405 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
  • Step 3406 When it is determined based on the planar indication information that the target prediction mode started by the current block is not the planar prediction mode, the target prediction mode started by the current block is determined from the MPM list according to the prediction mode index information, and the current block is predicted according to the target prediction mode.
  • the planar indication information intra_luma_not_planar_flag still uses the context-based adaptive binary arithmetic coding and decoding method, but the context selection does not depend on whether the current block starts the intra sub-block prediction mode; instead, a fixed context model is used for encoding and decoding both when the current block starts the intra sub-block prediction mode and when it starts conventional intra prediction.
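The fixed-model variant can be sketched in the same illustrative style; the context index 0 is a hypothetical label for the single shared model:

```python
def planar_flag_ctx_fixed(isp_started):
    """Scheme of FIG. 33/34: intra_luma_not_planar_flag still uses
    context-based adaptive coding, but the ISP and conventional intra
    cases share one fixed context model, so only one model is needed
    instead of two."""
    return 0  # same context index regardless of isp_started
```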
  • FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 35, the method includes the following steps:
  • Step 3501 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when the current block is predicted according to the intra sub-block prediction, the planar indication information is encoded according to whether the target prediction mode of the intra sub-block prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained based on bypass binary arithmetic coding.
  • Step 3502 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the planar indication information is encoded according to whether the target prediction mode of the conventional intra prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained based on bypass binary arithmetic coding.
  • FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 35. As shown in FIG. 36, the method includes the following steps:
  • Step 3601 If the current block starts intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luminance block, then when predicting the current block according to the intra sub-block prediction, the planar indication information is decoded, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained based on bypass binary arithmetic decoding.
  • Step 3602 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction initiated by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
  • Step 3603 When it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction started by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction started by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
  • Step 3604 If the current block starts conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, the planar indication information is decoded, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained based on bypass binary arithmetic decoding.
  • Step 3605 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
  • Step 3606 When it is determined based on the planar indication information that the target prediction mode started by the current block is not the planar prediction mode, the target prediction mode started by the current block is determined from the MPM list according to the prediction mode index information, and the current block is predicted according to the target prediction mode.
  • the coding and decoding modes of the planar indication information are shown in Table 30:
  • the planar indication information intra_luma_not_planar_flag no longer adopts the context-based adaptive binary arithmetic coding and decoding method; instead, the bypass-based binary arithmetic coding or decoding method is used both when the current block starts the intra sub-block prediction mode and when it starts conventional intra prediction.
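The decoder-side decision of FIG. 36 can be sketched as follows; an illustrative sketch only, in which the helper callables (a bypass bit reader and an MPM index reader) and the mode names are hypothetical stand-ins for the actual arithmetic decoder:

```python
def decode_planar_decision(read_bypass_bit, mpm_list, read_mpm_idx):
    """intra_luma_not_planar_flag is bypass-decoded (no context model);
    when it is 0 the target mode is Planar, otherwise the target mode is
    taken from the MPM list via the prediction mode index information."""
    not_planar = read_bypass_bit()
    if not_planar == 0:
        return "PLANAR"
    return mpm_list[read_mpm_idx()]
```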
  • an encoding method is also provided, the encoding method is applied to the encoding end, and the encoding method includes the following steps:
  • Step 1 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, if the target prediction mode started by the current block comes from the MPM list, the index information of the target prediction mode started by the current block in the MPM list is determined.
  • Step 2 Encode the prediction mode index information according to the index information of the target prediction mode started by the current block in the MPM list, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on a context model, and the other bits are obtained based on bypassed binary arithmetic coding.
  • a flag bit is needed to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined. If the target prediction mode is not from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • a decoding method is also provided.
  • the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
  • the decoding method includes the following steps:
  • Step 1 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the prediction mode index information is decoded, where the first bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a context model, and the other bits are obtained based on bypassed binary arithmetic decoding; the context model used here and the context model used in the foregoing encoding method are the same context model.
  • Step 2 Determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information.
  • the prediction mode index information is used to indicate the index information of the target prediction mode in the MPM list.
  • Step 3 Predict the current block according to the target prediction mode.
  • an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
  • Step 1 If the current block starts conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, if the target prediction mode started by the current block is in the MPM list, the index information of the target prediction mode started by the current block in the MPM list is determined.
  • Step 2 According to the index information of the target prediction mode activated by the current block in the MPM list, the prediction mode index information is coded, where all the bits of the prediction mode index information are obtained based on bypassed binary arithmetic coding.
  • a flag bit is also required to indicate whether the target prediction mode of the conventional intra prediction started by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined. If the target prediction mode is not from the MPM list, there is no need to determine the index information of the target prediction mode in the MPM list.
  • a decoding method is also provided.
  • the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
  • the decoding method includes the following steps:
  • Step 1 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the prediction mode index information is decoded, where all bits of the prediction mode index information are obtained based on bypassed binary arithmetic decoding.
  • Step 2 According to the prediction mode index information, determine the target prediction mode started by the current block from the MPM list.
  • the prediction mode index information is used to indicate the index information of the target prediction mode started by the current block in the MPM list.
  • Step 3 Predict the current block according to the target prediction mode.
  • an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
  • Step 1 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the planar indication information is encoded according to whether the target prediction mode of the conventional intra prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic coding based on a context model.
  • a decoding method is also provided.
  • the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
  • the decoding method includes the following steps:
  • Step 1 If the current block starts conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when predicting the current block according to the conventional intra prediction, the planar indication information is decoded, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a context model.
  • Step 2 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
  • Step 3 When it is determined based on the planar indication information that the target prediction mode started by the current block is not the planar prediction mode, the target prediction mode started by the current block is determined from the MPM list according to the prediction mode index information, and the current block is predicted according to the target prediction mode.
  • an encoding method is also provided, which is applied to the encoding end, and the encoding method includes the following steps:
  • Step 1 If the current block starts the conventional intra prediction, when the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luminance block, then when the current block is predicted according to the conventional intra prediction, the planar indication information is encoded according to whether the target prediction mode of the conventional intra prediction started by the current block is the planar prediction mode, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained based on bypass binary arithmetic coding.
  • a decoding method is also provided.
  • the decoding method is applied to the decoding end and is a decoding method corresponding to the foregoing encoding method.
  • the decoding method includes the following steps:
• Step 1 If the current block starts conventional intra prediction, the target prediction mode of the conventional intra prediction comes from the MPM list, and the current block is a luma block, decode the planar indication information, where the planar indication information is used to indicate whether the target prediction mode started by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding.
  • Step 2 When it is determined based on the planar indication information that the target prediction mode started by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
• Step 3 When it is determined based on the planar indication information that the target prediction mode started by the current block is not the planar prediction mode, determine the target prediction mode started by the current block from the MPM list according to the prediction mode index information, and predict the current block according to the target prediction mode.
  • the syntax element transmitted between the encoding end and the decoding end also includes chroma prediction mode index information, and the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
  • the chroma prediction mode index information and its corresponding prediction mode are shown in Table 31 below.
• the chroma prediction mode index information occupies a maximum of 4 bits. If the current block supports the cross-component prediction mode but the cross-component prediction mode is not started for the current block, the chroma prediction mode index information occupies a maximum of 5 bits.
  • the coding and decoding mode of the chroma prediction mode index information is shown in Table 32:
• the first bit of the chroma prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on the first context model, the second bit is obtained by performing context-based adaptive binary arithmetic decoding based on the second context model, and the third and fourth bits are obtained by performing context-based adaptive binary arithmetic decoding based on the third context model, where the three context models are all different. That is, the chroma prediction mode index information requires three context models, and the memory overhead is relatively large.
• FIG. 37 is a flowchart of an encoding method provided by an embodiment of the application. The method is applied to the encoding end. As shown in FIG. 37, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chrominance block. The method includes:
  • Step 3701 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• the encoding end can decide the final target prediction mode through the rate-distortion cost, and then inform the decoding end which prediction mode has been selected by encoding the index information.
  • Step 3702 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
  • the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on the first context model, and the second bit is obtained by context-based adaptive binary arithmetic coding based on the second context model, where the first context model is different from the second context model; the third and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic coding.
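The bit layout just described can be summarized with a small sketch (the bin/engine mapping restates the text; the return values are illustrative names): bins 0 and 1 are context-coded with two distinct models, while bins 2 and 3 are bypass-coded, so only two context models are needed in total.

```python
def chroma_index_bin_engine(bin_idx):
    """Return which arithmetic-coding engine handles each bin of the
    chroma prediction mode index information in this embodiment."""
    if bin_idx == 0:
        return ("context", "first")
    if bin_idx == 1:
        return ("context", "second")
    return ("bypass", None)  # bins 2 and 3 carry no context model

# Distinct context models implied by this layout:
models = {m for kind, m in map(chroma_index_bin_engine, range(4)) if kind == "context"}
```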
• the encoding end stores a chroma prediction mode candidate list. The encoding end can decide the final target prediction mode through RDO, and then encode the index value to inform the decoding end which prediction mode is selected, that is, encode the chroma prediction mode index information.
• the chroma prediction modes include the same prediction modes as the luminance component and the cross-component prediction modes. The cross-component prediction modes include the mode in which the linear model coefficients are derived from the templates on both sides, the mode in which the linear model coefficients are derived from the upper template, and the mode in which the linear model coefficients are derived from the left template; the candidate list also includes the planar prediction mode, the DC prediction mode, the vertical prediction mode, and the horizontal prediction mode.
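The candidate list described above can be sketched as follows. The mode names here are assumptions for illustration: three cross-component modes, distinguished by which templates derive the linear-model coefficients, plus the four conventional intra modes.

```python
# Hypothetical chroma candidate prediction mode list (names assumed):
CHROMA_CANDIDATES = [
    "cclm_both_templates",  # coefficients from templates on both sides
    "cclm_upper_template",  # coefficients from the upper template only
    "cclm_left_template",   # coefficients from the left template only
    "planar", "dc", "vertical", "horizontal",
]

def mode_from_index(index):
    """The decoded index information selects the target prediction mode
    from the candidate list."""
    return CHROMA_CANDIDATES[index]
```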
• FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 37. As shown in FIG. 38, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chrominance block. The method includes:
  • Step 3801 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
• the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the second bit is obtained by context-based adaptive binary arithmetic decoding based on the second context model, where the first context model is different from the second context model; the third and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding.
  • Step 3802 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
  • Step 3803 Predict the current block according to the target prediction mode.
  • the decoder can receive the coded stream, and then parse the chroma prediction mode related syntax from it.
  • the coding bit overhead required for each prediction mode is different.
  • the decoder uniquely determines the chroma prediction mode of the current block by analyzing the chroma prediction mode index information, thereby obtaining the prediction value of the current block, which is used in the subsequent reconstruction process.
• when the current block supports the cross-component prediction mode and the current block starts the cross-component prediction mode, the third and fourth bits of the chroma prediction mode index information can be obtained by bypass-based binary arithmetic decoding. In this way, the number of context models required for the chroma prediction mode index information can be reduced to 2, which reduces the complexity of coding and decoding and reduces the memory overhead.
  • the coding and decoding modes of the chroma prediction mode index information are shown in Table 33 and Table 34 below:
• FIG. 39 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 39, the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chrominance block. The method includes:
  • Step 3901 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• the encoding end can decide the final target prediction mode through the rate-distortion cost, and then inform the decoding end which prediction mode has been selected by encoding the index information.
• Step 3902 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
  • the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• the first bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic coding based on a context model, and the second, third, and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic coding.
• FIG. 40 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 39. As shown in FIG. 40, when the current block supports the cross-component prediction mode, the current block starts the cross-component prediction mode, and the current block is a chrominance block, the method includes:
  • Step 4001 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
• the first bit of the chroma prediction mode index information is obtained by performing context-based adaptive binary arithmetic decoding based on a context model, and the second, third, and fourth bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding.
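This variant of the binarization can be sketched the same way (the mapping restates the text; names are illustrative): only bin 0 is context-coded, and bins 1 through 3 are bypass-coded, so a single context model suffices.

```python
def chroma_index_bin_engine_one_ctx(bin_idx):
    """Return which arithmetic-coding engine handles each bin of the
    chroma prediction mode index information in this embodiment: one
    context model for bin 0, bypass coding for the rest."""
    return ("context", "only") if bin_idx == 0 else ("bypass", None)
```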
  • Step 4002 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
  • Step 4003 Predict the current block according to the target prediction mode.
• when the current block supports the cross-component prediction mode and the current block starts the cross-component prediction mode, the first bit of the chroma prediction mode index information uses one context model, and the second, third, and fourth bits all adopt bypass-based binary arithmetic coding and decoding. In this way, the number of context models required for the chroma prediction mode index information can be reduced to 1, reducing the complexity of coding and decoding and reducing the memory overhead.
  • the coding and decoding modes of the chroma prediction mode index information are shown in Table 35 and Table 36 below:
• FIG. 41 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 41, when the current block supports the cross-component prediction mode and the current block is a chrominance block, the method includes:
  • Step 4101 When predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• the encoding end can decide the final target prediction mode through the rate-distortion cost, and then inform the decoding end which prediction mode has been selected by encoding the index information.
• Step 4102 Encode the chroma prediction mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
  • the chroma prediction mode index information is used to indicate the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
• according to the value of the chroma prediction mode index information, the target prediction mode is the first cross-component prediction mode, the second cross-component prediction mode, the third cross-component prediction mode, the planar prediction mode, the vertical prediction mode, the horizontal prediction mode, or the DC prediction mode.
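Since Table 37 itself is not reproduced here, the following sketch uses a hypothetical prefix-free codebook with the same bit budget (at most 3 bits for cross-component modes, longer codes for conventional intra modes) to show how such index information decodes unambiguously. The codewords are NOT the actual Table 37 entries:

```python
# Hypothetical codeword assignment (illustration only):
CODEBOOK = {
    "10":   "first cross-component",
    "110":  "second cross-component",
    "111":  "third cross-component",
    "0100": "planar",
    "0101": "vertical",
    "0110": "horizontal",
    "0111": "dc",
}

def decode_chroma_mode(bits):
    """Read bits until a codeword matches; because the codebook is
    prefix-free, exactly one codeword can match."""
    prefix = ""
    for b in bits:
        prefix += b
        if prefix in CODEBOOK:
            return CODEBOOK[prefix], len(prefix)
    raise ValueError("no codeword matched")
```

Note that every cross-component codeword fits in 3 bits, matching the bit-overhead claim in the text.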
  • the chroma prediction mode index information and its corresponding prediction mode are shown in Table 37 below:
• when the chroma prediction mode index information indicates a cross-component prediction mode, the chroma prediction mode index information occupies at most 3 bits, reducing bit overhead and thereby reducing memory overhead. When the chroma prediction mode index information indicates conventional intra prediction, the chroma prediction mode index information occupies at most 6 bits.
  • Step 4103 Predict the current block according to the target prediction mode.
  • index information of the chroma prediction mode and its corresponding prediction mode are shown in Table 38 below:
• when the chroma prediction mode index information indicates a cross-component prediction mode, the chroma prediction mode index information occupies at most 3 bits, reducing bit overhead and thereby reducing memory overhead. When the chroma prediction mode index information indicates conventional intra prediction, the chroma prediction mode index information occupies at most 7 bits.
• FIG. 42 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 41. As shown in FIG. 42, when the current block supports the cross-component prediction mode and the current block is a chrominance block, the method includes:
  • Step 4201 When predicting the current block according to the cross-component prediction mode, decode the chroma prediction mode index information.
  • Step 4202 According to the chroma prediction mode index information, determine the target prediction mode of the current block from the candidate prediction mode list.
• according to the value of the chroma prediction mode index information, the target prediction mode is the first cross-component prediction mode, the second cross-component prediction mode, the third cross-component prediction mode, the planar prediction mode, the vertical prediction mode, the horizontal prediction mode, or the DC prediction mode.
  • Step 4203 Predict the current block according to the target prediction mode.
  • the bit overhead of the chroma prediction mode index information can be reduced, thereby reducing the memory overhead.
• Fig. 43 is a flow chart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in Fig. 43, the method includes:
• Step 4301 When the luminance and chrominance of the current block share a division tree, if the size of the luminance block corresponding to the current block is 64*64 and the size of the chrominance block corresponding to the current block is 32*32, the current block does not support the cross-component prediction mode.
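The restriction in Step 4301 amounts to a simple condition check; a minimal sketch (function and parameter names assumed):

```python
def cclm_supported(shared_tree, luma_size, chroma_size):
    """Under a shared luma/chroma division tree, a 64x64 luma block with
    its 32x32 chroma block disables the cross-component prediction mode,
    so the chroma block need not wait for the full 64x64 luma
    reconstruction."""
    if shared_tree and luma_size == (64, 64) and chroma_size == (32, 32):
        return False
    return True
```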
• the embodiment of the present application can reduce the dependence between luminance and chrominance in the CCLM mode and avoid the chrominance block having to wait for the reconstruction value of a 64*64 luminance block.
  • the syntax elements transmitted between the encoding end and the decoding end also include ALF indication information, and the ALF indication information is used to indicate whether ALF is enabled for the current block.
  • the ALF indication information is alf_ctb_flag.
• if the current block supports ALF and the current block is a luminance block, the ALF indication information is subjected to context-based adaptive binary arithmetic coding or decoding based on the target context model, where the target context model is a context model selected from the 3 different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• if the current block supports ALF and the current block is a CB chrominance block, context-based adaptive binary arithmetic coding or decoding is performed on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• if the current block supports ALF and the current block is a CR chrominance block, context-based adaptive binary arithmetic coding or decoding is performed on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the third context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. The above nine context models are all different.
  • FIG. 44 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 44, the method includes the following steps:
• Step 4401 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from the three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block starts ALF and the left block does not, or if the upper block does not start ALF and the left block does, the target context model is the second context model; if neither the upper block nor the left block of the current block starts ALF, the target context model is the third context model.
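The selection rule above depends only on how many of the two neighbours start ALF, which can be sketched directly (the model names follow the text; which 3-model set they come from depends on the component):

```python
def alf_target_context(above_starts_alf, left_starts_alf):
    """Select the target context model for the ALF flag: both neighbours
    on -> first model, exactly one on -> second model, none -> third."""
    n = int(above_starts_alf) + int(left_starts_alf)
    return {2: "first", 1: "second", 0: "third"}[n]
```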
• Step 4402 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. The 3 context models included in the second context model set are different from the 3 context models included in the first context model set.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
• the second context model set includes a fourth context model, a fifth context model, and a sixth context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the fourth context model; if the upper block starts ALF and the left block does not, or if the upper block does not start ALF and the left block does, the target context model is the fifth context model; if neither the upper block nor the left block of the current block starts ALF, the target context model is the sixth context model.
• the encoder can decide whether to enable ALF for the current block through RDO, that is, whether to use adaptive loop filtering, and encode the ALF indication information in the code stream to inform the decoder whether ALF is enabled and thus whether to perform adaptive loop filtering. Moreover, if ALF is enabled, the ALF-related syntax elements need to be encoded, and the encoding end also performs the filtering.
• FIG. 45 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 44. As shown in FIG. 45, the method includes the following steps:
• Step 4501 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from the three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• Step 4502 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. The 3 context models included in the second context model set are different from the 3 context models included in the first context model set.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
• the decoding end can decode the ALF indication information to determine whether the current block starts adaptive loop filtering. If the ALF indication information indicates that ALF is enabled for the current block, the decoding end can also continue to decode the ALF-related syntax elements to perform adaptive loop filtering on the current block and obtain filtered reconstructed pixels.
  • the encoding and decoding mode of ALF indication information is shown in Table 40:
• the CB chrominance block and the CR chrominance block can share 3 different context models. In this way, the number of context models needed for the ALF indication information is reduced to 6, thereby reducing the complexity of coding and decoding and reducing the memory overhead.
  • FIG. 46 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 46, the method includes the following steps:
• Step 4601 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from the three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block starts ALF and the left block does not, or if the upper block does not start ALF and the left block does, the target context model is the second context model; if neither the upper block nor the left block of the current block starts ALF, the target context model is the third context model.
• Step 4602 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. The 3 context models included in the second context model set are the same as the 3 context models included in the first context model set.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
• since the two sets are the same, the second context model set also includes the first context model, the second context model, and the third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if exactly one of the upper block and the left block starts ALF, the target context model is the second context model; if neither the upper block nor the left block starts ALF, the target context model is the third context model.
• FIG. 47 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 46. As shown in FIG. 47, the method includes the following steps:
• Step 4701 If the current block supports adaptive loop filter (ALF) and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from the three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
• Step 4702 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model, where the target context model is a context model selected from the 3 different context models included in the second context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF. In this embodiment, the 3 context models included in the second context model set are the same as the 3 context models included in the first context model set.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
• the luminance block, the CB chrominance block, and the CR chrominance block can all share 3 different context models. In this way, the number of context models needed for the ALF indication information is reduced to 3, reducing the complexity of encoding and decoding and reducing the memory overhead.
  • FIG. 48 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 48, the method includes the following steps:
  • Step 4801 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
  • Step 4802 If the current block supports ALF and the current block is a CB chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
• Step 4803 If the current block supports ALF and the current block is a CR chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the third context model.
  • the first context model, the second context model, and the third context model are different context models.
• FIG. 49 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is a decoding method corresponding to the encoding method shown in FIG. 48. As shown in FIG. 49, the method includes the following steps:
• Step 4901 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, where the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 4902 If the current block supports ALF and the current block is a CB chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
  • Step 4903 If the current block supports ALF and the current block is a CR chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the third context model.
  • the first context model, the second context model, and the third context model are different context models.
  • in this scheme, the luminance blocks share one context model, the CB chrominance blocks share one context model, and the CR chrominance blocks share one context model.
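The three-model assignment above (one context model per component class) can be sketched as a simple lookup; the model names and the `alf_ctx_for` helper are illustrative placeholders, not syntax from the text:

```python
# Hypothetical sketch: each component class owns one context model for
# coding/decoding the ALF indication information of its blocks.
ALF_CONTEXTS = {
    "luma": "first_context_model",
    "cb":   "second_context_model",
    "cr":   "third_context_model",
}

def alf_ctx_for(component: str) -> str:
    """Return the context model used to code the ALF flag of a block."""
    return ALF_CONTEXTS[component]
```

Because the three values are distinct, this scheme stores three separate context states in memory, one per component class.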
  • FIG. 50 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 50, the method includes the following steps:
  • Step 5001 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
  • Step 5002 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
  • FIG. 51 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 50. As shown in FIG. 51, the method includes the following steps :
  • Step 5101 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, where the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 5102 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
  • the first context model is different from the second context model.
  • the luminance block shares a context model
  • the CB chrominance block and the CR chrominance block share the same context model.
  • in this way, the number of context models used by the ALF indication information can be reduced to 2, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
  • FIG. 52 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 52, the method includes the following steps:
  • Step 5201 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the target context model; where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
  • the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
  • Step 5202 If the current block supports ALF and the current block is a CB chroma block, before performing filtering processing on the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
  • Step 5203 If the current block supports ALF and the current block is a CR chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
  • the context models included in the first context model set, the first context model, and the second context model are all different context models.
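The neighbour-based selection in Step 5201 can be sketched as a small helper. This is an illustrative sketch: the indices 0, 1, and 2 stand in for the first, second, and third context models of the first context model set.

```python
def luma_alf_ctx(above_alf_on: bool, left_alf_on: bool) -> int:
    """Pick a context model index for the luma ALF flag.

    0 = first model (both neighbours have ALF on),
    1 = second model (exactly one neighbour has ALF on),
    2 = third model (neither neighbour has ALF on).
    """
    enabled = int(above_alf_on) + int(left_alf_on)
    return 2 - enabled
```

The point of conditioning on the neighbours is that ALF decisions are spatially correlated, so each of the three contexts sees a more skewed bin distribution than a single shared context would.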
  • FIG. 53 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 52. As shown in FIG. 53, the method includes the following steps :
  • Step 5301 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the target context model; where the target context model is a context model selected from three different context models included in the first context model set according to whether the upper block of the current block starts ALF and whether the left block of the current block starts ALF.
  • the first context model set includes a first context model, a second context model, and a third context model. If the upper block of the current block starts ALF and the left block of the current block starts ALF, the target context model is the first context model; if the upper block of the current block starts ALF and the left block of the current block does not start ALF, or if the upper block of the current block does not start ALF and the left block of the current block starts ALF, the target context model is the second context model; if the upper block of the current block does not start ALF and the left block of the current block does not start ALF, the target context model is the third context model.
  • Step 5302 If the current block supports ALF and the current block is a CB chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model.
  • Step 5303 If the current block supports ALF and the current block is a CR chroma block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
  • the context models included in the first context model set, the first context model, and the second context model are all different context models.
  • the coding and decoding mode of the ALF indication information is shown in Table 44:
  • the coding and decoding mode of the ALF indication information is shown in Table 45:
  • the luma block needs to use three different context models, the CB chroma blocks share one context model, and the CR chroma blocks share a different context model. In this way, the number of context models used by the ALF indication information will be 5, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
  • FIG. 54 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 54, the method includes the following steps:
  • Step 5401 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the first context model.
  • Step 5402 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on the second context model.
  • the second context model and the first context model are the same context model.
  • FIG. 55 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 54. As shown in FIG. 55, the method includes the following steps :
  • Step 5501 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the first context model, where the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 5502 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on the second context model.
  • the second context model and the first context model are the same context model.
  • the luminance block, the CB chrominance block, and the CR chrominance block share one context model.
  • in this way, the number of context models used for the ALF indication information will be 1, thereby reducing the complexity of encoding and decoding and reducing the memory overhead.
  • FIG. 56 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 56, the method includes the following steps:
  • Step 5601 If the current block supports ALF and the current block is a luminance block, perform bypass-based binary arithmetic coding on the ALF indication information before performing filtering processing on the current block according to the ALF mode.
  • Step 5602 If the current block supports ALF and the current block is a chrominance block, perform bypass-based binary arithmetic coding on the ALF indication information before filtering the current block according to the ALF mode.
  • FIG. 57 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 56. As shown in FIG. 57, the method includes the following steps :
  • Step 5701 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode.
  • the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 5702 If the current block supports ALF and the current block is a chrominance block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode.
  • a bypass-based binary arithmetic coding and decoding method can be used to encode or decode the ALF indication information. In this way, the number of context models used by the ALF indication information can be reduced to zero, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
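The trade-off between context-coded and bypass bins described throughout these embodiments can be illustrated with a toy model. This is a sketch only (real CABAC engines use table-driven state machines, not floating-point estimates), but the memory/efficiency trade-off it shows is the same: a context-coded bin keeps an adaptive probability estimate in memory and pays fewer bits when the bin stream is skewed, while a bypass bin keeps no state and always pays exactly one bit.

```python
import math

class ContextModel:
    """Toy adaptive probability estimate for a context-coded bin."""
    def __init__(self, p_one=0.5):
        self.p_one = p_one  # current estimate of P(bin == 1)

    def bits(self, bin_val):
        # ideal code length of the bin under the current estimate
        p = self.p_one if bin_val else 1.0 - self.p_one
        return -math.log2(p)

    def update(self, bin_val, rate=0.1):
        # move the estimate toward the observed bin value
        self.p_one += rate * ((1.0 if bin_val else 0.0) - self.p_one)

def bypass_bits(_bin_val):
    """A bypass bin needs no stored context and always costs 1 bit."""
    return 1.0

cm = ContextModel()
for _ in range(50):     # a heavily skewed bin stream (all ones)
    cm.update(1)
print(cm.bits(1) < 1.0)  # context coding now beats 1 bit per bin
```

This is why each embodiment counts the "number of context models": every context-coded syntax element needs at least one such adaptive state in memory, while bypass-coded elements need none.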
  • FIG. 58 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 58, the method includes the following steps:
  • Step 5801 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on a context model, where the ALF indication information is used to indicate whether ALF is enabled for the current block.
  • Step 5802 If the current block supports ALF, the current block activates the adaptive loop filter ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform bypass-based binary arithmetic coding on the ALF indication information.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
  • FIG. 59 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 58. As shown in FIG. 59, the method includes the following steps :
  • Step 5901 If the current block supports ALF and the current block is a luminance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model, where the ALF indication information is used to indicate whether ALF is enabled for the current block.
  • Step 5902 If the current block supports ALF, the current block starts the adaptive loop filter ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform bypass-based binary arithmetic decoding on the ALF indication information.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
  • the luma block uses one context model
  • the CB chroma block and the CR chroma block both use the bypass-based binary arithmetic coding and decoding method for encoding or decoding, so that the number of context models used by the ALF indication information is reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
  • FIG. 60 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 60, the method includes the following steps:
  • Step 6001 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic coding on the ALF indication information before filtering the current block according to the ALF mode.
  • the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 6002 If the current block supports ALF and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic coding on the ALF indication information based on a context model.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
  • Fig. 61 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in Fig. 60. As shown in Fig. 61, the method includes the following steps :
  • Step 6101 If the current block supports ALF and the current block is a luma block, perform bypass-based binary arithmetic decoding on the ALF indication information before filtering the current block according to the ALF mode.
  • the ALF indication information is used to indicate whether the current block starts ALF.
  • Step 6102 If the current block supports ALF, and the current block is a chrominance block, before filtering the current block according to the ALF mode, perform context-based adaptive binary arithmetic decoding on the ALF indication information based on a context model.
  • chroma blocks include CB chroma blocks and CR chroma blocks.
  • the luminance block adopts a bypass-based binary arithmetic coding and decoding method for encoding or decoding.
  • the CB chrominance block and the CR chrominance block share a context model.
  • in this way, the number of context models used by the ALF indication information can be reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
  • the syntax elements transmitted between the encoding end and the decoding end also include MIP indication information.
  • the MIP indication information is used to indicate whether the current block starts the matrix-based intra prediction mode.
  • the MIP indication information is Intra_MIP_flag.
  • context-based adaptive binary arithmetic decoding can be performed on the MIP indication information based on the target context model.
  • the target context model is a context model selected from 4 different context models according to whether the upper block of the current block starts the matrix-based intra prediction mode, whether the left block of the current block starts the matrix-based intra prediction mode, and whether the current block meets the preset size condition.
  • the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
  • the preset size condition may also be other conditions, which are not limited in the embodiment of the present application.
  • the above four different context models include a first context model, a second context model, a third context model, and a fourth context model. If the upper block of the current block starts the matrix-based intra prediction mode, the left block of the current block starts the matrix-based intra prediction mode, and the current block does not meet the preset size condition, the target context model is the first context model; if the upper block of the current block starts the matrix-based intra prediction mode, the left block of the current block does not start the matrix-based intra prediction mode, and the current block does not meet the preset size condition, or if the upper block of the current block does not start the matrix-based intra prediction mode, the left block of the current block starts the matrix-based intra prediction mode, and the current block does not meet the preset size condition, the target context model is the second context model; if the upper block of the current block does not start the matrix-based intra prediction mode, the left block of the current block does not start the matrix-based intra prediction mode, and the current block does not meet the preset size condition, the target context model is the third context model; if the current block meets the preset size condition, the target context model is the fourth context model.
  • the MIP indication information needs to use 4 different context models, and the memory overhead is relatively large.
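The four-model selection just described can be sketched as follows. Note two assumptions: indices 0..3 stand in for the first..fourth context models, and the branch for blocks meeting the preset size condition (select the fourth model regardless of the neighbours) is reconstructed from context, since the source paragraph breaks off mid-sentence.

```python
def mip_ctx_4(above_mip: bool, left_mip: bool, meets_size_cond: bool) -> int:
    """Pick one of 4 context models for the MIP indication information.

    3 = fourth model when the preset size condition is met (assumed);
    otherwise 0/1/2 = first/second/third model by how many of the
    above/left neighbours use the matrix-based intra prediction mode.
    """
    if meets_size_cond:
        return 3
    return 2 - (int(above_mip) + int(left_mip))
```

Keeping four adaptive states per syntax element is what the text calls "relatively large" memory overhead, and motivates the reduced schemes that follow.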
  • FIG. 62 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end. As shown in FIG. 62, the method includes:
  • Step 6201 If the width and height dimensions of the current block are 32*32, the current block does not support the matrix-based intra prediction mode.
  • the current block is a luminance block or a chrominance block.
  • the current block is a luminance block and the width and height dimensions of the current block are 32*32, the current block does not support the matrix-based intra prediction mode.
  • when the current block is a large-size block, the current block does not support the matrix-based intra prediction mode; that is, the current block cannot enable the matrix-based intra prediction mode. In this way, the computational complexity can be reduced.
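The size restriction of Step 6201 can be sketched as a simple predicate (`supports_mip` is an illustrative name, not a syntax element from the text):

```python
def supports_mip(width: int, height: int) -> bool:
    """A 32*32 block (a large-size block here) does not support the
    matrix-based intra prediction mode; all other sizes may support it,
    subject to the other conditions in the text."""
    return not (width == 32 and height == 32)
```

Under this rule the encoder never evaluates MIP for 32*32 blocks, and the decoder never needs to parse MIP indication information for them.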
  • FIG. 63 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 63, if the current block supports a matrix-based intra prediction mode, the method includes:
  • Step 6301 According to whether the matrix-based intra prediction mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the target context model; where the target context model is a context model selected from 3 different context models according to whether the upper block of the current block activates the matrix-based intra prediction mode and whether the left block of the current block activates the matrix-based intra prediction mode.
  • the above three different context models include a first context model, a second context model, and a third context model. If the upper block of the current block starts the matrix-based intra prediction mode and the left block of the current block starts the matrix-based intra prediction mode, the target context model is the first context model; if the upper block of the current block starts the matrix-based intra prediction mode and the left block of the current block does not start the matrix-based intra prediction mode, or if the upper block of the current block does not start the matrix-based intra prediction mode and the left block of the current block starts the matrix-based intra prediction mode, the target context model is the second context model; if the upper block of the current block does not start the matrix-based intra prediction mode and the left block of the current block does not start the matrix-based intra prediction mode, the target context model is the third context model.
  • when the encoder determines that the current block meets the conditions of matrix-based intra prediction, it can use RDO to decide whether the current block starts the MIP mode, that is, whether to use the matrix-based intra prediction method, and encode MIP indication information in the encoded stream to tell the decoder whether to start the MIP mode.
  • the above-mentioned MIP indication information will be encoded according to the specific situation, and if the MIP mode is activated in the current block, other syntax elements related to MIP need to be encoded.
  • Fig. 64 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in Fig. 63. As shown in Fig. 64, if the current The block supports matrix-based intra prediction mode, and the method includes:
  • Step 6401 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the target context model; where the target context model is a context model selected from 3 different context models according to whether the upper block of the current block activates the matrix-based intra prediction mode and whether the left block of the current block activates the matrix-based intra prediction mode.
  • the above three different context models include a first context model, a second context model, and a third context model. If the upper block of the current block starts the matrix-based intra prediction mode and the left block of the current block starts the matrix-based intra prediction mode, the target context model is the first context model; if the upper block of the current block starts the matrix-based intra prediction mode and the left block of the current block does not start the matrix-based intra prediction mode, or if the upper block of the current block does not start the matrix-based intra prediction mode and the left block of the current block starts the matrix-based intra prediction mode, the target context model is the second context model; if the upper block of the current block does not start the matrix-based intra prediction mode and the left block of the current block does not start the matrix-based intra prediction mode, the target context model is the third context model.
  • Step 6402 If it is determined according to the MIP indication information that the current block starts the matrix-based intra prediction mode, then the matrix-based intra prediction mode is used to predict the current block.
  • the decoding end receives the encoded stream, and if it is determined that the current block meets the parsing condition, the MIP indication information can be parsed to determine whether the current block starts the MIP mode.
  • the parsing conditions include: the current block is a luminance block, and the size of the current block meets certain conditions. Of course, the parsing conditions are not limited to the above conditions, and may also include other conditions.
  • the decoder can determine whether the prediction mode of the current block is a matrix-based intra prediction mode. If it is, the decoder can continue to parse other syntax elements related to the mode to obtain its prediction mode information, and then obtain the predicted value.
  • the size condition of the current block may not be considered, and the target context model is a context model selected from 3 different context models only according to whether the upper block of the current block starts the matrix-based intra prediction mode and whether the left block of the current block starts the matrix-based intra prediction mode. In this way, the number of context models required for the MIP indication information can be reduced to 3, thereby reducing the complexity of coding and decoding and reducing the memory overhead.
  • FIG. 65 is a flowchart of an encoding and decoding method according to an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 65, if the current block supports a matrix-based intra prediction mode, the method includes:
  • Step 6501 According to whether the matrix-based intra prediction mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the target context model; where the target context model is one of 2 different context models, selected according to whether the current block meets the preset size condition.
  • the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
  • the above two different context models include a first context model and a second context model. If the size of the current block meets the preset size condition, the target context model is the first context model, and if the size of the current block does not meet the preset size condition, the target context model is the second context model.
  • FIG. 66 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 65. As shown in FIG. 66, if the current The block supports matrix-based intra prediction mode, and the method includes:
  • Step 6601 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the target context model; where the target context model is one of 2 different context models, selected according to whether the current block meets the preset size condition.
  • the preset size condition may be that the width of the current block is greater than twice the height, or the height of the current block is greater than twice the width.
  • the above two different context models include a first context model and a second context model. If the size of the current block meets the preset size condition, the target context model is the first context model, and if the size of the current block does not meet the preset size condition, the target context model is the second context model.
  • Step 6602 If it is determined that the current block starts the matrix-based intra prediction mode according to the MIP indication information, then the matrix-based intra prediction mode is used to predict the current block.
  • the context model is selected only according to the size condition. In this way, the number of context models required for the MIP indication information can be reduced to 2, thereby reducing the complexity of coding and decoding and reducing the memory overhead.
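The two-model, size-only selection of this embodiment can be sketched as follows, using the example size condition given in the text (width greater than twice the height, or height greater than twice the width); indices 0 and 1 stand in for the first and second context models:

```python
def mip_ctx_2(width: int, height: int) -> int:
    """Pick one of 2 context models for the MIP indication information.

    0 = first model when the preset size condition is met,
    1 = second model otherwise. Neighbour information is not used.
    """
    meets_size_cond = width > 2 * height or height > 2 * width
    return 0 if meets_size_cond else 1
```

Dropping the neighbour dependence means the decoder no longer needs to track MIP flags of adjacent blocks for context selection, which simplifies the parsing path as well as halving the context memory relative to the four-model scheme.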
  • FIG. 67 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 67, if the current block supports the matrix-based intra prediction mode, the method includes:
  • Step 6701 According to whether the matrix-based intra prediction mode is activated for the current block, perform context-based adaptive binary arithmetic coding on the MIP indication information based on the same context model.
  • FIG. 68 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 67. As shown in FIG. 68, if the current block Support matrix-based intra prediction mode, the method includes:
  • Step 6801 Before predicting the current block according to the matrix-based intra prediction mode, perform context-based adaptive binary arithmetic decoding on the MIP indication information based on the same context model.
  • Step 6802 If it is determined that the current block starts the matrix-based intra prediction mode according to the MIP indication information, then the matrix-based intra prediction mode is used to predict the current block.
  • in the MIP mode, for the selection of the context model of the MIP indication information, whether the upper block of the current block starts the matrix-based intra prediction mode, whether the left block of the current block starts the matrix-based intra prediction mode, and the size conditions are not considered; the MIP indication information is subjected to context-based adaptive binary arithmetic coding or decoding based on the same context model. In this way, the number of context models required by the MIP indication information can be reduced to 1, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
  • FIG. 69 is a flowchart of an encoding and decoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 69, if the current block supports a matrix-based intra prediction mode, the method includes:
  • Step 6901 According to whether the matrix-based intra prediction mode is activated for the current block, perform bypass-based binary arithmetic coding on the MIP indication information.
  • FIG. 70 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end. The method is a decoding method corresponding to the encoding method shown in FIG. 69. As shown in FIG. 70, if the current block Support matrix-based intra prediction mode, the method includes:
  • Step 7001 Before predicting the current block according to the matrix-based intra prediction mode, perform bypass-based binary arithmetic decoding on the MIP indication information.
  • Step 7002 If it is determined according to the MIP indication information that the current block starts the matrix-based intra prediction mode, then the matrix-based intra prediction mode is used to predict the current block.
  • In the MIP mode, whether the upper block of the current block enables the matrix-based intra prediction mode, whether the left block of the current block enables the matrix-based intra prediction mode, and the size conditions are not considered. Bypass-based binary arithmetic coding or decoding is performed on the MIP indication information; that is, context-based adaptive binary arithmetic coding or decoding is not used. In this way, the number of context models required by the MIP indication information is reduced to 0, thereby reducing the complexity of encoding and decoding and reducing memory overhead.
  • the BDPCM technology lacks an SPS-level syntax to turn on or off the BDPCM mode, and also lacks an SPS-level syntax to control the switch of the size of the largest encoding block that can enable the BDPCM mode, which is less flexible.
  • FIG. 71 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 71, if the current block supports the BDPCM mode, the method includes the following steps:
  • Step 7101 Before performing BDPCM encoding on the current block, encode the first BDPCM indication information, where the first BDPCM indication information is used to indicate whether the current processing unit supports the BDPCM mode.
  • the first BDPCM indication information may exist in a sequence parameter set, an image parameter level, a slice level, or a tile level.
  • the first BDPCM indication information exists in the sequence parameter set, that is, the first BDPCM indication information is an SPS-level syntax.
  • the encoding end may also encode range indication information, where the range indication information is used to indicate the range of the processing unit supporting the BDPCM mode.
  • the range indication information can exist in the sequence parameter set, image parameter level, slice level or tile level.
  • FIG. 72 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is the decoding method corresponding to the encoding method shown in FIG. 71. As shown in FIG. 72, if the current block supports the BDPCM mode, the method includes the following steps:
  • Step 7201 Before performing BDPCM decoding on the current block, decode the first BDPCM indication information, where the first BDPCM indication information is used to indicate whether the current processing unit supports the BDPCM mode.
  • Step 7202: Decode the current processing unit according to the first BDPCM indication information.
  • That is, if the first BDPCM indication information indicates that the current processing unit supports the BDPCM mode, the current processing unit may be processed based on the BDPCM mode.
  • the first BDPCM indication information may exist in a sequence parameter set, image parameter level, slice level, or tile level.
  • the first BDPCM indication information exists in the sequence parameter set, that is, the first BDPCM indication information is an SPS-level syntax.
  • the decoding end may also decode the range indication information, where the range indication information is used to indicate the range of the processing unit supporting the BDPCM mode.
  • the range indication information can exist in the sequence parameter set, image parameter level, slice level or tile level.
  • a syntax is added to enable or disable the BDPCM mode, which improves the flexibility of the coding and decoding process.
  • a syntax is added to indicate the range of processing units that support the BDPCM mode.
  • Fig. 73 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in Fig. 73, if the current block supports the BDPCM mode, the method includes the following steps:
  • Step 7301 Before performing BDPCM processing on the current block, encode the second BDPCM indication information, where the second BDPCM indication information is used to indicate the size range of the processing unit that supports the BDPCM mode.
  • The current processing unit can be at the sequence level, the image parameter level, or the block level.
  • the current processing unit may be the current image block.
  • For example, the size range may be the range of sizes smaller than 32*32.
  • the second BDPCM indication information is used to indicate the maximum size of a processing unit that can support the BDPCM mode, that is, the maximum size of a processing unit that can use the BDPCM mode.
  • the maximum size is 32*32.
  • the second BDPCM indication information may exist in a sequence parameter set (SPS), an image parameter level, a slice level, or a tile level.
  • the second BDPCM indication information exists in the sequence parameter set, that is, the second BDPCM indication information is a syntax added at the SPS level.
  • FIG. 74 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and is the decoding method corresponding to the encoding method shown in FIG. 73. As shown in FIG. 74, if the current block supports the BDPCM mode, the method includes the following steps:
  • Step 7401 Before performing BDPCM processing on the current block, decode the second BDPCM indication information, where the second BDPCM indication information is used to indicate the size range of the processing unit supporting the BDPCM mode.
  • Step 7402 Based on the second BDPCM indication information and the size of the current block, determine whether the current block can perform BDPCM processing.
  • If the size of the current block is within the size range, indicated by the second BDPCM indication information, of the processing unit supporting the BDPCM mode, it is determined that the current block can perform BDPCM processing; if the size of the current block is not within that size range, it is determined that the current block cannot be processed by BDPCM.
  • If the second BDPCM indication information is used to indicate the maximum size of a processing unit that can support the BDPCM mode, then if the size of the current block is less than or equal to the maximum size indicated by the second BDPCM indication information, it is determined that the current block can perform BDPCM processing; if the size of the current block is greater than the maximum size indicated by the second BDPCM indication information, it is determined that the current block cannot be processed by BDPCM.
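The size comparison described above can be sketched as follows. This is an illustrative check, not the normative syntax; the 32*32 default and the function name are assumptions:

```python
def bdpcm_allowed(block_w, block_h, max_size=32):
    """Return True if the block may perform BDPCM processing, given the
    maximum width/height conveyed by the second BDPCM indication information."""
    return block_w <= max_size and block_h <= max_size
```

A decoder would evaluate this gate before parsing any block-level BDPCM flags for the current block.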
  • the second BDPCM indication information may exist in a sequence parameter set (SPS), an image parameter level, a slice level, or a tile level.
  • the second BDPCM indication information exists in the sequence parameter set, that is, the second BDPCM indication information is a syntax added at the SPS level.
  • a syntax is added to control the size range in which the BDPCM mode can be used, which improves the flexibility of the coding and decoding process.
  • the syntax elements transmitted between the encoding end and the decoding end may also include third BDPCM indication information and fourth BDPCM indication information.
  • the third BDPCM indication information is used to indicate whether the current processing unit starts the BDPCM mode
  • the fourth BDPCM indication information is used to indicate index information of the prediction direction of the BDPCM mode.
  • the third BDPCM indication information is Intra_bdpcm_flag
  • the fourth BDPCM indication information is Intra_bdpcm_dir_flag.
  • the current block supports the BDPCM mode
  • When it is determined to encode or decode the third BDPCM indication information, it is necessary to perform context-based adaptive binary arithmetic coding or decoding on the third BDPCM indication information based on one context model. When it is determined to encode or decode the fourth BDPCM indication information, it is necessary to perform context-based adaptive binary arithmetic coding or decoding on the fourth BDPCM indication information based on a different context model. That is, two context models are needed to encode and decode the third BDPCM indication information and the fourth BDPCM indication information, as shown in Table 46 below.
  • FIG. 75 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end. As shown in FIG. 75, if the current block supports the BDPCM mode, the method includes the following steps:
  • Step 7501: Before performing BDPCM coding on the current block, perform context-based adaptive binary arithmetic coding on the third BDPCM indication information based on one context model, according to whether the BDPCM mode is enabled for the current block.
  • RDO can be used to decide whether to enable the BDPCM mode, that is, whether to use differential PCM coding of the quantized residual, and the third BDPCM indication information is encoded in the bitstream to indicate whether the current block enables the BDPCM mode.


Abstract

The present application discloses an encoding and decoding method, an apparatus, and a storage medium, belonging to the technical field of image processing. The method includes: when it is determined to encode or decode first ISP indication information, performing context-based adaptive binary arithmetic coding or decoding on the first ISP indication information based on one context model, the first ISP indication information being used to indicate whether the intra sub-block prediction mode is enabled; and when it is determined to encode or decode second ISP indication information, performing bypass-based binary arithmetic coding or decoding on the second ISP indication information, the second ISP indication information being used to indicate the sub-block partition manner of the intra sub-block prediction mode. In this way, the number of context models required in the encoding and decoding process can be reduced, the complexity of the encoding and decoding process is reduced, and the memory overhead is reduced.

Description

Encoding and decoding method, apparatus, and storage medium
This application claims priority to Chinese Patent Application No. 201910545251.0, filed on June 21, 2019 and entitled "Encoding and decoding method, apparatus, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular, to an encoding and decoding method, an apparatus, and a storage medium.
Background
At present, in the field of image encoding and decoding technology, when the encoding end encodes an image block, it usually needs to encode certain syntax elements using context models, so that these syntax elements are carried in the encoded stream of the image block and sent to the decoding end. After receiving the encoded stream of the image block, the decoding end needs to parse these syntax elements using the same context models as the encoding end, so as to reconstruct the image based on these syntax elements. These syntax elements may be various kinds of indication information, for example, first ISP indication information or second ISP indication information, where the first ISP indication information is used to indicate whether the intra sub-block prediction mode is enabled, and the second ISP indication information is used to indicate the sub-block partition manner of the intra sub-block prediction mode.
However, different syntax elements may need to be subjected to context-based adaptive binary arithmetic coding and decoding based on different context models, and the same syntax element may also need different context models in different situations. As a result, many context models are needed in the encoding and decoding process, the encoding and decoding process is highly complex, and the memory overhead is large.
Summary
The embodiments of the present application provide an encoding and decoding method and a storage medium, which can be used to solve the problem in the related art that many context models are required in the encoding and decoding process and the memory overhead is large. The technical solutions are as follows:
In one aspect, an encoding and decoding method is provided, the method including:
when it is determined to encode or decode first ISP indication information, performing context-based adaptive binary arithmetic coding or decoding on the first ISP indication information based on one context model, the first ISP indication information being used to indicate whether the intra sub-block prediction mode is enabled;
when it is determined to encode or decode second ISP indication information, performing bypass-based binary arithmetic coding or decoding on the second ISP indication information, the second ISP indication information being used to indicate the sub-block partition manner of the intra sub-block prediction mode.
In one aspect, an encoding and decoding method is provided, the method including:
when it is determined to encode or decode first ISP indication information, performing bypass-based binary arithmetic coding or decoding on the first ISP indication information, the first ISP indication information being used to indicate whether the intra sub-block prediction mode is enabled;
when it is determined to encode or decode second ISP indication information, performing bypass-based binary arithmetic coding or decoding on the second ISP indication information, the second ISP indication information being used to indicate the sub-block partition type of the intra sub-block prediction mode.
In one aspect, an encoding and decoding method is provided, the method including:
if the width-height size of the current block is M*N, where M is less than 64 and N is less than 64, the current block does not support the multi-line prediction mode.
In a possible implementation of the present application, if the width-height size of the current block is 4*4, the current block does not support the multi-line prediction mode.
In one aspect, an encoding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, the reference line indication information corresponding to the multi-line prediction mode occupies at most 2 bits, and the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode. The method includes:
performing context-based adaptive binary arithmetic coding or decoding on the 1st bit of the reference line indication information based on one context model;
when the 2nd bit of the reference line indication information needs to be encoded or decoded, performing bypass-based binary arithmetic coding or decoding on the 2nd bit of the reference line indication information.
In one aspect, an encoding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 4, the reference line indication information corresponding to the multi-line prediction mode occupies at most 3 bits, and the reference line indication information is used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode. The method includes:
performing context-based adaptive binary arithmetic coding or decoding on the 1st bit of the reference line indication information based on one context model;
when the 2nd bit of the reference line indication information needs to be encoded or decoded, performing bypass-based binary arithmetic coding or decoding on the 2nd bit of the reference line indication information;
when the 3rd bit of the reference line indication information needs to be encoded or decoded, performing bypass-based binary arithmetic coding or decoding on the 3rd bit of the reference line indication information.
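The bit budget described in this aspect (at most 3 bits for 4 candidate lines, with only the 1st bit context coded) matches a truncated-unary binarization of the reference line index. A minimal sketch, with an illustrative helper name:

```python
def ref_line_bins(index, num_candidates=4):
    """Truncated-unary binarization: index k is coded as k ones followed by
    a terminating zero, which is omitted for the last candidate, so at most
    num_candidates - 1 bins are produced. In the scheme described above the
    1st bin would be context coded and the remaining bins bypass coded."""
    max_bins = num_candidates - 1
    bins = [1] * index
    if index < max_bins:
        bins.append(0)
    return bins
```

With 4 candidates the binarizations are 0, 10, 110, and 111, so no index needs more than 3 bins.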
In one aspect, an encoding and decoding method is provided. If the current block supports the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, where the candidate reference line with index information 0 is line 0, which is the line adjacent to the boundary of the current block; the candidate reference line with index information 1 is line 1, which is the line second adjacent to the boundary of the current block; and the candidate reference line with index information 2 is line 2, which is the line adjacent to line 1; the method includes:
when predicting the current block according to the multi-line prediction mode, predicting the current block according to a target reference line;
where the target reference line is determined according to reference line indication information;
if the index information indicated by the reference line indication information is 0, the target reference line is line 0;
if the index information indicated by the reference line indication information is 1, the target reference line is line 1;
if the index information indicated by the reference line indication information is 2, the target reference line is line 2.
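The index-to-line mappings of this aspect and of the following aspect (where index 1 maps to line 2 and index 2 to line 3) can be sketched as simple lookups; the table and function names are illustrative, not from any standard:

```python
# Lines are counted outward from the block boundary (line 0 is adjacent).
MAPPING_ADJACENT = {0: 0, 1: 1, 2: 2}    # this aspect: indices map to lines 0/1/2
MAPPING_SKIP_LINE1 = {0: 0, 1: 2, 2: 3}  # following aspect: line 1 is skipped

def target_ref_line(index, mapping):
    """Resolve the signalled reference line index to the physical line."""
    return mapping[index]
```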
In one aspect, an encoding and decoding method is provided. If it is determined that the current block enables the multi-line prediction mode and the number of candidate reference lines corresponding to the multi-line prediction mode is 3, where the candidate reference line with index information 0 is line 0, which is the line adjacent to the boundary of the current block; the candidate reference line with index information 1 is line 1, which is the line second adjacent to the boundary of the current block; the candidate reference line with index information 2 is line 2, which is the line adjacent to line 1; and the candidate reference line with index information 3 is line 3, which is the line adjacent to line 2; the method includes:
when predicting the current block according to the multi-line prediction mode, predicting the current block according to a target reference line;
where the target reference line is determined according to reference line indication information;
if the index information indicated by the reference line indication information is 0, the target reference line is line 0;
if the index information indicated by the reference line indication information is 1, the target reference line is line 2;
if the index information indicated by the reference line indication information is 2, the target reference line is line 3.
In one aspect, a decoding method is provided. If the current block supports the multi-line prediction mode, the method includes:
before predicting the current block according to the multi-line prediction mode, decoding line number indication information, the line number indication information being used to indicate the number of candidate reference lines corresponding to the multi-line prediction mode;
determining the number of candidate reference lines corresponding to the multi-line prediction mode according to the line number indication information;
determining a target reference line according to the number of candidate reference lines corresponding to the multi-line prediction mode and the reference line indication information, the reference line indication information being used to indicate the index information of the target reference line used when predicting the current block based on the multi-line prediction mode;
predicting the current block according to the target reference line.
In a possible implementation of the present application, the line number indication information exists in the sequence parameter set, the image parameter level, the slice level, or the tile level.
In one aspect, an encoding and decoding method is provided, the method including:
if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, when encoding or decoding the motion vector difference of the current block, if the current block supports the adaptive motion vector resolution (AMVR) mode, then when encoding or decoding first AMVR indication information, performing context-based adaptive binary arithmetic coding or decoding on the first AMVR indication information based on a first context model, the first AMVR indication information being used to indicate whether the AMVR mode is enabled;
when the first AMVR indication information indicates that the current block enables the AMVR mode, performing context-based adaptive binary arithmetic coding or decoding on second AMVR indication information based on a second context model, the second AMVR indication information being used to indicate the index information of the pixel precision used when encoding or decoding the motion vector difference in the AMVR mode, the first context model and the second context model being different.
In one aspect, an encoding and decoding method is provided, where the method includes:
if the current block enables the affine prediction mode, when encoding or decoding the motion vector difference of the current block, if the current block supports the AMVR mode, then when encoding or decoding the first AMVR indication information, performing context-based adaptive binary arithmetic coding or decoding on the first AMVR indication information based on a first context model; and when the first AMVR indication information indicates that the current block enables the AMVR mode, performing bypass-based binary arithmetic coding or decoding on the second AMVR indication information;
if the current block enables a prediction mode other than the affine prediction mode, when encoding or decoding the motion vector difference of the current block, if the current block supports the AMVR mode, then when encoding or decoding the first AMVR indication information, performing context-based adaptive binary arithmetic coding or decoding on the first AMVR indication information based on a second context model; and when the first AMVR indication information indicates that the current block enables the AMVR mode, performing bypass-based binary arithmetic coding or decoding on the second AMVR indication information;
where the first context model and the second context model are different, the first AMVR indication information is used to indicate whether the AMVR mode is enabled, and the second AMVR indication information is used to indicate the index information of the pixel precision used when encoding or decoding the motion vector difference in the AMVR mode.
In one aspect, an encoding and decoding method is provided, the method including:
if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, when encoding or decoding the motion vector difference of the current block, if the current block supports the adaptive motion vector resolution (AMVR) mode, then when encoding or decoding first AMVR indication information, performing context-based adaptive binary arithmetic coding or decoding on the first AMVR indication information based on a first context model, the first AMVR indication information being used to indicate whether the AMVR mode is enabled;
when the first AMVR indication information indicates that the current block enables the AMVR mode, performing bypass-based binary arithmetic coding or decoding on second AMVR indication information, the second AMVR indication information being used to indicate the index information of the pixel precision used when encoding or decoding the motion vector difference in the AMVR mode.
In one aspect, a decoding method is provided, the method including:
if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the most probable mode (MPM) list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decoding prediction mode index information, where the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, where the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the other bits are obtained by bypass-based binary arithmetic decoding, and the second context model and the first context model are the same context model;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decoding prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decoding prediction mode index information, where the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block is a luma block, then when predicting the current block according to intra sub-block prediction, decoding prediction mode index information, where all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding prediction mode index information, where the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
determining, from the MPM list according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block, the prediction mode index information being used to indicate the index information of the target prediction mode in the MPM list;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decoding planar indication information, where the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a first context model;
when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determining, from the MPM list according to prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, and predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, and the first context model and the second context model are the same;
when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction exists in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decoding planar indication information, where the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determining, from the MPM list according to prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block, and predicting the current block according to the target prediction mode; or
if the current block enables regular intra prediction, when the target prediction mode of the regular intra prediction comes from the MPM list and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decoding planar indication information, where the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; where the 1st bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, the 2nd bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the first context model and the second context model are different, and the 3rd bit and the 4th bit of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list; where the 1st bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the 2nd, 3rd, and 4th bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
predicting the current block according to the target prediction mode.
In one aspect, an encoding and decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
where if the chroma prediction mode index information is 10, the target prediction mode is the first cross-component prediction mode;
if the chroma prediction mode index information is 110, the target prediction mode is the second cross-component prediction mode;
if the chroma prediction mode index information is 111, the target prediction mode is the second cross-component prediction mode;
if the chroma prediction mode index information is 11110, the target prediction mode is the planar prediction mode;
if the chroma prediction mode index information is 111110, the target prediction mode is the vertical prediction mode;
if the chroma prediction mode index information is 1111110, the target prediction mode is the horizontal prediction mode;
if the chroma prediction mode index information is 1111111, the target prediction mode is the DC prediction mode;
predicting the current block according to the target prediction mode.
In one aspect, a decoding method is provided. If the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method includes:
when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate the index of the target prediction mode of the current block in the corresponding candidate prediction mode list;
determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
where if the chroma prediction mode index information is 10, the target prediction mode is the first cross-component prediction mode;
if the chroma prediction mode index information is 110, the target prediction mode is the second cross-component prediction mode;
if the chroma prediction mode index information is 111, the target prediction mode is the second cross-component prediction mode;
if the chroma prediction mode index information is 111100, the target prediction mode is the planar prediction mode;
if the chroma prediction mode index information is 111101, the target prediction mode is the vertical prediction mode;
if the chroma prediction mode index information is 111110, the target prediction mode is the horizontal prediction mode;
if the chroma prediction mode index information is 111111, the target prediction mode is the DC prediction mode;
predicting the current block according to the target prediction mode.
In one aspect, an encoding and decoding method is provided, the method including:
when the luma and chroma of the current block share one partition tree, if the width-height size of the luma block corresponding to the current block is 64*64 and the size of the chroma block corresponding to the current block is 32*32, the current block does not support the cross-component prediction mode.
In one aspect, a decoding method is provided, the method including:
if the current block supports the adaptive loop filter (ALF) and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; where the ALF indication information is used to indicate whether the current block enables the ALF, and the target context model is one context model selected from 3 different context models included in a first context model set according to whether the upper block of the current block enables the ALF and whether the left block of the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; where the target context model is one context model selected from 3 different context models included in a second context model set according to whether the upper block of the current block enables the ALF and whether the left block of the current block enables the ALF, and the 3 context models included in the second context model set are different from the 3 context models included in the first context model set.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; where the ALF indication information is used to indicate whether the current block enables the ALF, and the target context model is one context model selected from 3 different context models included in a first context model set according to whether the upper block of the current block enables the ALF and whether the left block of the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; where the target context model is one context model selected from 3 different context models included in a second context model set according to whether the upper block of the current block enables the ALF and whether the left block of the current block enables the ALF, and the 3 context models included in the second context model set are the same as the 3 context models included in the first context model set.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model, the second context model being different from the first context model.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; where the ALF indication information is used to indicate whether the current block enables the ALF, and the target context model is one context model selected from 3 different context models included in a first context model set according to whether the upper block of the current block enables the ALF and whether the left block of the current block enables the ALF; or,
if the current block supports the ALF and the current block is a CB chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model; or,
if the current block supports the ALF and the current block is a CR chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model, the context models included in the first context model set, the first context model, and the second context model being different context models.
In one aspect, a decoding method is provided, where the method includes:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF and the current block is a CB chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model; or,
if the current block supports the ALF and the current block is a CR chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a third context model, the first context model, the second context model, and the third context model being different context models.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model, the second context model and the first context model being the same context model.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on ALF indication information, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on the ALF indication information.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on one context model, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF, the current block enables the ALF, and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on the ALF indication information.
In one aspect, a decoding method is provided, the method including:
if the current block supports the ALF and the current block is a luma block, then before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on ALF indication information, the ALF indication information being used to indicate whether the current block enables the ALF; or,
if the current block supports the ALF and the current block is a chroma block, then before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on the ALF indication information based on one context model.
In one aspect, an encoding and decoding method is provided, the method including:
if the width-height size of the current block is 32*32, the current block does not support the matrix-based intra prediction mode.
In one aspect, a decoding method is provided. If the current block supports the matrix-based intra prediction mode, the method includes:
before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; where the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 3 different context models according to whether the upper block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode;
if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
In one aspect, a decoding method is provided. If the current block supports the matrix-based intra prediction mode, the method includes:
before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; where the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 2 different context models according to whether the current block satisfies a preset size condition;
if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
In one aspect, a decoding method is provided. If the current block supports the matrix-based intra prediction mode, the method includes:
before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on the same context model, the MIP indication information being used to indicate whether the current block enables the matrix-based intra prediction mode;
if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
In one aspect, a decoding method is provided. If the current block supports the matrix-based intra prediction mode, the method includes:
before predicting the current block according to the matrix-based intra prediction mode, performing bypass-based binary arithmetic decoding on MIP indication information, the MIP indication information being used to indicate whether the current block enables the matrix-based intra prediction mode;
if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
In one aspect, a decoding method is provided, the method including:
decoding first BDPCM indication information, the first BDPCM indication information being used to indicate whether the current processing unit supports the BDPCM mode;
decoding the current processing unit according to the first BDPCM indication information.
In a possible implementation of the present application, the first BDPCM indication information exists in the sequence parameter set, the image parameter level, the slice level, or the tile level.
In one aspect, an encoding and decoding method is provided, the method including:
encoding or decoding second BDPCM indication information, the second BDPCM indication information being used to indicate the size range of the processing unit supporting the BDPCM mode;
determining, based on the second BDPCM indication information and the size of the current block, whether the current block can perform BDPCM encoding or decoding.
In a possible implementation of the present application, the second BDPCM indication information exists in the sequence parameter set, the image parameter level, the slice level, or the tile level.
In one aspect, a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
performing context-based adaptive binary arithmetic decoding on third BDPCM indication information based on one context model, the third BDPCM indication information being used to indicate whether the current block enables the BDPCM mode;
when the third BDPCM indication information indicates that the current block enables the BDPCM mode, performing bypass-based binary arithmetic decoding on fourth BDPCM indication information, the fourth BDPCM indication information being used to indicate the index information of the prediction direction of the BDPCM mode;
performing BDPCM processing on the current block according to the prediction direction indicated by the fourth BDPCM indication information.
In one aspect, a decoding method is provided. If the current block supports the BDPCM mode, the method includes:
performing bypass-based binary arithmetic decoding on third BDPCM indication information, the third BDPCM indication information being used to indicate whether the current block enables the BDPCM mode;
when the third BDPCM indication information indicates that the current block enables the BDPCM mode, performing bypass-based binary arithmetic decoding on fourth BDPCM indication information, the fourth BDPCM indication information being used to indicate the index information of the prediction direction of the BDPCM mode;
performing BDPCM processing on the current block according to the prediction direction indicated by the fourth BDPCM indication information.
In one aspect, an encoding and decoding method is provided, the method including:
if the current block enables intra sub-block prediction, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or decoding on the CBF indication information based on a target context model; where the CBF indication information is used to indicate whether a transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models included in a first context model set according to whether the previous transform block of the current block has non-zero transform coefficients; or,
if the current block enables regular intra prediction or enables the BDPCM mode, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or decoding on the CBF indication information based on a target context model; where the target context model is one context model selected from 2 different context models included in a second context model set according to the partition depth of the transform block of the current block, and the 2 context models included in the second context model set are different from the 2 context models included in the first context model set.
In one aspect, an encoding and decoding method is provided, the method including:
if the current block enables intra sub-block prediction, regular intra prediction, or the BDPCM mode, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or decoding on the CBF indication information based on a target context model; where the CBF indication information is used to indicate whether a transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models according to the partition depth of the transform block of the current block.
In one aspect, an encoding and decoding method is provided, the method including:
if the current block enables intra sub-block prediction or regular intra prediction, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or decoding on the CBF indication information based on a target context model; where the CBF indication information is used to indicate whether a transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models included in a first context model set according to the partition depth of the transform block of the current block; or,
if the current block enables the BDPCM mode, then when it is determined to encode or decode CBF indication information, performing context-based adaptive binary arithmetic coding or decoding on the CBF indication information based on a target context model; where the target context model is one context model in the first context model set.
In one aspect, a decoding method is provided, the method including:
decoding JCCR indication information, the JCCR indication information being used to indicate whether the current processing unit supports the JCCR mode;
if it is determined according to the JCCR indication information that the current block supports the JCCR mode and the current block enables the JCCR mode, decoding the current block according to the correlation between the blue chroma (CB) component and the red chroma (CR) component of the current block to obtain the chroma residual coefficients of the current block.
In a possible implementation of the present application, the JCCR indication information exists in the sequence parameter set, the image parameter level, the slice level, or the tile level.
In one aspect, an encoding and decoding apparatus is provided, where the apparatus includes:
a processor; and
a memory for storing processor-executable instructions;
where the processor is configured to perform any one of the above encoding and decoding methods or decoding methods.
In one aspect, a computer-readable storage medium is provided, where instructions are stored on the computer-readable storage medium, and when the instructions are executed by a processor, any one of the above encoding and decoding methods or decoding methods is implemented.
In one aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to perform any one of the above encoding and decoding methods or decoding methods.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application are as follows:
In the embodiments of the present application, when it is determined to encode or decode the first ISP indication information, context-based adaptive binary arithmetic coding or decoding is performed on the first ISP indication information based on one context model, and when it is determined to encode or decode the second ISP indication information, bypass-based binary arithmetic coding or decoding is performed on the second ISP indication information. In this way, the number of context models required in the encoding and decoding process can be reduced, the complexity of the encoding and decoding process is reduced, and the memory overhead is reduced.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of encoding and decoding provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the directions corresponding to intra prediction modes provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the directions corresponding to angular modes provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the partitioning of an image block provided by an embodiment of the present application;
FIG. 6 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 8 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 10 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 13 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 15 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 16 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 37 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 39 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 40 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 41 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 42 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 43 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 44 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 45 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 46 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 47 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 48 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 49 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 50 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 51 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 52 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 53 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 54 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 55 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 56 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 57 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 58 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 59 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 60 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 61 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 62 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 63 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 64 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 65 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 66 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 67 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 68 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 69 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 70 is a flowchart of an encoding and decoding method provided by an embodiment of the present application;
FIG. 71 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 72 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 73 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 74 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 75 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 76 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 77 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 78 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 79 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 80 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 81 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 82 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 83 is a flowchart of an encoding method provided by an embodiment of the present application;
FIG. 84 is a flowchart of a decoding method provided by an embodiment of the present application;
FIG. 85 is a flowchart of an encoding mode provided by an embodiment of the present application;
FIG. 86 is a flowchart of an encoding mode provided by an embodiment of the present application;
FIG. 87 is a schematic structural diagram of an encoding end provided by an embodiment of the present application;
FIG. 88 is a schematic structural diagram of a decoding end provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the implementations of the present application are described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, the application scenarios of the embodiments of the present application are described.
FIG. 1 is a schematic structural diagram of an encoding and decoding system provided by an embodiment of the present application. As shown in FIG. 1, the encoding and decoding system includes an encoder 01, a decoder 02, a storage device 03, and a link 04. The encoder 01 can communicate with the storage device 03, and the encoder 01 can also communicate with the decoder 02 through the link 04. The decoder 02 can also communicate with the storage device 03.
The encoder 01 is used to acquire a data source, encode the data source, and transmit the encoded stream to the storage device 03 for storage, or transmit it directly to the decoder 02 through the link 04. The decoder 02 can acquire the encoded stream from the storage device 03 and decode it to obtain the data source, or decode the encoded stream transmitted by the encoder 01 through the link 04 after receiving it, to obtain the data source. The data source may be a captured image or a captured video. Each of the encoder 01 and the decoder 02 may serve as an independent electronic device. The storage device 03 may include any of a variety of distributed or locally accessed data storage media, for example, a hard disk drive, a Blu-ray disc, a read-only optical disc, a flash memory, or other suitable digital storage media for storing encoded data. The link 04 may include at least one communication medium, and the at least one communication medium may include a wireless and/or wired communication medium, such as an RF (Radio Frequency) spectrum or one or more physical transmission lines.
Please refer to FIG. 2, which is a schematic flowchart of encoding and decoding shown according to an exemplary embodiment. Encoding includes the processes of prediction, transform, quantization, and entropy coding, and decoding includes the processes of decoding, inverse transform, inverse quantization, and prediction. At present, binary arithmetic coding and decoding techniques are usually used to encode and decode the current syntax element. Prediction in encoding and decoding generally includes intra prediction, multi-line prediction, cross-component prediction, and matrix-based intra prediction. In addition, encoding and decoding also use the intra luma candidate list, the adaptive loop filter, the adaptive motion vector resolution coding technique, and the BDPCM (Block-based quantized residual domain Differential Pulse Code Modulation) coding technique. These prediction methods and coding techniques are briefly introduced below.
Binary Arithmetic Coding
Binary arithmetic coding refers to performing arithmetic coding on each bin of the binarized current syntax element according to its probability model parameters to obtain the final bitstream. It includes two coding modes: context-based adaptive binary arithmetic coding and bypass-based binary arithmetic coding.
CABAC (Context-based Adaptive Binary Arithmetic Coding) is a method that combines adaptive binary arithmetic coding with a well-designed context model. In coding, the coding of each symbol is related to the results of previous coding, and codewords are adaptively assigned to each symbol according to the statistical characteristics of the symbol stream. It is especially suitable for symbols whose occurrence probabilities are unequal, and can further compress the bit rate. The bits of a syntax element enter the context modeler in order, and the encoder assigns a suitable probability model to each input bit according to previously encoded syntax elements or bits; this process is called context modeling. The bit and the probability model assigned to it are sent together to the binary arithmetic encoder for coding. The encoder updates the context model according to the bit value, which is the adaptivity of the coding.
Bypass-based binary arithmetic coding is a binary arithmetic coding mode based on equal probability (also called the bypass coding mode). Compared with CABAC, bypass coding omits the probability update process and does not need to adaptively update the probability state; instead, it codes with a fixed probability of 50% for each of 0 and 1. This coding method is simpler, has lower coding complexity and smaller memory consumption, and is suitable for equiprobable symbols.
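To illustrate why adaptivity helps for skewed symbols while bypass coding suits equiprobable ones, the following toy sketch compares ideal code lengths. The count-based estimator is an illustrative stand-in, not the actual CABAC probability state machine:

```python
import math

class ContextModel:
    """Toy adaptive probability estimate for one bin (a simple count-based
    estimator with Laplace smoothing, for illustration only)."""
    def __init__(self):
        self.ones = 1
        self.total = 2

    def cost_and_update(self, bin_val):
        p = self.ones / self.total if bin_val else 1 - self.ones / self.total
        self.ones += bin_val
        self.total += 1
        return -math.log2(p)  # ideal code length of this bin, in bits

def bypass_cost(bins):
    """Bypass coding: fixed p = 0.5, so exactly 1 bit per bin."""
    return float(len(bins))

def context_cost(bins):
    """Total ideal code length under the adaptive estimate."""
    ctx = ContextModel()
    return sum(ctx.cost_and_update(b) for b in bins)
```

For a strongly skewed bin sequence the adaptive estimate spends well under 1 bit per bin on average, whereas bypass coding always spends exactly 1 bit per bin but needs no context model storage or update.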
Intra Prediction
Intra prediction uses the correlation of the spatial domain of an image to predict the pixels of the current image block from the pixels of encoded and reconstructed neighboring blocks around the current image block, so as to remove the spatial redundancy of the image. Multiple intra prediction modes are specified in intra prediction, and each intra prediction mode (except the DC mode) corresponds to one texture direction. For example, if the texture of the image is arranged horizontally, a horizontal-class prediction mode can better predict the image information. Exemplarily, in HEVC (High Efficiency Video Coding), the luma component can support prediction units (image blocks or sub-blocks) of 5 sizes, and each size of prediction unit corresponds to 35 intra prediction modes, including the Planar mode, the DC mode, and 33 angular modes, as shown in Table 1.
Table 1
Mode number    Intra prediction mode
0    Intra_Planar
1    Intra_DC
2...34    Intra_angular2...Intra_angular34
The prediction directions corresponding to these intra prediction modes are shown in FIG. 3. The Planar mode is suitable for areas where pixel values change slowly. In implementation, two linear filters in the horizontal and vertical directions can be used for filtering, and the average of the two results is taken as the prediction value of the current image block. The DC mode is suitable for large flat areas, and the average pixel value of the encoded and reconstructed neighboring blocks around the current image block can be taken as the prediction value of the current image block. As an example, the Planar mode and the DC mode may also be called non-angular modes. Referring further to FIG. 3, among the angular modes, the intra prediction modes corresponding to mode number 26 and mode number 10 represent the vertical direction and the horizontal direction, respectively. In a possible implementation of the present application, the intra prediction modes corresponding to the mode numbers adjacent to mode number 26 may be collectively called vertical-class prediction modes, and the intra prediction modes corresponding to the mode numbers adjacent to mode number 10 may be collectively called horizontal-class prediction modes. Exemplarily, the vertical-class prediction modes may include mode number 2 to mode number 18, and the horizontal-class prediction modes may include mode number 19 to mode number 34. In addition, the new-generation coding standard VVC (Versatile Video Coding) partitions the angular modes in more detail, as shown in FIG. 4.
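As a small illustration of the DC mode described above, the prediction value is simply a rounded average of the reconstructed neighboring samples (the function name and rounding convention are illustrative):

```python
def dc_predict(top_neighbors, left_neighbors):
    """DC intra prediction sketch: the block is filled with the rounded
    integer mean of the reconstructed neighboring pixels."""
    refs = list(top_neighbors) + list(left_neighbors)
    return (sum(refs) + len(refs) // 2) // len(refs)

# A 4x4 block would then be filled uniformly with this single value.
```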
Regular Intra Prediction
Regular intra prediction uses surrounding pixels to predict the current block, removing spatial redundancy. In the regular intra prediction mode, the target prediction mode used may come from the MPM (Most Probable Mode) list or from the non-MPM list.
ISP (Intra Sub-Partitions, intra sub-block prediction)
In the ISP technique, intra prediction partitions an image block into multiple sub-blocks for prediction. For an image block that supports the ISP technique, the supported partition manners include horizontal partitioning and vertical partitioning. For the decoding end, when the current block enables the ISP mode, if the size of the current block supports only one partition manner by default, the current block is partitioned in the default partition direction and then subjected to prediction, inverse transform, inverse quantization, and other processing; if the size of the current block supports two partition manners, the partition direction also needs to be further parsed, and then the current block is partitioned according to the determined partition direction and subjected to prediction, inverse transform, inverse quantization, and other processing.
MRL (Multi-Reference Line prediction)
The method used in the MRL technique is to perform prediction based on the reference pixels of the current block, and the reference pixels may come from lines near the current block. For example, the reference pixels may come from Reference line 0 (line 0), Reference line 1 (line 1), Reference line 2 (line 2), and Reference line 3 (line 3) as shown in FIG. 5, where line 0 is the line adjacent to the boundary of the current block, line 1 is the line second adjacent to the boundary of the current block, line 2 is the line adjacent to line 1, and line 3 is the line adjacent to line 2. At present, in the new-generation coding standard VVC, the reference pixels come from Reference line 0, Reference line 1, and Reference line 3; Reference line 2 is not used. A "line" here may be a row above the current block or a column to the left of the current block.
MPM
The number of MPMs is 3 in HEVC and 6 in the current VVC. For the ISP and MRL modes, the intra prediction mode must come from the MPMs; for regular intra prediction, the intra prediction mode may come from the MPMs or from the non-MPMs.
CCLM (Cross-Component Linear Model prediction)
The method used in the CCLM technique is to use a linear prediction model to obtain the predicted pixel values of the chroma component from the reconstructed pixel values of the luma component through a linear equation, which can remove the redundancy between image components and further improve coding performance. At present, there are three cross-component prediction modes: the MDLM_L mode, the MDLM_T mode, and the DM mode. MDLM_L is a cross-component prediction mode in which the linear parameters are derived using only the left template information; MDLM_T is a cross-component prediction mode in which the linear model parameters are derived using only the above template information; and in DM, chroma uses the same prediction mode as luma.
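The linear model described above can be sketched as follows. In a real codec the parameters alpha and beta are derived from neighboring template samples and the arithmetic is done in fixed point; here they are passed in directly for illustration:

```python
def cclm_predict(rec_luma, alpha, beta, bitdepth=8):
    """Cross-component prediction sketch: chroma = alpha * luma + beta,
    clipped to the valid sample range for the given bit depth."""
    hi = (1 << bitdepth) - 1
    return [max(0, min(hi, alpha * l + beta)) for l in rec_luma]
```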
Adaptive Loop Filtering
The adaptive loop filter (ALF) can select one of a set of fixed filters for filtering according to its own gradient direction, and a CTU-level flag can indicate whether ALF filtering is enabled for the block; chroma and luma can be controlled separately.
AMVR (Adaptive Motion Vector Resolution)
AMVR indicates that different precisions can be used when encoding the motion vector difference. The precision used may be an integer pixel precision, such as 4-pixel precision, or a non-integer pixel precision, such as 1/16-pixel precision. This technique can be applied to motion vector data coding under regular inter prediction, and can also be used for motion vector data coding in the affine prediction mode.
MIP (Matrix-based Intra Prediction)
The matrix-based intra prediction technique determines the predicted pixel values of the current block by taking the neighboring pixels above and to the left of the current block as reference pixels, feeding them into a matrix-vector multiplier, and adding an offset value.
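A minimal sketch of the matrix-vector step described above. The matrix and offset here are placeholders rather than trained MIP parameters, and real MIP additionally downsamples the boundary samples and upsamples the result:

```python
def mip_predict(ref_samples, matrix, offset):
    """Matrix-based intra prediction sketch: multiply the reference sample
    vector by a mode-dependent matrix and add a bias to each output sample."""
    return [sum(a * r for a, r in zip(row, ref_samples)) + offset
            for row in matrix]
```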
BDPCM
BDPCM means that, in the prediction step, the predicted pixel directly copies the pixel value of the corresponding reference pixel in the vertical direction, or copies the pixel value of the corresponding reference pixel in the horizontal direction, similar to vertical prediction and horizontal prediction. Then the residual values between the predicted pixels and the original pixels are quantized, and the quantized residuals are differentially coded.
For example, if the size of the current block is M*N, let r(i,j), 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1, denote the prediction residuals, and let Q(r(i,j)) denote the quantized residuals obtained by quantizing r(i,j). The quantized residuals Q(r(i,j)) are then differentially coded to obtain the differential coding result r̃(i,j).
For the vertical BDPCM mode:
r̃(i,j) = Q(r(i,j)) for i = 0, 0 ≤ j ≤ N−1;
r̃(i,j) = Q(r(i,j)) − Q(r(i−1,j)) for 1 ≤ i ≤ M−1, 0 ≤ j ≤ N−1.
For the horizontal BDPCM mode:
r̃(i,j) = Q(r(i,j)) for j = 0, 0 ≤ i ≤ M−1;
r̃(i,j) = Q(r(i,j)) − Q(r(i,j−1)) for 0 ≤ i ≤ M−1, 1 ≤ j ≤ N−1.
Finally, r̃(i,j) is written into the bitstream.
For the decoding end, an inverse accumulation process is used to obtain the quantized residual data.
For vertical prediction: Q(r(i,j)) = Σ(k=0..i) r̃(k,j), 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1.
For horizontal prediction: Q(r(i,j)) = Σ(k=0..j) r̃(i,k), 0 ≤ i ≤ M−1, 0 ≤ j ≤ N−1.
Then the quantized residuals are inverse quantized and added to the prediction values to obtain the reconstructed pixel values.
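The differential coding and the decoder-side inverse accumulation above can be sketched directly; this is a plain-Python illustration of the formulas, not an optimized implementation:

```python
def bdpcm_encode(q_res, vertical=True):
    """Differential coding of quantized residuals Q(r): transmit the
    difference from the neighbor above (vertical) or to the left (horizontal)."""
    M, N = len(q_res), len(q_res[0])
    out = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            if vertical:
                out[i][j] = q_res[i][j] - (q_res[i - 1][j] if i > 0 else 0)
            else:
                out[i][j] = q_res[i][j] - (q_res[i][j - 1] if j > 0 else 0)
    return out

def bdpcm_decode(diff, vertical=True):
    """Inverse accumulation: Q(r(i,j)) is the running sum of the differences
    along the prediction direction."""
    M, N = len(diff), len(diff[0])
    out = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            prev = (out[i - 1][j] if i > 0 else 0) if vertical else \
                   (out[i][j - 1] if j > 0 else 0)
            out[i][j] = prev + diff[i][j]
    return out
```

Decoding is the exact inverse of encoding for both directions, which is what makes the running-sum reconstruction lossless on the quantized residuals.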
JCCR (Joint Coding of Chroma Residuals)
JCCR is a way of jointly coding the CB (blue chroma) component and the CR (red chroma) component. By observing the distribution of chroma residuals, it is easy to find that CB and CR always show a negatively correlated trend. JCCR takes advantage of this phenomenon and proposes joint coding of CB and CR; for example, only (CB − CR)/2, that is, the mean of the CB and CR components, needs to be coded.
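A toy sketch of the joint residual idea above, assuming the (CB − CR)/2 variant. The reconstruction as (joint, −joint) is an approximation, since only the joint signal is coded:

```python
def jccr_joint_residual(cb, cr):
    """Form one joint residual per sample instead of coding CB and CR
    separately, exploiting their typical negative correlation."""
    return [(b - r) // 2 for b, r in zip(cb, cr)]

def jccr_reconstruct(joint):
    """Decoder-side sketch: CB is taken as the joint residual and CR as
    its negation."""
    return list(joint), [-j for j in joint]
```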
In the related art, under different prediction modes or coding techniques, the encoding end needs to transmit different syntax elements to the decoding end, and many context models are required to transmit these syntax elements, so the coding and decoding process is highly complex and the memory overhead is large. Based on this, the present application provides an encoding and decoding method that can reduce the number of required context models, thereby reducing the complexity of the coding and decoding process and the memory overhead. Next, the encoding and decoding methods of the embodiments of the present application are introduced for each of the above prediction modes and coding techniques.
ISP mode
In ISP mode, the syntax elements to be transmitted between the encoding end and the decoding end may include first ISP indication information and second ISP indication information. The first ISP indication information indicates whether the intra sub-block prediction mode is enabled, and the second ISP indication information indicates the sub-block partitioning manner of the intra sub-block prediction mode. For example, the first indication information is intra_subpartitions_mode_flag and the second indication information is intra_subpartitions_split_flag.
Moreover, when it is determined to encode or decode the first ISP indication information, context-based adaptive binary arithmetic coding or decoding of the first ISP indication information must be performed based on one context model; when it is determined to encode or decode the second ISP indication information, context-based adaptive binary arithmetic coding or decoding of the second ISP indication information must be performed based on another, different context model. That is, two context models are needed to encode and decode the first and second ISP indication information, as shown in Table 2 below.
Table 2
Figure PCTCN2020097144-appb-000007
First implementation of the ISP mode
FIG. 6 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end and, as shown in FIG. 6, includes:
Step 601: when it is determined to encode the first ISP indication information, perform context-based adaptive binary arithmetic coding of the first ISP indication information based on one context model, the first ISP indication information indicating whether the intra sub-block prediction mode is enabled.
As an example, if the current block satisfies the conditions for supporting the sub-block partitioning technique, the current block may try the sub-block partitioning technique. The encoding end may decide through RDO (Rate Distortion Optimization) whether to finally use the sub-block partitioning technique, and encode the first ISP indication information to indicate whether the current block enables the intra sub-block prediction mode. The conditions for supporting the sub-block partitioning technique include: the current block is a luma block, the current block does not enable the multiple-reference-line prediction mode, and the size of the current block satisfies certain restrictions. Of course, the conditions for supporting the sub-block partitioning technique are not limited to these three and may include others.
For example, the first ISP indication information is intra_subpartitions_mode_flag, a flag indicating whether the current block enables the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, the current block does not enable the intra sub-block prediction mode; if intra_subpartitions_mode_flag is 1, the current block enables the intra sub-block prediction mode.
Step 602: when it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding of the second ISP indication information, the second ISP indication information indicating the sub-block partitioning manner of the intra sub-block prediction mode.
The sub-block partitioning manner includes a horizontal partitioning direction and a vertical partitioning direction. When the current block supports both partitioning directions, the direction finally used must be determined, and the second ISP indication information is encoded based on that direction. When the current block supports only one partitioning direction, the second ISP indication information does not need to be encoded.
The second ISP indication information may be intra_subpartitions_split_flag, a flag indicating the sub-block partitioning manner of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, the partitioning of the ISP mode of the current block is horizontal; when intra_subpartitions_split_flag is 1, the partitioning is vertical.
As an example, the coding of the syntax elements in ISP mode is shown in Table 3 below:
Table 3
Figure PCTCN2020097144-appb-000008
That is, the coding of the second ISP indication information in the related art is modified: the complex CABAC coding is replaced with bypass coding. This reduces memory overhead and coding complexity while, in terms of coding performance, the performance remains essentially unchanged.
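The modified signaling can be sketched as a tiny bin generator (a toy model under labels of our own choosing: "context:0" marks the single context-coded bin and "bypass" a bypass-coded bin; nothing here is normative syntax):

```python
def isp_bins(use_isp, split_vertical, supports_both_directions):
    """Return the list of (bin_value, coding_mode) pairs to be written."""
    # intra_subpartitions_mode_flag: always present, one context model
    bins = [(1 if use_isp else 0, "context:0")]
    # intra_subpartitions_split_flag: only when ISP is on and both
    # partitioning directions are available; bypass-coded in this scheme
    if use_isp and supports_both_directions:
        bins.append((1 if split_vertical else 0, "bypass"))
    return bins
```

For example, a block that enables ISP with vertical partitioning emits two bins, only the first of which consumes a context model.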
FIG. 7 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method of the embodiment of FIG. 6. As shown in FIG. 7, the method includes:
Step 701: when it is determined to decode the first ISP indication information, perform context-based adaptive binary arithmetic decoding of the first ISP indication information based on one context model, the first ISP indication information indicating whether the intra sub-block prediction mode is enabled.
As an example, the coded stream of the current block may first be received; if the current block satisfies the parsing conditions, the first ISP indication information in the coded stream is decoded to parse out whether the current block enables the intra sub-block prediction mode. The parsing conditions include: the current block is a luma block, the current block does not enable the multiple-reference-line prediction mode, and the size of the current block satisfies certain restrictions. Of course, the parsing conditions are not limited to these three and may include others.
For example, the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, the current block does not enable the intra sub-block prediction mode; if it is 1, the current block enables the intra sub-block prediction mode.
Step 702: when it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding of the second ISP indication information, the second ISP indication information indicating the sub-block partitioning manner of the intra sub-block prediction mode.
For example, when the first ISP indication information indicates that the current block enables the intra sub-block prediction mode and the current block supports both partitioning directions, it is determined that the second ISP indication information needs to be decoded. If the first ISP indication information indicates that the current block does not enable the intra sub-block prediction mode, or indicates that it does but the current block supports only one partitioning direction, it is determined that the second ISP indication information does not need to be decoded.
For example, if intra_subpartitions_mode_flag is 1 and the current block supports both partitioning directions, the partition-direction flag intra_subpartitions_split_flag must be further parsed. If intra_subpartitions_mode_flag is 0, or it is 1 but the current block supports only one fixed partitioning direction, the partition-direction flag does not need to be parsed.
From these two ISP indications, the decoding end can determine whether the current block enables the ISP mode and the corresponding partitioning direction, and thus predict the current block based on the determined partitioning direction to obtain the prediction value of the current block for the subsequent reconstruction process.
Second implementation of the ISP mode
FIG. 8 is a flowchart of an encoding method provided by an embodiment of the present application. The method is applied to the encoding end and, as shown in FIG. 8, includes:
Step 801: when it is determined to encode the first ISP indication information, perform bypass-based binary arithmetic coding of the first ISP indication information, the first ISP indication information indicating whether the intra sub-block prediction mode is enabled.
As an example, if the current block satisfies the conditions for supporting the sub-block partitioning technique, the current block may try the sub-block partitioning technique. The encoding end may decide through RDO whether to finally use the sub-block partitioning technique, and encode the first ISP indication information to indicate whether the current block enables the intra sub-block prediction mode. The conditions for supporting the sub-block partitioning technique include: the current block is a luma block, the current block does not enable the multiple-reference-line prediction mode, and the size of the current block satisfies certain restrictions. Of course, the conditions are not limited to these three and may include others.
For example, the first ISP indication information is intra_subpartitions_mode_flag, a flag indicating whether the current block enables the intra sub-block prediction mode. If intra_subpartitions_mode_flag is 0, the current block does not enable the intra sub-block prediction mode; if it is 1, the current block enables the intra sub-block prediction mode.
Step 802: when it is determined to encode the second ISP indication information, perform bypass-based binary arithmetic coding of the second ISP indication information, the second ISP indication information indicating the sub-block partitioning manner of the intra sub-block prediction mode.
The sub-block partitioning manner includes a horizontal partitioning direction and a vertical partitioning direction. When the current block supports both partitioning directions, the direction finally used must be determined and the second ISP indication information encoded based on that direction. When the current block supports only one partitioning direction, the second ISP indication information does not need to be encoded.
The second ISP indication information may be intra_subpartitions_split_flag, a flag indicating the sub-block partitioning manner of the ISP mode of the current block. For example, when intra_subpartitions_split_flag is 0, the partitioning is horizontal; when it is 1, the partitioning is vertical.
As an example, the coding of the syntax elements in ISP mode is shown in Table 4 below:
Table 4
Figure PCTCN2020097144-appb-000009
That is, the coding of both the intra_subpartitions_mode_flag and the intra_subpartitions_split_flag in the related art is modified: both use bypass coding instead of the original, more complex CABAC coding. This further reduces memory overhead and coding complexity while, in terms of coding performance, the performance remains essentially unchanged.
FIG. 9 is a flowchart of a decoding method provided by an embodiment of the present application. The method is applied to the decoding end and corresponds to the encoding method of the embodiment of FIG. 8. As shown in FIG. 9, the method includes:
Step 901: when it is determined to decode the first ISP indication information, perform bypass-based binary arithmetic decoding of the first ISP indication information, the first ISP indication information indicating whether the intra sub-block prediction mode is enabled.
As an example, the coded stream of the current block may first be received; if the current block satisfies the parsing conditions, the first ISP indication information in the coded stream is decoded to parse out whether the current block enables the intra sub-block prediction mode. The parsing conditions include: the current block is a luma block, the current block does not enable the multiple-reference-line prediction mode, and the size of the current block satisfies certain restrictions. Of course, the parsing conditions are not limited to these three and may include others.
For example, the first ISP indication information is intra_subpartitions_mode_flag. If intra_subpartitions_mode_flag is 0, the current block does not enable the intra sub-block prediction mode; if it is 1, the current block enables the intra sub-block prediction mode.
Step 902: when it is determined to decode the second ISP indication information, perform bypass-based binary arithmetic decoding of the second ISP indication information, the second ISP indication information indicating the sub-block partitioning manner of the intra sub-block prediction mode.
For example, when the first ISP indication information indicates that the current block enables the intra sub-block prediction mode and the current block supports both partitioning directions, it is determined that the second ISP indication information needs to be decoded. If the first ISP indication information indicates that the current block does not enable the intra sub-block prediction mode, or indicates that it does but the current block supports only one partitioning direction, it is determined that the second ISP indication information does not need to be decoded.
For example, if intra_subpartitions_mode_flag is 1 and the current block supports both partitioning directions, intra_subpartitions_split_flag must be further parsed; if intra_subpartitions_mode_flag is 0, or it is 1 but the current block supports only one fixed partitioning direction, the partition-direction flag does not need to be parsed.
From these two ISP indications, the decoding end can determine whether the current block enables the ISP mode and the corresponding partitioning direction, and thus predict the current block based on the determined partitioning direction to obtain the prediction value of the current block for the subsequent reconstruction process.
MRL mode
First implementation of the MRL mode
FIG. 10 is a flowchart of a coding/decoding method provided by an embodiment of the present application. The method is applied to the encoding end or the decoding end and, as shown in FIG. 10, includes the following step:
Step 1001: if the width-height size of the current block is M*N with M less than 64 and N less than 64, the current block does not support the multiple-reference-line prediction mode.
For example, if the width-height size of the current block is 4*4, the current block does not support the multiple-reference-line prediction mode.
Second implementation of the MRL mode
In MRL mode, the syntax elements to be transmitted between the encoding end and the decoding end may include reference-line indication information, which indicates the index information of the target reference line used when predicting the current block based on the multiple-reference-line prediction mode. For example, the reference-line indication information is intra_luma_ref_idx.
In the related art, if it is determined that the current block supports the multiple-reference-line prediction mode and the number of candidate reference lines for the mode is 3, the reference-line indication information occupies at most 2 bins, and these 2 bins must be coded and decoded using 2 different context models, as shown in Tables 5 and 6 below:
Table 5
First bin | Second bin
MultiRefLineIdx(0), i.e. the first context model | MultiRefLineIdx(1), i.e. the second context model
The first bin is the first binary bit of the reference-line indication information and must be coded and decoded based on the first context model; the second bin is the second binary bit of the reference-line indication information and must be coded and decoded based on the second context model, the first context model being different from the second context model.
Table 6
Figure PCTCN2020097144-appb-000010
Moreover, the correspondence between the index information of the target reference line and the line number of the target reference line is shown in Table 7:
Table 7
Figure PCTCN2020097144-appb-000011
As can be seen from Table 7, if the index information indicated by the reference-line indication information is 0, the target reference line is line 0; if the index information is 1, the target reference line is line 1; if the index information is 2, the target reference line is line 3.
It should be noted that a line in the embodiments of the present application may be a row above the current block or a column to the left of the current block.
FIG. 11 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 11, if it is determined that the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines for the mode is 3, and the reference-line indication information occupies at most 2 bins, the method includes:
Step 1101: perform context-based adaptive binary arithmetic coding of the 1st bin of the reference-line indication information based on one context model.
As an example, it may be determined whether the current block satisfies the conditions for supporting the multiple-reference-line prediction technique; if so, the current block may try encoding with each reference line. The encoding end may decide the final source of reference pixels through RDO and encode the reference-line index information into the coded stream. The conditions for supporting the multiple-reference-line prediction technique include: the current block is a luma intra block, the size of the current block satisfies certain restrictions, and the current block does not contain the first row of the coding tree unit. Of course, the conditions are not limited to these three and may include others.
As an example, if the multiple-reference-line prediction technique can be used, all reference lines are traversed, the final target reference line is decided through RDO, and the reference-line indication information is encoded. The reference-line indication information may be encoded in the coded stream as appropriate. As an example, the reference-line indication information may be intra_luma_ref_idx.
It should be noted that a line in the embodiments of the present application may be a row above the current block or a column to the left of the current block.
Step 1102: when the 2nd bin of the reference-line indication information needs to be encoded, perform bypass-based binary arithmetic coding of the 2nd bin.
In this embodiment of the present application, if the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines is 3 and the reference-line indication information occupies at most 2 bins, then the first of these bins can be encoded using one context model and the second bin can be encoded in bypass mode. In this way, only one context model is needed to encode all bins of the reference-line indication information, reducing the number of context models used and hence the coding complexity and memory consumption, with little change in coding performance.
For example, the context models used for the reference-line indication information may be as shown in Tables 8 and 9 below:
Table 8
First bin | Second bin
MultiRefLineIdx(0), i.e. the first context model | no context model; coded in bypass mode
Table 9
Figure PCTCN2020097144-appb-000012
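The truncated-unary binarization implied above can be sketched as follows (the helper name and the "context:0"/"bypass" labels are ours; at most two bins for three candidate lines, with only the first bin context-coded):

```python
def mrl_bins_3(ref_idx):
    """Bins of intra_luma_ref_idx for 3 candidate lines, with coding mode."""
    assert ref_idx in (0, 1, 2)
    if ref_idx == 0:
        return [(0, "context:0")]          # "0": single context-coded bin
    first = [(1, "context:0")]             # "1" prefix, context-coded
    # second bin distinguishes index 1 ("10") from index 2 ("11"), bypass-coded
    return first + [(1 if ref_idx == 2 else 0, "bypass")]
```

Only index values 1 and 2 need the second bin, which is why the second context model of the related art can be dropped entirely.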
FIG. 12 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 11. As shown in FIG. 12, if it is determined that the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines for the mode is 3, and the reference-line indication information occupies at most 2 bins, the method includes:
Step 1201: perform context-based adaptive binary arithmetic decoding of the 1st bin of the reference-line indication information based on one context model.
Step 1202: when the 2nd bin of the reference-line indication information needs to be decoded, perform bypass-based binary arithmetic decoding of the 2nd bin.
By decoding the reference-line indication information, the target reference line used when predicting the current block in the multiple-reference-line prediction mode can be determined based on the reference-line indication information, and the current block is then predicted using the target reference line.
As an example, the coded stream of the current block may first be received; if the current block satisfies the parsing conditions, the reference-line indication information is decoded to determine the source of the reference pixels of the current block. The parsing conditions include: the current block is a luma intra block, the size of the current block satisfies certain conditions, and the current block is not in the first row of the coding tree unit. Of course, the parsing conditions are not limited to these three and may include others.
As an example, if the current block can use multiple-reference-line prediction, intra_luma_ref_idx must be parsed so that, according to its value, the reference pixels of the current block can be determined and the prediction value of the current block obtained for the subsequent reconstruction process.
Third implementation of the MRL mode
In the related art, if it is determined that the current block supports the multiple-reference-line prediction mode and the number of candidate reference lines for the mode is 4, the reference-line indication information occupies at most 3 bins, and these 3 bins must be coded and decoded using 3 different context models, as shown in Tables 10 and 11 below:
Table 10
Figure PCTCN2020097144-appb-000013
Table 11
Figure PCTCN2020097144-appb-000014
The first bin is the first binary bit of the reference-line indication information and must be coded and decoded based on the first context model; the second bin is the second binary bit and must be coded and decoded based on the second context model; the third bin is the third binary bit and must be coded and decoded based on the third context model, these 3 context models all being different.
Moreover, the index information of the target reference line and the corresponding line number of the target reference line are shown in Table 12:
Table 12
Figure PCTCN2020097144-appb-000015
As can be seen from Table 12, if the index information indicated by the reference-line indication information is 0, the target reference line is line 0; if it is 1, the target reference line is line 1; if it is 2, the target reference line is line 2; and if it is 3, the target reference line is line 3.
FIG. 13 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 13, if it is determined that the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines for the mode is 4, and the reference-line indication information occupies at most 3 bins, the method includes:
Step 1301: perform context-based adaptive binary arithmetic coding of the 1st bin of the reference-line indication information based on one context model.
As an example, it may be determined whether the current block satisfies the conditions for supporting the multiple-reference-line prediction technique; if so, the current block may try encoding with each reference line. The encoding end may decide the final source of reference pixels through RDO and encode the reference-line index information into the coded stream. The conditions for supporting the multiple-reference-line prediction technique include: the current block is a luma intra block, the size of the current block satisfies certain restrictions, and the current block is not in the first row of the coding tree unit. Of course, the conditions are not limited to these three and may include others.
As an example, if the multiple-reference-line prediction technique can be used, all reference lines are traversed, the final target reference line is decided through RDO, and the reference-line indication information is encoded. The reference-line indication information may be encoded in the coded stream as appropriate; as an example, it may be intra_luma_ref_idx.
Step 1302: when the 2nd bin of the reference-line indication information needs to be encoded, perform bypass-based binary arithmetic coding of the 2nd bin.
Step 1303: when the 3rd bin of the reference-line indication information needs to be encoded, perform bypass-based binary arithmetic coding of the 3rd bin.
In this embodiment of the present application, if the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines is 4 and the reference-line indication information occupies at most 3 bins, then the first bin can be encoded using one context model and the second and third bins can be encoded in bypass mode. In this way, only one context model is needed to encode all bins of the reference-line indication information, reducing the number of context models used and hence the coding complexity and memory consumption, with little change in coding performance.
For example, the context models used for the reference-line indication information may be as shown in Tables 13 and 14 below:
Table 13
Figure PCTCN2020097144-appb-000016
Table 14
Figure PCTCN2020097144-appb-000017
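The four-line variant extends the same truncated-unary pattern by one bin (again a sketch under our own labels: only the first bin keeps a context model, bins 2 and 3 are bypass-coded):

```python
def mrl_bins_4(ref_idx):
    """Bins of intra_luma_ref_idx for 4 candidate lines, with coding mode."""
    # truncated unary: 0 -> "0", 1 -> "10", 2 -> "110", 3 -> "111"
    patterns = {0: [0], 1: [1, 0], 2: [1, 1, 0], 3: [1, 1, 1]}
    modes = ["context:0", "bypass", "bypass"]
    # zip truncates to the number of bins actually present
    return list(zip(patterns[ref_idx], modes))
```

Whatever the index, at most one context-coded bin is consumed, versus three distinct context models in the related art.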
FIG. 14 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method of the embodiment of FIG. 13. As shown in FIG. 14, if it is determined that the current block supports the multiple-reference-line prediction mode, the number of candidate reference lines for the mode is 4, and the reference-line indication information occupies at most 3 bins, the method includes:
Step 1401: perform context-based adaptive binary arithmetic decoding of the 1st bin of the reference-line indication information based on one context model.
Step 1402: when the 2nd bin of the reference-line indication information needs to be decoded, perform bypass-based binary arithmetic decoding of the 2nd bin.
Step 1403: when the 3rd bin of the reference-line indication information needs to be decoded, perform bypass-based binary arithmetic decoding of the 3rd bin.
As an example, the coded stream of the current block may first be received; if the current block satisfies the parsing conditions, the reference-line indication information is decoded to determine the source of the reference pixels of the current block. The parsing conditions include: the current block is a luma intra block, the size of the current block satisfies certain conditions, and the current block is not in the first row of the coding tree unit. Of course, the parsing conditions are not limited to these three and may include others.
As an example, if the current block can use multiple-reference-line prediction, intra_luma_ref_idx must be parsed so that, according to its value, the reference pixels of the current block can be determined and the prediction value of the current block obtained for the subsequent reconstruction process.
Fourth implementation of the MRL mode
FIG. 15 is a flowchart of a coding/decoding method provided by an embodiment of the present application, applied to the encoding end or the decoding end. As shown in FIG. 15, if it is determined that the current block supports the multiple-reference-line prediction mode and the number of candidate reference lines for the mode is 3, where the candidate reference line with index information 0 is line 0, the candidate reference line with index information 1 is line 1, and the candidate reference line with index information 2 is line 2, the method includes:
Step 1501: when predicting the current block according to the multiple-reference-line prediction mode, predict the current block according to the target reference line, the target reference line being determined according to the reference-line indication information.
If the index information indicated by the reference-line indication information is 0, the target reference line is line 0;
if the index information indicated by the reference-line indication information is 1, the target reference line is line 1;
if the index information indicated by the reference-line indication information is 2, the target reference line is line 2.
For example, the index information indicated by the reference-line indication information and the corresponding target reference line may be as shown in Table 15 below:
Table 15
Figure PCTCN2020097144-appb-000018
In this embodiment of the present application, the nearest three rows and three columns may be selected as candidates for the target reference line. That is, the target reference line is one line selected from the candidate reference lines, where the number of candidate reference lines for the multiple-reference-line prediction mode is 3 and the three rows and three columns closest to the boundary of the current block serve as the candidate reference lines.
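The two index-to-line mappings discussed in this section can be put side by side (the dict literals below are our own encoding of Tables 7 and 15; they are illustrative, not normative):

```python
# Table 7 (related art / VVC-style): line 2 is skipped
VVC_STYLE_MAP = {0: 0, 1: 1, 2: 3}
# Table 15 (this embodiment): the three nearest lines are the candidates
NEAREST_THREE_MAP = {0: 0, 1: 1, 2: 2}


def target_line(idx, use_nearest_three=True):
    """Resolve intra_luma_ref_idx to the line number of the target reference line."""
    table = NEAREST_THREE_MAP if use_nearest_three else VVC_STYLE_MAP
    return table[idx]
```

The only behavioral difference is the line selected for index 2, which this embodiment moves from line 3 to the nearer line 2.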
Fifth implementation of the MRL mode
FIG. 16 is a flowchart of a coding/decoding method provided by an embodiment of the present application, applied to the encoding end or the decoding end. As shown in FIG. 16, if it is determined that the current block supports the multiple-reference-line prediction mode and the number of candidate reference lines for the mode is 3, where the candidate reference line with index information 0 is line 0, the candidate reference line with index information 1 is line 1, and the candidate reference line with index information 2 is line 2, the method includes:
Step 1601: when predicting the current block according to the multiple-reference-line prediction mode, predict the current block according to the target reference line, the target reference line being determined according to the reference-line indication information.
If the index information indicated by the reference-line indication information is 0, the target reference line is line 0;
if the index information indicated by the reference-line indication information is 1, the target reference line is line 2;
if the index information indicated by the reference-line indication information is 2, the target reference line is line 3.
For example, the index information indicated by the reference-line indication information and the corresponding target reference line may be as shown in Table 16 below:
Table 16
Figure PCTCN2020097144-appb-000019
Figure PCTCN2020097144-appb-000020
In this embodiment of the present application, line 0, line 2 and line 3 may be selected as candidates for the target reference line.
Sixth implementation of the MRL mode
FIG. 17 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 17, if it is determined that the current block supports the multiple-reference-line prediction mode, the method includes:
Step 1701: before predicting the current block according to the multiple-reference-line prediction mode, encode line-count indication information according to the number of candidate reference lines for the multiple-reference-line prediction mode, the line-count indication information indicating the number of candidate reference lines for the mode.
Step 1702: encode the reference-line indication information based on the target reference line used when predicting the current block in the multiple-reference-line prediction mode, the reference-line indication information indicating the index information of that target reference line.
Step 1703: predict the current block according to the target reference line.
In this embodiment of the present application, line-count indication information indicating the number of candidate reference lines for the multiple-reference-line prediction mode is added, so that the number of lines referenced by the multiple-reference-line prediction mode can be selected.
As an example, the line-count indication information may be present in the sequence parameter set (SPS), at the picture-parameter level, at the slice level, or at the Tile level. Preferably, the line-count indication information is present in the sequence parameter set; that is, a syntax element indicating the number of candidate reference lines for the multiple-reference-line prediction mode may be added at the SPS level.
FIG. 18 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end. As shown in FIG. 18, if it is determined that the current block supports the multiple-reference-line prediction mode, the method includes:
Step 1801: before predicting the current block according to the multiple-reference-line prediction mode, decode the line-count indication information, the line-count indication information indicating the number of candidate reference lines for the multiple-reference-line prediction mode.
Step 1802: determine the number of candidate reference lines for the multiple-reference-line prediction mode according to the line-count indication information.
Step 1803: determine the target reference line according to the number of candidate reference lines for the multiple-reference-line prediction mode and the reference-line indication information, the reference-line indication information indicating the index information of the target reference line used when predicting the current block in the multiple-reference-line prediction mode.
Step 1804: predict the current block according to the target reference line.
In this embodiment of the present application, line-count indication information indicating the number of candidate reference lines for the multiple-reference-line prediction mode is added, so that the number of lines referenced by the multiple-reference-line prediction mode can be selected.
As an example, the line-count indication information may be present in the sequence parameter set (SPS), at the picture-parameter level, at the slice level, or at the Tile level. Preferably, the line-count indication information is present in the sequence parameter set; that is, a syntax element indicating the number of candidate reference lines for the multiple-reference-line prediction mode may be added at the SPS level.
AMVR mode
In AMVR mode, the syntax elements to be transmitted between the encoding end and the decoding end may include first AMVR indication information and second AMVR indication information. The first AMVR indication information indicates whether the AMVR mode is enabled, and the second AMVR indication information indicates the index information of the pixel precision used when encoding or decoding the motion vector difference in AMVR mode. As an example, the first AMVR indication information is amvr_flag and the second AMVR indication information is amvr_precision_flag.
In the related art, for the affine prediction mode and the non-affine prediction mode (the non-affine prediction mode being any prediction mode other than the affine prediction mode), the first AMVR indication information and the second AMVR indication information require 4 context models in total for coding and decoding, as shown in Tables 17 and 18 below:
Table 17
Figure PCTCN2020097144-appb-000021
Table 18
Figure PCTCN2020097144-appb-000022
As can be seen from Tables 17 and 18, if the current block enables the affine prediction mode, then when encoding or decoding the first AMVR indication information, context-based adaptive binary arithmetic coding or decoding of the first AMVR indication information must be performed based on the 3rd context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, context-based adaptive binary arithmetic coding or decoding of the second AMVR indication information must be performed based on the 4th context model. If the current block enables a non-affine prediction mode, then when encoding or decoding the first AMVR indication information, context-based adaptive binary arithmetic coding or decoding of the first AMVR indication information must be performed based on the 1st context model, and when the first AMVR indication information indicates that the current block enables the AMVR mode, context-based adaptive binary arithmetic coding or decoding of the second AMVR indication information must be performed based on the 2nd context model. That is, 4 different context models in total are needed to encode or decode the AMVR indication information, which consumes considerable memory.
First implementation of the AMVR mode
FIG. 19 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 19, the method includes:
Step 1901: if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a first context model when it is determined to encode the first AMVR indication information.
Step 1902: when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic coding of the second AMVR indication information based on a second context model, the first context model and the second context model being different.
As an example, if the current block satisfies the conditions for using adaptive motion vector resolution, the current block may try encoding with multiple motion vector precisions. For example, the encoding end may decide through RDO whether to enable AMVR and which motion vector precision to use, and encode the corresponding syntax information into the coded stream. The conditions for using adaptive motion vector resolution include: the current block is an inter prediction block, and the motion information of the current block contains a non-zero motion vector difference. Of course, the conditions for using adaptive motion vector resolution are not limited to these and may include others.
FIG. 20 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 19. As shown in FIG. 20, the method includes:
Step 2001: if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the first context model when it is determined to decode the first AMVR indication information.
As an example, the decoding end may first receive the coded stream of the current block; if it is determined that the current block satisfies the parsing conditions, the first AMVR indication information is parsed to determine whether the current block enables AMVR, i.e., whether the adaptive-motion-vector-resolution technique is used. The parsing conditions include: the current block is an inter block, and the motion information of the current block contains a non-zero motion vector difference. Of course, the parsing conditions are not limited to these and may include others.
Step 2002: when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic decoding of the second AMVR indication information based on the second context model, the first context model and the second context model being different.
When it is determined that the current block enables the AMVR mode, the second AMVR indication information must be further parsed to determine the precision used. From the first AMVR indication information and the second AMVR indication information, the decoding end can uniquely determine the motion vector precision of the motion information of the current block, and thus obtain the prediction value of the current block for the subsequent reconstruction process.
For example, for the affine and non-affine prediction modes, the context models used for the first and second AMVR indication information may be as shown in Table 19 below:
Table 19
Figure PCTCN2020097144-appb-000023
In this embodiment of the present application, the AMVR indication information can share context models between the affine prediction mode and the non-affine prediction mode. In this way, the number of context models needed for AMVR can be reduced to 2, lowering the coding and decoding complexity and reducing memory overhead.
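The context-model sharing can be sketched as a small selector (indices and the `shared` switch are our own illustration of the 4-model related art versus the 2-model scheme of this embodiment):

```python
def amvr_context(flag, affine, shared=True):
    """Return the context-model index for an AMVR bin.

    flag: "amvr_flag" or "amvr_precision_flag"
    affine: whether the current block uses the affine prediction mode
    shared: True for this embodiment (models shared across affine/non-affine),
            False for the related art (disjoint models per mode class).
    """
    base = 0 if flag == "amvr_flag" else 1
    if shared:
        return base                       # 2 models in total
    return base + (2 if affine else 0)    # 4 models in total
```

Under the shared scheme the affine and non-affine cases collapse onto the same pair of models.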
Second implementation of the AMVR mode
FIG. 21 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 21, the method includes:
Step 2101: if the current block enables the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a first context model when it is determined to encode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic coding of the second AMVR indication information.
Step 2102: if the current block enables a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a second context model when it is determined to encode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic coding of the second AMVR indication information.
FIG. 22 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 21. As shown in FIG. 22, the method includes:
Step 2201: if the current block enables the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the first context model when it is determined to decode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic decoding of the second AMVR indication information.
Step 2202: if the current block enables a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the second context model when it is determined to decode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic decoding of the second AMVR indication information.
For example, for the affine and non-affine prediction modes, the context models used for the first and second AMVR indication information may be as shown in Table 20 below:
Table 20
Figure PCTCN2020097144-appb-000024
In this embodiment of the present application, in both the affine prediction mode and the non-affine prediction mode, the second AMVR indication information is modified to be coded or decoded by bypass-based binary arithmetic coding or decoding. In this way, the number of context models needed for AMVR can be reduced to 2, lowering the coding and decoding complexity and reducing memory overhead.
Third implementation of the AMVR mode
FIG. 23 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 23, the method includes:
Step 2301: if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the adaptive-motion-vector-resolution AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a first context model when it is determined to encode the first AMVR indication information.
Step 2302: when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic coding of the second AMVR indication information.
FIG. 24 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 23. As shown in FIG. 24, the method includes:
Step 2401: if the current block enables the affine prediction mode or a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the adaptive-motion-vector-resolution AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the first context model when it is determined to decode the first AMVR indication information.
Step 2402: when the first AMVR indication information indicates that the current block enables the AMVR mode, perform bypass-based binary arithmetic decoding of the second AMVR indication information.
For example, for the affine and non-affine prediction modes, the context models used for the first and second AMVR indication information may be as shown in Table 21 below:
Table 21
Figure PCTCN2020097144-appb-000025
In this embodiment of the present application, in both the affine prediction mode and the non-affine prediction mode, the first AMVR indication information shares one context model and the second AMVR indication information is modified to be coded or decoded by bypass-based binary arithmetic coding or decoding. In this way, the number of context models needed for AMVR can be reduced to 1, lowering the coding and decoding complexity and reducing memory overhead.
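This single-model variant amounts to a fixed per-flag dispatch, regardless of whether the block is affine (a toy representation with labels of our choosing):

```python
def amvr_bin_modes():
    # Under this embodiment: amvr_flag uses the single shared context model,
    # amvr_precision_flag is bypass-coded, for affine and non-affine alike.
    return {"amvr_flag": "context:0", "amvr_precision_flag": "bypass"}
```

Only one entry is context-coded, which is why a single context model suffices for the whole AMVR signaling.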
Fourth implementation of the AMVR mode
In another embodiment, an encoding method is further provided, applied to the encoding end, the method including:
Step 1: if the current block enables the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a first context model when it is determined to encode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic coding of the second AMVR indication information based on a second context model.
Step 2: if the current block enables a prediction mode other than the affine prediction mode, then when encoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic coding of the first AMVR indication information based on a third context model when it is determined to encode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic coding of the second AMVR indication information based on the second context model.
The first context model, the second context model and the third context model are different from one another.
In another embodiment, a decoding method is further provided, applied to the decoding end and corresponding to the above encoding method, the method including the following steps:
Step 1: if the current block enables the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the first context model when it is determined to decode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic decoding of the second AMVR indication information based on the second context model.
Step 2: if the current block enables a prediction mode other than the affine prediction mode, then when decoding the motion vector difference of the current block, if the current block supports the AMVR mode, perform context-based adaptive binary arithmetic decoding of the first AMVR indication information based on the third context model when it is determined to decode the first AMVR indication information; when the first AMVR indication information indicates that the current block enables the AMVR mode, perform context-based adaptive binary arithmetic decoding of the second AMVR indication information based on the second context model.
The first context model, the second context model and the third context model are different from one another.
For example, for the affine and non-affine prediction modes, the context models used for the first and second AMVR indication information may be as shown in Table 22 below:
Table 22
Figure PCTCN2020097144-appb-000026
In this embodiment of the present application, in the affine prediction mode and the non-affine prediction mode, the second AMVR indication information can share one context model. In this way, the number of context models needed for the AMVR mode can be reduced to 3, lowering the coding and decoding complexity and reducing memory overhead.
Luma MPM
When the current block is a luma block, prediction-mode index information needs to be transmitted between the encoding end and the decoding end; it indicates the index information of the target prediction mode of the current block in the MPM list. The encoding end and the decoding end store a most-probable-mode (MPM) list of intra prediction modes, and the regular intra prediction mode, the intra sub-block prediction mode and the multiple-reference-line prediction mode can share this MPM list.
In the related art, when the reference line of the target prediction mode of the current block is a line adjacent to the current block, two different context models are needed for context-based adaptive binary arithmetic coding or decoding of the first bin of the prediction-mode index information, the model used depending on whether the current block enables the intra sub-block prediction mode.
For example, the context models used for the prediction-mode index information are shown in Table 23 below:
Table 23
Figure PCTCN2020097144-appb-000027
The prediction-mode index information is intra_luma_mpm_idx. When intra_luma_ref_idx is equal to 0, the reference line of the target prediction mode of the current block is a line adjacent to the current block, i.e., the current block does not enable the multiple-reference-line prediction mode. When intra_luma_ref_idx is not equal to 0, the reference line of the target prediction mode of the current block is not a line adjacent to the current block, i.e., the current block enables the multiple-reference-line prediction mode.
As can be seen from Table 23, when intra_luma_ref_idx is equal to 0, the first bin of intra_luma_mpm_idx must be coded and decoded with one of 2 different context models, selected according to whether the current block enables the intra sub-block prediction mode. In addition, when intra_luma_ref_idx is not equal to 0, the current block enables the multiple-reference-line prediction mode, and the target prediction mode of the enabled multiple-reference-line prediction mode also comes from the MPM list.
First implementation of the luma MPM
FIG. 25 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 25, the method includes the following steps:
Step 2501: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block.
Step 2502: encode the prediction-mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on a first context model and the other bins are obtained by bypass-based binary arithmetic coding.
Step 2503: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode enabled by the current block comes from the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
Step 2504: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on a second context model and the other bins are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list. If it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is then determined; if the target prediction mode does not come from the MPM list, that index information does not need to be determined.
As an example, the encoding end may construct the MPM list; the intra sub-block prediction mode, the multiple-reference-line prediction mode and the regular intra prediction can share this MPM list.
As an example, the encoding end may choose the finally used prediction mode, i.e., the target prediction mode, through RDO. If the target prediction mode is the intra sub-block prediction mode or the multiple-reference-line prediction mode, the target prediction mode must be one selected from the MPM list, and the prediction-mode index information (intra_luma_mpm_idx) must be encoded to inform the decoding end which prediction mode was selected. If the target prediction mode is regular intra prediction, a flag must also be encoded to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is then determined; if not, that index information does not need to be determined.
FIG. 26 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 25. As shown in FIG. 26, the method includes the following steps:
Step 2601: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the prediction-mode index information, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model and the other bins are obtained by bypass-based binary arithmetic decoding.
Step 2602: determine, according to the prediction-mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 2603: predict the current block according to the target prediction mode.
Step 2604: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model and the other bins are obtained by bypass-based binary arithmetic decoding, the second context model being the same context model as the first context model.
Step 2605: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 2606: predict the current block according to the target prediction mode.
As an example, the decoding end may first receive the coded stream. On the premise that the regular intra prediction, the intra sub-block prediction mode and the multiple-reference-line prediction mode construct the same MPM list, if the current block enables the intra sub-block prediction mode or the multiple-reference-line prediction mode, its target prediction mode must come from the MPM list, and the final target prediction mode is obtained by parsing its index value in the list. If the current block enables regular intra prediction, a flag must also be parsed to determine whether the target prediction mode comes from the MPM list; if it does, its index value in the MPM list is then parsed.
From the values of the above flags, the decoding end can uniquely determine the prediction mode of the current block, and thus obtain the prediction value of the current block for the subsequent reconstruction process.
In this embodiment of the present application, if the reference line of the target prediction mode of the current block is a line adjacent to the current block, then when encoding or decoding the first bin of the prediction-mode index information, the context model need not be selected from two different models according to whether the current block enables the intra sub-block prediction mode; instead, under both conditions, with and without the intra sub-block prediction mode, the same context model can be used for context-based adaptive binary arithmetic coding or decoding of the first bin of the prediction-mode index information. In this way, the number of context models needed can be reduced to 1, lowering the coding and decoding complexity and reducing memory overhead.
As an example, the context models used for the prediction-mode index information are shown in Table 24 below:
Table 24
Figure PCTCN2020097144-appb-000028
That is, when intra_luma_ref_idx is equal to 0, the first bin of intra_luma_mpm_idx can be coded or decoded by context-based adaptive binary arithmetic coding or decoding based on the same context model, whether or not the current block enables the intra sub-block prediction mode.
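The resulting binarization can be sketched for a 6-entry MPM list (truncated unary as in VVC; the "context:0"/"bypass" labels and helper name are ours, and the single shared context model is the point of this embodiment):

```python
def mpm_idx_bins(idx, num_mpm=6):
    """Bins of intra_luma_mpm_idx with per-bin coding mode."""
    assert 0 <= idx < num_mpm
    bins = []
    # truncated unary: idx ones followed by a terminating zero,
    # except the largest index, which needs no terminator
    for k in range(min(idx + 1, num_mpm - 1)):
        value = 1 if k < idx else 0
        mode = "context:0" if k == 0 else "bypass"  # one shared context model
        bins.append((value, mode))
    return bins
```

Only the first bin ever touches a context model; every later bin is bypass-coded regardless of the ISP flag.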
Second implementation of the luma MPM
FIG. 27 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 27, the method includes the following steps:
Step 2701: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block.
Step 2702: encode the prediction-mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic coding.
Step 2703: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode enabled by the current block is in the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
Step 2704: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if not, that index information does not need to be determined.
FIG. 28 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 27. As shown in FIG. 28, the method includes the following steps:
Step 2801: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the prediction-mode index information, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic decoding.
Step 2802: determine, according to the prediction-mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 2803: predict the current block according to the target prediction mode.
Step 2804: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic decoding.
Step 2805: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode enabled by the current block in the MPM list.
Step 2806: predict the current block according to the target prediction mode.
In this embodiment of the present application, if the reference line of the target prediction mode of the current block is a line adjacent to the current block, i.e., intra_luma_ref_idx is equal to 0, then when encoding or decoding the first bin of the prediction-mode index information, whether the current block enables the intra sub-block prediction mode need not be considered: in both cases, with and without the intra sub-block prediction mode, the first bin of the prediction-mode index information is coded or decoded by bypass-based binary arithmetic coding or decoding. Thus the first bin of the prediction-mode index information no longer needs a context model, reducing the number of context models needed to 0 and hence lowering the coding and decoding complexity and the memory overhead.
As an example, the context models used for the prediction-mode index information are shown in Table 25 below:
Table 25
Figure PCTCN2020097144-appb-000029
That is, when intra_luma_ref_idx is equal to 0, the first bin of intra_luma_mpm_idx can be coded or decoded by bypass-based binary arithmetic coding or decoding, whether or not the current block enables the intra sub-block prediction mode.
Third implementation of the luma MPM
FIG. 29 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 29, the method includes the following steps:
Step 2901: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block.
Step 2902: encode the prediction-mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on one context model and the other bins are obtained by bypass-based binary arithmetic coding.
Step 2903: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode of the regular prediction enabled by the current block is in the MPM list, determine the index information of the target prediction mode of the regular prediction enabled by the current block in the MPM list.
Step 2904: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if not, that index information does not need to be determined.
FIG. 30 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 29. As shown in FIG. 30, the method includes the following steps:
Step 3001: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the prediction-mode index information, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model and the other bins are obtained by bypass-based binary arithmetic decoding.
Step 3002: determine, according to the prediction-mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 3003: predict the current block according to the target prediction mode.
Step 3004: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic decoding.
Step 3005: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode enabled by the current block in the MPM list.
Step 3006: predict the current block according to the target prediction mode.
In this embodiment of the present application, if the reference line of the target prediction mode of the current block is a line adjacent to the current block, i.e., intra_luma_ref_idx is equal to 0, then when encoding or decoding the prediction-mode index information: if the current block enables the intra sub-block prediction mode, the first bin of the prediction-mode index information is coded or decoded by context-based adaptive binary arithmetic coding or decoding based on one context model; if the current block does not enable the intra sub-block prediction mode, the first bin is coded or decoded by bypass-based binary arithmetic coding or decoding. Thus only one context model is needed for the coding and decoding of the prediction-mode index information, reducing the number of context models needed to 1 and hence lowering the coding and decoding complexity and the memory overhead.
As an example, the context models used for the prediction-mode index information are shown in Table 26 below:
Table 26
Figure PCTCN2020097144-appb-000030
That is, when intra_luma_ref_idx is equal to 0 and the current block enables the ISP mode, one context model is used to code and decode the first bin of intra_luma_mpm_idx; when the current block does not enable the ISP mode, the first bin of intra_luma_mpm_idx is coded and decoded in bypass mode.
Fourth implementation of the luma MPM
FIG. 31 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 31, the method includes the following steps:
Step 3101: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, determine the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block.
Step 3102: encode the prediction-mode index information according to the index information in the MPM list of the target prediction mode of the intra sub-block prediction enabled by the current block, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic coding.
Step 3103: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode enabled by the current block is in the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
Step 3104: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on one context model and the other bins are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if not, that index information does not need to be determined.
FIG. 32 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 31. As shown in FIG. 32, the method includes the following steps:
Step 3201: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the prediction-mode index information, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic decoding.
Step 3202: determine, according to the prediction-mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 3203: predict the current block according to the target prediction mode.
Step 3204: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model and the other bins are obtained by bypass-based binary arithmetic decoding.
Step 3205: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode enabled by the current block in the MPM list.
Step 3206: predict the current block according to the target prediction mode.
In this embodiment of the present application, if the reference line of the target prediction mode of the current block is a line adjacent to the current block, i.e., intra_luma_ref_idx is equal to 0, then when encoding or decoding the prediction-mode index information: if the current block enables the intra sub-block prediction mode, the first bin of the prediction-mode index information is coded or decoded by bypass-based binary arithmetic coding or decoding; if the current block does not enable the intra sub-block prediction mode, the first bin is coded or decoded by context-based adaptive binary arithmetic coding or decoding based on one context model. Thus only one context model is needed for the coding and decoding of the prediction-mode index information, reducing the number of context models needed to 1 and hence lowering the coding and decoding complexity and the memory overhead.
As an example, the context models used for the prediction-mode index information are shown in Table 27 below:
Table 27
Figure PCTCN2020097144-appb-000031
Figure PCTCN2020097144-appb-000032
That is, when intra_luma_ref_idx is equal to 0 and the current block does not enable the ISP mode, one context model is used to code and decode the first bin of intra_luma_mpm_idx; when the current block enables the ISP mode, the first bin of intra_luma_mpm_idx is coded and decoded in bypass mode.
Fifth implementation of the luma MPM
The syntax elements transmitted between the encoding end and the decoding end may further include planar indication information, which indicates whether the target prediction mode of the current block is the planar prediction mode and occupies one bin. For example, the planar indication information is intra_luma_not_planar_flag.
In the related art, the coding and decoding of the planar indication information is as shown in Table 28 below:
Table 28
Figure PCTCN2020097144-appb-000033
As shown in Table 28 above, the planar indication information intra_luma_not_planar_flag uses context-based adaptive binary arithmetic coding, and the choice of context depends on whether the current block enables the intra sub-block prediction mode; that is, the coding and decoding of the planar indication information requires 2 different context models.
FIG. 33 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 33, the method includes the following steps:
Step 3301: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, encode the planar indication information according to whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic coding based on a first context model.
Step 3302: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, encode the planar indication information according to whether the target prediction mode of the regular intra prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic coding based on a second context model, the first context model being the same as the second context model.
FIG. 34 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 33. As shown in FIG. 34, the method includes the following steps:
Step 3401: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic decoding based on the first context model.
Step 3402: when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3403: when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
Step 3404: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic decoding based on the second context model, the first context model being the same as the second context model.
Step 3405: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3406: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
In this embodiment of the present application, whether the current block enables the intra sub-block prediction mode is not considered: when coding or decoding the planar indication information, in both cases, intra sub-block prediction and regular intra prediction, the planar indication information is coded or decoded by context-based adaptive binary arithmetic coding or decoding based on the same context model. In this way, the number of context models needed for the planar indication information is reduced to 1, lowering the coding and decoding complexity and the memory overhead.
As an example, the coding and decoding of the planar indication information is as shown in Table 29 below:
Table 29
Figure PCTCN2020097144-appb-000034
As shown in Table 29 above, the planar indication information intra_luma_not_planar_flag still uses context-based adaptive binary arithmetic coding and decoding, but the choice of context does not depend on whether the current block enables the intra sub-block prediction mode; instead, in both cases, intra sub-block prediction and regular intra prediction, one fixed context model is used for coding and decoding.
Sixth implementation of the luma MPM
FIG. 35 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 35, the method includes the following steps:
Step 3501: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, encode the planar indication information according to whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic coding.
Step 3502: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, encode the planar indication information according to whether the target prediction mode of the regular intra prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic coding.
FIG. 36 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 35. As shown in FIG. 36, the method includes the following steps:
Step 3601: if the current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is in the MPM list, and the current block is a luma block, then when predicting the current block according to the intra sub-block prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic decoding.
Step 3602: when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3603: when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determine the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
Step 3604: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic decoding.
Step 3605: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3606: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
In this embodiment of the present application, whether the current block enables the intra sub-block prediction mode is not considered: when coding or decoding the planar indication information, in both cases, intra sub-block prediction and regular intra prediction, the planar indication information is coded or decoded by bypass-based binary arithmetic coding or decoding. In this way, the number of context models needed for the planar indication information is reduced to 0, lowering the coding and decoding complexity and the memory overhead.
As an example, the coding and decoding of the planar indication information is as shown in Table 30 below:
Table 30
Figure PCTCN2020097144-appb-000035
Figure PCTCN2020097144-appb-000036
As shown in Table 30 above, the planar indication information intra_luma_not_planar_flag no longer uses context-based adaptive binary arithmetic coding and decoding; instead, in both cases, intra sub-block prediction and regular intra prediction, bypass-based binary arithmetic coding or decoding is used.
It should be noted that the above methods can also be applied to scenarios where only regular intra prediction is considered.
Embodiment 1
In another embodiment, an encoding method is further provided, applied to the encoding end, the encoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode enabled by the current block comes from the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
Step 2: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on one context model and the other bins are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is then determined; if not, that index information does not need to be determined.
In another embodiment, a decoding method is further provided, applied to the decoding end and corresponding to the above encoding method, the decoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where the 1st bin of the prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model and the other bins are obtained by bypass-based binary arithmetic decoding.
Step 2: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode in the MPM list.
Step 3: predict the current block according to the target prediction mode.
Embodiment 2
In another embodiment, an encoding method is further provided, applied to the encoding end, the encoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, if the target prediction mode enabled by the current block is in the MPM list, determine the index information of the target prediction mode enabled by the current block in the MPM list.
Step 2: encode the prediction-mode index information according to the index information of the target prediction mode enabled by the current block in the MPM list, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic coding.
As an example, if the current block enables regular intra prediction, a flag is also needed to indicate whether the target prediction mode of the regular prediction enabled by the current block comes from the MPM list; if it is determined that the target prediction mode comes from the MPM list, the index information of the target prediction mode in the MPM list is determined; if not, that index information does not need to be determined.
In another embodiment, a decoding method is further provided, applied to the decoding end and corresponding to the above encoding method, the decoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the prediction-mode index information, where all bins of the prediction-mode index information are obtained by bypass-based binary arithmetic decoding.
Step 2: determine, according to the prediction-mode index information, the target prediction mode enabled by the current block from the MPM list, the prediction-mode index information indicating the index information of the target prediction mode enabled by the current block in the MPM list.
Step 3: predict the current block according to the target prediction mode.
Embodiment 3
In another embodiment, an encoding method is further provided, applied to the encoding end, the encoding method including the following step:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, encode the planar indication information according to whether the target prediction mode of the regular intra prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic coding based on one context model.
In another embodiment, a decoding method is further provided, applied to the decoding end and corresponding to the above encoding method, the decoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by context-based adaptive binary arithmetic decoding based on one context model.
Step 2: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
Embodiment 4
In another embodiment, an encoding method is further provided, applied to the encoding end, the encoding method including the following step:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, encode the planar indication information according to whether the target prediction mode of the regular intra prediction enabled by the current block is the planar prediction mode, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic coding.
In another embodiment, a decoding method is further provided, applied to the decoding end and corresponding to the above encoding method, the decoding method including the following steps:
Step 1: if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then when predicting the current block according to the regular intra prediction, decode the planar indication information, where the planar indication information indicates whether the target prediction mode enabled by the current block is the planar prediction mode and is obtained by bypass-based binary arithmetic decoding.
Step 2: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predict the current block according to the planar prediction mode.
Step 3: when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determine the target prediction mode enabled by the current block from the MPM list according to the prediction-mode index information, and predict the current block according to the target prediction mode.
Chroma MPM
The syntax elements transmitted between the encoding end and the decoding end further include chroma-prediction-mode index information, which indicates the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
In the related art, the chroma-prediction-mode index information and the corresponding prediction modes are shown in Table 31 below.
Table 31
Figure PCTCN2020097144-appb-000037
As can be seen from Table 31 above, if the current block supports the cross-component prediction mode and enables it, the chroma-prediction-mode index information occupies at most 4 bins; if the current block supports the cross-component prediction mode but does not enable it, the chroma-prediction-mode index information occupies at most 5 bins.
In the related art, if the current block supports the cross-component prediction mode and enables it, the coding and decoding of the chroma-prediction-mode index information is as shown in Table 32 below:
Table 32
Figure PCTCN2020097144-appb-000038
As can be seen from Table 32 above, when the current block supports the cross-component prediction mode and enables it, the 1st bin of the chroma-prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on the 1st context model, the 2nd bin is obtained based on the 2nd context model, and the 3rd and 4th bins are obtained based on the 3rd context model, these 3 context models all being different. That is, the chroma-prediction-mode index information requires 3 context models, which is a large memory overhead.
First implementation of the chroma MPM
FIG. 37 is a flowchart of an encoding method provided by an embodiment of the present application, applied to the encoding end. As shown in FIG. 37, the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block; the method includes:
Step 3701: when predicting the current block according to the cross-component prediction mode, determine the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
The encoding end may choose the final target prediction mode through the rate-distortion cost and then inform the decoding end which prediction mode was selected by encoding the index information.
Step 3702: encode the chroma-prediction-mode index information according to the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list.
The chroma-prediction-mode index information indicates the index information of the target prediction mode of the current block in the corresponding candidate prediction mode list. The 1st bin of the chroma-prediction-mode index information is obtained by context-based adaptive binary arithmetic coding based on a first context model, and the 2nd bin is obtained by context-based adaptive binary arithmetic coding based on a second context model, the first context model and the second context model being different; the 3rd and 4th bins are obtained by bypass-based binary arithmetic coding.
As an example, the encoding end stores a chroma-prediction-mode candidate list; the encoding end may choose the finally used target prediction mode through RDO and then encode the index value, i.e., the chroma-prediction-mode index information, to inform the decoding end which prediction mode was selected.
As an example, the chroma prediction modes include the same prediction modes as the luma as well as the cross-component prediction modes. The cross-component prediction modes include a mode whose linear-model coefficients are derived from the templates on both sides, a mode whose linear-model coefficients are derived from the above template, and a mode whose linear-model coefficients are derived from the left template, in addition to the planar prediction mode, the DC prediction mode, the vertical prediction mode and the horizontal prediction mode.
FIG. 38 is a flowchart of a decoding method provided by an embodiment of the present application, applied to the decoding end and corresponding to the encoding method shown in FIG. 37. As shown in FIG. 38, the current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block; the method includes:
Step 3801: when predicting the current block according to the cross-component prediction mode, decode the chroma-prediction-mode index information.
The 1st bin of the chroma-prediction-mode index information is obtained by context-based adaptive binary arithmetic decoding based on the first context model, and the 2nd bin is obtained by context-based adaptive binary arithmetic decoding based on the second context model, the first context model and the second context model being different; the 3rd and 4th bins are obtained by bypass-based binary arithmetic decoding.
Step 3802: determine the target prediction mode of the current block from the candidate prediction mode list according to the chroma-prediction-mode index information.
Step 3803: predict the current block according to the target prediction mode.
As an example, the decoding end may receive the coded stream and parse the chroma-prediction-mode-related syntax from it. Each prediction mode costs a different number of coded bits; by parsing the chroma-prediction-mode index information, the decoding end uniquely determines the chroma prediction mode of the current block, and thus obtains the prediction value of the current block for the subsequent reconstruction process.
In this embodiment of the present application, when the current block supports the cross-component prediction mode and enables it, the 3rd and 4th bins of the chroma-prediction-mode index information can be obtained by bypass-based binary arithmetic decoding. In this way, the number of context models needed for the chroma-prediction-mode index information can be reduced to 2, lowering the coding and decoding complexity and reducing memory overhead.
As an example, if the current block supports the cross-component prediction mode and enables it, the coding and decoding of the chroma-prediction-mode index information is as shown in Tables 33 and 34 below:
Table 33
Figure PCTCN2020097144-appb-000039
表34
Figure PCTCN2020097144-appb-000040
色度的MPM的第二种实现方式
图39是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图39所示,当前块支持跨分量预测模式,且当前块启动跨分量预测模式,当前块为色度块,该方法包括:
步骤3901:在根据跨分量预测模式对当前块进行预测时,确定当前块的目标预测模式在对应的候选预测模式列表中的索引信息。
其中,编码端可以通过率失真代价抉择最终的目标预测模式,然后通过编码索引信息通知解码端选择了哪一个预测模式。
步骤3902:根据当前块的目标预测模式在对应的候选预测模式列表中的索引信息,对色度预测模式索引信息进行编码。
其中,色度预测模式索引信息用于指示当前块的目标预测模式在对应的候选预测模式列表中的索引信息。色度预测模式索引信息的第1个比特位是基于一个上下文模型进行基于上下文的自适应二进制算术编码得到,色度预测模式索引信息的第2个比特位、第3个比特位和第4个比特位是基于旁路的二进制算术编码得到。
图40是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,是与上述图39所示的编码方法对应的解码方法,如图40所示,当前块支持跨分量预测模式,且当前块启动跨分量预测模式,当前块为色度块,该方法包括:
步骤4001:在根据跨分量预测模式对当前块进行预测时,对色度预测模式索引信息进行解码。
其中,色度预测模式索引信息的第1个比特位是基于一个上下文模型进行基于上下文的自适应二进制算术解码得到,色度预测模式索引信息的第2个比特位、第3个比特位和第4个比特位是基于旁路的二进制算术解码得到。
步骤4002:根据色度预测模式索引信息,从候选预测模式列表中确定当前块的目标预测模式。
步骤4003:根据目标预测模式对当前块进行预测。
本申请实施例中,当前块支持跨分量预测模式,且当前块启动跨分量预测模式时,色度预测模式索引信息的第1个比特位使用1个上下文模型,而第2个比特位、第3个比特位和第4个比特位均采用基于旁路的二进制算术编解码方式,如此,可以将色度预测模式索引信息所需的上下文模型的数量降至1,减少了编解码的复杂度,减少了内存开销。
作为一个示例,若当前块支持跨分量预测模式,且当前块启动跨分量预测模式,则色度预测模式索引信息的编解码方式如下表35和下表36所示:
表35
Figure PCTCN2020097144-appb-000041
表36
Figure PCTCN2020097144-appb-000042
色度的MPM的第三种实现方式
图41是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图41所示,当前块支持跨分量预测模式,当前块为色度块时,该方法包括:
步骤4101:在根据跨分量预测模式对当前块进行预测时,确定当前块的目标预测模式在对应的候选预测模式列表中的索引信息。
其中,编码端可以通过率失真代价抉择最终的目标预测模式,然后通过编码索引信息通知解码端选择了哪一个预测模式。
步骤4102:根据当前块的目标预测模式在对应的候选预测模式列表中的索引信息,对色度预测模式索引信息进行编码。
其中,色度预测模式索引信息用于指示当前块的目标预测模式在对应的候选预测模式列表中的索引信息。
作为一个示例,若色度预测模式索引信息为10,则目标预测模式为第一跨分量预测模式;
若色度预测模式索引信息为110,则目标预测模式为第二跨分量预测模式;
若色度预测模式索引信息为111,则目标预测模式为第三跨分量预测模式;
若色度预测模式索引信息为11110,则目标预测模式为planar预测模式;
若色度预测模式索引信息为111110,则目标预测模式为垂直预测模式;
若色度预测模式索引信息为1111110,则目标预测模式为水平预测模式;
若色度预测模式索引信息为1111111,则目标预测模式为DC预测模式。
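上述码字与预测模式的对应关系可整理为如下查找表草图(三种跨分量预测模式与前文所述的两侧/上侧/左侧模板推导方式的对应顺序为本文示例假设,实际对应关系以表37为准):

```python
# 色度预测模式索引信息码字查找表(示意)
CHROMA_CODEWORDS = {
    "10":      "跨分量预测模式(两侧模板)",
    "110":     "跨分量预测模式(上侧模板)",
    "111":     "跨分量预测模式(左侧模板)",
    "11110":   "planar预测模式",
    "111110":  "垂直预测模式",
    "1111110": "水平预测模式",
    "1111111": "DC预测模式",
}

# 验证比特开销:跨分量预测模式的码字至多3个比特位,
# 此列表中其余预测模式的码字至多7个比特位
ccp_bits = max(len(c) for c, m in CHROMA_CODEWORDS.items() if "跨分量" in m)
other_bits = max(len(c) for c, m in CHROMA_CODEWORDS.items() if "跨分量" not in m)
print(ccp_bits, other_bits)  # 3 7
```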
示例的,色度预测模式索引信息与其对应的预测模式如下表37所示:
表37
Figure PCTCN2020097144-appb-000043
由上表37可知,当当前块支持跨分量预测模式,且当前块启动跨分量预测模式,色度预测模式索引信息指示跨分量预测模式,且这种情况下,色度预测模式索引信息最多占据3个比特位,减少了比特开销,进而减小了内存开销。另外,当当前块支持跨分量预测模式,且当前块未启动跨分量预测模式,色度预测模式索引信息指示常规帧内预测,且这种情况下,色度预测模式索引信息最多占据6个比特位。
作为另一实施例,若色度预测模式索引信息为10,则目标预测模式为第一跨分量预测模式;
若色度预测模式索引信息为110,则目标预测模式为第二跨分量预测模式;
若色度预测模式索引信息为111,则目标预测模式为第三跨分量预测模式;
若色度预测模式索引信息为11110,则目标预测模式为planar预测模式;
若色度预测模式索引信息为111110,则目标预测模式为垂直预测模式;
若色度预测模式索引信息为1111110,则目标预测模式为水平预测模式;
若色度预测模式索引信息为1111111,则目标预测模式为DC预测模式。
步骤4103:根据目标预测模式对当前块进行预测。
示例的,色度预测模式索引信息与其对应的预测模式如下表38所示:
表38
Figure PCTCN2020097144-appb-000044
由上表38可知,当当前块支持跨分量预测模式,且当前块启动跨分量预测模式,色度预测模式索引信息指示跨分量预测模式,且这种情况下,色度预测模式索引信息最多占据3个比特位,减少了比特开销,进而减小了内存开销。另外,当当前块支持跨分量预测模式,且当前块未启动跨分量预测模式,色度预测模式索引信息指示常规帧内预测,且这种情况下,色度预测模式索引信息最多占据7个比特位。
图42是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与图41所示的编码方法对应的解码方法,如图42所示,当前块支持跨分量预测模式,当前块为色度块时,该方法包括:
步骤4201:在根据跨分量预测模式对当前块进行预测时,对色度预测模式索引信息进行解码。
步骤4202:根据色度预测模式索引信息,从候选预测模式列表中确定当前块的目标预测模式。
作为一个示例,若色度预测模式索引信息为10,则目标预测模式为第一跨分量预测模式;
若色度预测模式索引信息为110,则目标预测模式为第二跨分量预测模式;
若色度预测模式索引信息为111,则目标预测模式为第三跨分量预测模式;
若色度预测模式索引信息为11110,则目标预测模式为planar预测模式;
若色度预测模式索引信息为111110,则目标预测模式为垂直预测模式;
若色度预测模式索引信息为1111110,则目标预测模式为水平预测模式;
若色度预测模式索引信息为1111111,则目标预测模式为DC预测模式。
作为另一实施例,若色度预测模式索引信息为10,则目标预测模式为第一跨分量预测模式;
若色度预测模式索引信息为110,则目标预测模式为第二跨分量预测模式;
若色度预测模式索引信息为111,则目标预测模式为第三跨分量预测模式;
若色度预测模式索引信息为11110,则目标预测模式为planar预测模式;
若色度预测模式索引信息为111110,则目标预测模式为垂直预测模式;
若色度预测模式索引信息为1111110,则目标预测模式为水平预测模式;
若色度预测模式索引信息为1111111,则目标预测模式为DC预测模式。
步骤4203:根据目标预测模式对当前块进行预测。
本申请实施例,当当前块支持跨分量预测模式,且当前块启动跨分量预测模式时,可以减小色度预测模式索引信息的比特开销,进而减小内存开销。
CCLM模式
图43是本申请实施例提供的一种编解码方法的流程图,该方法应用于编码端或解码端,如图43所示,该方法包括:
步骤4301:当当前块的亮度和色度共用一棵划分树时,若当前块对应的亮度块的宽高尺寸为64*64,当前块对应的色度块的尺寸为32*32,则当前块不支持跨分量预测模式。
本申请实施例,可以降低CCLM模式下亮度和色度的依赖性,避免色度块需要等待一个64*64的亮度块的重建值。
ALF模式
编码端与解码端之间传输的语法元素还包括ALF指示信息,ALF指示信息用于指示当前块是否启动ALF。示例的,ALF指示信息为alf_ctb_flag。
相关技术中,ALF指示信息的编解码方式如下表39所示:
表39
Figure PCTCN2020097144-appb-000045
其中,上下文模型选择的计算公式为:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx*3
由表39和上述公式可知,ALF指示信息的编解码的上下文模型的选择,取决于三个因素:当前块的上边块是否使用了ALF,当前块的左边块是否使用了ALF,以及当前分量的索引,总计需要使用到9个上下文模型。
具体地,若当前块支持ALF,当前块为亮度块,则基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码或解码,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
若当前块支持ALF,当前块为CB色度块,则基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码或解码,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第二上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
若当前块支持ALF,当前块为CR色度块,则基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码或解码,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第三上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。其中,上述9个上下文模型均不同。
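上述上下文模型索引的计算可用如下草图示意(ctxSetIdx 取0/1/2分别对应亮度、CB色度、CR色度,为本文示例约定):

```python
# 相关技术中 ALF 指示信息的上下文模型索引:
# ctxInc = (condL && availableL) + (condA && availableA) + ctxSetIdx * 3
def alf_ctx_inc(cond_l, avail_l, cond_a, avail_a, ctx_set_idx):
    return int(cond_l and avail_l) + int(cond_a and avail_a) + ctx_set_idx * 3

# 三个分量 x 每分量3个取值,共9个互不相同的上下文模型索引
all_inc = {alf_ctx_inc(l, True, a, True, s)
           for l in (False, True) for a in (False, True) for s in (0, 1, 2)}
print(sorted(all_inc))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```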
ALF模式的第一种实现方式
图44是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图44所示,该方法包括如下步骤:
步骤4401:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,第一上下文模型集合包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第一上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第二上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第三上下文模型。
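上述选择规则可用如下草图示意(模型编号1/2/3对应第一/第二/第三上下文模型,为本文示例约定):

```python
# 根据上边块、左边块是否启动 ALF,从第一上下文模型集合中选择目标上下文模型
def select_alf_ctx(above_on, left_on):
    if above_on and left_on:
        return 1   # 两侧均启动:第一上下文模型
    if above_on or left_on:
        return 2   # 仅一侧启动:第二上下文模型
    return 3       # 两侧均未启动:第三上下文模型

print(select_alf_ctx(True, True), select_alf_ctx(True, False),
      select_alf_ctx(False, False))  # 1 2 3
```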
步骤4402:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据所述当前块的上边块是否启动ALF,以及所述当前块的左边块是否启动ALF,从第二上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的3个上下文模型与第一上下文模型集合包括的3个上下文模型不同。
其中,色度块包括CB色度块和CR色度块。
作为一个示例,第二上下文模型集合包括第四上下文模型、第五上下文模型和第六上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第四上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第五上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第六上下文模型。
作为一个示例,编码端可以通过RDO决策当前块是否启动ALF,即是否使用自适应环路滤波,并在码流中编码ALF指示信息,以告知解码端是否启动ALF,进而告知解码端是否进行自适应环路滤波。而且,如果启动ALF,还需要编码ALF相关的语法元素,编码端也同样进行滤波。
图45是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图44所示的编码方法对应的解码方法,如图45所示,该方法包括如下步骤:
步骤4501:若当前块支持自适应环路滤波器ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
步骤4502:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第二上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的3个上下文模型与第一上下文模型集合包括的3个上下文模型不同。
其中,色度块包括CB色度块和CR色度块。
解码端接收编码流后,可以通过解码ALF指示信息,解析当前块是否启动自适应环路滤波。若ALF指示信息指示当前块启动ALF,则解码端还可以继续解码ALF相关的语法元素,以对当前块进行自适应环路滤波,从而得到滤波后的重构像素。
作为一个示例,ALF指示信息的编解码方式如下表40所示:
表40
Figure PCTCN2020097144-appb-000046
其中,上下文模型选择的计算公式为:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx*3
本申请实施例中,针对ALF指示信息,CB色度块和CR色度块可以共用3个不同的上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为6,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第二种实现方式
图46是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图46所示,该方法包括如下步骤:
步骤4601:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,第一上下文模型集合包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第一上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第二上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第三上下文模型。
步骤4602:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据所述当前块的上边块是否启动ALF,以及所述当前块的左边块是否启动ALF,从第二上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的3个上下文模型与第一上下文模型集合包括的3个上下文模型相同。
其中,色度块包括CB色度块和CR色度块。
作为一个示例,第二上下文模型集合与第一上下文模型集合相同,即包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第一上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第二上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第三上下文模型。
图47是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图46所示的编码方法对应的解码方法,如图47所示,该方法包括如下步骤:
步骤4701:若当前块支持自适应环路滤波器ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
步骤4702:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第二上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的3个上下文模型与第一上下文模型集合包括的3个上下文模型相同。
其中,色度块包括CB色度块和CR色度块。
作为一个示例,ALF指示信息的编解码方式如下表41所示:
表41
Figure PCTCN2020097144-appb-000047
其中,上下文模型选择的计算公式为:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx*3
本申请实施例中,针对ALF指示信息,亮度块、CB色度块和CR色度块均可以共用3个不同的上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为3,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第三种实现方式
图48是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图48所示,该方法包括如下步骤:
步骤4801:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
步骤4802:若当前块支持ALF,当前块为CB色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
步骤4803:若当前块支持ALF,当前块为CR色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第三上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码,第一上下文模型、第二上下文模型和第三上下文模型为不同的上下文模型。
图49是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图48所示的编码方法对应的解码方法,如图49所示,该方法包括如下步骤:
步骤4901:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤4902:若当前块支持ALF,当前块为CB色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码。
步骤4903:若当前块支持ALF,当前块为CR色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第三上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,第一上下文模型、第二上下文模型和第三上下文模型为不同的上下文模型。
作为一个示例,ALF指示信息的编解码方式如下表42所示:
表42
Figure PCTCN2020097144-appb-000048
其中,上下文模型选择的计算公式为:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx
本申请实施例中,针对ALF指示信息,亮度块共用一个上下文模型,CB色度块共用一个上下文模型,CR色度块共用一个上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为3,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第四种实现方式
图50是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图50所示,该方法包括如下步骤:
步骤5001:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
步骤5002:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
其中,色度块包括CB色度块和CR色度块。
图51是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图50所示的编码方法对应的解码方法,如图51所示,该方法包括如下步骤:
步骤5101:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤5102:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,第一上下文模型和第二上下文模型不同。
作为一个示例,ALF指示信息的编解码方式如下表43所示:
表43
Figure PCTCN2020097144-appb-000049
其中,上下文模型选择的计算公式为:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx
本申请实施例中,针对ALF指示信息,亮度块共用一个上下文模型,CB色度块和CR色度块共用一个上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为2,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第五种实现方式
图52是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图52所示,该方法包括如下步骤:
步骤5201:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,第一上下文模型集合包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第一上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第二上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第三上下文模型。
步骤5202:若当前块支持ALF,当前块为CB色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
步骤5203:若当前块支持ALF,当前块为CR色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码,第一上下文模型集合包括的上下文模型、第一上下文模型和第二上下文模型为不同的上下文模型。
图53是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图52所示的编码方法对应的解码方法,如图53所示,该方法包括如下步骤:
步骤5301:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于目标上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码; 其中,目标上下文模型是根据当前块的上边块是否启动ALF,以及当前块的左边块是否启动ALF,从第一上下文模型集合包括的3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,第一上下文模型集合包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动ALF,且当前块的左边块启动ALF,则目标上下文模型为第一上下文模型;若当前块的上边块启动ALF且当前块的左边块未启动ALF,或者,若当前块的上边块未启动ALF且当前块的左边块启动ALF,则目标上下文模型为第二上下文模型;若当前块的上边块未启动ALF,且当前块的左边块未启动ALF,则目标上下文模型为第三上下文模型。
步骤5302:若当前块支持ALF,当前块为CB色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码。
步骤5303:若当前块支持ALF,当前块为CR色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,第一上下文模型集合包括的上下文模型、第一上下文模型和第二上下文模型为不同的上下文模型。
作为一个示例,对于亮度块来说,ALF指示信息的编解码方式如下表44所示:
表44
Figure PCTCN2020097144-appb-000050
其中,上下文模型选择的计算公式:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx*3
作为一个示例,对于色度块来说,ALF指示信息的编解码方式如下表45所示:
表45
Figure PCTCN2020097144-appb-000051
其中,上下文模型选择的计算公式:
ctxInc=(condL&&availableL)+(condA&&availableA)+ctxSetIdx
本申请实施例中,针对ALF指示信息,亮度块需要使用3个不同的上下文模型,CB色度块共用1个上下文模型,CR色度块共用1个上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为5,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第六种实现方式
图54是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图54所示,该方法包括如下步骤:
步骤5401:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
步骤5402:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码,第二上下文模型与第一上下文模型为同一上下文模型。
图55是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图54所示的编码方法对应的解码方法,如图55所示,该方法包括如下步骤:
步骤5501:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于第一上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤5502:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于第二上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,第二上下文模型与第一上下文模型为同一上下文模型。
本申请实施例中,针对ALF指示信息,亮度块、CB色度块和CR色度块共用1个上下文模型,如此,可以将ALF指示信息使用的上下文模型数量将为1,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第七种实现方式
图56是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图56所示,该方法包括如下步骤:
步骤5601:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术编码。
步骤5602:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术编码。
图57是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图56所示的编码方法对应的解码方法,如图57所示,该方法包括如下步骤:
步骤5701:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤5702:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术解码。
本申请实施例中,针对ALF指示信息,当当前块为亮度块、CB色度块和CR色度块的情况下,均可以采用基于旁路的二进制算术编解码的方式,对ALF指示信息进行编码或解码,如此,可以将ALF指示信息使用的上下文模型数量降为0,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第八种实现方式
图58是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图58所示,该方法包括如下步骤:
步骤5801:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于一个上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码,ALF指示信息用于指示当前块是否启动ALF。
步骤5802:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术编码。
其中,色度块包括CB色度块和CR色度块。
图59是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图58所示的编码方法对应的解码方法,如图59所示,该方法包括如下步骤:
步骤5901:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,基于一个上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤5902:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术解码。
其中,色度块包括CB色度块和CR色度块。
本申请实施例中,针对ALF指示信息,亮度块使用1个上下文模型,CB色度块和CR色度块均采用基于旁路的二进制算术编解码的方式进行编码或解码,如此,可以将ALF指示信息使用的上下文模型数量降为1,从而减小了编解码的复杂度,减小了内存开销。
ALF模式的第九种实现方式
图60是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图60所示,该方法包括如下步骤:
步骤6001:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术编码,ALF指示信息用于指示当前块是否启动ALF。
步骤6002:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于一个上下文模型对ALF指示信息进行基于上下文的自适应二进制算术编码。
其中,色度块包括CB色度块和CR色度块。
图61是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为图60所示的编码方法对应的解码方法,如图61所示,该方法包括如下步骤:
步骤6101:若当前块支持ALF,当前块为亮度块,则在根据ALF模式对当前块进行滤波处理之前,对ALF指示信息进行基于旁路的二进制算术解码,ALF指示信息用于指示当前块是否启动ALF。
步骤6102:若当前块支持ALF,当前块为色度块,则在根据ALF模式对当前块进行滤波处理之前,基于一个上下文模型对ALF指示信息进行基于上下文的自适应二进制算术解码。
其中,色度块包括CB色度块和CR色度块。
本申请实施例中,针对ALF指示信息,亮度块采用基于旁路的二进制算术编解码的方式进行编码或解码,CB色度块和CR色度块共用一个上下文模型,如此,可以将ALF指示信息使用的上下文模型数量降为1,从而减小了编解码的复杂度,减小了内存开销。
MIP模式
在MIP模式下,编码端与解码端之间传输的语法元素还包括MIP指示信息,MIP指示信息用于指示当前块是否启动基于矩阵的帧内预测模式。示例的,MIP指示信息为Intra_MIP_flag。
相关技术中,若当前块支持基于矩阵的帧内预测模式,则在根据基于矩阵的帧内预测模式对当前块进行预测之前,可以基于目标上下文模型对MIP指示信息进行基于上下文的自适应二进制算术解码。其中,目标上下文模型是根据当前块的上边块是否启动基于矩阵的帧内预测模式,当前块的左边块是否启动基于矩阵的帧内预测模式,以及当前块是否满足预设尺寸条件,从4个不同的上下文模型中选择的一个上下文模型。
作为一个示例,预设尺寸条件可以为当前块的宽度大于高度的两倍,或者当前块的高度大于宽度的两倍。当然,该预设尺寸条件也可以为其他条件,本申请实施例对此不做限定。
具体地,假设上述4个不同的上下文模型包括第一上下文模型、第二上下文模型、第三上下文模型和第四上下文模型。若当前块的上边块启动基于矩阵的帧内预测模式,当前块的左边块启动基于矩阵的帧内预测模式,且当前块不满足预设尺寸条件,则目标上下文模型为第一上下文模型;若当前块的上边块启动基于矩阵的帧内预测模式,当前块的左边块未启动基于矩阵的帧内预测模式,且当前块不满足预设尺寸条件,或者,若当前块的上边块未启动基于矩阵的帧内预测模式,当前块的左边块启动基于矩阵的帧内预测模式,且当前块不满足预设尺寸条件,则目标上下文模型为第二上下文模型;若当前块的上边块未启动基于矩阵的帧内预测模式,当前块的左边块未启动基于矩阵的帧内预测模式,且当前块不满足预设尺寸条件,则目标上下文模型为第三上下文模型;若当前块满足预设尺寸条件,则目标上下文模型为第四上下文模型。
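上述相关技术中MIP指示信息的上下文模型选择可用如下草图示意(模型编号0~3依次对应第一至第四上下文模型,为本文示例约定):

```python
# 相关技术:根据上边块/左边块是否启动 MIP 以及是否满足预设尺寸条件选择上下文模型
def mip_ctx_related(above_on, left_on, size_cond):
    if size_cond:
        return 3   # 满足预设尺寸条件:单独使用第四上下文模型
    # 两侧、一侧、零侧启动分别对应第一(0)、第二(1)、第三(2)上下文模型
    return 2 - (int(above_on) + int(left_on))

print(mip_ctx_related(True, True, False),
      mip_ctx_related(True, False, False),
      mip_ctx_related(False, False, False),
      mip_ctx_related(False, False, True))  # 0 1 2 3
```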
由上可知,在MIP模式下,MIP指示信息需要使用4个不同的上下文模型,内存开销较大。
MIP模式下的第一种实现方式
图62是本申请实施例提供的一种编解码方法的流程图,该方法应用于编码端或解码端,如图62所示,该方法包括:
步骤6201:若当前块的宽高尺寸为32*32,则当前块不支持基于矩阵的帧内预测模式。
其中,当前块为亮度块或色度块。示例的,若当前块为亮度块,且当前块的宽高尺寸为32*32,则当前块不支持基于矩阵的帧内预测模式。
作为另一个示例,若当前块的宽高尺寸为32*16,则当前块不支持基于矩阵的帧内预测模式。示例的,当前块为亮度块或色度块。
作为另一个示例,若当前块的宽高尺寸为4*4,则当前块不支持基于矩阵的帧内预测模式。示例的,当前块为亮度块或色度块。
本申请实施例中,可以保证当当前块为大尺寸块时,当前块不支持基于矩阵的帧内预测模式,即当前块不能使能基于矩阵的帧内预测模式,如此,可以减小运算复杂度。
MIP模式下的第二种实现方式
图63是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图63所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6301:根据当前块是否启动基于矩阵的帧内预测模式,基于目标上下文模型对MIP指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式,从3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设上述3个不同的上下文模型包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动基于矩阵的帧内预测模式,且当前块的左边块启动基于矩阵的帧内预测模式,则目标上下文模型为第一上下文模型;若当前块的上边块启动基于矩阵的帧内预测模式且当前块的左边块未启动基于矩阵的帧内预测模式,或者,若当前块的上边块未启动基于矩阵的帧内预测模式且当前块的左边块启动基于矩阵的帧内预测模式,则目标上下文模型为第二上下文模型;若当前块的上边块未启动基于矩阵的帧内预测模式,且当前块的左边块未启动基于矩阵的帧内预测模式,则目标上下文模型为第三上下文模型。
作为一个示例,编码端若确定当前块满足基于矩阵的帧内预测的条件,可以通过RDO决策当前块是否启动MIP模式,即要不要使用基于矩阵的帧内预测方法,并在编码流中编码MIP指示信息,来告知解码端是否启动MIP模式。
在编码流中会根据具体情况编码上述MIP指示信息,而且,如果当前块启动了MIP模式,则还需要编码其他与MIP相关的语法元素。
图64是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图63所示的编码方法对应的解码方法,如图64所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6401:在根据基于矩阵的帧内预测模式对当前块进行预测之前,基于目标上下文模型对MIP指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式从3个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设上述3个不同的上下文模型包括第一上下文模型、第二上下文模型和第三上下文模型。若当前块的上边块启动基于矩阵的帧内预测模式,且当前块的左边块启动基于矩阵的帧内预测模式,则目标上下文模型为第一上下文模型;若当前块的上边块启动基于矩阵的帧内预测模式且当前块的左边块未启动基于矩阵的帧内预测模式,或者,若当前块的上边块未启动基于矩阵的帧内预测模式且当前块的左边块启动基于矩阵的帧内预测模式,则目标上下文模型为第二上下文模型;若当前块的上边块未启动基于矩阵的帧内预测模式,且当前块的左边块未启动基于矩阵的帧内预测模式,则目标上下文模型为第三上下文模型。
步骤6402:若根据MIP指示信息确定当前块启动基于矩阵的帧内预测模式,则基于矩阵的帧内预测模式,对当前块进行预测。
作为一个示例,解码端接收编码流,若确定当前块满足解析条件,则可以解析MIP指示信息,以确定当前块是否启动MIP模式。解析条件包括:当前块为亮度块,并且当前块尺寸满足一定条件。当然,解析条件不限定上述条件,还可以包括其他条件。
通过MIP指示信息,解码端可以确定当前块的预测模式是否为基于矩阵的帧内预测模式,若为基于矩阵的帧内预测模式,则可以继续解析与该模式相关的其他语法,从而得到其预测模式信息,进而得到预测值。
本申请实施例中,在MIP模式下,可以不考虑当前块的尺寸条件,仅根据当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式,来从3个不同的上下文模型中选择的一个上下文模型,如此,可以将MIP指示信息所需的上下文模型的数量降至3,从而减小了编解码的复杂度,减小了内存开销。
MIP模式下的第三种实现方式
图65是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图65所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6501:根据当前块是否启动基于矩阵的帧内预测模式,基于目标上下文模型对MIP指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块是否满足预设尺寸条件,从2个不同的上下文模型中选择的一个上下文模型。
其中,预设尺寸条件可以为当前块的宽度大于高度的两倍,或者当前块的高度大于宽度的两倍。
作为一个示例,假设上述2个不同的上下文模型包括第一上下文模型和第二上下文模型。若当前块的尺寸满足预设尺寸条件,则目标上下文模型为第一上下文模型,若当前块的尺寸不满足预设尺寸条件,则目标上下文模型为第二上下文模型。
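上述预设尺寸条件及上下文模型选择可用如下草图示意(模型编号为本文示例约定):

```python
# 预设尺寸条件:宽度大于高度的两倍,或高度大于宽度的两倍
def meets_size_cond(width, height):
    return width > 2 * height or height > 2 * width

def select_mip_ctx(width, height):
    # 满足条件用第一上下文模型(1),否则用第二上下文模型(2)
    return 1 if meets_size_cond(width, height) else 2

print(meets_size_cond(32, 8), meets_size_cond(16, 32), meets_size_cond(16, 16))
# True False False
print(select_mip_ctx(40, 16), select_mip_ctx(16, 16))  # 1 2
```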
图66是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图65所示的编码方法对应的解码方法,如图66所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6601:在根据基于矩阵的帧内预测模式对当前块进行预测之前,基于目标上下文模型对MIP指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块是否满足预设尺寸条件,从2个不同的上下文模型中选择的一个上下文模型。
其中,预设尺寸条件可以为当前块的宽度大于高度的两倍,或者当前块的高度大于宽度的两倍。
作为一个示例,假设上述2个不同的上下文模型包括第一上下文模型和第二上下文模型。若当前块的尺寸满足预设尺寸条件,则目标上下文模型为第一上下文模型,若当前块的尺寸不满足预设尺寸条件,则目标上下文模型为第二上下文模型。
步骤6602:若根据MIP指示信息确定当前块启动基于矩阵的帧内预测模式,则基于矩阵的帧内预测模式,对当前块进行预测。
本申请实施例中,在MIP模式下,针对MIP指示信息的上下文模型的选择,可以不考虑当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式,仅根据尺寸条件来选择上下文模型,如此,可以将MIP指示信息所需的上下文模型的数量降至2,从而减小了编解码的复杂度,减小了内存开销。
MIP模式下的第四种实现方式
图67是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图67所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6701:根据当前块是否启动基于矩阵的帧内预测模式,基于同一个上下文模型对MIP指示信息进行基于上下文的自适应二进制算术编码。
图68是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图67所示的编码方法对应的解码方法,如图68所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6801:在根据基于矩阵的帧内预测模式对当前块进行预测之前,基于同一个上下文模型对MIP指示信息进行基于上下文的自适应二进制算术解码。
步骤6802:若根据MIP指示信息确定当前块启动基于矩阵的帧内预测模式,则基于矩阵的帧内预测模式,对当前块进行预测。
本申请实施例中,在MIP模式下,针对MIP指示信息的上下文模型的选择,可以不考虑当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式,也不考虑尺寸条件,在不同的条件下,均基于同一个上下文模型对MIP指示信息进行基于上下文的自适应二进制算术编码或解码,如此,可以将MIP指示信息所需的上下文模型的数量降至1,从而减小了编解码的复杂度,减小了内存开销。
MIP模式下的第五种实现方式
图69是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图69所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤6901:根据当前块是否启动基于矩阵的帧内预测模式,对MIP指示信息进行基于旁路的二进制算术编码。
图70是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图69所示的编码方法对应的解码方法,如图70所示,若当前块支持基于矩阵的帧内预测模式,该方法包括:
步骤7001:在根据基于矩阵的帧内预测模式对当前块进行预测之前,对MIP指示信息进行基于旁路的二进制算术解码。
步骤7002:若根据MIP指示信息确定当前块启动基于矩阵的帧内预测模式,则基于矩阵的帧内预测模式,对当前块进行预测。
本申请实施例中,在MIP模式下,不考虑当前块的上边块是否启动基于矩阵的帧内预测模式,以及当前块的左边块是否启动基于矩阵的帧内预测模式,也不考虑尺寸条件,在不同的条件下,对MIP指示信息均进行基于旁路的二进制算术编码或解码,也即是,不使用基于上下文的自适应二进制算术编码或解码,如此,可以将MIP指示信息所需的上下文模型的数量降至0,从而减小了编解码的复杂度,减小了内存开销。
BDPCM模式
相关技术中,BDPCM技术缺少用于开启或者关闭BDPCM模式的SPS级语法,也缺少用于控制可启用BDPCM模式的最大编码块尺寸的SPS级语法,灵活性较低。
BDPCM模式下的第一种实现方式
图71是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图71所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7101:在对当前块进行BDPCM编码之前,对第一BDPCM指示信息进行编码,第一BDPCM指示信息用于指示当前处理单元是否支持BDPCM模式。
作为一个示例,第一BDPCM指示信息可以存在于序列参数集、图像参数级、slice级或Tile级中。优选地,第一BDPCM指示信息存在于序列参数集,也即是,第一BDPCM指示信息为SPS级语法。
在另一实施例中,编码端还可以编码范围指示信息,该范围指示信息用于指示支持BDPCM模式的处理单元的范围。该范围指示信息可以存在于序列参数集、图像参数级、slice级或Tile级中。
图72是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图71所示的编码方法对应的解码方法,如图72所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7201:在对当前块进行BDPCM解码之前,对第一BDPCM指示信息进行解码,第一BDPCM指示信息用于指示当前处理单元是否支持BDPCM模式。
步骤7202:根据第一BDPCM指示信息,对当前处理单元进行解码。
作为一个示例,若第一BDPCM指示信息指示当前处理单元支持BDPCM模式,则基于BDPCM模式对当前处理单元进行处理。
作为一个示例,第一BDPCM指示信息可以存在于序列参数集、图像参数级、slice级或Tile级中。优选地,第一BDPCM指示信息存在于序列参数集,也即是,第一BDPCM指示信息为SPS级语法。
在另一实施例中,解码端还可以对范围指示信息进行解码,该范围指示信息用于指示支持BDPCM模式的处理单元的范围。该范围指示信息可以存在于序列参数集、图像参数级、slice级或Tile级中。
本申请实施例中,增加了一个语法来开启或者关闭BDPCM模式,提高了编解码过程的灵活性。另外,还增加了一个语法用于指示支持BDPCM模式的处理单元的范围。
BDPCM模式下的第二种实现方式
图73是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图73所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7301:在对当前块进行BDPCM处理之前,对第二BDPCM指示信息进行编码,第二BDPCM指示信息用于指示支持BDPCM模式的处理单元的尺寸范围。
其中,当前处理单元的范围可以是序列级、图像参数级或块级等。比如,当前处理单元可以为当前图像块。
示例的,该尺寸范围可以为小于32*32的尺寸范围。
作为一个示例,第二BDPCM指示信息用于指示能够支持BDPCM模式的处理单元的最大尺寸,即可以使用BDPCM模式的处理单元的最大尺寸。示例的,所述最大尺寸为32*32。
作为一个示例,第二BDPCM指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。优选地,第二BDPCM指示信息存在于序列参数集,也即是,第二BDPCM指示信息是在SPS级增加的一个语法。
图74是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图73所示的编码方法对应的解码方法,如图74所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7401:在对当前块进行BDPCM处理之前,对第二BDPCM指示信息进行解码,第二BDPCM指示信息用于指示支持BDPCM模式的处理单元的尺寸范围。
步骤7402:基于第二BDPCM指示信息和当前块的尺寸,确定当前块是否能够进行BDPCM处理。
作为一个示例,若当前块的尺寸在第二BDPCM指示信息指示的支持BDPCM模式的处理单元的尺寸范围内,则确定当前块能够进行BDPCM处理。若当前块的尺寸不在第二BDPCM指示信息指示的支持BDPCM模式的处理单元的尺寸范围内,则确定当前块不能进行BDPCM处理。
作为一个示例,第二BDPCM指示信息用于指示能够支持BDPCM模式的处理单元的最大尺寸,则若当前块的尺寸小于或等于第二BDPCM指示信息指示的最大尺寸,则确定当前块能够进行BDPCM处理。若当前块的尺寸大于第二BDPCM指示信息指示的最大尺寸,则确定当前块不能进行BDPCM处理。
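上述基于最大尺寸的判断可用如下草图示意(以32*32作为第二BDPCM指示信息所指示的最大尺寸,为本文示例假设):

```python
# 当前块尺寸小于或等于指示的最大尺寸时,才能进行 BDPCM 处理
def can_use_bdpcm(block_w, block_h, max_w=32, max_h=32):
    return block_w <= max_w and block_h <= max_h

print(can_use_bdpcm(16, 16))  # True
print(can_use_bdpcm(32, 32))  # True
print(can_use_bdpcm(64, 32))  # False:宽度超过最大尺寸
```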
作为一个示例,第二BDPCM指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。优选地,第二BDPCM指示信息存在于序列参数集,也即是,第二BDPCM指示信息是在SPS级增加的一个语法。
本申请实施例中,增加了一个语法来控制可以使用BDPCM模式的尺寸范围,提高了编解码过程的灵活性。
BDPCM模式下的第三种实现方式
在BDPCM模式下,编码端与解码端之间传输的语法元素还可以包括第三BDPCM指示信息和第四BDPCM指示信息。第三BDPCM指示信息用于指示当前处理单元是否启动BDPCM模式,第四BDPCM指示信息用于指示BDPCM模式的预测方向的索引信息。示例的,第三BDPCM指示信息为Intra_bdpcm_flag,第四BDPCM指示信息为Intra_bdpcm_dir_flag。
相关技术中,若当前块支持BDPCM模式,在确定进行第三BDPCM指示信息的编码或解码时,需要基于一个上下文模型,对第三BDPCM指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码;在确定进行第四BDPCM指示信息的编码或解码时,需要基于另一个不同的上下文模型,对第四BDPCM指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码。也即是,需要用到两个上下文模型,来对第三BDPCM指示信息和第四BDPCM指示信息进行编解码,如下表46所示。
表46
Figure PCTCN2020097144-appb-000052
图75是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图75所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7501:在对当前块进行BDPCM编码之前,根据当前块是否启动BDPCM模式,基于一个上下文模型,对第三BDPCM指示信息进行基于上下文的自适应二进制算术编码。
作为一个示例,若当前块满足基于块的量化后残差的差分PCM编码的条件,可以通过RDO决策是否启动BDPCM模式,即要不要使用量化后残差的差分PCM编码方法,并在编码流中编码第三BDPCM指示信息,来表示当前块是否启动BDPCM模式。
步骤7502:若确定当前块启动BDPCM模式,则根据BDPCM模式的预测方向的索引信息,对第四BDPCM指示信息进行基于旁路的二进制算术编码。
其中,BDPCM模式的预测方向包括水平预测方向和垂直预测方向。
作为一个示例,编码端可以通过RDO决策预测方向,基于选择的预测方向,在编码流中编码第四BDPCM指示信息,来表示BDPCM模式的预测方向。
图76是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为上述图75所示的编码方法对应的解码方法,如图76所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7601:在对当前块进行基于块的量化后残差的差分PCM解码之前,基于一个上下文模型,对第三BDPCM指示信息进行基于上下文的自适应二进制算术解码。
作为一个示例,解码端可以接收当前块的编码流,若当前块满足解析条件,则解析出第三BDPCM指示信息,来确定当前块是否启动BDPCM模式。
其中,解析条件包括:当前块尺寸满足一定尺寸条件。当然,该解析条件不限定于上述条件,还可以包括其他条件。
步骤7602:当第三BDPCM指示信息指示当前块启动BDPCM模式,对第四BDPCM指示信息进行基于旁路的二进制算术解码,第四BDPCM指示信息用于指示BDPCM模式的预测方向的索引信息。
在当前块启动BDPCM模式的情况下,需要进一步解析第四BDPCM指示信息,以确定预测方向。
步骤7603:按照第四BDPCM指示信息指示的预测方向,对当前块进行BDPCM处理。
作为一个示例,解码端可以逆向累加过程得到量化后的残差数据,再进行反量化并且与预测值相加得到重构像素值。
作为一个示例,第三BDPCM指示信息和第四BDPCM指示信息的编解码方式可以如下表47所示:
表47
Figure PCTCN2020097144-appb-000053
本申请实施例中,第三BDPCM指示信息使用1个上下文模型,而第四BDPCM指示信息采用基于旁路的二进制算术编解码方式,如此,可以将第三BDPCM指示信息和第四BDPCM指示信息所需的上下文模型的数量降至1,从而减小了编解码的复杂度,减小了内存开销。
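上述两个语法元素的编解码方式可用如下草图示意(返回值约定与前文一致,为本文示例假设):

```python
# 第三种实现方式:Intra_bdpcm_flag 使用1个上下文模型,Intra_bdpcm_dir_flag 走旁路
def bdpcm_flag_coding(syntax):
    if syntax == "Intra_bdpcm_flag":
        return ("context", 1)    # 基于上下文的自适应二进制算术编解码
    if syntax == "Intra_bdpcm_dir_flag":
        return ("bypass", None)  # 基于旁路的二进制算术编解码,不占用上下文模型
    raise ValueError(syntax)

models = {bdpcm_flag_coding(s)[1]
          for s in ("Intra_bdpcm_flag", "Intra_bdpcm_dir_flag")} - {None}
print(len(models))  # 1
```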
BDPCM模式下的第四种实现方式
图77是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图77所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7701:在对当前块进行BDPCM编码之前,根据当前块是否启动BDPCM模式,对第三BDPCM指示信息进行基于旁路的二进制算术编码。
作为一个示例,若当前块满足基于块的量化后残差的差分PCM编码的条件,可以通过率失真抉择当前块要不要使用量化后残差的差分PCM编码方法,并在编码流中编码第三BDPCM指示信息,来表示当前块是否启动BDPCM模式。
步骤7702:若确定当前块启动BDPCM模式,则根据BDPCM模式的预测方向的索引信息,对第四BDPCM指示信息进行基于旁路的二进制算术编码。
其中,BDPCM模式的预测方向包括水平预测方向和垂直预测方向。
作为一个示例,编码端可以通过率失真抉择预测方向,基于选择的预测方向,在编码流中编码第四BDPCM指示信息,来表示BDPCM模式的预测方向。
图78是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为上述图77所示的编码方法对应的解码方法,如图78所示,若当前块支持BDPCM模式,该方法包括如下步骤:
步骤7801:在对当前块进行BDPCM解码之前,对第三BDPCM指示信息进行基于旁路的二进制算术解码。
作为一个示例,解码端可以接收当前块的编码流,若当前块满足解析条件,则解析出第三BDPCM指示信息,来确定当前块是否启动BDPCM模式。
其中,解析条件包括:当前块尺寸满足一定尺寸条件。当然,该解析条件不限定于上述条件,还可以包括其他条件。
步骤7802:当第三BDPCM指示信息指示当前块启动BDPCM模式,对第四BDPCM指示信息进行基于旁路的二进制算术解码,第四BDPCM指示信息用于指示BDPCM模式的预测方向的索引信息。
在当前块启动BDPCM模式的情况下,需要进一步解析第四BDPCM指示信息,以确定预测方向。
步骤7803:按照第四BDPCM指示信息指示的预测方向,对当前块进行BDPCM处理。
作为一个示例,解码端可以逆向累加过程得到量化后的残差数据,再进行反量化并且与预测值相加得到重构像素值。
作为一个示例,第三BDPCM指示信息和第四BDPCM指示信息的编解码方式可以如下表48所示:
表48
Figure PCTCN2020097144-appb-000054
本申请实施例中,第三BDPCM指示信息和第四BDPCM指示信息均采用基于旁路的二进制算术编解码方式,如此,可以将第三BDPCM指示信息和第四BDPCM指示信息所需的上下文模型的数量降至0,从而减小了编解码的复杂度,减小了内存开销。
BDPCM模式下的第五种实现方式
在BDPCM模式下,编码端与解码端之间传输的语法元素还包括CBF指示信息,CBF指示信息用于指示当前块的变换块是否具有非零变换系数。示例的,CBF指示信息为cbf flag,或者Tu_cbf_luma。
相关技术中,在根据帧内子块预测对当前块进行预测之前,可以基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码。其中,目标上下文模型是根据当前块是否启动帧内子块预测,当前块的前一个变换块是否具有非零变换系数,当前块的变换块的划分深度,以及当前块是否启动BDPCM模式这些条件,从5个不同的上下文模型中选择的一个上下文模型。
具体地,假设这5种不同的上下文模型包括第一上下文模型、第二上下文模型、第三上下文模型、第四上下文模型和第五上下文模型。若当前块启动帧内子块预测,则目标上下文模型是根据当前块的前一个变换块是否具有非零变换系数,从第一上下文模型和第二上下文模型中选择的一个上下文模型。示例的,若当前块的前一个变换块具有非零变换系数,则目标上下文模型为第一上下文模型;若当前块的前一个变换块不具有非零变换系数,则目标上下文模型为第二上下文模型。若当前块未启动帧内子块预测,即当前块启动常规帧内预测,则目标上下文模型是根据当前块的变换块的划分深度,从第三上下文模型、第四上下文模型中选择的一个上下文模型。示例的,若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第三上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第四上下文模型。若当前块启动BDPCM模式,则目标上下文模型为第五上下文模型。
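上述相关技术中CBF指示信息的上下文模型选择可用如下草图示意(模型编号1~5依次对应第一至第五上下文模型,preset_depth 为预设划分深度,均为本文示例约定):

```python
# 相关技术:按预测模式、前一个变换块是否有非零系数、划分深度选择上下文模型
def cbf_ctx_related(mode, prev_has_nonzero=False, depth=0, preset_depth=1):
    if mode == "isp":       # 帧内子块预测:看前一个变换块是否具有非零变换系数
        return 1 if prev_has_nonzero else 2
    if mode == "regular":   # 常规帧内预测:看变换块的划分深度
        return 3 if depth > preset_depth else 4
    if mode == "bdpcm":     # BDPCM 模式:单独使用第五上下文模型
        return 5
    raise ValueError(mode)

print(cbf_ctx_related("isp", prev_has_nonzero=True),
      cbf_ctx_related("regular", depth=2),
      cbf_ctx_related("bdpcm"))  # 1 3 5
```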
由于在相关技术中,CBF指示信息需要用到5个上下文模型,所需的上下文模型的数量较多,因此编解码的复杂度高,内存开销较大。
图79是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图79所示,该方法包括如下步骤:
步骤7901:若当前块启动帧内子块预测,则在确定进行CBF指示信息的编码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的前一个变换块是否具有非零变换系数从第一上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设第一上下文模型集合包括第一上下文模型和第二上下文模型,则若当前块的前一个变换块具有非零变换系数,则目标上下文模型为第一上下文模型;若当前块的前一个变换块不具有非零变换系数,则目标上下文模型为第二上下文模型。
步骤7902:若当前块启动常规帧内预测或启动BDPCM模式,则在确定进行CBF指示信息的编码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的变换块的划分深度从第二上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的2个上下文模型与第一上下文模型集合包括的2个上下文模型不同。
作为一个示例,假设第二上下文模型集合中包括第三上下文模型和第四上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第三上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第四上下文模型。
图80是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为与上述图79所示的编码方法对应的解码方法,如图80所示,该方法包括如下步骤:
步骤8001:若当前块启动帧内子块预测,则在确定进行CBF指示信息的解码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的前一个变换块是否具有非零变换系数从第一上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设第一上下文模型集合包括第一上下文模型和第二上下文模型,则若当前块的前一个变换块具有非零变换系数,则目标上下文模型为第一上下文模型;若当前块的前一个变换块不具有非零变换系数,则目标上下文模型为第二上下文模型。
步骤8002:若当前块启动常规帧内预测或启动BDPCM模式,则在确定进行CBF指示信息的解码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的变换块的划分深度从第二上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型,第二上下文模型集合包括的2个上下文模型与第一上下文模型集合包括的2个上下文模型不同。
作为一个示例,假设第二上下文模型集合中包括第三上下文模型和第四上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第三上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第四上下文模型。
本申请实施例中,若当前块启动BDPCM模式,CBF指示信息的编解码的上下文模型的选择也取决于当前块的变换块的划分深度,使得当前块启动BDPCM模式与启动常规帧内预测时,共用2个上下文模型,如此,可以将CBF指示信息所需的上下文模型的数量降至4,从而减小了编解码的复杂度,减小了内存开销。
BDPCM模式下的第六种实现方式
图81是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图81所示,该方法包括如下步骤:
步骤8101:若当前块启动帧内子块预测,或启动常规帧内预测,或启动BDPCM模式,则在确定进行CBF指示信息的编码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的变换块的划分深度从2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设这2个不同的上下文模型为第一上下文模型和第二上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第一上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第二上下文模型。
图82是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为与上述图81所示的编码方法对应的解码方法,如图82所示,该方法包括如下步骤:
步骤8201:若当前块启动帧内子块预测,或启动常规帧内预测,或启动BDPCM模式,则在确定进行CBF指示信息的解码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的变换块的划分深度从2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设这2个不同的上下文模型为第一上下文模型和第二上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第一上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第二上下文模型。
本申请实施例中,若当前块启动BDPCM模式或启动帧内子块预测模式,CBF指示信息的编解码的上下文模型的选择也取决于当前块的变换块的划分深度,使得当前块启动BDPCM模式、启动常规帧内预测以及启动帧内子块划分模式时,均共用2个上下文模型,如此,可以将CBF指示信息所需的上下文模型的数量降至2,从而减小了编解码的复杂度,减小了内存开销。
BDPCM模式下的第七种实现方式
图83是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图83所示,该方法包括如下步骤:
步骤8301:若当前块启动帧内子块预测或启动常规帧内预测,则在确定进行CBF指示信息的编码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是根据当前块的变换块的划分深度从第一上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设第一上下文模型集合包括第一上下文模型和第二上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第一上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第二上下文模型。
步骤8302:若当前块启动BDPCM模式,则在确定进行CBF指示信息的编码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术编码;其中,目标上下文模型是第一上下文模型集合中的一个上下文模型。
图84是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法为与上述图83实施例所述的编码方法对应的解码方法,如图84所示,该方法包括如下步骤:
步骤8401:若当前块启动帧内子块预测或启动常规帧内预测,则在确定进行CBF指示信息的解码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是根据当前块的变换块的划分深度从第一上下文模型集合包括的2个不同的上下文模型中选择的一个上下文模型。
作为一个示例,假设第一上下文模型集合包括第一上下文模型和第二上下文模型,则若当前块的变换块的划分深度大于预设划分深度,则目标上下文模型为第一上下文模型,若当前块的变换块的划分深度小于或等于预设划分深度,则目标上下文模型为第二上下文模型。
步骤8402:若当前块启动BDPCM模式,则在确定进行CBF指示信息的解码时,基于目标上下文模型,对CBF指示信息进行基于上下文的自适应二进制算术解码;其中,目标上下文模型是第一上下文模型集合中的一个上下文模型。
本申请实施例中,在帧内子块预测和启动常规帧内预测下,CBF指示信息根据当前块的变换块的划分深度共用2个上下文模型,在BDPCM模式下,CBF指示信息的编解码的上下文模型可以从常规帧内预测和帧内子块预测模式使用的2个上下文模型中选择1个,如此,可以将CBF指示信息所需的上下文模型的数量降至2,从而减小了编解码的复杂度,减小了内存开销。
JCCR模式
图85是本申请实施例提供的一种编码方法的流程图,该方法应用于编码端,如图85所示,该方法包括如下步骤:
步骤8501:在根据JCCR模式对当前块进行编码之前,根据当前块是否支持JCCR模式,对JCCR指示信息进行编码,JCCR指示信息用于指示当前处理单元是否支持JCCR模式。
其中,当前处理单元的范围可以是序列级、图像参数级或块级等。比如,当前处理单元为当前图像块。
其中,当前处理单元是否支持JCCR模式是指是否使能JCCR模式,也即是,是否开启JCCR模式。示例的,JCCR指示信息为sps_jccr_enable_flag,其为JCCR的使能标识位。作为一个示例,sps_jccr_enable_flag为真时,表示当前块支持JCCR模式。作为一个示例,当前块可以为色度残差块。
作为一个示例,JCCR指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。优选地,JCCR指示信息存在于序列参数集,也即是,JCCR指示信息是在SPS级增加的一个语法。
作为一个示例,编码端还可以编码范围指示信息,该范围指示信息用于指示支持JCCR模式的处理单元的范围。作为一个示例,该范围指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。
之后,编码端可以根据JCCR指示信息,确定当前块是否启动JCCR模式。
作为一个示例,若JCCR模式指示当前块支持JCCR模式,可以继续确定当前块的CBF值,若当前块的CB分量和CR分量的CBF值均为真,即当前块的CB分量和CR分量的残差系数均不为0,则编码端可以考虑启动JCCR模式。示例的,当前块是否启动JCCR模式可以由编码端通过RDO决策得出的。
图86是本申请实施例提供的一种解码方法的流程图,该方法应用于解码端,该方法是与上述图85所示的编码方法对应的解码方法,如图86所示,该方法包括如下步骤:
步骤8601:在根据JCCR模式对当前块进行解码之前,对JCCR指示信息进行解码,JCCR指示信息用于指示当前处理单元是否支持JCCR模式。
步骤8602:若根据JCCR指示信息确定当前块支持JCCR模式,且当前块启动JCCR模式,则按照当前块的CB分量和CR分量的相关性对当前块进行解码,得到当前块的色度残差系数。
作为一个示例,若根据JCCR指示信息确定当前块支持JCCR模式,则可以继续确定当前块的CBF值,若当前块的CB分量的CBF值和CR分量的CBF值均为真,即当前块的CB分量和CR分量均具有非零变换系数,则继续解析当前块是否启动JCCR模式。若确定当前块启动JCCR模式,则按照当前块的CB分量和CR分量的相关性对当前块进行解码,得到当前块的色度残差系数。
其中,当前块的CBF值用于指示当前块的变换块是否具有非零变换系数,即当前块的变换块是否包含一个或多个不等于0的变换系数。当前块的CBF值可以包括当前块的CB分量的CBF值和当前块的CR分量的CBF值。其中,当前块的CB分量的CBF值用于指示当前块的CB变换块是否具有非零变换系数,即当前块的CB变换块是否包含一个或多个不等于0的变换系数。当前块的CR分量的CBF值用于指示当前块的CR变换块是否具有非零变换系数,即当前块的CR变换块是否包含一个或多个不等于0的变换系数。若当前块的CB分量的CBF值为真,即CB分量的CBF值为1,则表示当前块的CB变换块具有非零变换系数。若当前块的CR分量的CBF值为真,即CR分量的CBF值为1,则表示当前块的CR变换块具有非零变换系数。
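解码端是否继续解析JCCR模式的判断条件可用如下草图示意(函数名为本文示例假设):

```python
# 仅当 JCCR 指示信息指示支持 JCCR,且 CB、CR 分量的 CBF 值均为真时,
# 才继续解析当前块是否启动 JCCR 模式
def should_parse_jccr(jccr_supported, cbf_cb, cbf_cr):
    return jccr_supported and cbf_cb and cbf_cr

print(should_parse_jccr(True, True, True))   # True
print(should_parse_jccr(True, True, False))  # False:CR 分量无非零变换系数
```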
作为一个示例,JCCR指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。优选地,JCCR指示信息存在于序列参数集,也即是,JCCR指示信息是在SPS级增加的一个语法。
在另一实施例中,解码端还可以对范围指示信息进行解码,该范围指示信息用于指示支持JCCR模式的处理单元的尺寸范围。该范围指示信息可以存在于序列参数集(SPS)、图像参数级、slice级或Tile级中。
本申请实施例中,增加了一个用于指示是否支持JCCR模式的语法,提高了编解码过程的灵活性。另外,还增加了一个语法用于指示支持JCCR模式的处理单元的范围。
需要说明的是,本申请实施例所述的当前块或图像块还可以为序列级、图像参数级或块级的其他处理单元,本申请实施例对此不作限定。
图87是本申请实施例提供的一种编码端8700的结构示意图,该编码端8700可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)8701和一个或一个以上的存储器8702,其中,所述存储器8702中存储有至少一条指令,所述至少一条指令由所述处理器8701加载并执行以实现上述各个方法实施例提供的编码方法。当然,该编码端8700还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该编码端8700还可以包括其他用于实现设备功能的部件,在此不做赘述。
图88是本申请实施例提供的一种解码端8800的结构示意图,该解码端8800可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(central processing units,CPU)8801和一个或一个以上的存储器8802,其中,所述存储器8802中存储有至少一条指令,所述至少一条指令由所述处理器8801加载并执行以实现上述各个方法实施例提供的解码方法。当然,该解码端8800还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该解码端8800还可以包括其他用于实现设备功能的部件,在此不做赘述。
在另一实施例中,还提供了一种计算机可读存储介质,所述计算机可读存储介质上存储有指令,所述指令被处理器执行时实现上述任一种编码方法、解码方法或编解码方法。
在另一实施例中,还提供了一种包含指令的计算机程序产品,其特征在于,当其在计算机上运行时,使得计算机执行上述任一种编码方法、解码方法或编解码方法。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (33)

  1. 一种编解码方法,其特征在于,所述方法包括:
    在确定进行第一ISP指示信息的编码或解码时,基于一个上下文模型,对所述第一ISP指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码,所述第一ISP指示信息用于指示是否启动帧内子块预测模式;
    在确定进行第二ISP指示信息的编码或解码时,对所述第二ISP指示信息进行基于旁路的二进制算术编码或解码,所述第二ISP指示信息用于指示帧内子块预测模式的子块划分方式。
  2. 一种编解码方法,其特征在于,所述方法包括:
    若当前块的宽高尺寸为M*N,所述M小于64且所述N小于64,则所述当前块不支持多行预测模式。
  3. 如权利要求2所述的方法,其特征在于,若所述当前块的宽高尺寸为4*4,则所述当前块不支持多行预测模式。
  4. 一种编解码方法,其特征在于,若当前块支持多行预测模式,且所述多行预测模式对应的候选参考行行数为3,所述多行预测模式对应的参考行指示信息至多占用2个比特位,所述参考行指示信息用于指示基于多行预测模式进行所述当前块的预测时所使用的目标参考行的索引信息,所述方法包括:
    基于一个上下文模型,对所述参考行指示信息的第1个比特位进行基于上下文的自适应二进制算术编码或自适应二进制算术解码;
    当需要对所述参考行指示信息的第2个比特位进行编码或解码时,对所述参考行指示信息的第2个比特位进行基于旁路的二进制算术编码或解码。
  5. 一种编解码方法,其特征在于,若当前块支持多行预测模式,且所述多行预测模式对应的候选参考行行数为3,其中,索引信息为0的候选参考行为第0行,所述第0行是与所述当前块边界相邻的行;索引信息为1的候选参考行为第1行,所述第1行是与所述当前块边界次相邻的行;索引信息为2的候选参考行为第2行,所述第2行是与所述第1行相邻的行;所述方法包括:
    在根据所述多行预测模式对所述当前块进行预测时,根据目标参考行对所述当前块进行预测;
    其中,所述目标参考行根据参考行指示信息确定;
    若所述参考行指示信息所指示的索引信息为0,则所述目标参考行为第0行;
    若所述参考行指示信息所指示的索引信息为1,则所述目标参考行是第1行;
    若所述参考行指示信息所指示的索引信息为2,则所述目标参考行是第2行。
  6. 一种解码方法,其特征在于,若当前块支持多行预测模式,所述方法包括:
    在根据所述多行预测模式对所述当前块进行预测之前,对行数指示信息进行解码,所述行数指示信息用于指示所述多行预测模式对应的候选参考行行数;
    根据所述行数指示信息确定所述多行预测模式对应的候选参考行行数;
    根据所述多行预测模式对应的候选参考行行数和所述参考行指示信息确定目标参考行,所述参考行指示信息用于指示基于所述多行预测模式进行所述当前块的预测时所使用的目标参考行的索引信息;
    根据所述目标参考行对所述当前块进行预测。
  7. 如权利要求6所述的方法,其特征在于,所述行数指示信息存在于序列参数集中。
  8. 一种编解码方法,其特征在于,所述方法包括:
    若当前块启动仿射预测模式或启动除仿射预测模式之外的其他预测模式,在进行当前块的运动矢量差编码或解码时,如果所述当前块支持AMVR模式,则在进行第一AMVR指示信息的编码或解码时,基于第一上下文模型,对所述第一AMVR指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码,所述第一AMVR指示信息用于指示是否启动AMVR模式;
当所述第一AMVR指示信息指示所述当前块启动AMVR模式时,基于第二上下文模型,对第二AMVR指示信息进行基于上下文的自适应二进制算术编码或基于上下文的自适应二进制算术解码,所述第二AMVR指示信息用于指示在AMVR模式下进行运动矢量差编码或解码时所采用的像素精度的索引信息,所述第一上下文模型和所述第二上下文模型不同。
  9. A decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is present in the most probable intra prediction mode (MPM) list, and the current block is a luma block, then, when predicting the current block according to the intra sub-block prediction, decoding prediction mode index information, wherein the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, and the other bits are obtained by bypass-based binary arithmetic decoding;
    determining, according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list;
    predicting the current block according to the target prediction mode; or
    if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then, when predicting the current block according to the regular intra prediction, decoding prediction mode index information, wherein the 1st bit of the prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the other bits are obtained by bypass-based binary arithmetic decoding, and the second context model and the first context model are the same context model;
    determining, according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list;
    predicting the current block according to the target prediction mode.
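Claims 9 and 10 differ only in how the MPM index bins are coded. For claim 9's variant (first bin context-coded with a context shared between intra sub-block prediction and regular intra prediction, remaining bins bypass-coded), a truncated-unary decoding sketch follows. The two `read_*` callbacks are hypothetical bin-reading primitives, and the 6-entry MPM list size is an assumption for illustration.

```python
def decode_mpm_index(read_ctx_bin, read_bypass_bin, mpm_size: int = 6) -> int:
    """Sketch of claim 9: truncated-unary MPM index, first bin context-coded,
    remaining bins bypass-coded, at most mpm_size - 1 bins in total."""
    if read_ctx_bin() == 0:   # context-coded bin: "index is 0?"
        return 0
    idx = 1
    # Bypass-coded continuation bins, truncated at the last list entry.
    while idx < mpm_size - 1 and read_bypass_bin() == 1:
        idx += 1
    return idx
```

Context-coding only the first bin concentrates the adaptive model where the statistics are most skewed (index 0 is by far the most frequent), while the remaining near-uniform bins stay cheap to decode in bypass mode.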
  10. A decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is present in the MPM list, and the current block is a luma block, then, when predicting the current block according to the intra sub-block prediction, decoding prediction mode index information, wherein all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
    determining, according to the prediction mode index information, the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list;
    predicting the current block according to the target prediction mode; or
    if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then, when predicting the current block according to the regular intra prediction, decoding prediction mode index information, wherein all bits of the prediction mode index information are obtained by bypass-based binary arithmetic decoding;
    determining, according to the prediction mode index information, the target prediction mode of the regular intra prediction enabled by the current block from the MPM list, the prediction mode index information being used to indicate index information of the target prediction mode in the MPM list;
    predicting the current block according to the target prediction mode.
  11. A decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is present in the MPM list, and the current block is a luma block, then, when predicting the current block according to the intra sub-block prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a first context model;
    when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
    when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determining the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode; or
    if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then, when predicting the current block according to the regular intra prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, the planar indication information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, and the first context model and the second context model are the same;
    when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
    when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
  12. A decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction, the target prediction mode of the intra sub-block prediction is present in the MPM list, and the current block is a luma block, then, when predicting the current block according to the intra sub-block prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
    when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
    when it is determined based on the planar indication information that the target prediction mode of the intra sub-block prediction enabled by the current block is not the planar prediction mode, determining the target prediction mode of the intra sub-block prediction enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode; or
    if the current block enables regular intra prediction, the target prediction mode of the regular intra prediction comes from the MPM list, and the current block is a luma block, then, when predicting the current block according to the regular intra prediction, decoding planar indication information, wherein the planar indication information is used to indicate whether the target prediction mode enabled by the current block is the planar prediction mode, and the planar indication information is obtained by bypass-based binary arithmetic decoding;
    when it is determined based on the planar indication information that the target prediction mode enabled by the current block is the planar prediction mode, predicting the current block according to the planar prediction mode;
    when it is determined based on the planar indication information that the target prediction mode enabled by the current block is not the planar prediction mode, determining the target prediction mode enabled by the current block from the MPM list according to prediction mode index information, and predicting the current block according to the target prediction mode.
  13. A decoding method, characterized in that, if a current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method comprises:
    when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate an index of the target prediction mode of the current block in the corresponding candidate prediction mode list; wherein the 1st bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a first context model, the 2nd bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on a second context model, the first context model and the second context model being different; and the 3rd and 4th bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
    determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
    predicting the current block according to the target prediction mode.
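Claim 13 fixes a per-bin coding schedule for the chroma-mode index: two distinct context models for bins 1 and 2, bypass for bins 3 and 4 (claim 14 collapses this to one context model for bin 1 only). That schedule can be tabulated directly; the numeric context ids below are placeholder labels, not values from the patent.

```python
# Per-bin coding of the chroma prediction-mode index as laid out in claim 13.
# Bin positions are 1-based; context ids 0 and 1 stand for the two distinct
# context models the claim requires.
CHROMA_MODE_BIN_CODING = {
    1: ("context", 0),    # first bin: context model A
    2: ("context", 1),    # second bin: a different context model B
    3: ("bypass", None),  # third and fourth bins: bypass, p = 1/2
    4: ("bypass", None),
}

def chroma_mode_bin_coding(bin_pos: int):
    """Return (coding_kind, context_id) for a 1-based bin position."""
    return CHROMA_MODE_BIN_CODING[bin_pos]
```

Claim 14's variant would simply replace the entry for bin 2 with `("bypass", None)`.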
  14. A decoding method, characterized in that, if a current block supports the cross-component prediction mode, the current block enables the cross-component prediction mode, and the current block is a chroma block, the method comprises:
    when predicting the current block according to the cross-component prediction mode, decoding chroma prediction mode index information, the chroma prediction mode index information being used to indicate an index of the target prediction mode of the current block in the corresponding candidate prediction mode list; wherein the 1st bit of the chroma prediction mode index information is obtained by context-based adaptive binary arithmetic decoding based on one context model, and the 2nd, 3rd and 4th bits of the chroma prediction mode index information are obtained by bypass-based binary arithmetic decoding;
    determining the target prediction mode of the current block from the candidate prediction mode list according to the chroma prediction mode index information;
    predicting the current block according to the target prediction mode.
  15. A coding and decoding method, characterized in that the method comprises:
    when the luma and chroma of a current block share one partition tree, if the luma block corresponding to the current block has a width and height of 64*64 and the chroma block corresponding to the current block has a size of 32*32, the current block does not support the cross-component prediction mode.
  16. A decoding method, characterized in that the method comprises:
    if a current block supports ALF and the current block is a luma block, then, before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a target context model; wherein the ALF indication information is used to indicate whether the current block enables ALF, and the target context model is one context model selected from 3 different context models included in a first context model set according to whether the above block of the current block enables ALF and whether the left block of the current block enables ALF; or,
    if the current block supports ALF and the current block is a CB chroma block, then, before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a first context model; or,
    if the current block supports ALF and the current block is a CR chroma block, then, before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on a second context model, wherein the context models included in the first context model set, the first context model, and the second context model are different context models.
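Claim 16 selects the luma ALF flag's context from a 3-model set using the ALF status of the above and left neighbouring blocks. Summing the two neighbour flags is one natural realisation of such a selection (an assumption on our part; the claim fixes the model count, not the exact formula).

```python
def alf_luma_context(above_uses_alf: bool, left_uses_alf: bool) -> int:
    """Context selection for the luma ALF flag per claim 16: one of 3
    models, chosen from the ALF status of the above and left neighbours.
    0 = neither neighbour uses ALF, 1 = exactly one does, 2 = both do."""
    return int(above_uses_alf) + int(left_uses_alf)
```

The appeal of neighbour-based selection is spatial correlation: a block flanked by ALF-filtered neighbours is itself likely to enable ALF, so each of the three models sees a consistently skewed bin distribution.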
  17. A decoding method, characterized in that the method comprises:
    if a current block supports ALF and the current block is a luma block, then, before filtering the current block according to the ALF mode, performing context-based adaptive binary arithmetic decoding on ALF indication information based on one context model, the ALF indication information being used to indicate whether the current block enables ALF; or,
    if the current block supports ALF, the current block enables the adaptive loop filter (ALF), and the current block is a chroma block, then, before filtering the current block according to the ALF mode, performing bypass-based binary arithmetic decoding on ALF indication information.
  18. A coding and decoding method, characterized in that the method comprises:
    if the width and height of a current block are 32*32, the current block does not support the matrix-based intra prediction mode.
  19. A decoding method, characterized in that, if a current block supports the matrix-based intra prediction mode, the method comprises:
    before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; wherein the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 3 different context models according to whether the above block of the current block enables the matrix-based intra prediction mode and whether the left block of the current block enables the matrix-based intra prediction mode;
    if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
  20. A decoding method, characterized in that, if a current block supports the matrix-based intra prediction mode, the method comprises:
    before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on a target context model; wherein the MIP indication information is used to indicate whether the current block enables the matrix-based intra prediction mode, and the target context model is one context model selected from 2 different context models according to whether the current block satisfies a preset size condition;
    if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
  21. A decoding method, characterized in that, if a current block supports the matrix-based intra prediction mode, the method comprises:
    before predicting the current block according to the matrix-based intra prediction mode, performing context-based adaptive binary arithmetic decoding on MIP indication information based on the same context model, the MIP indication information being used to indicate whether the current block enables the matrix-based intra prediction mode;
    if it is determined according to the MIP indication information that the current block enables the matrix-based intra prediction mode, predicting the current block based on the matrix-based intra prediction mode.
  22. A decoding method, characterized in that the method comprises:
    decoding first BDPCM indication information, the first BDPCM indication information being used to indicate whether a current processing unit supports the BDPCM mode;
    decoding the current processing unit according to the first BDPCM indication information.
  23. The method according to claim 22, characterized in that the first BDPCM indication information is present in a sequence parameter set.
  24. A coding and decoding method, characterized in that the method comprises:
    encoding or decoding second BDPCM indication information, the second BDPCM indication information being used to indicate the size range of processing units that support the BDPCM mode;
    determining, based on the second BDPCM indication information and the size of a current block, whether the current block can be encoded or decoded in the BDPCM mode.
  25. The method according to claim 24, characterized in that the second BDPCM indication information is present in a sequence parameter set.
  26. A decoding method, characterized in that, if a current block supports the BDPCM mode, the method comprises:
    performing, based on one context model, context-based adaptive binary arithmetic decoding on third BDPCM indication information, the third BDPCM indication information being used to indicate whether the current block enables the BDPCM mode;
    when the third BDPCM indication information indicates that the current block enables the BDPCM mode, performing bypass-based binary arithmetic decoding on fourth BDPCM indication information, the fourth BDPCM indication information being used to indicate index information of the prediction direction of the BDPCM mode;
    performing BDPCM processing on the current block according to the prediction direction indicated by the fourth BDPCM indication information.
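Claim 26 splits the two BDPCM bins the same way claim 1 splits the ISP bins: one context model for the enable flag, bypass for the direction. A decoder-side sketch follows, with hypothetical bin-reading callbacks and the direction treated as a single 0/1 bin.

```python
def decode_bdpcm(read_ctx_bin, read_bypass_bin) -> dict:
    """Sketch of claim 26: a context-coded BDPCM enable flag followed,
    only when it fires, by a bypass-coded prediction-direction bin."""
    if read_ctx_bin() == 0:        # third BDPCM indication: BDPCM off
        return {"bdpcm": False}
    direction = read_bypass_bin()  # fourth BDPCM indication: direction index
    return {"bdpcm": True, "direction": direction}
```

Since the two prediction directions tend to be roughly equally likely, bypass-coding the direction avoids spending an adaptive context where it would earn nothing.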
  27. A coding and decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction, regular intra prediction, or the BDPCM mode, then, when it is determined that CBF indication information is to be encoded or decoded, performing context-based adaptive binary arithmetic encoding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models according to the partition depth of the transform block of the current block.
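Claims 27-28 pick the CBF context from a 2-model set keyed to the transform block's partition depth. Splitting at depth 0 versus depth greater than 0 is one plausible mapping (an assumption on our part; the claims fix only the size of the model set, not the mapping).

```python
def cbf_context(transform_depth: int) -> int:
    """CBF context selection per claims 27-28: one of 2 context models,
    chosen by the transform block's partition depth. Undivided transform
    blocks (depth 0) share one model; all deeper splits share the other."""
    return 0 if transform_depth == 0 else 1
```

Grouping all deeper splits under a single model keeps the set at exactly two models while still separating the statistically distinct "unsplit" case.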
  28. A coding and decoding method, characterized in that the method comprises:
    if a current block enables intra sub-block prediction or regular intra prediction, then, when it is determined that CBF indication information is to be encoded or decoded, performing context-based adaptive binary arithmetic encoding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the CBF indication information is used to indicate whether the transform block of the current block has non-zero transform coefficients, and the target context model is one context model selected from 2 different context models included in a first context model set according to the partition depth of the transform block of the current block; or,
    if the current block enables the BDPCM mode, then, when it is determined that CBF indication information is to be encoded or decoded, performing context-based adaptive binary arithmetic encoding or context-based adaptive binary arithmetic decoding on the CBF indication information based on a target context model; wherein the target context model is one context model in the first context model set.
  29. A decoding method, characterized in that the method comprises:
    decoding JCCR indication information, the JCCR indication information being used to indicate whether a current processing unit supports the JCCR mode;
    if it is determined according to the JCCR indication information that a current block supports the JCCR mode and the current block enables the JCCR mode, decoding the current block according to the correlation between the blue chroma (CB) component and the red chroma (CR) component of the current block, to obtain the chroma residual coefficients of the current block.
  30. The method according to claim 29, characterized in that the JCCR indication information is present in a sequence parameter set.
  31. A coding and decoding apparatus, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing processor-executable instructions;
    wherein the processor is configured to perform the coding and decoding method or decoding method according to any one of claims 1-30.
  32. A computer-readable storage medium having instructions stored thereon, characterized in that, when the instructions are executed by a processor, the coding and decoding method or decoding method according to any one of claims 1-30 is implemented.
  33. A computer program product containing instructions, characterized in that, when run on a computer, it causes the computer to perform the coding and decoding method or decoding method according to any one of claims 1-30.
PCT/CN2020/097144 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质 WO2020253829A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20825445.8A EP3979647A4 (en) 2019-06-21 2020-06-19 CODING/DECODING METHOD AND DEVICE AND STORAGE MEDIUM
JP2021576392A JP7325553B2 (ja) 2019-06-21 2020-06-19 符号化・復号化の方法、装置、および記憶媒体
US17/621,644 US20220360800A1 (en) 2019-06-21 2020-06-19 Coding/decoding method and device, and storage medium
KR1020217043437A KR20220016232A (ko) 2019-06-21 2020-06-19 코딩 및 디코딩 방법, 장치 및 저장 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910545251.0A CN112118448B (zh) 2019-06-21 2019-06-21 一种编解码方法、装置及存储介质
CN201910545251.0 2019-06-21

Publications (1)

Publication Number Publication Date
WO2020253829A1 true WO2020253829A1 (zh) 2020-12-24

Family

ID=69102758

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/CN2020/097144 WO2020253829A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质
PCT/CN2020/097130 WO2020253828A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质
PCT/CN2020/097088 WO2020253823A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质
PCT/CN2020/097148 WO2020253831A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/CN2020/097130 WO2020253828A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质
PCT/CN2020/097088 WO2020253823A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质
PCT/CN2020/097148 WO2020253831A1 (zh) 2019-06-21 2020-06-19 一种编解码方法、装置及存储介质

Country Status (6)

Country Link
US (1) US20220360800A1 (zh)
EP (1) EP3979647A4 (zh)
JP (2) JP7325553B2 (zh)
KR (1) KR20220016232A (zh)
CN (12) CN113382251B (zh)
WO (4) WO2020253829A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11991350B2 (en) * 2019-05-27 2024-05-21 Sk Telecom Co., Ltd. Method and device for deriving intra-prediction mode
KR20200145749A (ko) * 2019-06-19 2020-12-30 한국전자통신연구원 화면 내 예측 모드 및 엔트로피 부호화/복호화 방법 및 장치
CN113382251B (zh) * 2019-06-21 2022-04-08 杭州海康威视数字技术股份有限公司 一种编解码方法、装置、设备及存储介质
CN114827609B (zh) * 2019-06-25 2023-09-12 北京大学 视频图像编码和解码方法、设备及介质
JP2022539768A (ja) * 2019-07-07 2022-09-13 オッポ広東移動通信有限公司 画像予測方法、エンコーダ、デコーダ及び記憶媒体
CN113497936A (zh) * 2020-04-08 2021-10-12 Oppo广东移动通信有限公司 编码方法、解码方法、编码器、解码器以及存储介质
KR20230004797A (ko) * 2020-05-01 2023-01-06 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 파티션 신택스를 위한 엔트로피 코딩
WO2022141278A1 (zh) * 2020-12-30 2022-07-07 深圳市大疆创新科技有限公司 视频处理方法和编码装置
WO2022266971A1 (zh) * 2021-06-24 2022-12-29 Oppo广东移动通信有限公司 编解码方法、编码器、解码器以及计算机存储介质
US20230008488A1 (en) * 2021-07-07 2023-01-12 Tencent America LLC Entropy coding for intra prediction modes
WO2023194193A1 (en) * 2022-04-08 2023-10-12 Interdigital Ce Patent Holdings, Sas Sign and direction prediction in transform skip and bdpcm
WO2023224289A1 (ko) * 2022-05-16 2023-11-23 현대자동차주식회사 가상의 참조라인을 사용하는 비디오 코딩을 위한 방법 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102857763A (zh) * 2011-06-30 2013-01-02 华为技术有限公司 一种基于帧内预测的解码方法和解码装置
CN102986213A (zh) * 2010-04-16 2013-03-20 Sk电信有限公司 视频编码/解码设备和方法
CN103621099A (zh) * 2011-04-01 2014-03-05 Lg电子株式会社 熵解码方法和使用其的解码装置
CN109314783A (zh) * 2016-06-01 2019-02-05 三星电子株式会社 用于根据编码顺序对视频进行编码和解码的方法和设备
CN110677663A (zh) * 2019-06-21 2020-01-10 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及存储介质

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4549304A (en) * 1983-11-28 1985-10-22 Northern Telecom Limited ADPCM Encoder/decoder with signalling bit insertion
JP3542572B2 (ja) * 2001-06-14 2004-07-14 キヤノン株式会社 画像復号方法及び装置
EP1363458A3 (en) * 2002-05-14 2004-12-15 Broadcom Corporation Video bitstream preprocessing method
EP1365592A3 (en) * 2002-05-20 2005-02-09 Broadcom Corporation System, method, and apparatus for decoding flexibly ordered macroblocks
CN101106721A (zh) * 2006-07-10 2008-01-16 华为技术有限公司 一种编解码装置及相关编码器
US8344917B2 (en) * 2010-09-30 2013-01-01 Sharp Laboratories Of America, Inc. Methods and systems for context initialization in video coding and decoding
CN103069805B (zh) * 2011-06-27 2017-05-31 太阳专利托管公司 图像编码方法、图像解码方法、图像编码装置、图像解码装置及图像编码解码装置
WO2013047805A1 (ja) * 2011-09-29 2013-04-04 シャープ株式会社 画像復号装置、画像復号方法および画像符号化装置
US9088796B2 (en) * 2011-11-07 2015-07-21 Sharp Kabushiki Kaisha Video decoder with enhanced CABAC decoding
KR20130058524A (ko) * 2011-11-25 2013-06-04 오수미 색차 인트라 예측 블록 생성 방법
US9843809B2 (en) * 2012-07-02 2017-12-12 Electronics And Telecommunications Research Method and apparatus for coding/decoding image
US9313500B2 (en) * 2012-09-30 2016-04-12 Microsoft Technology Licensing, Llc Conditional signalling of reference picture list modification information
CN103024384B (zh) * 2012-12-14 2015-10-21 深圳百科信息技术有限公司 一种视频编码、解码方法及装置
CN103024389B (zh) * 2012-12-24 2015-08-12 芯原微电子(北京)有限公司 一种用于hevc的解码装置和方法
KR101726572B1 (ko) * 2013-05-22 2017-04-13 세종대학교산학협력단 무손실 이미지 압축 및 복원 방법과 이를 수행하는 장치
FR3012004A1 (fr) * 2013-10-15 2015-04-17 Orange Procede de codage et de decodage d'images, dispositif de codage et de decodage d'images et programmes d'ordinateur correspondants
BR112016015080A2 (pt) * 2014-01-03 2017-08-08 Microsoft Technology Licensing Llc Predição de vetor de bloco em codificação / decodificação de vídeo e imagem
US9948933B2 (en) * 2014-03-14 2018-04-17 Qualcomm Incorporated Block adaptive color-space conversion coding
WO2015188297A1 (zh) * 2014-06-08 2015-12-17 北京大学深圳研究生院 加权跳过模式的视频图像块压缩算术编解码方法及装置
CN106797471B (zh) * 2014-09-03 2020-03-10 联发科技股份有限公司 一种对图像内区块使用调色板预测模式的颜色索引图解码方法
RU2562414C1 (ru) * 2014-09-24 2015-09-10 Закрытое акционерное общество "Элекард наноДевайсез" Способ быстрого выбора режима пространственного предсказания в системе кодирования hevc
US10212445B2 (en) * 2014-10-09 2019-02-19 Qualcomm Incorporated Intra block copy prediction restrictions for parallel processing
CN107113444A (zh) * 2014-11-04 2017-08-29 三星电子株式会社 使用帧内预测对视频进行编码/解码的方法和装置
US10148977B2 (en) * 2015-06-16 2018-12-04 Futurewei Technologies, Inc. Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions
WO2017087751A1 (en) * 2015-11-20 2017-05-26 Mediatek Inc. Method and apparatus for global motion compensation in video coding system
US10659812B2 (en) * 2015-11-24 2020-05-19 Samsung Electronics Co., Ltd. Method and device for video decoding and method and device for video encoding
US10390021B2 (en) * 2016-03-18 2019-08-20 Mediatek Inc. Method and apparatus of video coding
WO2017173593A1 (en) * 2016-04-06 2017-10-12 Mediatek Singapore Pte. Ltd. Separate coding secondary transform syntax elements for different color components
CN109076241B (zh) * 2016-05-04 2023-06-23 微软技术许可有限责任公司 利用样本值的非相邻参考线进行帧内图片预测
WO2017203882A1 (en) * 2016-05-24 2017-11-30 Sharp Kabushiki Kaisha Systems and methods for intra prediction coding
ES2724568B2 (es) * 2016-06-24 2021-05-19 Kt Corp Método y aparato para tratar una señal de vídeo
EP3972256B1 (en) * 2016-06-24 2024-01-03 KT Corporation Adaptive reference sample filtering for intra prediction using distant pixel lines
US11368681B2 (en) * 2016-07-18 2022-06-21 Electronics And Telecommunications Research Institute Image encoding/decoding method and device, and recording medium in which bitstream is stored
CN109565591B (zh) * 2016-08-03 2023-07-18 株式会社Kt 用于对视频进行编码和解码的方法和装置
WO2018045332A1 (en) * 2016-09-02 2018-03-08 Vid Scale, Inc. Methods and apparatus for coded block flag coding in quad-tree plus binary-tree block partitioning
WO2018062950A1 (ko) * 2016-09-30 2018-04-05 엘지전자(주) 영상 처리 방법 및 이를 위한 장치
CN109891892A (zh) * 2016-10-11 2019-06-14 Lg 电子株式会社 依赖于图像编译系统中的帧内预测的图像解码方法和装置
JPWO2018070267A1 (ja) * 2016-10-14 2019-08-15 ソニー株式会社 画像処理装置および画像処理方法
US10742975B2 (en) * 2017-05-09 2020-08-11 Futurewei Technologies, Inc. Intra-prediction with multiple reference lines
RU2020109859A (ru) * 2017-09-15 2021-09-07 Сони Корпорейшн Устройство и способ обработки изображения
CN108093264B (zh) * 2017-12-29 2019-03-08 东北石油大学 基于分块压缩感知的岩心图像压缩、解压方法和系统
CN109743576B (zh) * 2018-12-28 2020-05-12 杭州海康威视数字技术股份有限公司 编码方法、解码方法及装置
CN109788285B (zh) * 2019-02-27 2020-07-28 北京大学深圳研究生院 一种量化系数结束标志位的上下文模型选取方法及装置
US11451826B2 (en) * 2019-04-15 2022-09-20 Tencent America LLC Lossless coding mode and switchable residual coding
WO2020216375A1 (en) * 2019-04-26 2020-10-29 Huawei Technologies Co., Ltd. Method and apparatus for signaling of mapping function of chroma quantization parameter
JP2022537275A (ja) * 2019-06-20 2022-08-25 インターデジタル ブイシー ホールディングス フランス,エスアーエス 多用途ビデオコーディングのためのロスレスモード

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102986213A (zh) * 2010-04-16 2013-03-20 Sk电信有限公司 视频编码/解码设备和方法
CN103621099A (zh) * 2011-04-01 2014-03-05 Lg电子株式会社 熵解码方法和使用其的解码装置
CN102857763A (zh) * 2011-06-30 2013-01-02 华为技术有限公司 一种基于帧内预测的解码方法和解码装置
CN109314783A (zh) * 2016-06-01 2019-02-05 三星电子株式会社 用于根据编码顺序对视频进行编码和解码的方法和设备
CN110677663A (zh) * 2019-06-21 2020-01-10 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及存储介质
CN110677655A (zh) * 2019-06-21 2020-01-10 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及存储介质
CN110784712A (zh) * 2019-06-21 2020-02-11 杭州海康威视数字技术股份有限公司 一种编解码方法、装置及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SANTIAGO DE LUXÁN HERNÁNDEZ et al., "CE3: Line-based intra coding mode (Tests 2.1.1 and 2.1.2)", 12th JVET Meeting (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16), Macao, 3-12 October 2018, 30 September 2018, pages 1-9, XP030194061 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11265581B2 (en) * 2019-08-23 2022-03-01 Tencent America LLC Method and apparatus for video coding
US20220150547A1 (en) * 2019-08-23 2022-05-12 Tencent America LLC Method and apparatus for video coding
US11632573B2 (en) * 2019-08-23 2023-04-18 Tencent America LLC Method and apparatus for video coding

Also Published As

Publication number Publication date
EP3979647A4 (en) 2023-03-22
CN113347426A (zh) 2021-09-03
CN113382254A (zh) 2021-09-10
CN110677655A (zh) 2020-01-10
WO2020253831A1 (zh) 2020-12-24
JP2023096190A (ja) 2023-07-06
CN110677663A (zh) 2020-01-10
CN113382255A (zh) 2021-09-10
EP3979647A1 (en) 2022-04-06
US20220360800A1 (en) 2022-11-10
CN113382254B (zh) 2022-05-17
CN110677663B (zh) 2021-05-14
JP7325553B2 (ja) 2023-08-14
CN113382255B (zh) 2022-05-20
WO2020253823A1 (zh) 2020-12-24
CN113382251A (zh) 2021-09-10
CN112118448B (zh) 2022-09-16
CN113382256B (zh) 2022-05-20
CN110784712B (zh) 2021-05-11
JP2022537220A (ja) 2022-08-24
KR20220016232A (ko) 2022-02-08
CN113347427A (zh) 2021-09-03
CN113382253B (zh) 2022-05-20
CN110784712A (zh) 2020-02-11
CN113382256A (zh) 2021-09-10
CN113382252A (zh) 2021-09-10
WO2020253828A1 (zh) 2020-12-24
CN113382251B (zh) 2022-04-08
CN113382252B (zh) 2022-04-05
CN113382253A (zh) 2021-09-10
CN110677655B (zh) 2022-08-16
CN112118448A (zh) 2020-12-22

Similar Documents

Publication Publication Date Title
WO2020253829A1 (zh) 一种编解码方法、装置及存储介质
CN110024392B (zh) 用于视频译码的低复杂度符号预测
CN108605127B (zh) 滤波视频数据的经解码块的方法和装置及存储介质
US9167269B2 (en) Determining boundary strength values for deblocking filtering for video coding
CN103563380B (zh) 减少用于视频处理的行缓冲的方法及装置
TW201742458A (zh) 二值化二次轉換指數
TW201352004A (zh) 轉換係數寫碼
US20230239464A1 (en) Video processing method with partial picture replacement
JP7286783B2 (ja) 符号化方法、復号化方法、デコーダ、エンコーダー及び記憶媒体
TWI832661B (zh) 圖像編解碼的方法、裝置及存儲介質
WO2022191947A1 (en) State based dependent quantization and residual coding in video coding
WO2021211576A1 (en) Methods and systems for combined lossless and lossy coding
CN117203960A (zh) 视频编码中的旁路对齐

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20825445

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021576392

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217043437

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020825445

Country of ref document: EP

Effective date: 20211230