TW202232954A - Most probable modes for intra prediction for video coding - Google Patents

Most probable modes for intra prediction for video coding

Info

Publication number
TW202232954A
Authority
TW
Taiwan
Prior art keywords
list
intra
probable
probable mode
frame prediction
Prior art date
Application number
TW110147632A
Other languages
Chinese (zh)
Inventor
張耀仁 (Yao-Jen Chang)
瑪塔 卡克基維克茲 (Marta Karczewicz)
Original Assignee
Qualcomm Incorporated (US)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/456,080 (US11863752B2)
Application filed by Qualcomm Incorporated
Publication of TW202232954A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video coder may code a block of video data using an intra prediction mode determined from a most probable mode list. The video coder may construct a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; and construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list. The video coder may then determine a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list.

Description

Most Probable Modes for Intra Prediction for Video Coding

This patent application claims the benefit of U.S. Provisional Patent Application No. 63/131,115, filed December 28, 2020, the entire content of which is incorporated herein by reference.

This disclosure relates to video encoding and video decoding.

Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. By implementing such video coding techniques, video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently.

Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video picture or a portion of a video picture) may be partitioned into video blocks, which may also be referred to as coding tree units (CTUs), coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture, or temporal prediction with respect to reference samples in other reference pictures. Pictures may be referred to as frames, and reference pictures may be referred to as reference frames.

In general, this disclosure describes techniques for determining a most probable mode (MPM) list for intra prediction and determining an intra prediction mode from the most probable mode list. The techniques of this disclosure may improve coding efficiency when coding video data using intra prediction. In particular, this disclosure describes techniques for constructing a general most probable mode list and then constructing primary and secondary most probable mode lists from the general most probable mode list. The primary and secondary most probable mode lists may include intra prediction modes from neighboring blocks, as well as intra prediction modes offset from the intra prediction modes of the neighboring blocks. The techniques of this disclosure may improve the efficiency with which the primary and secondary most probable mode lists are generated.

In one example, a method includes constructing a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; constructing a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; constructing a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determining a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decoding the current block of video data using the current intra prediction mode to produce a decoded block of video data.

In another example, a device includes a memory and one or more processors configured to: construct a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra prediction mode to produce a decoded block of video data.

In another example, a device includes means for constructing a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; means for constructing a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; means for constructing a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; means for determining a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and means for decoding the current block of video data using the current intra prediction mode to produce a decoded block of video data.

In another example, a non-transitory computer-readable storage medium is encoded with instructions that, when executed, cause a programmable processor to: construct a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra prediction mode to produce a decoded block of video data.

In another example, a device includes a memory and one or more processors configured to: construct a general most probable mode list containing N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and encode the current block of video data using the current intra prediction mode to produce an encoded block of video data.

The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.

Versatile Video Coding (VVC) was developed by the Joint Video Experts Team (JVET) of ITU-T and ISO/IEC to achieve substantial compression capability beyond HEVC for a broad range of applications. The VVC specification was finalized in July 2020 and published by ITU-T and ISO/IEC. The VVC specification specifies normative bitstream and picture formats, high-level syntax (HLS) and coding-unit-level syntax, and the parsing and decoding processes. VVC also specifies profile/tier/level (PTL) restrictions, the byte stream format, the hypothetical reference decoder, and supplemental enhancement information (SEI) in annexes.

In general, this disclosure describes techniques for determining a most probable mode list for intra prediction and determining an intra prediction mode from the most probable mode list. The techniques of this disclosure may improve the efficiency with which the primary and secondary most probable mode lists are generated. For example, a video encoder and/or a video decoder may be configured to: construct a general most probable mode list containing N entries, where the N entries of the general most probable mode list are intra prediction modes; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; and determine the intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list.
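
The following is a minimal sketch, not the VTM reference implementation or the exact list-construction rules of this disclosure, of how a general most probable mode list could be built from neighboring intra modes and their offsets and then split into primary and secondary lists. The parameters N and Np, the helper names, and the candidate ordering beyond "planar first" are illustrative assumptions; VVC-style mode numbering (planar = 0, DC = 1, angular modes 2..66) is assumed.

```cpp
#include <algorithm>
#include <vector>

constexpr int kPlanar = 0;
constexpr int kDc = 1;
constexpr int kNumAngular = 65;  // angular modes 2..66

// Wrap an angular-mode offset so the result stays in the angular range 2..66.
static int WrapAngular(int mode, int offset) {
  return 2 + ((mode - 2 + offset + kNumAngular) % kNumAngular);
}

// Append a mode if it is not already present and the list is not full.
static void PushUnique(std::vector<int>& list, int mode, size_t n) {
  if (list.size() < n &&
      std::find(list.begin(), list.end(), mode) == list.end()) {
    list.push_back(mode);
  }
}

// Build a general MPM list of n modes (planar always first) from the intra
// modes of the left and above neighbors plus +/-1 offsets of angular
// neighbor modes, then split it into a primary list (first np entries) and
// a secondary list (remaining n - np entries).
void BuildMpmLists(int leftMode, int aboveMode, size_t n, size_t np,
                   std::vector<int>& primary, std::vector<int>& secondary) {
  std::vector<int> general;
  general.push_back(kPlanar);            // planar is the ordinal first entry
  PushUnique(general, leftMode, n);
  PushUnique(general, aboveMode, n);
  PushUnique(general, kDc, n);
  for (int m : {leftMode, aboveMode}) {  // offsets of angular neighbor modes
    if (m > kDc) {
      PushUnique(general, WrapAngular(m, -1), n);
      PushUnique(general, WrapAngular(m, +1), n);
    }
  }
  for (int m = 2; general.size() < n; ++m) {  // pad with default angular modes
    PushUnique(general, m, n);
  }
  primary.assign(general.begin(), general.begin() + np);
  secondary.assign(general.begin() + np, general.end());
}
```

A coder following this pattern would then signal whether the current mode is in the primary list, the secondary list, or among the remaining (non-MPM) modes, with the cheaper codewords spent on the primary list.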

FIG. 1 is a block diagram illustrating an example video encoding and decoding system 100 that may perform the techniques of this disclosure. In general, the techniques of this disclosure are directed to coding (encoding and/or decoding) video data. In general, video data includes any data for processing a video. Thus, video data may include raw, unencoded video, encoded video, decoded (e.g., reconstructed) video, and video metadata (e.g., signaling data).

As shown in FIG. 1, in this example, system 100 includes a source device 102 that provides encoded video data to be decoded and displayed by a destination device 116. In particular, source device 102 provides the video data to destination device 116 via a computer-readable medium 110. Source device 102 and destination device 116 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, mobile devices, tablet computers, set-top boxes, telephone handsets such as smartphones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, broadcast receiver devices, and the like. In some cases, source device 102 and destination device 116 may be equipped for wireless communication, and may thus be referred to as wireless communication devices.

In the example of FIG. 1, source device 102 includes a video source 104, a memory 106, a video encoder 200, and an output interface 108. Destination device 116 includes an input interface 122, a video decoder 300, a memory 120, and a display device 118. In accordance with this disclosure, video encoder 200 of source device 102 and video decoder 300 of destination device 116 may be configured to apply the techniques for deriving a most probable mode list for intra prediction. Thus, source device 102 represents an example of a video encoding device, while destination device 116 represents an example of a video decoding device. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 102 may receive video data from an external video source, such as an external camera. Likewise, destination device 116 may interface with an external display device rather than include an integrated display device.

System 100 as shown in FIG. 1 is merely one example. In general, any digital video encoding and/or decoding device may perform the techniques for deriving a most probable mode list for intra prediction. Source device 102 and destination device 116 are merely examples of such coding devices, in which source device 102 generates coded video data for transmission to destination device 116. This disclosure refers to a "coding" device as a device that performs coding (encoding and/or decoding) of data. Thus, video encoder 200 and video decoder 300 represent examples of coding devices, in particular, a video encoder and a video decoder, respectively. In some examples, source device 102 and destination device 116 may operate in a substantially symmetrical manner such that each of source device 102 and destination device 116 includes video encoding and decoding components. Hence, system 100 may support one-way or two-way video transmission between source device 102 and destination device 116, e.g., for video streaming, video playback, video broadcasting, or video telephony.

In general, video source 104 represents a source of video data (i.e., raw, unencoded video data) and provides a sequential series of pictures (also referred to as "frames") of the video data to video encoder 200, which encodes data for the pictures. Video source 104 of source device 102 may include a video capture device, such as a video camera, a video archive containing previously captured raw video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 104 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In each case, video encoder 200 encodes the captured, pre-captured, or computer-generated video data. Video encoder 200 may rearrange the pictures from the received order (sometimes referred to as "display order") into a coding order for coding. Video encoder 200 may generate a bitstream including the encoded video data. Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116.

Memory 106 of source device 102 and memory 120 of destination device 116 represent general purpose memories. In some examples, memories 106, 120 may store raw video data, e.g., raw video from video source 104 and raw, decoded video data from video decoder 300. Additionally or alternatively, memories 106, 120 may store software instructions executable by, e.g., video encoder 200 and video decoder 300, respectively. Although memory 106 and memory 120 are shown separately from video encoder 200 and video decoder 300 in this example, it should be understood that video encoder 200 and video decoder 300 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memories 106, 120 may store encoded video data, e.g., output from video encoder 200 and input to video decoder 300. In some examples, portions of memories 106, 120 may be allocated as one or more video buffers, e.g., to store raw, decoded, and/or encoded video data.

Computer-readable medium 110 may represent any type of medium or device capable of transporting the encoded video data from source device 102 to destination device 116. In one example, computer-readable medium 110 represents a communication medium that enables source device 102 to transmit encoded video data directly to destination device 116 in real time, e.g., via a radio frequency network or a computer-based network. Output interface 108 may modulate a transmission signal including the encoded video data, and input interface 122 may demodulate the received transmission signal, according to a communication standard such as a wireless communication protocol. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 102 to destination device 116.

In some examples, source device 102 may output encoded data from output interface 108 to storage device 112. Similarly, destination device 116 may access encoded data from storage device 112 via input interface 122. Storage device 112 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.

In some examples, source device 102 may output encoded video data to file server 114 or another intermediate storage device that may store the encoded video generated by source device 102. Destination device 116 may access stored video data from file server 114 via streaming or download.

File server 114 may be any type of server device capable of storing encoded video data and transmitting that encoded video data to destination device 116. File server 114 may represent a web server (e.g., for a website), a server configured to provide a file transfer protocol service (such as File Transfer Protocol (FTP) or File Delivery over Unidirectional Transport (FLUTE) protocol), a content delivery network (CDN) device, a hypertext transfer protocol (HTTP) server, a Multimedia Broadcast Multicast Service (MBMS) or Enhanced MBMS (eMBMS) server, and/or a network attached storage (NAS) device. File server 114 may, additionally or alternatively, implement one or more HTTP streaming protocols, such as Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), Real Time Streaming Protocol (RTSP), HTTP Dynamic Streaming, or the like.

Destination device 116 may access encoded video data from file server 114 through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., digital subscriber line (DSL), cable modem, etc.), or a combination of both, that is suitable for accessing encoded video data stored on file server 114. Input interface 122 may be configured to operate according to any one or more of the various protocols discussed above for retrieving or receiving media data from file server 114, or other such protocols for retrieving media data.

Output interface 108 and input interface 122 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards), wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where output interface 108 and input interface 122 comprise wireless components, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution), LTE Advanced, 5G, or the like. In some examples where output interface 108 comprises a wireless transmitter, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (e.g., ZigBee™), a Bluetooth™ standard, or the like. In some examples, source device 102 and/or destination device 116 may include respective system-on-a-chip (SoC) devices. For example, source device 102 may include an SoC device to perform the functionality attributed to video encoder 200 and/or output interface 108, and destination device 116 may include an SoC device to perform the functionality attributed to video decoder 300 and/or input interface 122.

The techniques of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions (such as dynamic adaptive streaming over HTTP (DASH)), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications.

Input interface 122 of destination device 116 receives an encoded video bitstream from computer-readable medium 110 (e.g., the communication medium, storage device 112, file server 114, or the like). The encoded video bitstream may include signaling information defined by video encoder 200, which is also used by video decoder 300, such as syntax elements having values that describe characteristics and/or processing of video blocks or other coded units (e.g., slices, pictures, groups of pictures, sequences, or the like). Display device 118 displays decoded pictures of the decoded video data to a user. Display device 118 may represent any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

Although not shown in FIG. 1, in some examples, video encoder 200 and video decoder 300 may each be integrated with an audio encoder and/or audio decoder, and may include appropriate MUX-DEMUX units, or other hardware and/or software, to handle multiplexed streams including both audio and video in a common data stream. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).

Video encoder 200 and video decoder 300 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 200 and video decoder 300 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including video encoder 200 and/or video decoder 300 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.

Video encoder 200 and video decoder 300 may operate according to a video coding standard, such as ITU-T H.265 (also referred to as the High Efficiency Video Coding (HEVC) standard) or extensions thereto, such as the multi-view and/or scalable video coding extensions. Alternatively, video encoder 200 and video decoder 300 may operate according to other proprietary or industry standards, such as ITU-T H.266, also referred to as Versatile Video Coding (VVC). A draft of the VVC standard is described in Bross, et al., "Versatile Video Coding (Draft 10)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 18th Meeting: by teleconference, 22 June - 1 July 2020, JVET-S2001-vA (hereinafter "VVC Draft 10"). The techniques of this disclosure, however, are not limited to any particular coding standard.

In general, video encoder 200 and video decoder 300 may perform block-based coding of pictures. The term "block" generally refers to a structure including data to be processed (e.g., encoded, decoded, or otherwise used in the encoding and/or decoding process). For example, a block may include a two-dimensional matrix of samples of luminance and/or chrominance data. In general, video encoder 200 and video decoder 300 may code video data represented in a YUV (e.g., Y, Cb, Cr) format. That is, rather than coding red, green, and blue (RGB) data for samples of a picture, video encoder 200 and video decoder 300 may code luminance and chrominance components, where the chrominance components may include both red hue and blue hue chrominance components. In some examples, video encoder 200 converts received RGB formatted data to a YUV representation prior to encoding, and video decoder 300 converts the YUV representation to the RGB format. Alternatively, pre- and post-processing units (not shown) may perform these conversions.
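
As a rough illustration of the pre-processing conversion mentioned above, and not a step mandated by this disclosure, one common full-range BT.601-style RGB-to-YCbCr mapping could look as follows; the exact matrix and value range depend on the color space the encoder is configured for.

```cpp
#include <algorithm>
#include <cstdint>

struct Yuv { uint8_t y, cb, cr; };

// One possible RGB-to-YCbCr conversion (full-range BT.601-style coefficients),
// shown only to illustrate the pre-processing step; Cb and Cr are centered at 128.
Yuv RgbToYuv(uint8_t r, uint8_t g, uint8_t b) {
  const double y  =          0.299 * r + 0.587 * g + 0.114 * b;
  const double cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b;
  const double cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b;
  auto clip = [](double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v + 0.5)));
  };
  return Yuv{clip(y), clip(cb), clip(cr)};
}
```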

This disclosure may generally refer to coding (e.g., encoding and decoding) of pictures to include the process of encoding or decoding data of the picture. Similarly, this disclosure may refer to coding of blocks of a picture to include the process of encoding or decoding data for the blocks, e.g., prediction and/or residual coding. An encoded video bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes) and partitioning of pictures into blocks. Thus, references to coding a picture or a block should generally be understood as coding values for the syntax elements forming the picture or block.

HEVC defines various blocks, including coding units (CUs), prediction units (PUs), and transform units (TUs). According to HEVC, a video coder (such as video encoder 200) partitions a coding tree unit (CTU) into CUs according to a quadtree structure. That is, the video coder partitions CTUs and CUs into four equal, non-overlapping squares, and each node of the quadtree has either zero or four child nodes. Nodes without child nodes may be referred to as "leaf nodes," and CUs of such leaf nodes may include one or more PUs and/or one or more TUs. The video coder may further partition PUs and TUs. For example, in HEVC, a residual quadtree (RQT) represents the partitioning of TUs. In HEVC, PUs represent inter-prediction data, while TUs represent residual data. CUs that are intra-predicted include intra-prediction information, such as an intra-mode indication.

As another example, video encoder 200 and video decoder 300 may be configured to operate according to VVC. According to VVC, a video coder (such as video encoder 200) partitions a picture into a plurality of coding tree units (CTUs). Video encoder 200 may partition a CTU according to a tree structure, such as a quadtree-binary tree (QTBT) structure or a multi-type tree (MTT) structure. The QTBT structure removes the concepts of multiple partition types, such as the separation between CUs, PUs, and TUs of HEVC. A QTBT structure includes two levels: a first level partitioned according to quadtree partitioning, and a second level partitioned according to binary tree partitioning. A root node of the QTBT structure corresponds to a CTU. Leaf nodes of the binary trees correspond to coding units (CUs).

In an MTT partitioning structure, blocks may be partitioned using a quadtree (QT) partition, a binary tree (BT) partition, and one or more types of triple tree (TT) (also called ternary tree (TT)) partitions. A triple or ternary tree partition is a partition where a block is split into three sub-blocks. In some examples, a triple or ternary tree partition divides a block into three sub-blocks without dividing the original block through the center. The partitioning types in MTT (e.g., QT, BT, and TT) may be symmetrical or asymmetrical.
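
A minimal sketch of the sub-block geometry these split types produce is shown below. It assumes a VVC-style 1:2:1 ratio for the ternary splits and block sizes divisible as required; the type names are illustrative, not syntax from the standard.

```cpp
#include <vector>

struct Rect { int x, y, w, h; };

enum class Split { kQuad, kBinHorz, kBinVert, kTriHorz, kTriVert };

// Quadtree: four equal squares. Binary: two halves (horizontal or vertical).
// Ternary: three sub-blocks in a 1:2:1 ratio that do not cut through the center.
std::vector<Rect> SplitBlock(const Rect& b, Split s) {
  switch (s) {
    case Split::kQuad:
      return {{b.x, b.y, b.w / 2, b.h / 2},
              {b.x + b.w / 2, b.y, b.w / 2, b.h / 2},
              {b.x, b.y + b.h / 2, b.w / 2, b.h / 2},
              {b.x + b.w / 2, b.y + b.h / 2, b.w / 2, b.h / 2}};
    case Split::kBinHorz:
      return {{b.x, b.y, b.w, b.h / 2}, {b.x, b.y + b.h / 2, b.w, b.h / 2}};
    case Split::kBinVert:
      return {{b.x, b.y, b.w / 2, b.h}, {b.x + b.w / 2, b.y, b.w / 2, b.h}};
    case Split::kTriHorz:
      return {{b.x, b.y, b.w, b.h / 4},
              {b.x, b.y + b.h / 4, b.w, b.h / 2},
              {b.x, b.y + 3 * b.h / 4, b.w, b.h / 4}};
    case Split::kTriVert:
      return {{b.x, b.y, b.w / 4, b.h},
              {b.x + b.w / 4, b.y, b.w / 2, b.h},
              {b.x + 3 * b.w / 4, b.y, b.w / 4, b.h}};
  }
  return {};
}
```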

In some examples, video encoder 200 and video decoder 300 may use a single QTBT or MTT structure to represent each of the luminance and chrominance components, while in other examples, video encoder 200 and video decoder 300 may use two or more QTBT or MTT structures, such as one QTBT/MTT structure for the luminance component and another QTBT/MTT structure for both chrominance components (or two QTBT/MTT structures for respective chrominance components).

Video encoder 200 and video decoder 300 may be configured to use quadtree partitioning per HEVC, QTBT partitioning, MTT partitioning, or other partitioning structures. For purposes of explanation, the description of the techniques of this disclosure is presented with respect to QTBT partitioning. However, it should be understood that the techniques of this disclosure may also be applied to video coders configured to use quadtree partitioning, or other types of partitioning as well.

In some examples, a CTU includes a coding tree block (CTB) of luminance samples, two corresponding CTBs of chrominance samples of a picture that has three sample arrays, or a CTB of samples of a monochrome picture or of a picture that is coded using three separate color planes, and the syntax structures used to code the samples. A CTB may be an NxN block of samples for some value of N, such that the division of a component into CTBs is a partitioning. A component may be an array or a single sample from one of the three arrays (luminance and two chrominance) that compose a picture in 4:2:0, 4:2:2, or 4:4:4 color format, or the array or a single sample of the array that composes a picture in monochrome format. In some examples, a coding block is an MxN block of samples for some values of M and N, such that a division of a CTB into coding blocks is a partitioning.

The blocks (e.g., CTUs or CUs) may be grouped in various ways in a picture. As one example, a brick may refer to a rectangular region of CTU rows within a particular tile in a picture. A tile may be a rectangular region of CTUs within a particular tile column and a particular tile row in a picture. A tile column refers to a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements (e.g., such as in a picture parameter set). A tile row refers to a rectangular region of CTUs having a height specified by syntax elements (e.g., such as in a picture parameter set) and a width equal to the width of the picture.

In some examples, a tile may be partitioned into multiple bricks, each of which may include one or more CTU rows within the tile. A tile that is not partitioned into multiple bricks may also be referred to as a brick. However, a brick that is a true subset of a tile may not be referred to as a tile.

The bricks in a picture may also be arranged in a slice. A slice may be an integer number of bricks of a picture that may be exclusively contained in a single network abstraction layer (NAL) unit. In some examples, a slice includes either a number of complete tiles or only a consecutive sequence of complete bricks of one tile.

This disclosure may use "NxN" and "N by N" interchangeably to refer to the sample dimensions of a block (such as a CU or other video block) in terms of vertical and horizontal dimensions, e.g., 16x16 samples or 16 by 16 samples. In general, a 16x16 CU will have 16 samples in a vertical direction (y = 16) and 16 samples in a horizontal direction (x = 16). Likewise, an NxN CU generally has N samples in a vertical direction and N samples in a horizontal direction, where N represents a non-negative integer value. The samples in a CU may be arranged in rows and columns. Moreover, CUs need not necessarily have the same number of samples in the horizontal direction as in the vertical direction. For example, a CU may comprise NxM samples, where M is not necessarily equal to N.

Video encoder 200 encodes video data for CUs representing prediction and/or residual information, and other information. The prediction information indicates how the CU is to be predicted in order to form a prediction block for the CU. The residual information generally represents sample-by-sample differences between samples of the CU prior to encoding and the prediction block.

To predict a CU, video encoder 200 may generally form a prediction block for the CU through inter-prediction or intra-prediction. Inter-prediction generally refers to predicting the CU from data of a previously coded picture, whereas intra-prediction generally refers to predicting the CU from previously coded data of the same picture. To perform inter-prediction, video encoder 200 may generate the prediction block using one or more motion vectors. Video encoder 200 may generally perform a motion search to identify a reference block that closely matches the CU, e.g., in terms of differences between the CU and the reference block. Video encoder 200 may calculate a difference metric using a sum of absolute differences (SAD), a sum of squared differences (SSD), a mean absolute difference (MAD), a mean squared difference (MSD), or other such difference calculations to determine whether a reference block closely matches the current CU. In some examples, video encoder 200 may predict the current CU using uni-directional prediction or bi-directional prediction.
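
As a minimal sketch of one of the difference metrics named above, a SAD between a current block and a candidate reference block can be computed as follows; the row-major sample layout, strides, and 8-bit samples are assumptions for illustration only.

```cpp
#include <cstdint>
#include <cstdlib>

// Sum of absolute differences (SAD) between a current block and a candidate
// reference block, both given as row-major 8-bit sample planes with strides.
// During motion search, a smaller SAD indicates a closer match.
int64_t ComputeSad(const uint8_t* cur, int curStride,
                   const uint8_t* ref, int refStride,
                   int width, int height) {
  int64_t sad = 0;
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      sad += std::abs(static_cast<int>(cur[x]) - static_cast<int>(ref[x]));
    }
    cur += curStride;
    ref += refStride;
  }
  return sad;
}
```

SSD, MAD, and MSD follow the same loop with the absolute difference replaced by a squared difference and/or a normalization by the block size.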

Some examples of VVC also provide an affine motion compensation mode, which may be considered an inter-prediction mode. In affine motion compensation mode, video encoder 200 may determine two or more motion vectors that represent non-translational motion, such as zoom in or out, rotation, perspective motion, or other irregular motion types.

To perform intra-prediction, video encoder 200 may select an intra-prediction mode to generate the prediction block. Some examples of VVC provide sixty-seven intra-prediction modes, including various directional modes, as well as the planar mode and the DC mode. In general, video encoder 200 selects an intra-prediction mode that describes neighboring samples to a current block (e.g., a block of a CU) from which to predict samples of the current block. Such samples may generally be above, above-left, or to the left of the current block in the same picture as the current block, assuming video encoder 200 codes CTUs and CUs in raster scan order (left to right, top to bottom).
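
The simplest of these modes, DC prediction, can be sketched as below; this is a simplified illustration (no reference-sample filtering or unavailable-neighbor handling) of how the above and left reconstructed neighbor samples feed the prediction, with an assumed square NxN block and 8-bit samples. Angular and planar modes combine the same neighbors in direction- or plane-dependent ways.

```cpp
#include <cstdint>
#include <vector>

// DC intra prediction for an N x N block: every predicted sample is the
// rounded average of the N reconstructed samples above and the N samples
// to the left of the block.
std::vector<uint8_t> PredictDc(const std::vector<uint8_t>& above,  // N samples
                               const std::vector<uint8_t>& left,   // N samples
                               int n) {
  int sum = 0;
  for (int i = 0; i < n; ++i) sum += above[i] + left[i];
  const uint8_t dc = static_cast<uint8_t>((sum + n) / (2 * n));  // rounded mean
  return std::vector<uint8_t>(static_cast<size_t>(n) * n, dc);
}
```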

Video encoder 200 encodes data representing the prediction mode for a current block. For example, for inter-prediction modes, video encoder 200 may encode data representing which of the various available inter-prediction modes is used, as well as motion information for the corresponding mode. For uni-directional or bi-directional inter-prediction, for example, video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. Video encoder 200 may use similar modes to encode motion vectors for the affine motion compensation mode.

Following prediction, such as intra-prediction or inter-prediction of a block, video encoder 200 may calculate residual data for the block. The residual data, such as a residual block, represents sample-by-sample differences between the block and a prediction block for the block, formed using the corresponding prediction mode. Video encoder 200 may apply one or more transforms to the residual block, to produce transformed data in a transform domain instead of the sample domain. For example, video encoder 200 may apply a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video data. Additionally, video encoder 200 may apply a secondary transform following the first transform, such as a mode-dependent non-separable secondary transform (MDNSST), a signal-dependent transform, a Karhunen-Loève transform (KLT), or the like. Video encoder 200 produces transform coefficients following application of the one or more transforms.
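
The following is a naive floating-point 1-D DCT-II, included only as a conceptual stand-in for the separable block transforms named above; an actual codec applies scaled integer approximations of such transforms to the rows and then the columns of the residual block.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Naive 1-D DCT-II of one residual row: coeff[k] concentrates the low-frequency
// energy of the residual into the first few coefficients.
std::vector<double> Dct1d(const std::vector<double>& residual) {
  const std::size_t n = residual.size();
  const double pi = std::acos(-1.0);
  std::vector<double> coeff(n, 0.0);
  for (std::size_t k = 0; k < n; ++k) {
    for (std::size_t i = 0; i < n; ++i) {
      coeff[k] += residual[i] * std::cos(pi * (2.0 * i + 1.0) * k / (2.0 * n));
    }
  }
  return coeff;
}
```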

As noted above, following any transforms to produce transform coefficients, video encoder 200 may perform quantization of the transform coefficients. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients, providing further compression. By performing the quantization process, video encoder 200 may reduce the bit depth associated with some or all of the transform coefficients. For example, video encoder 200 may round an *n*-bit value down to an *m*-bit value during quantization, where *n* is greater than *m*. In some examples, to perform quantization, video encoder 200 may perform a bitwise right-shift of the value to be quantized.
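
A minimal sketch of quantization by a right shift, as described above, is given below; real codecs derive the step size from a quantization parameter (QP) and scaling lists and use more elaborate rounding, so this is an illustration of the bit-depth reduction only (a shift of n − m bits maps an n-bit magnitude to an m-bit one).

```cpp
#include <cstdint>

// Quantize a transform coefficient by a bitwise right shift with
// round-to-nearest. Assumes shift > 0; the sign is handled separately so the
// shift operates on a non-negative magnitude.
int32_t QuantizeByShift(int32_t coeff, int shift) {
  const int32_t offset = 1 << (shift - 1);     // rounding offset
  const int32_t sign = coeff < 0 ? -1 : 1;
  return sign * ((sign * coeff + offset) >> shift);
}
```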

Following quantization, video encoder 200 may scan the transform coefficients, producing a one-dimensional vector from the two-dimensional matrix including the quantized transform coefficients. The scan may be designed to place higher energy (and therefore lower frequency) transform coefficients at the front of the vector and to place lower energy (and therefore higher frequency) transform coefficients at the back of the vector. In some examples, video encoder 200 may utilize a predefined scan order to scan the quantized transform coefficients to produce a serialized vector, and then entropy encode the quantized transform coefficients of the vector. In other examples, video encoder 200 may perform an adaptive scan. After scanning the quantized transform coefficients to form the one-dimensional vector, video encoder 200 may entropy encode the one-dimensional vector, e.g., according to context-adaptive binary arithmetic coding (CABAC). Video encoder 200 may also entropy encode values for syntax elements describing metadata associated with the encoded video data for use by video decoder 300 in decoding the video data.
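
One simple predefined scan of the kind described above is a diagonal scan; the sketch below is a simplified stand-in for the coefficient-group scans actually used (the square block size and exact diagonal ordering are assumptions for illustration), but it shows how low-frequency coefficients end up near the front of the serialized vector.

```cpp
#include <cstdint>
#include <vector>

// Diagonal scan of an n x n matrix of quantized coefficients (row-major)
// into a one-dimensional vector, walking the anti-diagonals from the
// top-left (lowest frequency) toward the bottom-right (highest frequency).
std::vector<int32_t> DiagonalScan(const std::vector<int32_t>& block, int n) {
  std::vector<int32_t> out;
  out.reserve(block.size());
  for (int d = 0; d <= 2 * (n - 1); ++d) {     // anti-diagonal index
    for (int y = 0; y < n; ++y) {
      const int x = d - y;
      if (x >= 0 && x < n) out.push_back(block[y * n + x]);
    }
  }
  return out;
}
```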

To perform CABAC, video encoder 200 may assign a context within a context model to a symbol to be transmitted. The context may relate to, for example, whether neighboring values of the symbol are zero-valued. The probability determination may be based on the context assigned to the symbol.

Video encoder 200 may further generate syntax data for video decoder 300, such as block-based syntax data, picture-based syntax data, and sequence-based syntax data, for example, in a picture header, a block header, or a slice header, or other syntax data, such as a sequence parameter set (SPS), a picture parameter set (PPS), or a video parameter set (VPS). Video decoder 300 may likewise decode such syntax data to determine how to decode the corresponding video data.

In this manner, video encoder 200 may generate a bitstream including encoded video data, for example, syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Ultimately, video decoder 300 may receive the bitstream and decode the encoded video data.

In general, video decoder 300 performs a reciprocal process to that performed by video encoder 200 to decode the encoded video data of the bitstream. For example, video decoder 300 may use CABAC to decode values of syntax elements of the bitstream in a manner substantially similar to, albeit reciprocal to, the CABAC encoding process of video encoder 200. The syntax elements may define partitioning information for partitioning a picture into CTUs, and partitioning of each CTU according to a corresponding partition structure, such as a QTBT structure, to define CUs of the CTU. The syntax elements may further define prediction and residual information for blocks (e.g., CUs) of video data.

The residual information may be represented by, for example, quantized transform coefficients. Video decoder 300 may inverse quantize and inverse transform the quantized transform coefficients of a block to reproduce a residual block for the block. Video decoder 300 uses a signaled prediction mode (intra-frame prediction or inter-frame prediction) and related prediction information (e.g., motion information for inter-frame prediction) to form a prediction block for the block. Video decoder 300 may then combine the prediction block and the residual block (on a sample-by-sample basis) to reproduce the original block. Video decoder 300 may perform additional processing, such as performing a deblocking process to reduce visual artifacts along block boundaries.

In general, this disclosure may refer to "signaling" certain information, such as syntax elements. The term "signaling" may generally refer to the communication of values for syntax elements and/or other data used to decode encoded video data. That is, video encoder 200 may signal values for syntax elements in the bitstream. In general, signaling refers to generating a value in the bitstream. As noted above, source device 102 may transport the bitstream to destination device 116 substantially in real time, or not in real time, such as might occur when storing syntax elements to storage device 112 for later retrieval by destination device 116.

In accordance with the techniques of this disclosure, as will be explained in more detail below, video encoder 200 and video decoder 300 may be configured to: construct a general most probable mode list containing N entries, where the N entries of the general most probable mode list are intra-frame prediction modes, and where planar mode is the sequentially first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-frame prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and code the current block of video data using the current intra-frame prediction mode.
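For illustration only, the following minimal Python sketch shows how a general MPM list might be split into primary and secondary lists as described above. The function and constant names are assumptions, not part of the described techniques.

```python
PLANAR = 0  # assumed mode index for planar, following VVC conventions

def split_gmpm(gmpm: list[int], num_primary: int) -> tuple[list[int], list[int]]:
    """Split a general MPM list of N intra modes into a primary list
    (first Np entries) and a secondary list (remaining N - Np entries).
    The general list is assumed to start with planar mode."""
    assert gmpm and gmpm[0] == PLANAR, "planar is the sequentially first entry"
    assert 0 < num_primary < len(gmpm)
    return gmpm[:num_primary], gmpm[num_primary:]

# Example with N = 22 and Np = 6, as in one example described below.
gmpm = [PLANAR] + list(range(1, 22))
pmpm, smpm = split_gmpm(gmpm, 6)
print(len(pmpm), len(smpm))  # -> 6 16
```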

Example intra coding modes include a DC mode, a planar mode, and a plurality of directional modes (e.g., non-planar modes). In VVC (see J. Chen, Y. Ye, and S.-H. Kim, "Algorithm description for Versatile Video Coding and Test Model 9 (VTM 9)," JVET-R2002, April 2020), 65 directional modes are used for intra-frame prediction of a block. To code an intra mode value, video encoder 200 and video decoder 300 may be configured to derive a most probable mode (MPM) list. If the intra mode used to code a particular CU is a mode in the MPM list, video encoder 200 may signal only an index of the determined intra mode in the MPM list. Otherwise, video encoder 200 may signal the mode value using bypass coding (e.g., entropy coding with a fixed probability model).

In VVC, there are six entries in the MPM list. The first entry in the MPM list is planar mode. The remaining entries in the MPM list are composed of the intra modes of the left (L) and above (A) neighboring blocks of CU 400 (see FIG. 2), intra modes derived from the directional intra modes of the neighboring blocks, and default intra modes. For the remainder of the discussion in this disclosure, this MPM list will be referred to as the primary MPM list.

Two MPM lists were proposed in the following document: A. Ramasubramonian et al., "CE3-3.1.1: Two MPM modes and shape dependency (Test 3.1.1)," Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting: Ljubljana, Slovenia, 10-18 July 2018 (hereinafter "JVET-K0081"). One MPM list is a primary MPM (PMPM) list with 6 entries, and the other MPM list is a secondary MPM (SMPM) list with 16 entries. The entries in the PMPM list are derived using the intra modes of the left (L), above (A), below-left (BL), above-right (AR), and above-left (AL) neighboring blocks of CU 402, as shown in FIG. 3. The SMPM list is generated from modes near (e.g., close in angle or index value to) the directional modes included in the PMPM list.

For example, if the first entry in the PMPM list is intra mode 12 and the maximum offset is 4, then intra modes 11, 10, 9, 8, 13, 14, 15, and 16 are each added to the SMPM list, provided that such an intra mode is not already included in the two MPM lists. That is, all modes within plus or minus 4 indices of intra mode index 12 are added to the list, as long as such a mode is not a duplicate of a mode already present in the lists.
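A minimal Python sketch of this offset generation with duplicate checking is shown below; the function name, the dedup handling, and the insertion order are assumptions (the text lists the modes in a different order), and wrap-around at the ends of the directional-mode range is ignored for simplicity.

```python
def offset_modes(anchor_mode: int, max_offset: int, already_in_lists: set[int]) -> list[int]:
    """Generate directional modes within +/- max_offset of an anchor
    directional mode, skipping any mode already present in the PMPM or
    SMPM lists."""
    candidates = []
    for off in range(1, max_offset + 1):
        for mode in (anchor_mode - off, anchor_mode + off):
            if mode not in already_in_lists and mode not in candidates:
                candidates.append(mode)
    return candidates

# Anchor mode 12 with maximum offset 4 yields the same set of modes as the
# example in the text (11, 10, 9, 8, 13, 14, 15, 16), in a different order.
print(offset_modes(12, 4, already_in_lists={12}))
```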

The maximum offsets used for the 6 primary MPMs in JVET-K0081 are {4, 3, 3, 2, 2, 1}. If this process does not fill the 16 entries of the SMPM list, the remaining entries are filled from a default list of intra modes. A video encoder may signal a syntax element indicating whether the intra-frame prediction mode for a block is from the PMPM list or the SMPM list.

This disclosure describes various techniques for improving the construction of the MPM lists composed of the PMPM list and the SMPM list. In particular, this disclosure describes techniques for constructing a general most probable mode list and then constructing the primary and secondary most probable mode lists from that general most probable mode list. The primary and secondary most probable mode lists may include intra-frame prediction modes from neighboring blocks, as well as intra-frame prediction modes offset from the intra-frame prediction modes of the neighboring blocks.

General MPM list construction

First, a general MPM (GMPM) list may be defined with N entries, where the i-th entry of the GMPM list is denoted GMPM[i]. The first entry of the GMPM list is planar mode. That is, index 0 of the GMPM list (e.g., GMPM[0]) indicates the planar intra mode. For example, as shown in FIG. 3, the intra modes of the left (L), above (A), below-left (BL), above-right (AR), and above-left (AL) neighboring blocks of the current coding block may be denoted MPM(L), MPM(A), MPM(BL), MPM(AR), and MPM(AL), respectively.

Video encoder 200 and video decoder 300 are configured to construct the GMPM list by adding the intra modes MPM(L), MPM(A), MPM(BL), MPM(AR), and MPM(AL) to the GMPM list if MPM(j) is available and is not already included in the GMPM list. The intra mode MPM(j) is available if neighboring block j has an associated intra-frame prediction mode. For example, a neighboring block may have an associated intra-frame prediction mode if the neighboring block is coded using intra-frame prediction. In other examples, a neighboring block may have an associated intra-frame prediction mode even if the neighboring block is not coded using intra-frame prediction.

Video encoder 200 and video decoder 300 may derive the remaining entries of the GMPM list by offsetting the first Na available directional intra modes among the MPM(L), MPM(A), MPM(BL), MPM(AR), and MPM(AL) intra-frame prediction modes, where Na is a value less than the number of left (L), above (A), below-left (BL), above-right (AR), and above-left (AL) neighboring blocks. For example, Na may be less than 5.

The intra modes from among MPM(L), MPM(A), MPM(BL), MPM(AR), and MPM(AL) that are included in the GMPM list (i.e., the entries other than planar mode) are denoted GMPM[p] = MPM(j), where 1 ≤ p ≤ Nb, Nb is less than or equal to 5, and j is one of L, A, BL, AR, and AL. If p is less than a predefined value q, the maximum offset is set to M1; otherwise, the maximum offset is set to M2. If the proposed process does not fill the N entries of the GMPM list, the remaining entries are filled from a default list. The first Np entries of the GMPM list are set as the PMPM list, and the remaining (N-Np) entries of the GMPM list are set as the SMPM list.

As one example, if the first entry in the PMPM list is intra mode 12 and the maximum offset is 4, then intra modes 11, 10, 9, 8, 13, 14, 15, and 16 are each added to the SMPM list, provided that such an intra mode is not already included in the two MPM lists. That is, all modes within plus or minus 4 indices of intra mode index 12 are added to the list, as long as such a mode is not a duplicate of a mode already present in the lists. In another example, if the first entry in the PMPM list is intra mode 12 and the maximum offset is 3, then intra modes 11, 10, 9, 13, 14, and 15 are each added to the SMPM list, provided that such an intra mode is not already included in the two MPM lists. That is, all modes within plus or minus 3 indices of intra mode index 12 are added to the list, provided that such a mode is not a duplicate of a mode already present in the lists.

In one example, N = 22, Np = 6, Na = 2, q = 3, M1 = 4, and M2 = 3. In this example, the size of the GMPM list is 22 entries, where the first 6 entries form the PMPM list and the last 16 entries form the SMPM list. The first two available directional intra modes among MPM(L), MPM(A), MPM(BL), MPM(AR), and MPM(AL) are offset to derive directional intra modes near the two available directional intra modes. In this context, near means adding or subtracting up to M1 or M2 indices from the index of the available directional intra mode. If GMPM[1] and GMPM[2] are non-DC modes, the maximum offset is 4 for both GMPM[1] and GMPM[2]. If GMPM[1] is DC mode and GMPM[2] and GMPM[3] are non-DC modes, the maximum offset is 4 for GMPM[2] and 3 for GMPM[3]. If GMPM[2] is DC mode and GMPM[1] and GMPM[3] are non-DC modes, the maximum offset is 4 for GMPM[1] and 3 for GMPM[3]. For example, if GMPM[1] = 20 and GMPM[3] = 40, then modes 16, 17, 18, 19, 21, 22, 23, and 24 offset from GMPM[1], and modes 37, 38, 39, 41, 42, and 43 offset from GMPM[3], will be added to the GMPM list.
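For illustration only, the following Python sketch builds a GMPM list under the example parameters above. It is a simplified reading of the described construction, not a definitive implementation: the function name, the handling of unavailable neighbors, the dedup order, and the default-mode fill are all assumptions.

```python
PLANAR, DC = 0, 1          # assumed mode indices, following VVC conventions
NUM_DIR_MODES = 67         # planar, DC, and 65 directional modes in VVC

def build_gmpm(neighbor_modes, default_modes,
               N=22, Na=2, q=3, M1=4, M2=3):
    """Illustrative GMPM construction: planar first, then available neighbor
    modes, then offsets of the first Na directional entries (maximum offset
    M1 when the anchor sits before position q, otherwise M2), then default
    modes to fill any remaining slots."""
    gmpm = [PLANAR]
    for mode in neighbor_modes:                       # L, A, BL, AR, AL order assumed
        if mode is not None and mode not in gmpm:
            gmpm.append(mode)

    anchors = [(p, m) for p, m in enumerate(gmpm) if p >= 1 and m > DC][:Na]
    for p, anchor in anchors:
        max_off = M1 if p < q else M2
        for off in range(1, max_off + 1):
            for cand in (anchor - off, anchor + off):
                if DC < cand < NUM_DIR_MODES and cand not in gmpm:
                    gmpm.append(cand)

    for mode in default_modes:                        # fill remaining slots
        if len(gmpm) >= N:
            break
        if mode not in gmpm:
            gmpm.append(mode)
    return gmpm[:N]

# DC followed by directional modes 20 and 40: offsets of +/-4 around 20 and
# +/-3 around 40 are added, mirroring one of the cases described above.
gmpm = build_gmpm([DC, 20, 40, None, None], default_modes=range(2, 67))
pmpm, smpm = gmpm[:6], gmpm[6:]
```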

MPM index coding

In one example for VVC, video decoder 300 may be configured to first decode and parse a planar flag to determine whether the intra mode for the CU is the first entry of the PMPM list. If the planar flag does not indicate planar mode, video decoder 300 may be configured to decode and parse an index value (e.g., a syntax element indicating the index value) to determine which entry of the PMPM list is selected. Note that parsed index values 0, 1, 2, 3, and 4 correspond to the first, second, third, fourth, and fifth entries of the PMPM list, respectively, and the parsed bins for index values 0, 1, 2, 3, and 4 are 0, 10, 110, 1110, and 1111, respectively.
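The bin strings above follow a truncated-unary pattern; a minimal sketch (the function name is an assumption) that reproduces them:

```python
def binarize_pmpm_index(index: int) -> str:
    """Truncated-unary binarization of the non-planar PMPM index, giving
    the bin strings 0, 10, 110, 1110, 1111 for index values 0..4."""
    assert 0 <= index <= 4
    return "1" * index + ("0" if index < 4 else "")

print([binarize_pmpm_index(i) for i in range(5)])
# -> ['0', '10', '110', '1110', '1111']
```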

New coding tools referred to as intra sub-partition (ISP) mode and multiple reference line (MRL) mode have been integrated into VVC. ISP mode and normal intra mode share the same PMPM list. MRL mode applies to the intra modes in the PMPM list, except for the first entry (i.e., planar mode). Because normal intra mode, ISP mode, and MRL mode all share the same non-planar PMPM entries, i.e., the first, second, third, fourth, and fifth entries of the PMPM list, context coding conditioned on which of normal intra mode, ISP mode, and MRL mode is selected will improve coding efficiency when signaling the PMPM index. Accordingly, in accordance with the techniques of this disclosure, video encoder 200 and video decoder 300 may be configured to code the first bin of the non-planar PMPM index using three context models as follows:
If (ISP mode): code the first bin of the non-planar PMPM index using context index 0.
Else, if (MRL mode): code the first bin of the non-planar PMPM index using context index 1.
Else (normal intra mode): code the first bin of the non-planar PMPM index using context index 2.
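A minimal Python sketch of the first ordering described above (function and parameter names are assumptions):

```python
def pmpm_first_bin_context(is_isp: bool, is_mrl: bool) -> int:
    """Select the CABAC context index for the first bin of the non-planar
    PMPM index, conditioned on the coding tool."""
    if is_isp:
        return 0
    elif is_mrl:
        return 1
    else:              # normal intra-frame prediction
        return 2
```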

When using the techniques of this disclosure, the order of the if-else tests may be changed to other combinations. One example is as follows:
If (MRL mode): code the first bin of the non-planar PMPM index using context index 0.
Else, if (ISP mode): code the first bin of the non-planar PMPM index using context index 1.
Else (normal intra mode): code the first bin of the non-planar PMPM index using context index 2.

Example

As described above, two MPM lists may be used: one is a primary MPM (PMPM) list with 6 entries, and the other is a secondary MPM (SMPM) list with 16 entries. In accordance with the techniques of this disclosure, video encoder 200 and video decoder 300 may construct a general MPM list with 22 entries, where the first 6 entries of the general MPM list are included in the PMPM list and the remaining entries of the 22 entries are set as the SMPM list. The first entry (e.g., the sequentially first entry) in the general MPM list is planar mode. The remaining entries are composed of the intra modes of the left (L), above (A), below-left (BL), above-right (AR), and above-left (AL) neighboring blocks (as shown in FIG. 3), directional modes offset from the first two available directional modes of the neighboring blocks, and default modes. If the CU block is rectangular and vertically oriented, that is, when the height is greater than the width, the order in which the neighboring blocks are checked for available intra-frame prediction modes is A, L, BL, AR, AL. Otherwise, the order is L, A, BL, AR, AL.
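A minimal sketch of the shape-dependent neighbor check order (function name is an assumption):

```python
def neighbor_check_order(width: int, height: int) -> list[str]:
    """Order in which neighboring blocks are checked for available intra
    modes: above-first for tall (vertically oriented) blocks, left-first
    otherwise."""
    if height > width:
        return ["A", "L", "BL", "AR", "AL"]
    return ["L", "A", "BL", "AR", "AL"]

print(neighbor_check_order(8, 32))   # -> ['A', 'L', 'BL', 'AR', 'AL']
print(neighbor_check_order(16, 16))  # -> ['L', 'A', 'BL', 'AR', 'AL']
```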

The maximum offset for a directional mode of a neighboring block depends on the position of the entry for that directional mode in the general MPM list. If the available directional mode of the neighboring block is located at the second or third entry (note that the first entry is planar mode), the maximum offset is set to 4; otherwise, the maximum offset is set to 3. For example, the i-th entry of the general MPM list is denoted GMPM[i], and GMPM[0] = planar mode. If GMPM[1] is DC mode and GMPM[2] and GMPM[3] are directional modes, the maximum offset is 4 for GMPM[2] and 3 for GMPM[3]. Suppose GMPM[2] = 20 and GMPM[3] = 40. Then modes 16, 17, 18, 19, 21, 22, 23, and 24 offset from GMPM[2], and modes 37, 38, 39, 41, 42, and 43 offset from GMPM[3], are added to the GMPM list.

Video decoder 300 may first decode and parse the planar flag to determine whether the intra mode for the CU is the first entry of the PMPM list. If the planar flag does not indicate that planar mode is to be used, video decoder 300 decodes and parses an index value for a non-planar mode of the PMPM list to determine which entry of the PMPM list is selected. Note that the parsed index values 0, 1, 2, 3, and 4 for the non-planar modes correspond to the first, second, third, fourth, and fifth entries of the PMPM list, and the parsed bins for index values 0, 1, 2, 3, and 4 are 0, 10, 110, 1110, and 1111, respectively. Video encoder 200 and video decoder 300 may be configured to code the first bin of the non-planar mode index into the PMPM list using 3 context models as follows:
If (ISP): code the first bin using context index 0.
Else, if (MRL): code the first bin using context index 1.
Else (normal intra): code the first bin using context index 2.

In summary, in one example of this disclosure, video decoder 300 may be configured to decode a current block of video data using intra-frame prediction. Video decoder 300 may be configured to construct a general most probable mode list containing N entries, where the N entries of the general most probable mode list are intra-frame prediction modes, and where planar mode is the sequentially first entry in the general most probable mode list. Video decoder 300 may also construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N, and construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list. Video decoder 300 may then determine a current intra-frame prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, and decode the current block of video data using the current intra-frame prediction mode to produce a decoded block of video data. In one example, N is 22 and Np is 6.

In one example, to determine the current intra-frame prediction mode, video decoder 300 may also be configured to decode an index into the primary most probable mode list or the secondary most probable mode list, where the index indicates a non-planar intra-frame prediction mode in the primary most probable mode list or the secondary most probable mode list. Video decoder 300 may determine the current intra-frame prediction mode for the current block of video data based on the index.

In a further example, to decode the index into the primary most probable mode list or the secondary most probable mode list, video decoder 300 may also determine, based on a coding tool used for the current block, a context for entropy decoding a first bin of the index, and entropy decode the first bin of the index using the context. In one example, the coding tool is one of a normal intra-frame prediction mode, an intra sub-partition mode, or a multiple reference line mode.

In another example of this disclosure, to construct the general most probable mode list, video decoder 300 may be configured to: add respective intra-frame prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add a plurality of intra-frame prediction modes offset from the respective intra-frame prediction modes from the respective neighboring blocks to the general most probable mode list. In one example, video decoder 300 may add a respective intra-frame prediction mode from a respective neighboring block of the current block of video data to the general most probable mode list based on the respective intra-frame prediction mode being available and not having already been added to the general most probable mode list.

In another example, to determine the current intra-frame prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, video decoder 300 may be configured to: decode a syntax element indicating a current most probable mode list from among the primary most probable mode list and the secondary most probable mode list; decode an index into the current most probable mode list; and determine the current intra-frame prediction mode from the index into the current most probable mode list.

In a reciprocal manner, video encoder 200 is also configured to encode a current block of video data using intra-frame prediction. Video encoder 200 may be configured to construct a general most probable mode list containing N entries, where the N entries of the general most probable mode list are intra-frame prediction modes, and where planar mode is the sequentially first entry in the general most probable mode list. Video encoder 200 may also construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N, and construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list. Video encoder 200 may also determine a current intra-frame prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, and encode the current block of video data using the current intra-frame prediction mode to produce an encoded block of video data. In one example, N is 22 and Np is 6.

In one example of this disclosure, video encoder 200 may be configured to encode an index into the primary most probable mode list or the secondary most probable mode list, where the index indicates a non-planar intra-frame prediction mode in the primary most probable mode list or the secondary most probable mode list.

In another example of this disclosure, to encode the index into the primary most probable mode list or the secondary most probable mode list, video encoder 200 may be configured to: determine, based on a coding tool used for the current block, a context for entropy encoding a first bin of the index; and entropy encode the first bin of the index using the context. In one example, the coding tool is one of a normal intra-frame prediction mode, an intra sub-partition mode, or a multiple reference line mode.

In another example, to construct the general most probable mode list, video encoder 200 may also be configured to: add respective intra-frame prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add a plurality of intra-frame prediction modes offset from the respective intra-frame prediction modes from the respective neighboring blocks to the general most probable mode list. Video encoder 200 may add a respective intra-frame prediction mode from a respective neighboring block of the current block of video data to the general most probable mode list based on the respective intra-frame prediction mode being available and not having already been added to the general most probable mode list.

FIG. 4A and FIG. 4B are conceptual diagrams illustrating an example quadtree binary tree (QTBT) structure 130 and a corresponding coding tree unit (CTU) 132. The solid lines represent quadtree splitting, and the dashed lines indicate binary tree splitting. In each split (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where, in this example, 0 indicates horizontal splitting and 1 indicates vertical splitting. For quadtree splitting, there is no need to indicate the splitting type, because quadtree nodes split a block horizontally and vertically into 4 sub-blocks of equal size. Accordingly, video encoder 200 may encode, and video decoder 300 may decode, syntax elements (such as splitting information) for the region tree level (i.e., the solid lines) of QTBT structure 130, and syntax elements (such as splitting information) for the prediction tree level (i.e., the dashed lines) of QTBT structure 130. Video encoder 200 may encode video data (such as prediction and transform data) for CUs represented by terminal leaf nodes of QTBT structure 130, and video decoder 300 may decode that video data.

In general, CTU 132 of FIG. 4B may be associated with parameters defining sizes of blocks corresponding to nodes of QTBT structure 130 at the first and second levels. These parameters may include a CTU size (representing the size of CTU 132 in samples), a minimum quadtree size (MinQTSize, representing the minimum allowed quadtree leaf node size), a maximum binary tree size (MaxBTSize, representing the maximum allowed binary tree root node size), a maximum binary tree depth (MaxBTDepth, representing the maximum allowed binary tree depth), and a minimum binary tree size (MinBTSize, representing the minimum allowed binary tree leaf node size).

The root node of the QTBT structure corresponding to the CTU may have four child nodes at the first level of the QTBT structure, each of which may be partitioned according to quadtree partitioning. That is, a node of the first level is either a leaf node (having no child nodes) or has four child nodes. The example of QTBT structure 130 represents such nodes as including a parent node and child nodes with solid-line branches. If a node of the first level is not larger than the maximum allowed binary tree root node size (MaxBTSize), the node may be further partitioned by a respective binary tree. The binary tree splitting of one node may be iterated until the nodes resulting from the splitting reach the minimum allowed binary tree leaf node size (MinBTSize) or the maximum allowed binary tree depth (MaxBTDepth). The example of QTBT structure 130 represents such nodes as having dashed lines for branches. A binary tree leaf node is referred to as a coding unit (CU), which is used for prediction (e.g., intra-picture or inter-picture prediction) and transform without any further partitioning. As discussed above, CUs may also be referred to as "video blocks" or "blocks."

In one example of the QTBT partitioning structure, the CTU size is set to 128x128 (luma samples and two corresponding 64x64 chroma samples), MinQTSize is set to 16x16, MaxBTSize is set to 64x64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. Quadtree partitioning is first applied to the CTU to produce quadtree leaf nodes. A quadtree leaf node may have a size from 16x16 (i.e., MinQTSize) to 128x128 (i.e., the CTU size). If a leaf quadtree node is 128x128, it will not be further split by the binary tree, because its size exceeds MaxBTSize (i.e., 64x64 in this example). Otherwise, the leaf quadtree node will be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node for the binary tree and has a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (4 in this example), no further splitting is permitted. A binary tree node having a width equal to MinBTSize (4 in this example) implies that no further vertical splitting (that is, splitting of the width) is permitted for that binary tree node. Similarly, a binary tree node having a height equal to MinBTSize implies that no further horizontal splitting (that is, splitting of the height) is permitted for that binary tree node. As noted above, leaf nodes of the binary tree are referred to as CUs and are further processed according to prediction and transform without further partitioning.
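For illustration only, a simplified Python sketch of how these example constraints limit binary-tree splitting of a node; it is not the full set of VVC partitioning rules, and the function name is an assumption.

```python
# Example QTBT constraint parameters from the text.
MAX_BT_SIZE, MIN_BT_SIZE, MAX_BT_DEPTH = 64, 4, 4

def allowed_bt_splits(width: int, height: int, bt_depth: int) -> list[str]:
    """Return which binary-tree splits remain allowed for a node under the
    example constraints above (simplified check)."""
    splits = []
    if bt_depth < MAX_BT_DEPTH and max(width, height) <= MAX_BT_SIZE:
        if width > MIN_BT_SIZE:
            splits.append("vertical")    # splits the width
        if height > MIN_BT_SIZE:
            splits.append("horizontal")  # splits the height
    return splits

print(allowed_bt_splits(64, 64, 0))  # -> ['vertical', 'horizontal']
print(allowed_bt_splits(4, 8, 2))    # -> ['horizontal']
print(allowed_bt_splits(8, 8, 4))    # -> []
```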

FIG. 5 is a block diagram illustrating an example video encoder 200 that may perform the techniques of this disclosure. FIG. 5 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video encoder 200 according to the techniques of VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265). However, the techniques of this disclosure may be performed by video encoding devices configured for other video coding standards.

In the example of FIG. 5, video encoder 200 includes video data memory 230, mode selection unit 202, residual generation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, filter unit 216, decoded picture buffer (DPB) 218, and entropy encoding unit 220. Any or all of video data memory 230, mode selection unit 202, residual generation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, filter unit 216, DPB 218, and entropy encoding unit 220 may be implemented in one or more processors or in processing circuitry. For example, the units of video encoder 200 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, ASIC, or FPGA. Moreover, video encoder 200 may include additional or alternative processors or processing circuitry to perform these and other functions.

Video data memory 230 may store video data to be encoded by the components of video encoder 200. Video encoder 200 may receive the video data stored in video data memory 230 from, for example, video source 104 (FIG. 1). DPB 218 may act as a reference picture memory that stores reference video data for use in prediction of subsequent video data by video encoder 200. Video data memory 230 and DPB 218 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 230 and DPB 218 may be provided by the same memory device or by separate memory devices. In various examples, video data memory 230 may be on-chip with other components of video encoder 200, as illustrated, or off-chip relative to those components.

In this disclosure, references to video data memory 230 should not be interpreted as being limited to memory internal to video encoder 200 (unless specifically described as such), or to memory external to video encoder 200 (unless specifically described as such). Rather, references to video data memory 230 should be understood as reference memory that stores video data that video encoder 200 receives for encoding (e.g., video data for a current block to be encoded). Memory 106 of FIG. 1 may also provide temporary storage of outputs from the various units of video encoder 200.

The various units of FIG. 5 are illustrated to assist in understanding the operations performed by video encoder 200. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For example, programmable circuits may execute software or firmware that causes the programmable circuits to operate in a manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

Video encoder 200 may include arithmetic logic units (ALUs), elementary function units (EFUs), digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples in which the operations of video encoder 200 are performed using software executed by the programmable circuits, memory 106 (FIG. 1) may store the instructions (e.g., object code) of the software that video encoder 200 receives and executes, or another memory within video encoder 200 (not shown) may store such instructions.

Video data memory 230 is configured to store received video data. Video encoder 200 may retrieve a picture of the video data from video data memory 230 and provide the video data to residual generation unit 204 and mode selection unit 202. The video data in video data memory 230 may be raw video data that is to be encoded.

Mode selection unit 202 includes motion estimation unit 222, motion compensation unit 224, and intra-frame prediction unit 226. Mode selection unit 202 may include additional functional units to perform video prediction in accordance with other prediction modes. As examples, mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of motion estimation unit 222 and/or motion compensation unit 224), an affine unit, a linear model (LM) unit, or the like.

Mode selection unit 202 generally coordinates multiple encoding passes to test combinations of encoding parameters and the resulting rate-distortion values for such combinations. The encoding parameters may include partitioning of CTUs into CUs, prediction modes for the CUs, transform types for residual data of the CUs, quantization parameters for residual data of the CUs, and so on. Mode selection unit 202 may ultimately select the combination of encoding parameters having rate-distortion values that are better than those of the other tested combinations.

Video encoder 200 may partition a picture retrieved from video data memory 230 into a series of CTUs and encapsulate one or more CTUs within a slice. Mode selection unit 202 may partition a CTU of the picture in accordance with a tree structure, such as the QTBT structure or the quadtree structure of HEVC described above. As described above, video encoder 200 may form one or more CUs by partitioning a CTU according to the tree structure. Such a CU may also generally be referred to as a "video block" or "block."

In general, mode selection unit 202 also controls its components (e.g., motion estimation unit 222, motion compensation unit 224, and intra-frame prediction unit 226) to generate a prediction block for a current block (e.g., a current CU or, in HEVC, the overlapping portion of a PU and a TU). For inter-frame prediction of the current block, motion estimation unit 222 may perform a motion search to identify one or more closely matching reference blocks in one or more reference pictures (e.g., one or more previously coded pictures stored in DPB 218). In particular, motion estimation unit 222 may calculate a value representing how similar a potential reference block would be to the current block, for example, according to a sum of absolute differences (SAD), a sum of squared differences (SSD), a mean absolute difference (MAD), a mean squared difference (MSD), or the like. Motion estimation unit 222 may generally perform these calculations using sample-by-sample differences between the current block and the reference block being considered. Motion estimation unit 222 may identify the reference block having the lowest value resulting from these calculations, indicating the reference block that most closely matches the current block.
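A minimal sketch of the SAD and SSD measures over co-located samples (1-D sample lists for brevity; real blocks are 2-D):

```python
def sad(current: list[int], reference: list[int]) -> int:
    """Sum of absolute differences between co-located samples."""
    return sum(abs(c - r) for c, r in zip(current, reference))

def ssd(current: list[int], reference: list[int]) -> int:
    """Sum of squared differences between co-located samples."""
    return sum((c - r) ** 2 for c, r in zip(current, reference))

cur = [100, 102, 98, 101]
ref = [99, 104, 97, 100]
print(sad(cur, ref), ssd(cur, ref))  # -> 5 7
```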

Motion estimation unit 222 may form one or more motion vectors (MVs) that define the position of a reference block in a reference picture relative to the position of the current block in the current picture. Motion estimation unit 222 may then provide the motion vectors to motion compensation unit 224. For example, for unidirectional inter-frame prediction, motion estimation unit 222 may provide a single motion vector, whereas for bidirectional inter-frame prediction, motion estimation unit 222 may provide two motion vectors. Motion compensation unit 224 may then use the motion vectors to generate a prediction block. For example, motion compensation unit 224 may use a motion vector to retrieve data of the reference block. As another example, if a motion vector has fractional sample precision, motion compensation unit 224 may interpolate values for the prediction block according to one or more interpolation filters. Moreover, for bidirectional inter-frame prediction, motion compensation unit 224 may retrieve data for the two reference blocks identified by the respective motion vectors and combine the retrieved data, for example, by sample-by-sample averaging or weighted averaging.

As another example, for intra-frame prediction or intra-frame prediction coding, intra-frame prediction unit 226 may generate a prediction block from samples neighboring the current block. For example, for directional modes, intra-frame prediction unit 226 may generally mathematically combine values of neighboring samples and populate these calculated values across the current block in the defined direction to produce the prediction block. As another example, for DC mode, intra-frame prediction unit 226 may calculate an average of the neighboring samples of the current block and generate the prediction block to include this resulting average for each sample of the prediction block.
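A minimal sketch of the DC-mode averaging described above (function name, reference-sample selection, and rounding are simplified assumptions):

```python
def dc_predict(neighbors: list[int], width: int, height: int) -> list[list[int]]:
    """Fill a width x height prediction block with the average of the
    neighboring reference samples, as in DC intra prediction."""
    dc_value = (sum(neighbors) + len(neighbors) // 2) // len(neighbors)
    return [[dc_value] * width for _ in range(height)]

# Four above and four left reference samples for a 4x4 block.
print(dc_predict([100, 101, 99, 100, 102, 98, 100, 101], 4, 4)[0])
# -> [100, 100, 100, 100]
```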

In accordance with the techniques of this disclosure described above, intra-frame prediction unit 226 may be configured to: construct a general most probable mode list containing N entries, where the N entries of the general most probable mode list are intra-frame prediction modes, and where planar mode is the sequentially first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, where Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-frame prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and encode the current block of video data using the current intra-frame prediction mode to produce an encoded block of video data.

Mode selection unit 202 provides the prediction block to residual generation unit 204. Residual generation unit 204 receives the original, uncoded version of the current block from video data memory 230 and the prediction block from mode selection unit 202. Residual generation unit 204 calculates sample-by-sample differences between the current block and the prediction block. The resulting sample-by-sample differences define a residual block for the current block. In some examples, residual generation unit 204 may also determine differences between sample values in the residual block to generate the residual block using residual differential pulse code modulation (RDPCM). In some examples, residual generation unit 204 may be formed using one or more subtractor circuits that perform binary subtraction.

In examples in which mode selection unit 202 partitions a CU into PUs, each PU may be associated with a luma prediction unit and corresponding chroma prediction units. Video encoder 200 and video decoder 300 may support PUs having various sizes. As noted above, the size of a CU may refer to the size of the luma coding block of the CU, and the size of a PU may refer to the size of the luma prediction unit of the PU. Assuming the size of a particular CU is 2Nx2N, video encoder 200 may support PU sizes of 2Nx2N or NxN for intra-frame prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter-frame prediction. Video encoder 200 and video decoder 300 may also support asymmetric partitioning for PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter-frame prediction.

In examples in which mode selection unit 202 does not further partition a CU into PUs, each CU may be associated with a luma coding block and corresponding chroma coding blocks. As noted above, the size of a CU may refer to the size of the luma coding block of the CU. Video encoder 200 and video decoder 300 may support CU sizes of 2Nx2N, 2NxN, or Nx2N.

For other video coding techniques, such as intra-block copy mode coding, affine mode coding, and linear model (LM) mode coding, to name a few examples, mode selection unit 202 generates a prediction block for the current block being encoded via respective units associated with the coding technique. In some examples, such as palette mode coding, mode selection unit 202 may not generate a prediction block, but instead generates syntax elements indicating the manner in which the block is to be reconstructed based on a selected palette. In such modes, mode selection unit 202 may provide these syntax elements to entropy encoding unit 220 to be encoded.

As described above, residual generation unit 204 receives the video data for the current block and the corresponding prediction block. Residual generation unit 204 then generates a residual block for the current block. To generate the residual block, residual generation unit 204 calculates sample-by-sample differences between the prediction block and the current block.

Transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). Transform processing unit 206 may apply various transforms to the residual block to form the transform coefficient block. For example, transform processing unit 206 may apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to the residual block. In some examples, transform processing unit 206 may perform multiple transforms on the residual block, for example, a primary transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to the residual block.

Quantization unit 208 may quantize the transform coefficients in the transform coefficient block to produce a quantized transform coefficient block. Quantization unit 208 may quantize the transform coefficients of the transform coefficient block according to a quantization parameter (QP) value associated with the current block. Video encoder 200 (e.g., via mode selection unit 202) may adjust the degree of quantization applied to the transform coefficient block associated with the current block by adjusting the QP value associated with the CU. Quantization may introduce loss of information, and thus the quantized transform coefficients may have lower precision than the original transform coefficients produced by transform processing unit 206.

Inverse quantization unit 210 and inverse transform processing unit 212 may apply inverse quantization and inverse transforms, respectively, to the quantized transform coefficient block to reconstruct a residual block from the transform coefficient block. Reconstruction unit 214 may produce a reconstructed block corresponding to the current block (albeit potentially with some degree of distortion) based on the reconstructed residual block and the prediction block generated by mode selection unit 202. For example, reconstruction unit 214 may add samples of the reconstructed residual block to corresponding samples of the prediction block generated by mode selection unit 202 to produce the reconstructed block.
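The addition described above can be sketched as follows; clipping the result to the valid sample range for the bit depth is a typical implementation detail and is included here as an assumption.

```python
import numpy as np

def reconstruct(prediction: np.ndarray, residual: np.ndarray, bit_depth: int = 8) -> np.ndarray:
    """Add reconstructed residual samples to the prediction samples and clip
    the result to the valid sample range for the given bit depth."""
    recon = prediction.astype(np.int32) + residual.astype(np.int32)
    return np.clip(recon, 0, (1 << bit_depth) - 1).astype(np.uint16)
```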

Filter unit 216 may perform one or more filter operations on reconstructed blocks. For example, filter unit 216 may perform deblocking operations to reduce blocking artifacts along edges of CUs. In some examples, the operations of filter unit 216 may be skipped.

Video encoder 200 stores reconstructed blocks in DPB 218. For example, in examples in which the operations of filter unit 216 are not performed, reconstruction unit 214 may store reconstructed blocks in DPB 218. In examples in which the operations of filter unit 216 are performed, filter unit 216 may store the filtered reconstructed blocks in DPB 218. Motion estimation unit 222 and motion compensation unit 224 may retrieve from DPB 218 a reference picture formed from the reconstructed (and potentially filtered) blocks, to inter-predict blocks of subsequently encoded pictures. In addition, intra-prediction unit 226 may use reconstructed blocks of the current picture in DPB 218 to intra-predict other blocks in the current picture.

In general, entropy encoding unit 220 may entropy encode syntax elements received from other functional components of video encoder 200. For example, entropy encoding unit 220 may entropy encode the quantized transform coefficient blocks from quantization unit 208. As another example, entropy encoding unit 220 may entropy encode prediction syntax elements from mode selection unit 202 (e.g., motion information for inter prediction or intra-mode information for intra prediction). Entropy encoding unit 220 may perform one or more entropy encoding operations on the syntax elements, which are another example of video data, to generate entropy encoded data. For example, entropy encoding unit 220 may perform a context-adaptive variable length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, an exponential Golomb encoding operation, or another type of entropy encoding operation on the data. In some examples, entropy encoding unit 220 may operate in a bypass mode in which syntax elements are not entropy encoded.
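Of the operations listed above, the exponential Golomb code is the simplest to show concretely. The sketch below writes an order-0 exp-Golomb codeword for a non-negative value; it illustrates the code family only and is not a depiction of how entropy encoding unit 220 is implemented.

```python
def exp_golomb_order0(value: int) -> str:
    """Order-0 exponential Golomb codeword for a non-negative integer,
    returned as a bit string: a prefix of zeros, then (value + 1) in binary."""
    assert value >= 0
    code = value + 1
    return "0" * (code.bit_length() - 1) + format(code, "b")

# exp_golomb_order0(0) == "1", exp_golomb_order0(1) == "010", exp_golomb_order0(2) == "011"
```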

Video encoder 200 may output a bitstream that includes the entropy encoded syntax elements needed to reconstruct blocks of a slice or picture. In particular, entropy encoding unit 220 may output the bitstream.

The operations described above are described with respect to a block. Such description should be understood as covering operations for a luma coding block and/or chroma coding blocks. As described above, in some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a CU. In some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a PU.

In some examples, operations performed with respect to a luma coding block need not be repeated for the chroma coding blocks. As one example, the operations to identify a motion vector (MV) and reference picture for a luma coding block need not be repeated to identify an MV and reference picture for the chroma blocks. Rather, the MV for the luma coding block may be scaled to determine the MV for the chroma blocks, and the reference picture may be the same. As another example, the intra-prediction process may be the same for the luma coding block and the chroma coding blocks.
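A minimal sketch of the luma-to-chroma MV reuse described above is shown below, assuming 4:2:0 subsampling and MVs stored in the same fractional-sample units. The exact scaling convention is codec-specific and is treated here purely as an assumption.

```python
def chroma_mv_from_luma(luma_mv, sub_width_c=2, sub_height_c=2):
    """Scale a luma motion vector to the chroma sampling grid; for 4:2:0 both
    components are halved, and the reference picture stays the same."""
    mv_x, mv_y = luma_mv
    return mv_x / sub_width_c, mv_y / sub_height_c
```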

Video encoder 200 represents an example of a device configured to encode video data, the device including: a memory configured to store video data; and one or more processing units implemented in circuitry and configured to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and encode the current block of video data using the current intra-prediction mode to produce an encoded block of video data.
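The splitting of the general most probable mode list into primary and secondary lists can be sketched as follows. The helper names and the example values N = 22 and Np = 6 (given later in this disclosure as one example) are illustrative; this is a sketch of the list structure, not the reference implementation.

```python
PLANAR = 0  # assumed numeric identifier for the planar intra prediction mode

def split_mpm_lists(general_mpm, np_primary=6):
    """Split a general MPM list (planar is the ordinal first entry) into a
    primary list of the first Np entries and a secondary list of the
    remaining (N - Np) entries."""
    assert general_mpm[0] == PLANAR and 0 < np_primary < len(general_mpm)
    return general_mpm[:np_primary], general_mpm[np_primary:]

def mode_from_signaling(primary, secondary, use_primary, index):
    """Return the intra prediction mode selected from whichever list was signaled."""
    return primary[index] if use_primary else secondary[index]
```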

FIG. 6 is a block diagram illustrating an example video decoder 300 that may perform the techniques of this disclosure. FIG. 6 is provided for purposes of explanation and does not limit the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video decoder 300 according to the techniques of VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265). However, the techniques of this disclosure may be performed by video coding devices configured for other video coding standards.

In the example of FIG. 6, video decoder 300 includes coded picture buffer (CPB) memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and decoded picture buffer (DPB) 314. Any or all of CPB memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and DPB 314 may be implemented in one or more processors or in processing circuitry. For instance, the units of video decoder 300 may be implemented as one or more circuits or logic elements, as part of hardware circuitry, or as part of a processor, ASIC, or FPGA. Moreover, video decoder 300 may include additional or alternative processors or processing circuitry to perform these and other functions.

Prediction processing unit 304 includes motion compensation unit 316 and intra-prediction unit 318. Prediction processing unit 304 may include additional units that perform prediction according to other prediction modes. As examples, prediction processing unit 304 may include a palette unit, an intra-block copy unit (which may form part of motion compensation unit 316), an affine unit, a linear model (LM) unit, or the like. In other examples, video decoder 300 may include more, fewer, or different functional components.

CPB memory 320 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 300. The video data stored in CPB memory 320 may be obtained, for example, from computer-readable medium 110 (FIG. 1). CPB memory 320 may include a CPB that stores encoded video data (e.g., syntax elements) from an encoded video bitstream. Also, CPB memory 320 may store video data other than syntax elements of a coded picture, such as temporary data representing outputs from the various units of video decoder 300. DPB 314 generally stores decoded pictures, which video decoder 300 may output and/or use as reference video data when decoding subsequent data or pictures of the encoded video bitstream. CPB memory 320 and DPB 314 may be formed by any of a variety of memory devices, such as DRAM, including SDRAM, MRAM, RRAM, or other types of memory devices. CPB memory 320 and DPB 314 may be provided by the same memory device or separate memory devices. In various examples, CPB memory 320 may be on-chip with other components of video decoder 300, or off-chip relative to those components.

Additionally or alternatively, in some examples, video decoder 300 may retrieve coded video data from memory 120 (FIG. 1). That is, memory 120 may store data as discussed above with respect to CPB memory 320. Likewise, memory 120 may store instructions to be executed by video decoder 300 when some or all of the functionality of video decoder 300 is implemented in software to be executed by processing circuitry of video decoder 300.

The various units shown in FIG. 6 are illustrated to assist with understanding the operations performed by video decoder 300. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Similar to FIG. 5, fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

Video decoder 300 may include ALUs, EFUs, digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of video decoder 300 are performed by software executing on the programmable circuits, on-chip or off-chip memory may store instructions (e.g., object code) of the software that video decoder 300 receives and executes.

Entropy decoding unit 302 may receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. Prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, and filter unit 312 may generate decoded video data based on the syntax elements extracted from the bitstream.

In general, video decoder 300 reconstructs a picture on a block-by-block basis. Video decoder 300 may perform a reconstruction operation on each block individually (where the block currently being reconstructed, i.e., decoded, may be referred to as the "current block").

Entropy decoding unit 302 may entropy decode syntax elements defining quantized transform coefficients of a quantized transform coefficient block, as well as transform information, such as a quantization parameter (QP) and/or transform mode indication(s). Inverse quantization unit 306 may use the QP associated with the quantized transform coefficient block to determine a degree of quantization and, likewise, a degree of inverse quantization for inverse quantization unit 306 to apply. Inverse quantization unit 306 may, for example, perform a bitwise left-shift operation to inverse quantize the quantized transform coefficients. Inverse quantization unit 306 may thereby form a transform coefficient block including transform coefficients.
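A simplified sketch of QP-driven rescaling with a bitwise left shift is given below. The level-scale table and shift structure follow an HEVC-style pattern and are included as an illustrative assumption, not a normative formula; a real dequantizer also applies rounding offsets and bit-depth-dependent shifts.

```python
# Illustrative per-(QP % 6) scale factors, patterned after HEVC's level-scale table.
LEVEL_SCALE = [40, 45, 51, 57, 64, 72]

def inverse_quantize(level: int, qp: int) -> int:
    """Rescale a quantized coefficient level; the left shift grows by one for
    every increase of 6 in QP, i.e., the effective step size doubles."""
    return (level * LEVEL_SCALE[qp % 6]) << (qp // 6)
```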

After inverse quantization unit 306 forms the transform coefficient block, inverse transform processing unit 308 may apply one or more inverse transforms to the transform coefficient block to generate a residual block associated with the current block. For example, inverse transform processing unit 308 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the transform coefficient block.

Furthermore, prediction processing unit 304 generates a prediction block according to the prediction information syntax elements that were entropy decoded by entropy decoding unit 302. For example, if the prediction information syntax elements indicate that the current block is inter-predicted, motion compensation unit 316 may generate the prediction block. In this case, the prediction information syntax elements may indicate a reference picture in DPB 314 from which to retrieve a reference block, as well as a motion vector identifying the location of the reference block in the reference picture relative to the location of the current block in the current picture. Motion compensation unit 316 may generally perform the inter-prediction process in a manner substantially similar to that described with respect to motion compensation unit 224 (FIG. 5).
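The sketch below shows the basic block fetch implied by the description above, assuming an integer-sample motion vector and ignoring fractional-sample interpolation and picture-boundary padding, both of which a real motion compensation unit would also handle.

```python
import numpy as np

def motion_compensate(reference_picture, block_x, block_y, block_w, block_h, mv_x, mv_y):
    """Copy the reference block displaced by an integer-sample motion vector
    (fractional-sample interpolation and boundary padding are omitted)."""
    ref_x, ref_y = block_x + mv_x, block_y + mv_y
    return reference_picture[ref_y:ref_y + block_h, ref_x:ref_x + block_w].copy()
```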

As another example, if the prediction information syntax elements indicate that the current block is intra-predicted, intra-prediction unit 318 may generate the prediction block according to the intra-prediction mode indicated by the prediction information syntax elements. Again, intra-prediction unit 318 may generally perform the intra-prediction process in a manner substantially similar to that described with respect to intra-prediction unit 226 (FIG. 5). Intra-prediction unit 318 may retrieve data of neighboring samples of the current block from DPB 314.

In accordance with the techniques described above, intra-prediction unit 318 may be configured to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

Reconstruction unit 310 may reconstruct the current block using the prediction block and the residual block. For example, reconstruction unit 310 may add samples of the residual block to corresponding samples of the prediction block to reconstruct the current block.

Filter unit 312 may perform one or more filter operations on reconstructed blocks. For example, filter unit 312 may perform deblocking operations to reduce blocking artifacts along edges of the reconstructed blocks. The operations of filter unit 312 are not necessarily performed in all examples.

Video decoder 300 may store the reconstructed blocks in DPB 314. For example, in examples where the operations of filter unit 312 are not performed, reconstruction unit 310 may store reconstructed blocks in DPB 314. In examples where the operations of filter unit 312 are performed, filter unit 312 may store the filtered reconstructed blocks in DPB 314. As discussed above, DPB 314 may provide reference information, such as samples of a current picture for intra prediction and previously decoded pictures for subsequent motion compensation, to prediction processing unit 304. Moreover, video decoder 300 may output decoded pictures (e.g., decoded video) from DPB 314 for subsequent presentation on a display device, such as display device 118 of FIG. 1.

In this manner, video decoder 300 represents an example of a video decoding device including: a memory configured to store video data; and one or more processing units implemented in circuitry and configured to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

FIG. 7 is a flowchart illustrating an example method for encoding a current block in accordance with the techniques of this disclosure. The current block may include a current CU. Although described with respect to video encoder 200 (FIGS. 1 and 5), it should be understood that other devices may be configured to perform a method similar to that of FIG. 7.

In this example, video encoder 200 initially predicts the current block (350). For example, video encoder 200 may form a prediction block for the current block. Video encoder 200 may then calculate a residual block for the current block (352). To calculate the residual block, video encoder 200 may calculate the difference between the original, unencoded block and the prediction block for the current block. Video encoder 200 may then transform the residual block and quantize the transform coefficients of the residual block (354). Next, video encoder 200 may scan the quantized transform coefficients of the residual block (356). During the scan, or following the scan, video encoder 200 may entropy encode the transform coefficients (358). For example, video encoder 200 may encode the transform coefficients using CAVLC or CABAC. Video encoder 200 may then output the entropy encoded data of the block (360).

FIG. 8 is a flowchart illustrating an example method for decoding a current block of video data in accordance with the techniques of this disclosure. The current block may include a current CU. Although described with respect to video decoder 300 (FIGS. 1 and 6), it should be understood that other devices may be configured to perform a method similar to that of FIG. 8.

Video decoder 300 may receive entropy encoded data for the current block, such as entropy encoded prediction information and entropy encoded data for transform coefficients of a residual block corresponding to the current block (370). Video decoder 300 may entropy decode the entropy encoded data to determine prediction information for the current block and to reproduce transform coefficients of the residual block (372). Video decoder 300 may predict the current block (374), e.g., using an intra- or inter-prediction mode as indicated by the prediction information for the current block, to calculate a prediction block for the current block. Video decoder 300 may then inverse scan the reproduced transform coefficients (376) to create a block of quantized transform coefficients. Video decoder 300 may then inverse quantize the transform coefficients and apply an inverse transform to the transform coefficients to produce a residual block (378). Video decoder 300 may ultimately decode the current block by combining the prediction block and the residual block (380).

FIG. 9 is a flowchart illustrating another example method for encoding a current block in accordance with the techniques of this disclosure. The techniques of FIG. 9 may be performed by one or more structural units of video encoder 200, including intra-prediction unit 226 of FIG. 5.

In one example of this disclosure, video encoder 200 is configured to encode a current block of video data using intra prediction. Video encoder 200 may be configured to construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list (500). Video encoder 200 may also construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N (502), and construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list (504). Video encoder 200 may also determine a current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list (506), and encode the current block of video data using the current intra-prediction mode to produce an encoded block of video data (508). In one example, N is 22 and Np is 6.

In one example of this disclosure, video encoder 200 may be configured to encode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list.

In another example of this disclosure, to encode the index into the primary most probable mode list or the secondary most probable mode list, video encoder 200 may be configured to: determine a context for entropy encoding a first bin of the index based on a coding tool used for the current block; and entropy encode the first bin of the index using the context. In one example, the coding tool is one of a normal intra-prediction mode, an intra sub-partition mode, or a multiple reference line mode.
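A sketch of the tool-dependent context selection is shown below. The mapping of each coding tool to a context index is a hypothetical choice, used only to illustrate the idea that the first bin's context depends on the coding tool; the actual context assignment and the CABAC engine itself are not reproduced here.

```python
from enum import Enum, auto

class CodingTool(Enum):
    NORMAL_INTRA = auto()
    INTRA_SUBPARTITION = auto()
    MULTI_REFERENCE_LINE = auto()

# Hypothetical assignment of a CABAC context index per coding tool.
FIRST_BIN_CONTEXT = {
    CodingTool.NORMAL_INTRA: 0,
    CodingTool.INTRA_SUBPARTITION: 1,
    CodingTool.MULTI_REFERENCE_LINE: 2,
}

def first_bin_context(tool: CodingTool) -> int:
    """Select the context used to entropy code the first bin of the MPM index."""
    return FIRST_BIN_CONTEXT[tool]
```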

In another example, to construct the general most probable mode list, video encoder 200 is also configured to: add respective intra-prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add a plurality of intra-prediction modes offset from the respective intra-prediction modes from the respective neighboring blocks to the general most probable mode list. Video encoder 200 may add a respective intra-prediction mode from a respective neighboring block of the current block of video data to the general most probable mode list based on the respective intra-prediction mode being available and not already having been added to the general most probable mode list.
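One way the general most probable mode list described above could be assembled is sketched here. The particular offsets, the default fill-in modes, and the mode numbering are assumptions made for illustration; only the constraints stated in the text (planar first, neighbor modes added when available and not yet present, then offset modes) are taken from the disclosure.

```python
PLANAR = 0
NUM_ANGULAR_MODES = 65                          # assumed VVC-style directional modes 2..66
DEFAULT_MODES = [1, 50, 18, 46, 54, 2, 34, 66]  # hypothetical fill-in modes (1 = DC)

def build_general_mpm_list(neighbor_modes, n=22):
    """Construct a general MPM list of up to n entries: planar first, then
    available neighbor modes (deduplicated), then modes offset from the
    angular neighbor modes, then default modes to fill remaining slots."""
    mpm = [PLANAR]

    def try_add(mode):
        if len(mpm) < n and mode not in mpm:
            mpm.append(mode)

    # Neighbor modes: added only when available and not already in the list.
    for mode in neighbor_modes:
        if mode is not None:
            try_add(mode)

    # Modes offset from the angular (directional) neighbor modes.
    angular = [m for m in neighbor_modes if m is not None and m >= 2]
    for mode in angular:
        for offset in (1, -1, 2, -2):
            candidate = 2 + (mode - 2 + offset) % NUM_ANGULAR_MODES
            try_add(candidate)

    # Default modes fill any remaining entries (a real codec guarantees n entries).
    for mode in DEFAULT_MODES:
        try_add(mode)
    return mpm
```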

FIG. 10 is a flowchart illustrating another example method for decoding a current block in accordance with the techniques of this disclosure. The techniques of FIG. 10 may be performed by one or more structural units of video decoder 300, including intra-prediction unit 318 of FIG. 6.

In one example of this disclosure, video decoder 300 may be configured to decode a current block of video data using intra prediction. Video decoder 300 may be configured to construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list (600). Video decoder 300 may also construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N (602), and construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list (604). Video decoder 300 may then determine a current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list (606), and decode the current block of video data using the current intra-prediction mode to produce a decoded block of video data (608). In one example, N is 22 and Np is 6.

In one example, to determine the current intra-prediction mode, video decoder 300 may also be configured to decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list. Video decoder 300 may determine the current intra-prediction mode for the current block of video data based on the index.

In a further example, to decode the index into the primary most probable mode list or the secondary most probable mode list, video decoder 300 may also determine a context for entropy decoding a first bin of the index based on a coding tool used for the current block, and entropy decode the first bin of the index using the context. In one example, the coding tool is one of a normal intra-prediction mode, an intra sub-partition mode, or a multiple reference line mode.

In another example of this disclosure, to construct the general most probable mode list, video decoder 300 may be configured to: add respective intra-prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add a plurality of intra-prediction modes offset from the respective intra-prediction modes from the respective neighboring blocks to the general most probable mode list. In one example, video decoder 300 may add a respective intra-prediction mode from a respective neighboring block of the current block of video data to the general most probable mode list based on the respective intra-prediction mode being available and not already having been added to the general most probable mode list.

In another example, to determine the current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, video decoder 300 may be configured to: decode a syntax element indicating a current most probable mode list from among the primary most probable mode list and the secondary most probable mode list; decode an index into the current most probable mode list; and determine the current intra-prediction mode according to the index into the current most probable mode list.
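The decoding flow described above (a syntax element selecting the current list, followed by an index into that list) can be sketched as follows; the reader functions are hypothetical placeholders standing in for calls into the entropy decoder.

```python
def decode_intra_mode(primary, secondary, read_list_flag, read_index):
    """Parse the syntax element that selects the current MPM list, then the
    index into that list, and return the indicated intra prediction mode.
    `read_list_flag` and `read_index` stand in for entropy-decoder calls."""
    use_primary = read_list_flag()            # syntax element choosing the list
    current_list = primary if use_primary else secondary
    index = read_index(len(current_list))     # index into the current list
    return current_list[index]
```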

Additional aspects of this disclosure are described below.

Aspect 1A - A method of coding video data, the method comprising: constructing a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes; constructing a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; constructing a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; and determining a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list.

Aspect 2A - The method of Aspect 1A, wherein N is 22 and Np is 6.

Aspect 3A - The method of any of Aspects 1A and 2A, wherein constructing the general most probable mode list comprises: adding a planar mode as the ordinal first entry in the general most probable mode list; adding respective intra-prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and adding intra-prediction modes offset from the respective intra-prediction modes from the respective neighboring blocks to the general most probable mode list.

Aspect 4A - The method of Aspect 3A, wherein adding the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list comprises: adding the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list if the respective intra-prediction modes are available and have not already been added to the general most probable mode list.

Aspect 5A - The method of any of Aspects 1A-4A, further comprising: determining a context for coding an index indicating a non-planar mode of the primary most probable mode list based on a coding tool used for the current block.

Aspect 6A - The method of Aspect 5A, wherein the coding tool is one of normal intra prediction, intra sub-partitions, or multiple reference lines.

Aspect 7A - The method of any of Aspects 1A-6A, wherein coding comprises decoding.

Aspect 8A - The method of any of Aspects 1A-7A, wherein coding comprises encoding.

Aspect 9A - A device for coding video data, the device comprising one or more means for performing the method of any of Aspects 1A-8A.

Aspect 10A - The device of Aspect 9A, wherein the one or more means comprise one or more processors implemented in circuitry.

Aspect 11A - The device of any of Aspects 9A and 10A, further comprising a memory to store the video data.

Aspect 12A - The device of any of Aspects 9A-11A, further comprising a display configured to display decoded video data.

Aspect 13A - The device of any of Aspects 9A-12A, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Aspect 14A - The device of any of Aspects 9A-13A, wherein the device comprises a video decoder.

Aspect 15A - The device of any of Aspects 9A-14A, wherein the device comprises a video encoder.

Aspect 16A - A computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to perform the method of any of Aspects 1A-8A.

Aspect 1B - A method of decoding video data, the method comprising: constructing a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; constructing a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; constructing a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determining a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decoding the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

Aspect 2B - The method of Aspect 1B, wherein determining the current intra-prediction mode further comprises: decoding an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list; and determining the current intra-prediction mode for the current block of video data based on the index.

Aspect 3B - The method of Aspect 2B, wherein decoding the index into the primary most probable mode list or the secondary most probable mode list comprises: determining a context for entropy decoding a first bin of the index based on a coding tool used for the current block; and entropy decoding the first bin of the index using the context.

Aspect 4B - The method of Aspect 3B, wherein the coding tool is one of a normal intra-prediction mode, an intra sub-partition mode, or a multiple reference line mode.

Aspect 5B - The method of Aspect 1B, wherein N is 22 and Np is 6.

Aspect 6B - The method of Aspect 1B, wherein constructing the general most probable mode list comprises: adding respective intra-prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and adding a plurality of intra-prediction modes offset from the respective intra-prediction modes from the respective neighboring blocks to the general most probable mode list.

Aspect 7B - The method of Aspect 6B, wherein adding the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list comprises: adding the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra-prediction modes being available and not already having been added to the general most probable mode list.

Aspect 8B - The method of Aspect 1B, wherein determining the current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list comprises: decoding a syntax element indicating a current most probable mode list from among the primary most probable mode list and the secondary most probable mode list; decoding an index into the current most probable mode list; and determining the current intra-prediction mode according to the index into the current most probable mode list.

Aspect 9B - The method of Aspect 1B, further comprising: displaying a picture that includes the decoded block of video data.

Aspect 10B - An apparatus configured to decode video data, the apparatus comprising: a memory configured to store a current block of video data; and one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

Aspect 11B - The apparatus of Aspect 10B, wherein to determine the current intra-prediction mode, the one or more processors are further configured to: decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list; and determine the current intra-prediction mode for the current block of video data based on the index.

Aspect 12B - The apparatus of Aspect 11B, wherein to decode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: determine a context for entropy decoding a first bin of the index based on a coding tool used for the current block; and entropy decode the first bin of the index using the context.

Aspect 13B - The apparatus of Aspect 12B, wherein the coding tool is one of a normal intra-prediction mode, an intra sub-partition mode, or a multiple reference line mode.

Aspect 14B - The apparatus of Aspect 10B, wherein N is 22 and Np is 6.

Aspect 15B - The apparatus of Aspect 10B, wherein to construct the general most probable mode list, the one or more processors are further configured to: add respective intra-prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add a plurality of intra-prediction modes offset from the respective intra-prediction modes from the respective neighboring blocks to the general most probable mode list.

Aspect 16B - The apparatus of Aspect 15B, wherein to add the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to: add the respective intra-prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra-prediction modes being available and not already having been added to the general most probable mode list.

Aspect 17B - The apparatus of Aspect 10B, wherein to determine the current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: decode a syntax element indicating a current most probable mode list from among the primary most probable mode list and the secondary most probable mode list; decode an index into the current most probable mode list; and determine the current intra-prediction mode according to the index into the current most probable mode list.

Aspect 18B - The apparatus of Aspect 10B, further comprising: a display configured to display a picture that includes the decoded block of video data.

Aspect 19B - An apparatus configured to decode video data, the apparatus comprising: means for constructing a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; means for constructing a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; means for constructing a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; means for determining a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and means for decoding the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

Aspect 20B - The apparatus of Aspect 19B, wherein the means for determining the current intra-prediction mode further comprises: means for decoding an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list; and means for determining the current intra-prediction mode for the current block of video data based on the index.

Aspect 21B - The apparatus of Aspect 20B, wherein the means for decoding the index into the primary most probable mode list or the secondary most probable mode list comprises: means for determining a context for entropy decoding a first bin of the index based on a coding tool used for the current block; and means for entropy decoding the first bin of the index using the context.

Aspect 22B - A non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors configured to decode video data to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra-prediction mode to produce a decoded block of video data.

Aspect 23B - The non-transitory computer-readable storage medium of Aspect 22B, wherein to determine the current intra-prediction mode, the instructions further cause the one or more processors to: decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list; and determine the current intra-prediction mode for the current block of video data based on the index.

Aspect 24B - The non-transitory computer-readable storage medium of Aspect 23B, wherein to decode the index into the primary most probable mode list or the secondary most probable mode list, the instructions further cause the one or more processors to: determine a context for entropy decoding a first bin of the index based on a coding tool used for the current block; and entropy decode the first bin of the index using the context.

Aspect 25B - An apparatus configured to encode video data, the apparatus comprising: a memory configured to store a current block of video data; and one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to: construct a general most probable mode list including N entries, wherein the N entries of the general most probable mode list are intra-prediction modes, and wherein a planar mode is the ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries in the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries in the general most probable mode list; determine a current intra-prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and encode the current block of video data using the current intra-prediction mode to produce an encoded block of video data.

Aspect 26B - The apparatus of Aspect 25B, wherein the one or more processors are further configured to: encode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra-prediction mode in the primary most probable mode list or the secondary most probable mode list.

Aspect 27B - The apparatus of Aspect 26B, wherein to encode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: determine a context for entropy encoding a first bin of the index based on a coding tool used for the current block; and entropy encode the first bin of the index using the context.

Aspect 28B: The apparatus of Aspect 27B, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.
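
As an illustrative sketch of Aspects 27B and 28B (not a normative implementation), the context used to entropy code the first bin of the most probable mode index can be selected from the coding tool of the current block. The enumeration and the one-context-per-tool mapping below are assumptions for the sketch only.

    enum class IntraCodingTool { kGeneral, kIsp, kMrl };  // the three tools named in Aspect 28B

    // Return the context index used when entropy coding the first bin of the
    // most probable mode index; one context per coding tool is only one
    // possible mapping.
    int MpmIndexFirstBinContext(IntraCodingTool tool) {
        switch (tool) {
            case IntraCodingTool::kGeneral: return 0;
            case IntraCodingTool::kIsp:     return 1;
            case IntraCodingTool::kMrl:     return 2;
        }
        return 0;  // defensive default
    }

The same mapping would be applied symmetrically on the encoder side (Aspect 27B) and the decoder side (Aspect 24B), with the remaining bins of the index coded, for example, in bypass mode.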

Aspect 29B: The apparatus of Aspect 25B, wherein N is 22 and Np is 6.

Aspect 30B: The apparatus of Aspect 25B, wherein, to construct the general most probable mode list, the one or more processors are further configured to: add respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.

Aspect 31B: The apparatus of Aspect 25B, wherein, to add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to: add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.
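
The neighbor-based construction of Aspects 30B and 31B can be sketched as follows: a mode is appended only if it is available and not already in the list, planar is kept as the ordinal first entry, and offset modes (here +-1 and +-2 of the angular neighbor modes) fill later positions. The mode numbering, offset pattern, and default fill are assumptions chosen for this sketch and are not asserted to match the disclosure exactly.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr uint8_t kPlanar = 0;
    constexpr uint8_t kDc = 1;
    constexpr uint8_t kFirstAngular = 2;   // assumed VVC-style angular range 2..66
    constexpr uint8_t kLastAngular = 66;

    // Append a mode if the list is not full and the mode is not already present.
    static void AddUnique(std::vector<uint8_t>& list, uint8_t mode, size_t n) {
        if (list.size() < n && std::find(list.begin(), list.end(), mode) == list.end())
            list.push_back(mode);
    }

    // Wrap an angular mode plus an offset back into the angular range.
    static uint8_t OffsetAngular(uint8_t mode, int offset) {
        const int span = kLastAngular - kFirstAngular + 1;
        const int wrapped = (mode - kFirstAngular + offset + span) % span;
        return static_cast<uint8_t>(kFirstAngular + wrapped);
    }

    // Build a general most probable mode list of size n: planar first, then the
    // available neighbor modes, then +-1/+-2 offsets of the angular neighbor
    // modes, then a default fill for any remaining slots.
    std::vector<uint8_t> BuildGeneralMpmList(const std::vector<uint8_t>& neighborModes, size_t n) {
        std::vector<uint8_t> list;
        list.reserve(n);
        list.push_back(kPlanar);                         // ordinal first entry
        for (uint8_t m : neighborModes)                  // only "available" neighbors are passed in
            AddUnique(list, m, n);
        for (int offset : {-1, 1, -2, 2})
            for (uint8_t m : neighborModes)
                if (m >= kFirstAngular)
                    AddUnique(list, OffsetAngular(m, offset), n);
        for (uint8_t d : {kDc, uint8_t{50}, uint8_t{18}, uint8_t{46}, uint8_t{54}})
            AddUnique(list, d, n);                       // assumed default modes to fill the list
        return list;
    }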

Aspect 32B: The apparatus of Aspect 25B, further comprising: a camera configured to capture a picture that includes the current block of video data.

Aspect 1C: A method of decoding video data, the method comprising: constructing a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list; constructing a primary most probable mode list from the first Np entries of the general most probable mode list, wherein Np is less than N; constructing a secondary most probable mode list from the remaining (N-Np) entries of the general most probable mode list; determining a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and decoding the current block of video data using the current intra prediction mode to generate a decoded block of video data.

Aspect 2C: The method of Aspect 1C, wherein determining the current intra prediction mode further comprises: decoding an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and determining the current intra prediction mode for the current block of video data based on the index.

Aspect 3C: The method of Aspect 2C, wherein decoding the index into the primary most probable mode list or the secondary most probable mode list comprises: determining a context for entropy decoding a first bin of the index based on a coding tool for the current block; and entropy decoding the first bin of the index using the context.

Aspect 4C: The method of Aspect 3C, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

Aspect 5C: The method of any of Aspects 1C-4C, wherein N is 22 and Np is 6.

Aspect 6C: The method of any of Aspects 1C-5C, wherein constructing the general most probable mode list comprises: adding respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and adding, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.

Aspect 7C: The method of Aspect 6C, wherein adding the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list comprises: adding the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.

Aspect 8C: The method of any of Aspects 1C-7C, wherein determining the current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list comprises: decoding a syntax element that indicates a current most probable mode list from among the primary most probable mode list or the secondary most probable mode list; decoding an index into the current most probable mode list; and determining the current intra prediction mode from the index into the current most probable mode list.
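
A decoder-side sketch of Aspect 8C follows, using hypothetical entropy-decoder entry points; the actual syntax element names and binarizations are not specified here. A syntax element first selects the current most probable mode list, an index into that list is then decoded, and the intra prediction mode is read out of the selected list.

    #include <array>
    #include <cstdint>

    // Hypothetical bitstream/entropy-decoder interface used only for this sketch.
    struct EntropyDecoder {
        bool DecodeListSelectionFlag();                              // primary vs. secondary list
        uint32_t DecodeMpmIndex(int maxValue, int firstBinContext);  // first bin context coded (Aspect 3C)
    };

    template <size_t Np, size_t Ns>
    uint8_t DecodeIntraModeFromMpmLists(EntropyDecoder& dec,
                                        const std::array<uint8_t, Np>& primaryList,
                                        const std::array<uint8_t, Ns>& secondaryList,
                                        int firstBinContext) {
        const bool usePrimary = dec.DecodeListSelectionFlag();   // which list is the "current" list
        if (usePrimary)
            return primaryList[dec.DecodeMpmIndex(static_cast<int>(Np) - 1, firstBinContext)];
        return secondaryList[dec.DecodeMpmIndex(static_cast<int>(Ns) - 1, firstBinContext)];
    }

The encoder would mirror this by encoding the list-selection syntax element and the index of the chosen non-planar mode, as recited in Aspects 26B and 27B.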

Aspect 9C: The method of any of Aspects 1C-8C, further comprising: displaying a picture that includes the decoded block of video data.

Aspect 10C: An apparatus configured to decode video data, the apparatus comprising: a memory configured to store a current block of video data; and one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to: construct a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries of the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries of the general most probable mode list; determine a current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and decode the current block of video data using the current intra prediction mode to generate a decoded block of video data.

Aspect 11C: The apparatus of Aspect 10C, wherein, to determine the current intra prediction mode, the one or more processors are further configured to: decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and determine the current intra prediction mode for the current block of video data based on the index.

Aspect 12C: The apparatus of Aspect 11C, wherein, to decode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: determine a context for entropy decoding a first bin of the index based on a coding tool for the current block of video data; and entropy decode the first bin of the index using the context.

Aspect 13C: The apparatus of Aspect 12C, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

Aspect 14C: The apparatus of any of Aspects 10C-13C, wherein N is 22 and Np is 6.

Aspect 15C: The apparatus of any of Aspects 10C-14C, wherein, to construct the general most probable mode list, the one or more processors are further configured to: add respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.

Aspect 16C: The apparatus of Aspect 15C, wherein, to add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to: add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.

Aspect 17C: The apparatus of any of Aspects 10C-16C, wherein, to determine the current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: decode a syntax element that indicates a current most probable mode list from among the primary most probable mode list or the secondary most probable mode list; decode an index into the current most probable mode list; and determine the current intra prediction mode from the index into the current most probable mode list.

Aspect 18C: The apparatus of any of Aspects 10C-17C, further comprising: a display configured to display a picture that includes the decoded block of video data.

Aspect 19C: An apparatus configured to encode video data, the apparatus comprising: a memory configured to store a current block of video data; and one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to: construct a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list; construct a primary most probable mode list from the first Np entries of the general most probable mode list, wherein Np is less than N; construct a secondary most probable mode list from the remaining (N-Np) entries of the general most probable mode list; determine a current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and encode the current block of video data using the current intra prediction mode to generate an encoded block of video data.

Aspect 20C: The apparatus of Aspect 19C, wherein the one or more processors are further configured to: encode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list.

Aspect 21C: The apparatus of Aspect 20C, wherein, to encode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to: determine a context for entropy encoding a first bin of the index based on a coding tool for the current block of video data; and entropy encode the first bin of the index using the context.

Aspect 22C: The apparatus of Aspect 21C, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

Aspect 23C: The apparatus of any of Aspects 19C-22C, wherein N is 22 and Np is 6.

Aspect 24C: The apparatus of any of Aspects 19C-23C, wherein, to construct the general most probable mode list, the one or more processors are further configured to: add respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and add, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.

Aspect 25C: The apparatus of any of Aspects 19C-24C, wherein, to add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to: add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.

Aspect 26C: The apparatus of any of Aspects 19C-25C, further comprising: a camera configured to capture a picture that includes the current block of video data.

It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.

In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, which includes any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

By way of example, and not limitation, such computer-readable storage media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Instructions may be executed by one or more processors, such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, the terms "processor" and "processing circuitry," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, the various units may be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

Various examples have been described. These and other examples are within the scope of the appended claims.

100: video encoding and decoding system
102: source device
104: video source
106: memory
108: output interface
110: computer-readable medium
112: storage device
114: file server
116: destination device
118: display device
120: memory
122: input interface
130: quadtree binary tree (QTBT) structure
132: coding tree unit (CTU)
200: video encoder
202: mode selection unit
204: residual generation unit
206: transform processing unit
208: quantization unit
210: inverse quantization unit
212: inverse transform processing unit
214: reconstruction unit
216: filter unit
218: decoded picture buffer (DPB)
220: entropy encoding unit
222: motion estimation unit
224: motion compensation unit
226: intra prediction unit
230: video data memory
300: video decoder
302: entropy decoding unit
304: prediction processing unit
306: inverse quantization unit
308: inverse transform processing unit
310: reconstruction unit
312: filter unit
314: decoded picture buffer (DPB)
316: motion compensation unit
318: intra prediction unit
320: CPB memory
350, 352, 354, 356, 358, 360, 370, 372, 374, 376, 378, 380: steps
400, 402: CU
500, 502, 504, 506, 508: steps
600, 602, 604, 606, 608: steps
A: above
AL: above-left
AR: above-right
BL: below-left
L: left

FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may perform the techniques of this disclosure.

FIG. 2 is a conceptual diagram illustrating an example of neighboring blocks used for deriving a most probable mode list.

FIG. 3 is a conceptual diagram illustrating another example of neighboring blocks used for deriving a most probable mode list.

FIGS. 4A and 4B are conceptual diagrams illustrating an example quadtree binary tree (QTBT) structure and a corresponding coding tree unit (CTU).

FIG. 5 is a block diagram illustrating an example video encoder that may perform the techniques of this disclosure.

FIG. 6 is a block diagram illustrating an example video decoder that may perform the techniques of this disclosure.

FIG. 7 is a flowchart illustrating an example method for encoding a current block in accordance with the techniques of this disclosure.

FIG. 8 is a flowchart illustrating an example method for decoding a current block in accordance with the techniques of this disclosure.

FIG. 9 is a flowchart illustrating another example method for encoding a current block in accordance with the techniques of this disclosure.

FIG. 10 is a flowchart illustrating another example method for decoding a current block in accordance with the techniques of this disclosure.

Domestic deposit information (please note the depositary institution, date, and number in order): None
Foreign deposit information (please note the deposit country, institution, date, and number in order): None

600: step
602: step
604: step
606: step
608: step

Claims (32)

1. A method of decoding video data, the method comprising:
constructing a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list;
constructing a primary most probable mode list from a first Np entries of the general most probable mode list, wherein Np is less than N;
constructing a secondary most probable mode list from a remaining (N-Np) entries of the general most probable mode list;
determining a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and
decoding the current block of video data using the current intra prediction mode to generate a decoded block of video data.

2. The method of claim 1, wherein determining the current intra prediction mode further comprises:
decoding an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and
determining the current intra prediction mode for the current block of video data based on the index.

3. The method of claim 2, wherein decoding the index into the primary most probable mode list or the secondary most probable mode list comprises:
determining a context for entropy decoding a first bin of the index based on a coding tool for the current block of video data; and
entropy decoding the first bin of the index using the context.

4. The method of claim 3, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

5. The method of claim 1, wherein N is 22 and Np is 6.

6. The method of claim 1, wherein constructing the general most probable mode list comprises:
adding respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and
adding, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.
7. The method of claim 6, wherein adding the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list comprises:
adding the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.

8. The method of claim 1, wherein determining the current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list comprises:
decoding a syntax element that indicates a current most probable mode list from among the primary most probable mode list or the secondary most probable mode list;
decoding an index into the current most probable mode list; and
determining the current intra prediction mode from the index into the current most probable mode list.

9. The method of claim 1, further comprising:
displaying a picture that includes the decoded block of video data.

10. An apparatus configured to decode video data, the apparatus comprising:
a memory configured to store a current block of video data; and
one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to:
construct a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list;
construct a primary most probable mode list from a first Np entries of the general most probable mode list, wherein Np is less than N;
construct a secondary most probable mode list from a remaining (N-Np) entries of the general most probable mode list;
determine a current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and
decode the current block of video data using the current intra prediction mode to generate a decoded block of video data.
11. The apparatus of claim 10, wherein, to determine the current intra prediction mode, the one or more processors are further configured to:
decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and
determine the current intra prediction mode for the current block of video data based on the index.

12. The apparatus of claim 11, wherein, to decode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to:
determine a context for entropy decoding a first bin of the index based on a coding tool for the current block of video data; and
entropy decode the first bin of the index using the context.

13. The apparatus of claim 12, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

14. The apparatus of claim 10, wherein N is 22 and Np is 6.

15. The apparatus of claim 10, wherein, to construct the general most probable mode list, the one or more processors are further configured to:
add respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and
add, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.

16. The apparatus of claim 15, wherein, to add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to:
add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.
17. The apparatus of claim 10, wherein, to determine the current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to:
decode a syntax element that indicates a current most probable mode list from among the primary most probable mode list or the secondary most probable mode list;
decode an index into the current most probable mode list; and
determine the current intra prediction mode from the index into the current most probable mode list.

18. The apparatus of claim 10, further comprising:
a display configured to display a picture that includes the decoded block of video data.

19. An apparatus configured to decode video data, the apparatus comprising:
means for constructing a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list;
means for constructing a primary most probable mode list from a first Np entries of the general most probable mode list, wherein Np is less than N;
means for constructing a secondary most probable mode list from a remaining (N-Np) entries of the general most probable mode list;
means for determining a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and
means for decoding the current block of video data using the current intra prediction mode to generate a decoded block of video data.

20. The apparatus of claim 19, wherein the means for determining the current intra prediction mode further comprises:
means for decoding an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and
means for determining the current intra prediction mode for the current block of video data based on the index.

21. The apparatus of claim 20, wherein the means for decoding the index into the primary most probable mode list or the secondary most probable mode list comprises:
means for determining a context for entropy decoding a first bin of the index based on a coding tool for the current block of video data; and
means for entropy decoding the first bin of the index using the context.
22. A non-transitory computer-readable storage medium storing instructions that, when executed, cause one or more processors configured to decode video data to:
construct a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list;
construct a primary most probable mode list from a first Np entries of the general most probable mode list, wherein Np is less than N;
construct a secondary most probable mode list from a remaining (N-Np) entries of the general most probable mode list;
determine a current intra prediction mode for a current block of video data using the primary most probable mode list or the secondary most probable mode list; and
decode the current block of video data using the current intra prediction mode to generate a decoded block of video data.

23. The non-transitory computer-readable storage medium of claim 22, wherein, to determine the current intra prediction mode, the instructions further cause the one or more processors to:
decode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list; and
determine the current intra prediction mode for the current block of video data based on the index.

24. The non-transitory computer-readable storage medium of claim 23, wherein, to decode the index into the primary most probable mode list or the secondary most probable mode list, the instructions further cause the one or more processors to:
determine a context for entropy decoding a first bin of the index based on a coding tool for the current block of video data; and
entropy decode the first bin of the index using the context.
25. An apparatus configured to encode video data, the apparatus comprising:
a memory configured to store a current block of video data; and
one or more processors implemented in circuitry and in communication with the memory, the one or more processors configured to:
construct a general most probable mode list that includes N entries, wherein the N entries of the general most probable mode list are intra prediction modes, and wherein a planar mode is an ordinal first entry in the general most probable mode list;
construct a primary most probable mode list from a first Np entries of the general most probable mode list, wherein Np is less than N;
construct a secondary most probable mode list from a remaining (N-Np) entries of the general most probable mode list;
determine a current intra prediction mode for the current block of video data using the primary most probable mode list or the secondary most probable mode list; and
encode the current block of video data using the current intra prediction mode to generate an encoded block of video data.

26. The apparatus of claim 25, wherein the one or more processors are further configured to:
encode an index into the primary most probable mode list or the secondary most probable mode list, wherein the index indicates a non-planar intra prediction mode in the primary most probable mode list or the secondary most probable mode list.

27. The apparatus of claim 26, wherein, to encode the index into the primary most probable mode list or the secondary most probable mode list, the one or more processors are further configured to:
determine a context for entropy encoding a first bin of the index based on a coding tool for the current block of video data; and
entropy encode the first bin of the index using the context.

28. The apparatus of claim 27, wherein the coding tool is one of a general intra prediction mode, an intra sub-partition mode, or a multiple reference line mode.

29. The apparatus of claim 25, wherein N is 22 and Np is 6.

30. The apparatus of claim 25, wherein, to construct the general most probable mode list, the one or more processors are further configured to:
add respective intra prediction modes from respective neighboring blocks of the current block of video data to the general most probable mode list; and
add, to the general most probable mode list, a plurality of intra prediction modes offset from the respective intra prediction modes from the respective neighboring blocks.
31. The apparatus of claim 25, wherein, to add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list, the one or more processors are further configured to:
add the respective intra prediction modes from the respective neighboring blocks of the current block of video data to the general most probable mode list based on the respective intra prediction modes being available and not already added to the general most probable mode list.

32. The apparatus of claim 25, further comprising:
a camera configured to capture a picture that includes the current block of video data.
TW110147632A 2020-12-28 2021-12-20 Most probable modes for intra prediction for video coding TW202232954A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063131115P 2020-12-28 2020-12-28
US63/131,115 2020-12-28
US17/456,080 2021-11-22
US17/456,080 US11863752B2 (en) 2020-12-28 2021-11-22 Most probable modes for intra prediction for video coding

Publications (1)

Publication Number Publication Date
TW202232954A true TW202232954A (en) 2022-08-16

Family

ID=79927367

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110147632A TW202232954A (en) 2020-12-28 2021-12-20 Most probable modes for intra prediction for video coding

Country Status (5)

Country Link
EP (1) EP4268454A1 (en)
JP (1) JP2024501138A (en)
KR (1) KR20230125781A (en)
TW (1) TW202232954A (en)
WO (1) WO2022146583A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118301354A (en) * 2023-01-04 2024-07-05 维沃移动通信有限公司 List construction method and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3422716A1 (en) * 2017-06-26 2019-01-02 Thomson Licensing Method and apparatus for most probable mode (mpm) sorting and signaling in video encoding and decoding
WO2020224660A1 (en) * 2019-05-09 2020-11-12 Beijing Bytedance Network Technology Co., Ltd. Most probable mode list construction for screen content coding

Also Published As

Publication number Publication date
WO2022146583A1 (en) 2022-07-07
JP2024501138A (en) 2024-01-11
EP4268454A1 (en) 2023-11-01
KR20230125781A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
TW202218422A (en) Multiple neural network models for filtering during video coding
CN114868387B (en) Chroma transform skipping and joint chroma coding enablement for blocks in video coding
WO2021041153A1 (en) Chroma quantization parameter (qp) derivation for video coding
CN114208199A (en) Chroma intra prediction unit for video coding
WO2021133731A1 (en) Inferring intra coding mode in bdpcm coded block
KR20230038709A (en) Multiple adaptive loop filter sets
WO2021061616A1 (en) Inter-layer reference picture signaling in video coding
TW202228437A (en) Decoder side intra mode derivation for most probable mode list construction in video coding
US11729381B2 (en) Deblocking filter parameter signaling
US20210176468A1 (en) Residual coding selection and low-level signaling based on quantization parameter
WO2021207232A1 (en) Signaling number of subblock merge candidates in video coding
TW202147855A (en) Adaptive scaling list control for video coding
EP4088468A1 (en) Multiple transform set signaling for video coding
CN114830657A (en) Low frequency non-separable transforms (LFNST) with reduced zeroing in video coding
US20220400257A1 (en) Motion vector candidate construction for geometric partitioning mode in video coding
US11863752B2 (en) Most probable modes for intra prediction for video coding
TW202232954A (en) Most probable modes for intra prediction for video coding
TW202228441A (en) Multiple hypothesis prediction for video coding
KR20220073755A (en) Coding scheme signaling for residual values in transform skip for video coding
WO2021061618A1 (en) Signaling number of sub-pictures in high-level syntax for video coding
TWI853918B (en) Intra block copy merging data syntax for video coding
TW202215847A (en) Extended low-frequency non-separable transform (lfnst) designs with worst-case complexity handling
TW202234887A (en) Sign prediction for multiple color components in video coding
TW202345598A (en) Methods for adaptive signaling of maximum number of merge candidates in multiple hypothesis prediction
CN116724551A (en) Most probable mode of intra prediction for video coding