TW202404371A - Neural network based filtering process for multiple color components in video coding - Google Patents

Neural network based filtering process for multiple color components in video coding

Info

Publication number
TW202404371A
Authority
TW
Taiwan
Prior art keywords
block
filtering
color component
syntax element
video
Prior art date
Application number
TW112121642A
Other languages
Chinese (zh)
Inventor
王洪濤
賽謬詹姆斯 伊戴
莫哈美德塞伊德 克班
瑪塔 卡克基維克茲
Original Assignee
Qualcomm Incorporated
Priority date
Filing date
Publication date
Priority claimed from US 18/331,674 (published as US 2024/0015312 A1)
Application filed by Qualcomm Incorporated
Publication of TW202404371A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

A method of processing video data includes receiving a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component, applying an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block, and storing the first filtered block for a coding unit (CU).

Description

Neural network based filtering process for multiple color components in video coding

This patent application claims priority to U.S. Provisional Application No. 63/367,713, filed July 5, 2022, the entire content of which is incorporated herein by reference.

This disclosure relates to video encoding and video decoding.

Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10, Advanced Video Coding (AVC), ITU-T H.265/High Efficiency Video Coding (HEVC), ITU-T H.266/Versatile Video Coding (VVC), extensions of such standards, as well as proprietary video codecs/formats such as AOMedia Video 1 (AV1) developed by the Alliance for Open Media. By implementing these video coding techniques, video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently.

Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video picture or a portion of a video picture) may be partitioned into video blocks, which may also be referred to as coding tree units (CTUs), coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture, or temporal prediction with respect to reference samples in other reference pictures. Pictures may be referred to as frames, and reference pictures may be referred to as reference frames.

In general, this disclosure describes techniques for a filtering process applied to distorted pictures. The filtering process may be based on machine learning techniques, such as neural network techniques. The example techniques may be used in the context of advanced video codecs, such as codecs conforming to the Versatile Video Coding (VVC) standard, extensions of VVC, next-generation video coding standards, or any other video codec.

As described in more detail, this disclosure describes examples of neural network based filtering in which color components of the video data (e.g., the chroma components) share the same neural network mode. For example, a video encoder and a video decoder may select the same neural network mode for both chroma components, rather than selecting different neural network modes for the two chroma components. In this way, the video encoder and video decoder may perform fewer filter selections and/or filter executions than would be the case if different neural network modes were selected for the two chroma components, which may reduce complexity and processing time.

In one example, this disclosure describes a method of processing video data, the method including: receiving a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; applying an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and storing the first filtered block for a coding unit (CU).

In one example, this disclosure describes a device for processing video data, the device including: a memory configured to store the video data; and one or more processors implemented in circuitry and coupled to the memory, the one or more processors configured to: receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and store the first filtered block for a coding unit (CU).

In one example, this disclosure describes a computer-readable storage medium storing instructions that, when executed, cause one or more processors to: receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and store the first filtered block for a coding unit (CU).

The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.

Video coding techniques include filtering to reduce coding artifacts. In some examples, the filtering technique may be a machine learning based filtering technique. For example, the machine learning technique may be performed with a neural network. For example, in neural network (NN) based filtering, the reconstructed samples are the input, the intermediate output is a set of residual samples, and those residual samples are added back to the input to refine the input samples.
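
The residual-refinement structure described above can be sketched as follows. This is a minimal illustration only; the `nn_residual_model` callable and the array shapes are hypothetical placeholders, not the disclosure's actual network.

```python
import numpy as np

def nn_filter_block(reconstructed, nn_residual_model, bit_depth=10):
    """Refine a reconstructed block with an NN filter that predicts residual samples.

    reconstructed: 2-D array of reconstructed sample values.
    nn_residual_model: hypothetical callable mapping the reconstructed block to a
        block of residual corrections of the same shape.
    """
    residual = nn_residual_model(reconstructed)        # intermediate output of the NN
    refined = reconstructed + residual                 # add the residual back to the input
    return np.clip(refined, 0, (1 << bit_depth) - 1)   # keep samples in the valid range
```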

Video data may include multiple color components, such as red, green, and blue (RGB), or a luma component and two chroma components (YCbCr). In some cases, there may be a different NN based filter for each color component. However, having a different NN based filter for each color component may be computation and processing time intensive.

In some cases, it may be possible to use a single neural network model with different NN model modes. That is, in such cases, a first filtering mode of the NN model is used to filter a first color component, and a second filtering mode of the NN model is used to filter a second color component. The video encoder and/or video decoder may select a different mode for each of the two chroma components, which may require performing the filtering twice on the decoder side. For example, in such cases, the video encoder and video decoder may execute the NN model in the first filtering mode to filter a first block of the first color component, and then execute the NN model a second time, in the second filtering mode, to filter a second block of the second color component. Such double execution of the NN model for filtering is computationally intensive.

In accordance with one or more examples described in this disclosure, because of the similarity between different color components (e.g., the similarity between the two chroma components), the video encoder and video decoder may utilize a filtering control mechanism in which two or more color components (e.g., the chroma components) share the same neural network mode selection. For example, the video encoder and video decoder may apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component, and apply the same instance of the NN model, in the defined filtering mode, to a second block of the second color component. In addition to the chroma components (e.g., the first color component and the second color component), an input to the NN model may be the luma component, but the output of the NN model may be the filtered chroma components and not a filtered luma component (although in some examples a filtered luma component may be output).

To ensure that the NN model is applied with the same filtering mode to both the first block of the first color component and the second block of the second color component (e.g., where NN model filtering is enabled for both color components), the video encoder may signal, and the video decoder may receive, a syntax element that defines the filtering mode of the NN model for both the first color component and the second color component. In this example, the video decoder may set the filtering mode for both the first color component and the second color component to the same filtering mode.
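
A decoder-side sketch of this shared-mode control is shown below. The parsing helpers (`read_flag`, `read_uint`) and the `nn_chroma_filter` callable are assumptions made for illustration; they are not the actual bitstream syntax or NN model of the disclosure.

```python
def decode_chroma_nn_filtering(reader, luma_block, cb_block, cr_block, nn_chroma_filter):
    """Apply one NN model instance, in one signaled filtering mode, to both chroma blocks.

    reader.read_flag() / reader.read_uint(n): hypothetical bitstream parsing helpers.
    nn_chroma_filter(mode, luma, cb, cr): hypothetical NN model returning
        (filtered_cb, filtered_cr).
    """
    if not reader.read_flag():            # NN filtering disabled for this block
        return cb_block, cr_block

    # A single syntax element defines the filtering mode for BOTH chroma components.
    filtering_mode = reader.read_uint(2)

    # One execution of the NN model instance produces both filtered chroma blocks;
    # the luma block is an additional input, but only filtered chroma is output here.
    return nn_chroma_filter(filtering_mode, luma_block, cb_block, cr_block)
```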

In this way, the video encoder and video decoder may use one execution instance of the NN model to generate the filtered blocks. With the example techniques described in this disclosure, computational complexity may be reduced, without reducing rate-distortion performance, compared to techniques in which different chroma components may use different neural network modes.

FIG. 1 is a block diagram illustrating an example video encoding and decoding system 100 that may perform the techniques of this disclosure. The techniques of this disclosure are generally directed to coding (encoding and/or decoding) video data. In general, video data includes any data for processing a video. Thus, video data may include raw, unencoded video, encoded video, decoded (e.g., reconstructed) video, and video metadata, such as signaling data.

As shown in FIG. 1, system 100 includes, in this example, a source device 102 that provides encoded video data to be decoded and displayed by a destination device 116. In particular, source device 102 provides the video data to destination device 116 via a computer-readable medium 110. Source device 102 and destination device 116 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, mobile devices, tablet computers, set-top boxes, telephone handsets such as smartphones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, broadcast receiver devices, or the like. In some cases, source device 102 and destination device 116 may be equipped for wireless communication, and may thus be referred to as wireless communication devices.

In the example of FIG. 1, source device 102 includes video source 104, memory 106, video encoder 200, and output interface 108. Destination device 116 includes input interface 122, video decoder 300, memory 120, and display device 118. In accordance with this disclosure, video encoder 200 of source device 102 and video decoder 300 of destination device 116 may be configured to apply the techniques for a machine learning (e.g., neural network) based filtering process for multiple color components in video coding. Thus, source device 102 represents an example of a video encoding device, while destination device 116 represents an example of a video decoding device. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 102 may receive video data from an external video source, such as an external camera. Likewise, destination device 116 may interface with an external display device, rather than include an integrated display device.

System 100 as shown in FIG. 1 is merely one example. In general, any digital video encoding and/or decoding device may perform the techniques for a machine learning (e.g., neural network) based filtering process for multiple color components in video coding. Source device 102 and destination device 116 are merely examples of such coding devices in which source device 102 generates coded video data for transmission to destination device 116. This disclosure refers to a "coding" device as a device that performs coding (encoding and/or decoding) of data. Thus, video encoder 200 and video decoder 300 represent examples of coding devices, in particular, a video encoder and a video decoder, respectively. In some examples, source device 102 and destination device 116 may operate in a substantially symmetrical manner such that each of source device 102 and destination device 116 includes video encoding and decoding components. Hence, system 100 may support one-way or two-way video transmission between source device 102 and destination device 116, e.g., for video streaming, video playback, video broadcasting, or video telephony.

In general, video source 104 represents a source of video data (i.e., raw, unencoded video data) and provides a sequential series of pictures (also referred to as "frames") of the video data to video encoder 200, which encodes data for the pictures. Video source 104 of source device 102 may include a video capture device, such as a video camera, a video archive containing previously captured raw video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 104 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In each case, video encoder 200 encodes the captured, pre-captured, or computer-generated video data. Video encoder 200 may rearrange the pictures from the received order (sometimes referred to as "display order") into a coding order for coding. Video encoder 200 may generate a bitstream including the encoded video data. Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116.

Memory 106 of source device 102 and memory 120 of destination device 116 represent general purpose memories. In some examples, memories 106, 120 may store raw video data, e.g., raw video from video source 104 and raw, decoded video data from video decoder 300. Additionally or alternatively, memories 106, 120 may store software instructions executable by, e.g., video encoder 200 and video decoder 300, respectively. Although memory 106 and memory 120 are shown separately from video encoder 200 and video decoder 300 in this example, it should be understood that video encoder 200 and video decoder 300 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memories 106, 120 may store encoded video data, e.g., output from video encoder 200 and input to video decoder 300. In some examples, portions of memories 106, 120 may be allocated as one or more video buffers, e.g., to store raw, decoded, and/or encoded video data.

Computer-readable medium 110 may represent any type of medium or device capable of transporting the encoded video data from source device 102 to destination device 116. In one example, computer-readable medium 110 represents a communication medium that enables source device 102 to transmit encoded video data directly to destination device 116 in real time, e.g., via a radio frequency network or a computer-based network. Output interface 108 may modulate a transmission signal including the encoded video data, and input interface 122 may demodulate the received transmission signal, according to a communication standard, such as a wireless communication protocol. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 102 to destination device 116.

In some examples, source device 102 may output encoded data from output interface 108 to storage device 112. Similarly, destination device 116 may access encoded data from storage device 112 via input interface 122. Storage device 112 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.

In some examples, source device 102 may output encoded video data to file server 114 or another intermediate storage device that may store the encoded video data generated by source device 102. Destination device 116 may access stored video data from file server 114 via streaming or download.

File server 114 may be any type of server device capable of storing encoded video data and transmitting that encoded video data to destination device 116. File server 114 may represent a web server (e.g., for a website), a server configured to provide a file transfer protocol service (such as the File Transfer Protocol (FTP) or the File Delivery over Unidirectional Transport (FLUTE) protocol), a content delivery network (CDN) device, a hypertext transfer protocol (HTTP) server, a Multimedia Broadcast Multicast Services (MBMS) or Enhanced MBMS (eMBMS) server, and/or a network attached storage (NAS) device. File server 114 may, additionally or alternatively, implement one or more HTTP streaming protocols, such as Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), Real Time Streaming Protocol (RTSP), HTTP Dynamic Streaming, or the like.

Destination device 116 may access encoded video data from file server 114 through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., a digital subscriber line (DSL), cable modem, etc.), or a combination of both, suitable for accessing encoded video data stored on file server 114. Input interface 122 may be configured to operate according to any one or more of the various protocols discussed above for retrieving or receiving media data from file server 114, or other such protocols for retrieving media data.

Output interface 108 and input interface 122 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards), wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where output interface 108 and input interface 122 comprise wireless components, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution), LTE Advanced, 5G, or the like. In some examples where output interface 108 comprises a wireless transmitter, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (e.g., ZigBee™), a Bluetooth™ standard, or the like. In some examples, source device 102 and/or destination device 116 may include respective system-on-a-chip (SoC) devices. For example, source device 102 may include an SoC device to perform the functionality attributed to video encoder 200 and/or output interface 108, and destination device 116 may include an SoC device to perform the functionality attributed to video decoder 300 and/or input interface 122.

The techniques of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions (such as dynamic adaptive streaming over HTTP (DASH)), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications.

Input interface 122 of destination device 116 receives an encoded video bitstream from computer-readable medium 110 (e.g., the communication medium, storage device 112, file server 114, or the like). The encoded video bitstream may include signaling information defined by video encoder 200, which is also used by video decoder 300, such as syntax elements having values that describe characteristics and/or processing of video blocks or other coded units (e.g., slices, pictures, groups of pictures, sequences, or the like). Display device 118 displays decoded pictures of the decoded video data to a user. Display device 118 may represent any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

Although not shown in FIG. 1, in some examples, video encoder 200 and video decoder 300 may each be integrated with an audio encoder and/or an audio decoder, and may include appropriate MUX-DEMUX units, or other hardware and/or software, to handle multiplexed streams including both audio and video in a common data stream.

Video encoder 200 and video decoder 300 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 200 and video decoder 300 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including video encoder 200 and/or video decoder 300 may comprise an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.

Video encoder 200 and video decoder 300 may operate according to a video coding standard, such as ITU-T H.265 (also referred to as High Efficiency Video Coding (HEVC)) or extensions thereto, such as the multi-view and/or scalable video coding extensions. Alternatively, video encoder 200 and video decoder 300 may operate according to other proprietary or industry standards, such as ITU-T H.266 (also referred to as Versatile Video Coding (VVC)). In other examples, video encoder 200 and video decoder 300 may operate according to a proprietary video codec/format, such as AOMedia Video 1 (AV1), extensions of AV1, and/or successor versions of AV1 (e.g., AV2). In other examples, video encoder 200 and video decoder 300 may operate according to other proprietary formats or industry standards. The techniques of this disclosure, however, are not limited to any particular coding standard or format. In general, video encoder 200 and video decoder 300 may be configured to perform the techniques of this disclosure in conjunction with any video coding technique that uses filtering (e.g., machine learning (e.g., neural network) based filtering of video data).

In general, video encoder 200 and video decoder 300 may perform block-based coding of pictures. The term "block" generally refers to a structure including data to be processed (e.g., encoded, decoded, or otherwise used in the encoding and/or decoding process). For example, a block may include a two-dimensional matrix of samples of luminance and/or chrominance data. In general, video encoder 200 and video decoder 300 may code video data represented in a YUV (e.g., Y, Cb, Cr) format. That is, rather than coding red, green, and blue (RGB) data for the samples of a picture, video encoder 200 and video decoder 300 may code luminance and chrominance components, where the chrominance components may include both red-hue and blue-hue chrominance components. In some examples, video encoder 200 converts received RGB formatted data to a YUV representation prior to encoding, and video decoder 300 converts the YUV representation to the RGB format. Alternatively, pre- and post-processing units (not shown) may perform these conversions.
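
For example, such a pre-processing unit might convert 8-bit RGB samples to a YCbCr representation as sketched below. The BT.601 full-range coefficients used here are an assumption for illustration only; the disclosure does not mandate a particular conversion matrix.

```python
import numpy as np

def rgb_to_ycbcr_bt601(rgb):
    """Convert an HxWx3 array of 8-bit RGB samples to YCbCr (full range, BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    ycbcr = np.stack([y, cb, cr], axis=-1)
    return np.clip(ycbcr.round(), 0, 255).astype(np.uint8)
```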

This disclosure may generally refer to coding (e.g., encoding and decoding) of pictures to include the process of encoding or decoding data of the picture. Similarly, this disclosure may refer to coding of blocks of a picture to include the process of encoding or decoding data for the blocks, e.g., prediction and/or residual coding. An encoded video bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes) and partitioning of pictures into blocks. Thus, references to coding a picture or a block should generally be understood as coding values for the syntax elements forming the picture or block.

HEVC defines various blocks, including coding units (CUs), prediction units (PUs), and transform units (TUs). According to HEVC, a video coder (such as video encoder 200) partitions a coding tree unit (CTU) into CUs according to a quadtree structure. That is, the video coder partitions CTUs and CUs into four equal, non-overlapping squares, and each node of the quadtree has either zero or four child nodes. Nodes without child nodes may be referred to as "leaf nodes," and CUs of such leaf nodes may include one or more PUs and/or one or more TUs. The video coder may further partition PUs and TUs. For example, in HEVC, a residual quadtree (RQT) represents the partitioning of TUs. In HEVC, PUs represent inter-prediction data, while TUs represent residual data. CUs that are intra-predicted include intra-prediction information, such as an intra-mode indication.

As another example, video encoder 200 and video decoder 300 may be configured to operate according to VVC. According to VVC, a video coder (such as video encoder 200) partitions a picture into a plurality of coding tree units (CTUs). Video encoder 200 may partition a CTU according to a tree structure, such as a quadtree-binary tree (QTBT) structure or a multi-type tree (MTT) structure. The QTBT structure removes the concept of multiple partition types, such as the separation between the CUs, PUs, and TUs of HEVC. A QTBT structure includes two levels: a first level partitioned according to quadtree partitioning, and a second level partitioned according to binary tree partitioning. A root node of the QTBT structure corresponds to a CTU. Leaf nodes of the binary trees correspond to coding units (CUs).
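
A simplified recursive quadtree split, ignoring the binary-tree second level and any rate-distortion decision process, can be sketched as follows. The `should_split` callback is a hypothetical stand-in for whatever split decision the encoder makes or the decoder parses.

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively split a square region into four equal, non-overlapping quadrants.

    Returns a list of (x, y, size) leaf blocks, i.e., the leaf nodes of this toy quadtree.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
        leaves.extend(quadtree_partition(x + dx, y + dy, half, min_size, should_split))
    return leaves

# Example: split a 128x128 region whenever the block is larger than 32x32 samples.
leaf_blocks = quadtree_partition(0, 0, 128, 8, lambda x, y, s: s > 32)
```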

In an MTT partitioning structure, blocks may be partitioned using a quadtree (QT) partition, a binary tree (BT) partition, and one or more types of triple tree (TT) (also called ternary tree (TT)) partitions. A triple or ternary tree partition is a partition in which a block is split into three sub-blocks. In some examples, a triple or ternary tree partition divides a block into three sub-blocks without dividing the original block through the center. The partitioning types in MTT (e.g., QT, BT, and TT) may be symmetrical or asymmetrical.

When operating according to the AV1 codec, video encoder 200 and video decoder 300 may be configured to code video data in blocks. In AV1, the largest coding block that can be processed is called a superblock. In AV1, a superblock can be either 128x128 luma samples or 64x64 luma samples. However, in successor video coding formats (e.g., AV2), a superblock may be defined by different (e.g., larger) luma sample sizes. In some examples, a superblock is the top level of a block quadtree. Video encoder 200 may further partition a superblock into smaller coding blocks. Video encoder 200 may partition a superblock and other coding blocks into smaller blocks using square or non-square partitioning. Non-square blocks may include N/2xN, NxN/2, N/4xN, and NxN/4 blocks. Video encoder 200 and video decoder 300 may perform separate prediction and transform processes on each of the coding blocks.

AV1 also defines a tile of video data. A tile is a rectangular array of superblocks that may be coded independently of other tiles. That is, video encoder 200 and video decoder 300 may encode and decode, respectively, coding blocks within a tile without using video data from other tiles. However, video encoder 200 and video decoder 300 may perform filtering across tile boundaries. Tiles may be uniform or non-uniform in size. Tile-based coding may enable parallel processing and/or multi-threading for encoder and decoder implementations.

In some examples, video encoder 200 and video decoder 300 may use a single QTBT or MTT structure to represent each of the luminance and chrominance components, while in other examples, video encoder 200 and video decoder 300 may use two or more QTBT or MTT structures, such as one QTBT/MTT structure for the luminance component and another QTBT/MTT structure for both chrominance components (or two QTBT/MTT structures for the respective chrominance components).

Video encoder 200 and video decoder 300 may be configured to use quadtree partitioning, QTBT partitioning, MTT partitioning, superblock partitioning, or other partitioning structures.

In some examples, a CTU includes a coding tree block (CTB) of luma samples, two corresponding CTBs of chroma samples of a picture that has three sample arrays, or a CTB of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A CTB may be an NxN block of samples for some value of N such that the division of a component into CTBs is a partitioning. A component is an array or a single sample from one of the three arrays (luma and two chroma) that compose a picture in a 4:2:0, 4:2:2, or 4:4:4 color format, or the array or a single sample of the array that composes a picture in monochrome format. In some examples, a coding block is an MxN block of samples for some values of M and N such that a division of a CTB into coding blocks is a partitioning.

The blocks (e.g., CTUs or CUs) may be grouped in various ways in a picture. As one example, a brick may refer to a rectangular region of CTU rows within a particular tile in a picture. A tile may be a rectangular region of CTUs within a particular tile column and a particular tile row in a picture. A tile column refers to a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements (e.g., such as in a picture parameter set). A tile row refers to a rectangular region of CTUs having a height specified by syntax elements (e.g., such as in a picture parameter set) and a width equal to the width of the picture.

In some examples, a tile may be partitioned into multiple bricks, each of which may include one or more CTU rows within the tile. A tile that is not partitioned into multiple bricks may also be referred to as a brick. However, a brick that is a true subset of a tile may not be referred to as a tile. The bricks in a picture may also be arranged in slices. A slice may be an integer number of bricks of a picture that may be exclusively contained in a single network abstraction layer (NAL) unit. In some examples, a slice includes either a number of complete tiles or only a consecutive sequence of complete bricks of one tile.

This disclosure may use "NxN" and "N by N" interchangeably to refer to the sample dimensions of a block (such as a CU or other video block) in terms of vertical and horizontal dimensions, e.g., 16x16 samples or 16 by 16 samples. In general, a 16x16 CU will have 16 samples in the vertical direction (y = 16) and 16 samples in the horizontal direction (x = 16). Likewise, an NxN CU generally has N samples in the vertical direction and N samples in the horizontal direction, where N represents a nonnegative integer value. The samples in a CU may be arranged in rows and columns. Moreover, a CU need not necessarily have the same number of samples in the horizontal direction as in the vertical direction. For example, a CU may comprise NxM samples, where M is not necessarily equal to N.

Video encoder 200 encodes video data for CUs representing prediction and/or residual information, and other information. The prediction information indicates how the CU is to be predicted in order to form a prediction block for the CU. The residual information generally represents sample-by-sample differences between the samples of the CU prior to encoding and the prediction block.

To predict a CU, video encoder 200 may generally form a prediction block for the CU through inter-prediction or intra-prediction. Inter-prediction generally refers to predicting the CU from data of a previously coded picture, whereas intra-prediction generally refers to predicting the CU from previously coded data of the same picture. To perform inter-prediction, video encoder 200 may use one or more motion vectors to generate the prediction block. Video encoder 200 may generally perform a motion search to identify a reference block that closely matches the CU, e.g., in terms of differences between the CU and the reference block. Video encoder 200 may calculate a difference metric using a sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared differences (MSD), or other such difference calculations to determine whether a reference block closely matches the current CU. In some examples, video encoder 200 may predict the current CU using uni-directional prediction or bi-directional prediction.
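
The block-matching cost functions named above are straightforward to express; a sketch of SAD and SSD over two equally sized sample blocks (assuming NumPy integer arrays) is shown below.

```python
import numpy as np

def sad(current_block, reference_block):
    """Sum of absolute differences between the current block and a candidate reference block."""
    diff = current_block.astype(np.int64) - reference_block.astype(np.int64)
    return int(np.abs(diff).sum())

def ssd(current_block, reference_block):
    """Sum of squared differences; penalizes large per-sample errors more heavily than SAD."""
    diff = current_block.astype(np.int64) - reference_block.astype(np.int64)
    return int((diff * diff).sum())
```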

Some examples of VVC also provide an affine motion compensation mode, which may be considered an inter-prediction mode. In affine motion compensation mode, video encoder 200 may determine two or more motion vectors that represent non-translational motion, such as zoom in or out, rotation, perspective motion, or other irregular motion types.

To perform intra-prediction, video encoder 200 may select an intra-prediction mode to generate the prediction block. Some examples of VVC provide sixty-seven intra-prediction modes, including various directional modes, as well as a planar mode and a DC mode. In general, video encoder 200 selects an intra-prediction mode that describes neighboring samples to a current block (e.g., a block of a CU) from which to predict samples of the current block. Such samples may generally be above, above and to the left, or to the left of the current block in the same picture as the current block, assuming video encoder 200 codes CTUs and CUs in raster scan order (left to right, top to bottom).
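
As one simplified illustration of intra-prediction, the DC mode fills the prediction block with the average of the already reconstructed neighboring samples above and to the left of the current block. Boundary handling and the exact averaging rules of VVC are omitted; this is only a sketch.

```python
import numpy as np

def dc_intra_prediction(top_neighbors, left_neighbors, block_size):
    """Predict an NxN block as the mean of the reconstructed top and left neighbor samples."""
    neighbors = np.concatenate([np.asarray(top_neighbors), np.asarray(left_neighbors)])
    dc_value = int(round(neighbors.mean()))
    return np.full((block_size, block_size), dc_value, dtype=np.int32)
```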

Video encoder 200 encodes data representing the prediction mode for the current block. For example, for inter-prediction modes, video encoder 200 may encode data representing which of the various available inter-prediction modes is used, as well as motion information for the corresponding mode. For uni-directional or bi-directional inter-prediction, for example, video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. Video encoder 200 may use similar modes to encode motion vectors for the affine motion compensation mode.

AV1 includes two general techniques for encoding and decoding a coding block of video data. The two general techniques are intra prediction (e.g., intra-frame prediction or spatial prediction) and inter prediction (e.g., inter-frame prediction or temporal prediction). In the context of AV1, when predicting blocks of a current frame of video data using an intra prediction mode, video encoder 200 and video decoder 300 do not use video data from other frames of video data. For most intra prediction modes, video encoder 200 encodes blocks of the current frame based on the difference between sample values in the current block and predicted values generated from reference samples in the same frame. Video encoder 200 determines the predicted values generated from the reference samples based on the intra prediction mode.

Following prediction, such as intra-prediction or inter-prediction of a block, video encoder 200 may calculate residual data for the block. The residual data, such as a residual block, represents sample-by-sample differences between the block and a prediction block for the block, formed using the corresponding prediction mode. Video encoder 200 may apply one or more transforms to the residual block, to produce transformed data in a transform domain instead of the sample domain. For example, video encoder 200 may apply a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video data. Additionally, video encoder 200 may apply a secondary transform following the first transform, such as a mode-dependent non-separable secondary transform (MDNSST), a signal-dependent transform, a Karhunen-Loeve transform (KLT), or the like. Video encoder 200 produces transform coefficients following application of the one or more transforms.
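
For example, a separable 2-D DCT-II over a residual block can be sketched with SciPy as below. The orthonormal floating-point DCT-II here merely stands in for whichever core transform (typically an integer approximation) the codec actually specifies.

```python
import numpy as np
from scipy.fftpack import dct

def forward_2d_dct(residual_block):
    """Apply an orthonormal 2-D DCT-II to a residual block: transform rows, then columns."""
    coeffs = dct(residual_block.astype(np.float64), type=2, norm='ortho', axis=1)  # rows
    coeffs = dct(coeffs, type=2, norm='ortho', axis=0)                             # columns
    return coeffs
```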

As noted above, following any transforms to produce transform coefficients, video encoder 200 may perform quantization of the transform coefficients. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression. By performing the quantization process, video encoder 200 may reduce the bit depth associated with some or all of the transform coefficients. For example, video encoder 200 may round an n-bit value down to an m-bit value during quantization, where n is greater than m. In some examples, to perform quantization, video encoder 200 may perform a bitwise right-shift of the value to be quantized.
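
A minimal sketch of this bit-depth reduction by right shift (with rounding) follows. Real codecs apply a quantization parameter and scaling matrices, which are omitted here as assumptions outside this illustration.

```python
import numpy as np

def quantize_by_right_shift(transform_coefficients, n_bits, m_bits):
    """Reduce coefficients from n-bit to m-bit precision via a rounded bitwise right shift."""
    shift = n_bits - m_bits                 # number of bits to drop (n > m)
    rounding_offset = 1 << (shift - 1)      # add half a step so the shift rounds instead of truncating
    coeffs = transform_coefficients.astype(np.int64)
    return np.sign(coeffs) * ((np.abs(coeffs) + rounding_offset) >> shift)
```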

Following quantization, video encoder 200 may scan the transform coefficients, producing a one-dimensional vector from the two-dimensional matrix including the quantized transform coefficients. The scan may be designed to place higher energy (and therefore lower frequency) transform coefficients at the front of the vector and to place lower energy (and therefore higher frequency) transform coefficients at the back of the vector. In some examples, video encoder 200 may utilize a predefined scan order to scan the quantized transform coefficients to produce a serialized vector, and then entropy encode the quantized transform coefficients of the vector. In other examples, video encoder 200 may perform an adaptive scan. After scanning the quantized transform coefficients to form the one-dimensional vector, video encoder 200 may entropy encode the one-dimensional vector, e.g., according to context-adaptive binary arithmetic coding (CABAC). Video encoder 200 may also entropy encode values for syntax elements describing metadata associated with the encoded video data, for use by video decoder 300 in decoding the video data.
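
One common predefined scan that places low-frequency (top-left) coefficients first is a diagonal zig-zag, sketched below for a square block. This ordering is illustrative only and is not the exact scan order defined by any particular codec.

```python
import numpy as np

def zigzag_scan(quantized_block):
    """Serialize a square block of quantized coefficients so low-frequency entries come first."""
    n = quantized_block.shape[0]
    order = sorted(
        ((r, c) for r in range(n) for c in range(n)),
        key=lambda rc: (rc[0] + rc[1],                                    # anti-diagonal index
                        rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))         # alternate traversal direction
    return np.array([quantized_block[r, c] for r, c in order])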

To perform CABAC, video encoder 200 may assign a context within a context model to a symbol to be transmitted. The context may relate to, for example, whether neighboring values of the symbol are zero-valued or not. The probability determination may be based on the context assigned to the symbol.

Video encoder 200 may further generate syntax data for video decoder 300, such as block-based syntax data, picture-based syntax data, and sequence-based syntax data, e.g., in a picture header, a block header, a slice header, or other syntax data, such as a sequence parameter set (SPS), picture parameter set (PPS), or video parameter set (VPS). Video decoder 300 may likewise decode such syntax data to determine how to decode the corresponding video data.

In this manner, video encoder 200 may generate a bitstream including encoded video data, e.g., syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Ultimately, video decoder 300 may receive the bitstream and decode the encoded video data.

In general, video decoder 300 performs a process reciprocal to that performed by video encoder 200 to decode the encoded video data of the bitstream. For example, video decoder 300 may decode values of syntax elements of the bitstream using CABAC in a manner substantially similar to, albeit reciprocal to, the CABAC encoding process of video encoder 200. The syntax elements may define partitioning information for partitioning a picture into CTUs, and for partitioning each CTU according to a corresponding partition structure, such as a QTBT structure, to define CUs of the CTU. The syntax elements may further define prediction and residual information for blocks (e.g., CUs) of the video data.

The residual information may be represented by, for example, quantized transform coefficients. Video decoder 300 may inverse quantize and inverse transform the quantized transform coefficients of a block to reproduce a residual block for the block. Video decoder 300 uses a signaled prediction mode (intra or inter prediction) and related prediction information (e.g., motion information for inter prediction) to form a prediction block for the block. Video decoder 300 may then combine the prediction block and the residual block (on a sample-by-sample basis) to reproduce the original block. Video decoder 300 may perform additional processing, such as performing a deblocking process to reduce visual artifacts along boundaries of the block.
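The sample-wise combination of the prediction block and the residual block described above can be sketched as follows; the bit depth, the clipping to the valid sample range, and the function name are illustrative assumptions.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Minimal sketch of the sample-wise reconstruction step:
    // reconstructed = clip(prediction + residual). Assumes pred and resi
    // have the same size and that inverse quantization/transform has
    // already produced the residual samples.
    std::vector<int16_t> reconstructBlock(const std::vector<int16_t>& pred,
                                          const std::vector<int16_t>& resi,
                                          int bitDepth = 10) {
        const int maxVal = (1 << bitDepth) - 1;
        std::vector<int16_t> rec(pred.size());
        for (size_t i = 0; i < pred.size(); ++i) {
            const int v = pred[i] + resi[i];
            rec[i] = static_cast<int16_t>(std::clamp(v, 0, maxVal));
        }
        return rec;
    }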

This disclosure may generally refer to "signaling" certain information, such as syntax elements. The term "signaling" may generally refer to the communication of values for syntax elements and/or other data used to decode encoded video data. That is, video encoder 200 may signal values for syntax elements in the bitstream. In general, signaling refers to generating a value in the bitstream. As noted above, source device 102 may transport the bitstream to destination device 116 substantially in real time, or not in real time, such as might occur when storing syntax elements to storage device 112 for later retrieval by destination device 116.

As mentioned above, video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), as well as High Efficiency Video Coding (HEVC) or ITU-T H.265, including its range extension, multi-view extension (MV-HEVC), and scalable extension (SHVC). More recently, the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) developed the latest standard, Versatile Video Coding (VVC) or ITU-T H.266. Version 1 of the VVC specification, hereinafter referred to as the VVC FDIS, has been finalized and is available from http://phenix.int-evry.fr/jvet/doc_end_user/documents/19_Teleconference/wg11/JVET-S2001-v17.zip.

Video coding standards are based on the so-called hybrid video coding principle, as shown in FIG. 6 and similar to the description of FIG. 2. The term hybrid refers to the combination of two techniques to reduce redundancy in the video signal, namely prediction and transform coding with quantization of the prediction residual. Whereas prediction and transforms reduce redundancy in the video signal by decorrelation, quantization reduces the data of the transform coefficient representation by reducing their precision, ideally by removing only irrelevant details. This hybrid video coding design principle is also used in the two most recent standards, HEVC and VVC. As shown in FIG. 6, and as also described in more detail with reference to FIG. 2, a modern hybrid video coder consists of various processing stages.

As shown in FIG. 6, a modern hybrid video coder 130 generally performs block partitioning, motion compensation or inter-picture prediction, intra-picture prediction, transform, quantization, entropy coding, and post-/in-loop filtering. In the example of FIG. 2, video coder 130 includes summation unit 134, transform unit 136, quantization unit 138, entropy coding unit 140, inverse quantization unit 142, inverse transform unit 144, summation unit 146, loop filter unit 148, decoded picture buffer (DPB) 150, intra prediction unit 152, inter prediction unit 154, and motion estimation unit 156.

In general, video coder 130 may receive input video data 132 when encoding the video data. Block partitioning is used to divide a picture (image) of the received video data into smaller blocks for operation of the prediction and transform processes. Early video coding standards used a fixed block size, typically 16×16 samples. Recent standards, such as HEVC and VVC, employ tree-based partitioning structures to provide flexible partitioning.

Motion estimation unit 156 and inter prediction unit 154 may predict input video data 132, e.g., from previously decoded data of DPB 150. Motion compensation or inter-picture prediction takes advantage of the redundancy that exists between (hence "inter") pictures of a video sequence. According to block-based motion compensation, which is used in all modern video codecs, the prediction is obtained from one or more previously decoded pictures, i.e., the reference picture(s). The corresponding region used to generate the inter prediction is indicated by motion information, including a motion vector and a reference picture index. In recent video codecs, a hierarchical prediction structure inside a group of pictures (GOP) is applied to improve coding efficiency. An example of a GOP 700 having a size equal to 16 is illustrated in FIG. 7.

Summation unit 134 may calculate residual data as the difference between input video data 132 and prediction data from intra prediction unit 152 or inter prediction unit 154. Summation unit 134 provides the residual block to transform unit 136, which applies one or more transforms to the residual block to produce a transform block. Quantization unit 138 quantizes the transform block to form quantized transform coefficients. Entropy coding unit 140 entropy encodes the quantized transform coefficients, as well as other syntax elements such as motion information or intra prediction information, to produce output bitstream 158.

Meanwhile, inverse quantization unit 142 inverse quantizes the quantized transform coefficients, and inverse transform unit 144 inverse transforms the transform coefficients to reproduce the residual block. Summation unit 146 combines the residual block with the prediction block (on a sample-by-sample basis) to produce a decoded block of video data. Loop filter unit 148 applies one or more filters (e.g., at least one of a neural network-based filter, a neural network-based loop filter, a neural network-based post-loop filter, an adaptive loop filter, or a predefined adaptive loop filter) to the decoded block to produce a filtered decoded block.

In accordance with the techniques of this disclosure, a neural network filtering unit of loop filter unit 148 may receive data of a decoded picture of the video data from summation unit 146 and from one or more other units of hybrid video coder 130 (e.g., transform unit 136, quantization unit 138, intra prediction unit 152, inter prediction unit 154, motion estimation unit 156, and/or one or more other filtering units within loop filter unit 148). For example, the neural network filtering unit may receive data from a deblocking filter unit (also referred to as a "deblocking unit") of loop filter unit 148.

In FIG. 6, intra-picture prediction exploits the spatial redundancy that exists within a picture (hence "intra") by deriving the prediction for a block from already encoded/decoded, spatially neighboring (reference) samples. In the most recent video codecs, including AVC, HEVC, and VVC, directional angular prediction, DC prediction, and plane or planar prediction are used.

Transform: hybrid video coding standards apply a block transform to the prediction residual, regardless of whether the residual comes from inter-picture or intra-picture prediction. In early standards, including H.261/262/263, the discrete cosine transform (DCT) was employed. In HEVC and VVC, additional transform kernels besides the DCT are applied, in order to account for different statistical characteristics in specific video signals.

Quantization aims to reduce the precision of an input value or a set of input values in order to decrease the amount of data needed to represent the values. In hybrid video coding, quantization is typically applied to the individual transformed residual samples, i.e., to the transform coefficients, resulting in integer coefficient levels. In recent video coding standards, the step size is derived from a so-called quantization parameter (QP) that controls the fidelity and the bit rate. A larger step size lowers the bit rate but also deteriorates the quality, which, for example, causes video pictures to exhibit blocking artifacts and blurred details.
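As a minimal sketch of the QP-to-step-size relationship, the following assumes the HEVC/VVC-style convention in which the step size roughly doubles every six QP steps; real coders use integer lookup tables rather than this floating-point form.

    #include <cmath>

    // Minimal sketch of a QP-to-step-size mapping in which the step size
    // grows by about a factor of 2 for every 6 QP steps. Illustrative only;
    // the normative derivation uses integer tables.
    double quantStepFromQp(int qp) {
        return std::pow(2.0, (qp - 4) / 6.0);
    }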

Entropy coding: context-adaptive binary arithmetic coding (CABAC) is used in the most recent video codecs (e.g., AVC, HEVC, and VVC) due to its high efficiency.

Post-/in-loop filtering is a filtering process (or a combination of such processes) that is applied to the reconstructed picture to reduce coding artifacts. The input of the filtering process is typically the reconstructed picture, which is a combination of the reconstructed residual signal (including quantization error) and the prediction. That is, the input of the filtering process is the result of adding the prediction to the residual signal. As shown in FIG. 6, the reconstructed picture after in-loop filtering is stored and used as a reference for inter-picture prediction of subsequent pictures. The coding artifacts are mostly determined by the QP, and therefore the QP information is generally used in the design of the filtering process. In HEVC, the in-loop filtering includes deblocking filtering and sample adaptive offset (SAO) filtering. In the VVC standard, the adaptive loop filter (ALF) was introduced as a third filter. The filtering process of ALF is as follows:

    R~(i, j) = R(i, j) + ( ( Σ_{k≠0} Σ_{l≠0} f(k, l) × K( R(i + k, j + l) − R(i, j), c(k, l) ) + 64 ) >> 7 )    (1)

where R(i, j) is the sample before the filtering process, R~(i, j) is the sample value after the filtering process, f(k, l) denotes the filter coefficients, K(x, y) is the clipping function, and c(k, l) denotes the clipping parameters. The variables k and l vary between −L/2 and L/2, where L denotes the filter length. The clipping function is K(x, y) = min(y, max(−y, x)), which corresponds to the function Clip3(−y, y, x). The clipping operation introduces non-linearity and makes ALF more efficient by reducing the impact of neighboring sample values that differ too much from the current sample value. In VVC, the filtering parameters can be signaled in the bitstream and can be selected from predefined filter sets. The ALF filtering process can also be summarized as the following equation:

    R~(i, j) = R(i, j) + ΔR(i, j)    (2)

where ΔR(i, j) denotes the offset derived by the filtering operation of equation (1).
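A minimal sketch of ALF-style filtering with clipping, following equation (1), is given below. The fixed-point precision, data layout, and border handling are illustrative assumptions and not the normative VVC process; the caller is assumed to keep the filter neighborhood inside the picture.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Clipping function K(x, y) = min(y, max(-y, x)) from equation (1).
    static int clipDiff(int diff, int c) {
        return std::min(c, std::max(-c, diff));
    }

    // Filter one sample: input sample plus a clipped, weighted sum of
    // neighbor differences, rounded and right-shifted (7-bit precision
    // assumed). f and c are (2*halfLen+1) x (2*halfLen+1) arrays of filter
    // coefficients and clipping parameters; halfLen corresponds to L/2.
    int alfFilterSample(const std::vector<std::vector<int>>& rec, int i, int j,
                        const std::vector<std::vector<int>>& f,
                        const std::vector<std::vector<int>>& c,
                        int halfLen) {
        int64_t acc = 0;
        for (int k = -halfLen; k <= halfLen; ++k) {
            for (int l = -halfLen; l <= halfLen; ++l) {
                if (k == 0 && l == 0) continue;
                const int diff = rec[i + k][j + l] - rec[i][j];
                acc += f[k + halfLen][l + halfLen] *
                       clipDiff(diff, c[k + halfLen][l + halfLen]);
            }
        }
        return rec[i][j] + static_cast<int>((acc + 64) >> 7);
    }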

Neural network (NN)-based filtering for video coding is described below. Embedding neural networks into the hybrid video coding framework can improve compression efficiency. Neural networks have been applied in modules such as intra prediction and inter prediction to improve prediction efficiency. NN-based in-loop filters are also possible. In some examples, the filtering process is applied as a post-filter, in which case the filtering process is applied only to the output picture, and the unfiltered picture is used as the reference picture.

An NN-based filter may be applied in addition to the existing filters, such as the deblocking filter, SAO, and ALF. It may also be applied exclusively, where it is designed to replace all of the existing filters.

As shown in FIG. 8, the NN-based filtering process takes reconstructed samples 800 as input, and the intermediate output is residual samples (e.g., residual samples 802), which are added back to the input (e.g., reconstructed samples 800) to refine the input samples and produce filtered samples 804. The NN filter may use all color components as input to exploit cross-component correlation. Different components may share the same filter (including the network structure and the model parameters), or each component may have its own specific filter.

As one example, the NN filter may receive all color components even if only a subset of the color components is being filtered. For example, for luma and chroma components, the NN filter may receive both a luma block and chroma blocks. In this example, assume that the Cb chroma block is to be filtered, while the luma block and the Cr chroma block are not to be filtered. The luma block, the Cb chroma block, and the Cr chroma block are all inputs to the NN filter. In such an example, the NN filter may generate a filtered Cb chroma block and a filtered Cr chroma block. A filtered luma block might not be generated, but the techniques do not require that a filtered luma block not be generated. However, video encoder 200 and video decoder 300 may discard the filtered Cr chroma block. That is, the reconstructed coding unit (CU) that includes the luma block and the chroma blocks may include the reconstructed luma samples (unfiltered), the reconstructed Cr chroma samples (unfiltered), and the filtered reconstructed Cb chroma samples.

The filtering process can also be summarized as follows:

    R~ = R + NN(R)    (3)

where R denotes the reconstructed samples input to the NN model and NN(R) denotes the residual samples output by the NN model.
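The residual-refinement structure of FIG. 8 and equation (3) can be sketched as follows; the NN model is passed in as a placeholder callable, since the model structure and parameters are not specified here.

    #include <functional>
    #include <vector>

    // Minimal sketch: the NN model consumes reconstructed samples and emits
    // residual samples that are added back to the input to produce the
    // filtered samples. The callable type and names are assumptions.
    using NnModel = std::function<std::vector<int>(const std::vector<int>&)>;

    std::vector<int> nnFilter(const std::vector<int>& reconstructed,
                              const NnModel& model) {
        const std::vector<int> residual = model(reconstructed);  // intermediate output
        std::vector<int> filtered(reconstructed.size());
        for (size_t i = 0; i < reconstructed.size(); ++i)
            filtered[i] = reconstructed[i] + residual[i];         // refine input samples
        return filtered;
    }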

The model structure and the model parameters of the NN-based filter may be predefined and stored at video encoder 200 and video decoder 300. The filters may also be signaled in the bitstream.

When NN-based filtering is applied in video coding, the whole video signal may be split into multiple processing units, and each processing unit may be processed separately. Possible choices for a processing unit include a frame, a slice/tile, a CTU, or any predefined or signaled shape and size. That is, the term "processing unit" may refer to a portion of a picture or frame on which processing may occur.

NN-based filtering with a multiple-mode design is described below. To further improve the performance of NN-based filtering, a multiple-mode solution can be designed. For example, for each processing unit, video encoder 200 may select among a set of modes based on rate-distortion optimization, and the selection may be signaled in the bitstream. Different modes may include different NN models, different values used as input information of the NN model, and so on. As one example, JVET-Z0113 (Y. Li, K. Zhang, L. Zhang, H. Wang, M. Coban, A. M. Kotra, M. Karczewicz, F. Galpin, K. Andersson, J. Ström, D. Liu, R. Sjöberg, "EE1-1.7: Combined Test of EE1-1.6 and EE1-1.3," JVET-Z0113, April 2022, hereinafter JVET-Z0113) proposes an NN-based filtering solution that builds multiple modes based on a single NN model by using different QP values as inputs to the NN model for the different modes.

There may be certain problems with using NN-based filtering techniques. For a video signal (e.g., video data) having multiple color components, the filtering processes of the different color components may be different. However, for some types of filtering (e.g., neural network-based filtering), applying a different filter to each color component may significantly increase the computational complexity and the memory requirements. In these cases, a better trade-off between compression performance and complexity cost may be achieved by designing filters that can cover multiple color components and by appropriately controlling how those filters are applied in a video codec.

For example, JVET-Z0113 proposes an NN-based filtering solution with multiple modes that uses different QP values for the different modes, and video encoder 200 and video decoder 300 may select from the different modes for a processing unit. In this technique of JVET-Z0113, only one model (e.g., only one neural network model) can be used for filtering the chroma components of a slice (e.g., where a slice is an example of a processing unit). The two chroma components may select different modes, which makes it necessary to perform filtering twice at the decoder side for the inference of the two chroma components.

In some techniques, such as those of JVET-Z0113, if the color components have different modes of the NN filter, video encoder 200 and video decoder 300 would execute two instances of the NN filter in the respective different modes. For the different modes, in JVET-Z0113, the inputs are luma and chroma samples, but the output may be only chroma samples for an NN filter that is intended to filter the chroma components. That is, an NN filtering mode for the chroma components may use the luma component as input, but not use the output luma samples. For example, assume that a first NN filtering mode is applied to a first color component (e.g., Cb) and a second NN filtering mode is applied to a second color component (e.g., Cr).

In this example, video encoder 200 and video decoder 300 would apply a first instance of the NN filter in the first NN filtering mode, whose inputs are the luma, Cb, and Cr components (e.g., reconstructed samples of the luma block, the Cb block, and the Cr block). Video encoder 200 and video decoder 300 would ignore the filtered Cr samples and keep the filtered Cb samples.

Next, or in parallel, video encoder 200 and video decoder 300 would apply a second instance of the NN filter in the second NN filtering mode, whose inputs are the luma, Cb, and Cr components (e.g., reconstructed samples of the luma block, the Cb block, and the Cr block). Video encoder 200 and video decoder 300 would ignore the filtered Cb samples and keep the filtered Cr samples.

In this example, video encoder 200 and video decoder 300 may keep the filtered Cb samples from the first instance of the NN filter, keep the filtered Cr samples from the second instance of the NN filter, and discard the remaining samples. Therefore, in this example, to generate a filtered CU that includes luma and chroma components, video encoder 200 and video decoder 300 apply two instances of the NN filter in different modes. Such execution of two instances of the NN filter in different modes can be processing intensive and can delay the reconstruction of images.

A CU is an example of a composite block that includes constituent blocks. For example, a CU includes a luma block and chroma blocks. In this disclosure, a CU is used as an example and generally refers to a composite block that includes different color components, such as luma and chroma components. The term CU should not be limited to a CU as defined in a video coding standard, or to being a part of a video codec, unless explicitly stated otherwise.

This disclosure describes example techniques that restrict the use of different modes for different color components. That is, the NN filtering mode used for one color component is the same as the filtering mode used for another color component. In this way, video encoder 200 and video decoder 300 can apply one instance of the NN filter, rather than applying two instances of the NN filter in different modes.

In addition, this disclosure describes example techniques that allow a selective determination of whether to utilize the filtered samples of multiple color components or of fewer color components (e.g., one color component). In this way, not all of the filtered components need to be used, which allows for cases in which utilizing a subset of the filtered components produces better coding efficiency or higher-quality image content.

Furthermore, as described in more detail, this disclosure describes example techniques for signaling parameters at different levels to indicate which NN filtering mode is to be used. For example, a slice-level syntax element may have a plurality of values. A first value may indicate that NN filtering is disabled for the color components. A subset of the plurality of values may each correspond to a different NN filtering mode. A last value may indicate that whether NN filtering is enabled or disabled, and, if enabled, the NN filtering mode, is indicated at a lower level (e.g., at the CTU level for all blocks of the CTU, at the CU level, and so on). In such examples, video encoder 200 may signal, and video decoder 300 may receive, additional syntax elements for one or more blocks that indicate whether NN filtering is enabled or disabled and, if enabled, the NN filtering mode.

In one or more examples described in this disclosure, due to the similarity between the two chroma components, the filtering control mechanism can be designed such that the chroma components share the same NN mode selection, to reduce the computational complexity as much as possible without sacrificing rate-distortion performance. That is, this disclosure describes example techniques for a filtering process in which part or all of the filtering operations, if any, are shared by multiple color components.

A first implementation of the example techniques described in this disclosure is described below. As one implementation, for a color format with one luma component and two chroma components (e.g., the YCbCr color format), at most one NN-based filter is allowed for each processing unit to accomplish the filtering process of the two chroma components.

In a first example, the NN filter for the chroma color components may be designed to be able to output the two filtered chroma components after a single filtering process. The filtering control of the two chroma components may be performed jointly. For example, the two chroma components have exactly the same choices of how the NN filter is applied, including the on/off switch for NN filtering, how the input data of the NN model is constructed, and the values of all input elements, so that filtered samples of both chroma components can be obtained after a single filtering process.

Similar to JVET-Z0113, for the chroma components there is one model used for NN filtering, and multiple modes are created by using different QP values in the process of constructing the input of the NN model. The selection of the NN filter configuration is signaled and controlled as follows.

At the slice level (as one example of a processing unit, although other examples are possible), a syntax element with a value range of [0, 4] (referred to as a "first-level filtering mode syntax element") is signaled for the chroma components. The meanings of the five syntax values are listed below:
a. 0: NN filtering of the chroma components is turned off for the slice.
b. 1: NN filtering is turned on for the chroma components of the slice, and the first QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
c. 2: NN filtering is turned on for the chroma components of the slice, and the second QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
d. 3: NN filtering is turned on for the chroma components of the slice, and the third QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
e. 4: Additional syntax elements are sent to control the NN filtering separately for each processing unit.

If mode 4 is signaled at the slice level, then for each processing unit, an additional syntax element (e.g., a "second-level filtering mode syntax element") with a value range of [0, 3] is signaled for the corresponding chroma components. The meanings of the four syntax values are listed below; a decoding sketch follows this list.
a. 0: NN filtering of the chroma components is turned off for the processing unit.
b. 1: NN filtering is turned on for the chroma components of the processing unit, and the first QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
c. 2: NN filtering is turned on for the chroma components of the processing unit, and the second QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
d. 3: NN filtering is turned on for the chroma components of the processing unit, and the third QP in a predefined or signaled set is used in the process of constructing the input of the NN filter.
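The two-level signaling above can be sketched as follows; the reader object and the function names are hypothetical, and the sketch simply resolves a per-processing-unit mode value (0 = off, 1 to 3 = QP selection) from the slice-level element and, when needed, the processing-unit-level elements.

    #include <deque>
    #include <vector>

    // Stand-in for the entropy decoder: syntax values are taken from a
    // pre-decoded queue. Illustrative only.
    struct FakeReader {
        std::deque<int> values;
        int read() { int v = values.front(); values.pop_front(); return v; }
    };

    // Resolve the NN filtering mode of each processing unit from the
    // first-level (slice) element in [0,4] and, if that element is 4, the
    // second-level (per-processing-unit) elements in [0,3].
    std::vector<int> resolveChromaNnModes(FakeReader& br, int numProcessingUnits) {
        std::vector<int> perUnitMode;
        const int sliceMode = br.read();                       // first-level element
        if (sliceMode != 4) {
            perUnitMode.assign(numProcessingUnits, sliceMode); // one choice for the slice
        } else {
            for (int u = 0; u < numProcessingUnits; ++u)
                perUnitMode.push_back(br.read());              // second-level element
        }
        return perUnitMode;                                    // 0 = off, 1..3 = QP index
    }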

For example, in the example above, video encoder 200 may signal, and video decoder 300 may receive, a syntax element (e.g., the first-level filtering mode syntax element) that defines a filtering mode for a neural network (NN) model for both the first color component and the second color component. For example, in the example above, the value of the first-level syntax element is in the range [0, 4]. If the value of the first-level syntax element is 1, 2, or 3, the first-level syntax element defines the filtering mode of the NN model (e.g., the NN filtering mode) for both the first color component and the second color component. For example, a value of 1 means that the filtering mode is based on the first QP in the predefined or signaled set, a value of 2 means that the filtering mode is based on the second QP in the predefined or signaled set, and a value of 3 means that the filtering mode is based on the third QP in the predefined or signaled set.

However, if the value of the first-level filtering mode syntax element is 4, video encoder 200 may signal, and video decoder 300 may receive, a syntax element (e.g., the second-level filtering mode syntax element) that defines the filtering mode for the neural network (NN) model for both the first color component and the second color component. For example, in the example above, the value of the second-level syntax element is in the range [0, 3]. If the value of the second-level syntax element is 1, 2, or 3, the second-level syntax element defines the filtering mode of the NN model (e.g., the NN filtering mode) for both the first color component and the second color component. For example, a value of 1 means that the filtering mode is based on the first QP in the predefined or signaled set, a value of 2 means that the filtering mode is based on the second QP in the predefined or signaled set, and a value of 3 means that the filtering mode is based on the third QP in the predefined or signaled set. That is, video encoder 200 may signal, and video decoder 300 may receive, a syntax element that defines the filtering mode for the NN model for both the first color component and the second color component. In one example, the syntax element is the first-level filtering mode syntax element. In another example, the syntax element is the second-level filtering mode syntax element. For example, video encoder 200 may signal, and video decoder 300 may receive, a first syntax element (e.g., the first-level filtering mode syntax element) applicable to a plurality of CUs (e.g., the CUs of a slice), where the first syntax element indicates that a second syntax element is to be parsed for a subset of CUs of the plurality of CUs (e.g., the value of the first-level filtering mode syntax element is 4, and therefore the second-level filtering mode syntax element is parsed). The subset of CUs includes one or more CUs that include the CU being reconstructed. That is, the first-level filtering mode syntax element applies to more CUs than the second-level filtering mode syntax element.

Video encoder 200 may signal, and video decoder 300 may receive, the second syntax element based on the first syntax element indicating that the second syntax element is to be parsed for the subset of CUs. That is, on the condition that the value of the first-level filtering mode syntax element is 4, video encoder 200 may signal, and video decoder 300 may receive, the second-level filtering mode syntax element for the subset of CUs.

The example with a first-level filtering mode syntax element and a second-level filtering mode syntax element is provided for purposes of illustration only and should not be considered limiting. That is, in some examples, there may be one filtering mode syntax element that defines the filtering mode for a CU. This one filtering mode syntax element may be at the picture level (e.g., for all CUs of a picture), at the slice level (e.g., for all CUs of a slice), at the CTU level (e.g., for all CUs of a CTU), or at the CU level (e.g., block by block).

Video encoder 200 and video decoder 300 may apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block. For example, the first block may be part of reconstructed samples 800 of FIG. 8, and a first residual value may be a value of residual samples 802 of FIG. 8. Video encoder 200 and video decoder 300 may generate the first filtered block (e.g., filtered samples 804 for the first color component) based on the first block and the first residual value (e.g., by summing reconstructed samples 800 and residual samples 802).

In one or more examples, because the syntax element (e.g., the first-level filtering mode syntax element or the second-level filtering mode syntax element) defines the filtering mode for the NN model for both the first color component and the second color component, video encoder 200 and video decoder 300 may apply the same instance of the NN model, in the defined filtering mode, to a second block of the second color component to generate a second filtered block. For example, video encoder 200 and video decoder 300 may apply the same instance of the NN model, in the defined filtering mode, to the second block of the second color component to generate a second residual value. Video encoder 200 and video decoder 300 may generate the second filtered block based on the second block and the second residual value (e.g., by adding the second block and the second residual value).

In this disclosure, unless stated otherwise, applying the same instance of the NN model means that one execution of the NN model results in filtered samples for multiple color components. For example, the NN filter may receive luma samples, Cb samples, and Cr samples as inputs. The NN filtering mode may be defined by a syntax element (e.g., a value of 1, 2, or 3 for the first-level filtering mode syntax element or for the second-level filtering mode syntax element). Video encoder 200 and video decoder 300 may apply an instance of the NN filter in the defined filtering mode, and the outputs may be filtered Cb samples and filtered Cr samples.

To describe that the filtered Cb samples and the filtered Cr samples are generated from applying the same instance of the NN model, this disclosure describes video encoder 200 and video decoder 300 as: (1) applying an instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate the first filtered block (e.g., based on the first block and the first residual value), and (2) applying the same instance of the NN model, in the defined filtering mode, to the second block of the second color component to generate the second filtered block (e.g., based on the second block and the second residual value). That is, only one instance of the NN model is executed to generate the first filtered block for the first color component and the second filtered block for the second color component.
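A minimal sketch of such a single, shared NN-filter invocation is given below; the callable signature, the struct layout, and the names are assumptions, and the point is only that one inference yields both filtered chroma blocks.

    #include <functional>
    #include <vector>

    // One inference of the shared chroma NN model produces residuals for
    // both chroma components; adding them back yields the filtered Cb and
    // Cr blocks in a single pass. All names are illustrative.
    struct ChromaResiduals { std::vector<int> cb; std::vector<int> cr; };
    using SharedChromaNn = std::function<ChromaResiduals(
        const std::vector<int>& luma, const std::vector<int>& cb,
        const std::vector<int>& cr, int qpInput)>;

    void filterChromaOnce(const SharedChromaNn& model, int qpInput,
                          const std::vector<int>& luma,
                          std::vector<int>& cb, std::vector<int>& cr) {
        const ChromaResiduals res = model(luma, cb, cr, qpInput);   // one inference
        for (size_t i = 0; i < cb.size(); ++i) cb[i] += res.cb[i];  // filtered Cb
        for (size_t i = 0; i < cr.size(); ++i) cr[i] += res.cr[i];  // filtered Cr
    }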

Thus, in this example, one syntax element (e.g., the first-level filtering mode syntax element or the second-level filtering mode syntax element) defines the filtering mode for the two chroma components. In some other techniques (e.g., JVET-Z0113), one syntax element may define the filtering mode for the first color component, and another syntax element may define the filtering mode for the second color component. With the example techniques described in this disclosure, one syntax element defines the filtering mode for the NN model for multiple color components.

In the example above, video encoder 200 and video decoder 300 may store sample values for the coding unit (CU) based on the first filtered block and the second filtered block. For example, video encoder 200 and video decoder 300 may store, in respective buffers, the first filtered block and the second filtered block as part of the CU, so that if the CU is used for inter prediction, the first filtered block and the second filtered block are used for the inter prediction. As another example, video decoder 300 may store, in a buffer, the first filtered block and the second filtered block as part of the CU, so that when a picture that includes the CU is displayed, the image content is generated based on the first filtered block and the second filtered block.

As described above, one syntax element defines the filtering mode of the NN model (e.g., the NN filtering mode for the NN filter) for both the first color component and the second color component. In some examples, both the first filtered block for the first color component and the second filtered block for the second color component are stored for the CU. However, in some examples, both the first filtered block and the second filtered block might not be stored for the CU.

It should be noted that, in some examples, the first filtered block and the second filtered block may both be generated even if one of the first filtered block for the first color component or the second filtered block for the second color component is not stored for the CU. For example, assume that the filtered Cb component is needed, but the filtered Cr component is not needed for the CU. In this example, the NN model may receive both the luma and chroma components and generate a filtered Cb component and a filtered Cr component. However, video encoder 200 and video decoder 300 may discard the filtered Cr component. To store the values for the CU, video encoder 200 and video decoder 300 may store the filtered Cb component and the original values of the Cr component (e.g., the unfiltered reconstructed samples).

Example techniques for selecting which filtered components are used to store values for the CU are described below. For ease of description, this disclosure describes filtering control for switching the NN filter on or off. Such disclosure of the NN filter being on or off may refer to examples of whether the filtered color component is used. The filtered color component may still be generated, but in examples where filtering is turned off for that color component, the filtered color component is discarded.

The above example, in which the first filtered block for the first color component and the second filtered block for the second color component are stored for the CU if filtering is enabled, is referred to as the first example technique. That is, in the first example technique, one syntax element defines the filtering mode for the NN model for both the first color component and the second color component. Also, in this first example technique, video encoder 200 and video decoder 300 store the first filtered block for the first color component and the second filtered block for the second color component as values for the CU. That is, video encoder 200 and video decoder 300 may apply the same instance of the NN model, in the defined filtering mode, to the first block of the first color component and the second block of the second color component to generate the first filtered block and the second filtered block, respectively.

For example, video encoder 200 and video decoder 300 may apply the same instance of the NN model, in the defined filtering mode, to the first block of the first color component and the second block of the second color component to generate a first residual value and a second residual value, respectively. Video encoder 200 and video decoder 300 may generate the first filtered block based on the first block and the first residual value (e.g., by adding them together) and generate the second filtered block based on the second block and the second residual value (e.g., by adding them together).

In a second example, similar to the first example, the NN filter for the chroma color components is designed to be able to output the two chroma components after a single filtering process. The filtering control allows the on/off switching of the NN filter to be signaled separately for the two chroma components; in addition, if NN filtering is turned on for both chroma components, the outputs for the two chroma components should be generated by a single filtering process. That is, both filtered chroma blocks may be outputs of the NN filter, but at least one filtered chroma block may be discarded.

At the slice level (as an example of a processing unit), a syntax element with a value range of [0, 4] (referred to as first_chroma_mode_slice in the following description) is signaled for the first chroma component. For the second chroma component, if first_chroma_mode_slice == 0, another syntax element with a value range of [0, 4] (referred to as second_chroma_mode_slice in the following description) is signaled for the second chroma component to indicate the NN filtering selection for the second chroma component. Otherwise (first_chroma_mode_slice != 0), a flag with a value range of [0, 1] is signaled; if the flag is 0, second_chroma_mode_slice is set to 0, and otherwise second_chroma_mode_slice is set to first_chroma_mode_slice.

The meanings of the five syntax values of first_chroma_mode_slice and second_chroma_mode_slice are the same as in the first example, except that the scope of each mode may be limited to the corresponding chroma component. That is, the values of first_chroma_mode_slice and second_chroma_mode_slice may be the same as the values described above for the first-level filtering mode syntax element or the second-level filtering mode syntax element. The logic for decoding/deriving first_chroma_mode_slice and second_chroma_mode_slice can be summarized by the following pseudocode. In the pseudocode below, syntax elements that are coded in the bitstream appear on a line by themselves, and all syntax values are assumed to be 0 before the decoding/derivation logic starts.

    first_chroma_mode_slice
    if (first_chroma_mode_slice == 0) {
        second_chroma_mode_slice
    } else {
        second_chroma_flag_slice
        if (second_chroma_flag_slice == 1) {
            second_chroma_mode_slice = first_chroma_mode_slice
        } else {
            second_chroma_mode_slice = 0
        }
    }

Similar to the first example, in which there are a first-level filtering mode syntax element and a second-level filtering mode syntax element, if first_chroma_mode_slice or second_chroma_mode_slice is equal to 4 (e.g., the first-level filtering mode syntax element has a value of 4), additional syntax elements (e.g., second-level filtering mode syntax elements) are signaled for selecting the NN filtering separately for each processing unit. The decoding/derivation logic can be summarized by the pseudocode below. Syntax elements that are coded in the bitstream appear on a line by themselves, and all syntax values are assumed to be 0 before the decoding/derivation logic starts.

    if (first_chroma_mode_slice == 4 || second_chroma_mode_slice == 4) {
        if (first_chroma_mode_slice == 0)
            first_chroma_mode_processing_unit = 0
        else
            first_chroma_mode_processing_unit
        if (second_chroma_mode_slice == 0)
            second_chroma_mode_processing_unit = 0
        else if (first_chroma_mode_processing_unit != 0) {
            second_chroma_flag_processing_unit
            if (second_chroma_flag_processing_unit == 1)
                second_chroma_mode_processing_unit = first_chroma_mode_processing_unit
            else
                second_chroma_mode_processing_unit = 0
        } else
            second_chroma_mode_processing_unit
    }

In this example, the interpretation of the slice-level and processing-unit-level mode values is the same as described above for the first-level filtering mode syntax element and the second-level filtering mode syntax element. At the slice level, 0 means that NN filtering is off, 1 to 3 correspond to the three QP choices that may be used in the process of constructing the input information for the NN model, and 4 means that additional information is signaled at the processing-unit level (e.g., at the CTB level for all CUs of the CTB, at the CU level, and so on) to indicate the NN filtering selection of each processing unit. Similarly, at the processing-unit level, 0 means that NN filtering is off, and 1 to 3 correspond to the three QP choices that may be used in the process of constructing the input information for the NN model.
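The interpretation of a mode value can be sketched as follows, assuming an illustrative set of three candidate QP values; the actual set would be predefined or signaled.

    #include <array>

    // Minimal sketch: 0 turns NN filtering off; 1..3 pick the first, second,
    // or third QP from a predefined or signaled set used to build the NN
    // model's input. Struct and names are illustrative.
    struct NnFilterDecision { bool enabled; int qpInput; };

    NnFilterDecision interpretMode(int mode, const std::array<int, 3>& qpSet) {
        if (mode == 0) return {false, 0};     // NN filtering off
        return {true, qpSet[mode - 1]};       // mode 1..3 -> QP index 0..2
    }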

Thus, in one or more examples, the syntax element that video encoder 200 signals and video decoder 300 receives may have a value indicating that NN-based filtering is enabled for the first color component (e.g., first_chroma_mode_slice is non-zero). In this example, video encoder 200 may signal, and video decoder 300 may receive, a flag (e.g., second_chroma_flag_slice) indicating that NN-based filtering is enabled for the second color component. In this example, video encoder 200 and video decoder 300 may generate the second filtered block based on the second block of the second color component by applying the same instance of the NN model, in the defined filtering mode, to the second block. To store the values for the CU, video encoder 200 and video decoder 300 may store the values for the CU based on the first filtered block and the second filtered block.

In some examples, the filtered second color component might not be used for the values of the CU. For example, video encoder 200 may signal, and video decoder 300 may receive, a flag (e.g., second_chroma_flag_slice) indicating that NN-based filtering is disabled for the second color component. In this example, video encoder 200 and video decoder 300 may store the values for the CU based on the first filtered block of the first color component (e.g., the block filtered using the NN) and the second block of the second color component (e.g., a block not filtered using the NN).

In some examples, the first color component might not be filtered (e.g., first_chroma_mode_slice == 0). In this example, video encoder 200 may signal, and video decoder 300 may receive, another syntax element (e.g., second_chroma_mode_slice) indicating the filtering mode for the second color component. In this example, video encoder 200 and video decoder 300 may store the values for the CU based on the first block of the first color component (e.g., a block not filtered using the NN) and the second filtered block of the second color component (e.g., the block filtered using the NN).
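A minimal sketch of this selective storage is given below; the names are illustrative, and the filtered block is assumed to have been produced by the shared filtering pass regardless of whether it is kept.

    #include <vector>

    // Keep the filtered block for a component only when its flag enables
    // NN filtering; otherwise keep the unfiltered reconstructed block.
    std::vector<int> selectBlockToStore(bool nnEnabledForComponent,
                                        const std::vector<int>& filteredBlock,
                                        const std::vector<int>& reconstructedBlock) {
        return nnEnabledForComponent ? filteredBlock : reconstructedBlock;
    }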

Generally, for the second example, the first_chroma_mode_slice syntax element may be considered to be a syntax element that defines the filtering mode for the neural network (NN) model for both the first color component and the second color component. For example, if the first_chroma_mode_slice syntax element is non-zero, NN filtering is enabled for the first color component, and if second_chroma_flag_slice indicates that filtering is enabled for the second color component, then first_chroma_mode_slice defines which filtering mode is used for both the first color component and the second color component. Even in examples where second_chroma_flag_slice is false, first_chroma_mode_slice may still be considered to define the filtering mode for both the first color component and the second color component, because no additional syntax element indicating the filtering mode is needed.

As a third example, the third example is the same as the second example, except that an alternative way of decoding/deriving the NN filtering selection is used. For each color component, a flag indicating the on/off selection is decoded, and if at least one of the flags indicates that NN filtering is on, an additional syntax element is signaled. The slice-level coding (the slice being one example of a processing component; other examples are possible) is summarized below. The syntax elements coded in the bitstream are shown in bold, and all syntax values are assumed to be 0 before the decoding/derivation logic begins.
first_chroma_flag_slice
second_chroma_flag_slice
if (first_chroma_flag_slice || second_chroma_flag_slice) {
    chroma_mode_slice
    if (first_chroma_flag_slice == 1)
        first_chroma_mode_slice = chroma_mode_slice + 1
    if (second_chroma_flag_slice == 1)
        second_chroma_mode_slice = chroma_mode_slice + 1
}

In this example, first_chroma_flag_slice and second_chroma_flag_slice have a value range of [0, 1], and chroma_mode_slice has a range of [0, 3], so the value ranges of first_chroma_mode_slice and second_chroma_mode_slice remain the same as in the second example. The range [0, 4] and the interpretation of the values are also consistent. That is, chroma_mode_slice may be considered an example of a syntax element that defines the filtering mode for the neural network (NN) model for both the first color component and the second color component. The values for chroma_mode_slice may be the same as for the first-level filtering mode syntax element described above.

For example, video encoder 200 may signal a flag indicating that NN-based filtering is enabled for the first color component, and video decoder 300 may receive it. In this example, video encoder 200 may signal first_chroma_flag_slice indicating that NN-based filtering is enabled for the first color component, and video decoder 300 may receive it. Similarly, video encoder 200 may signal a flag indicating that NN-based filtering is enabled for the second color component, and video decoder 300 may receive it. In this example, video encoder 200 may signal second_chroma_flag_slice indicating that NN-based filtering is enabled for the second color component, and video decoder 300 may receive it.

Video encoder 200 may signal, based on the flag indicating that NN-based filtering is enabled for the first color component, a syntax element that defines the filtering mode for the NN model for both the first color component and the second color component, and video decoder 300 may receive that syntax element. For example, video encoder 200 may signal the chroma_mode_slice syntax element based on first_chroma_flag_slice being true, and video decoder 300 may receive that syntax element. Similarly, video encoder 200 may signal, based on the flag indicating that NN-based filtering is enabled for the second color component, a syntax element that defines the filtering mode for the NN model for both the first color component and the second color component, and video decoder 300 may receive that syntax element. For example, video encoder 200 may signal the chroma_mode_slice syntax element based on second_chroma_flag_slice being true, and video decoder 300 may receive that syntax element.

If NN filtering is enabled (e.g., turned on) for one or both of the first and second color components, video decoder 300 may determine the filtering mode based on the value of chroma_mode_slice. As shown above, the equation that determines the filtering mode is the same for both color components (e.g., the filtering mode for both color components is chroma_mode_slice + 1). Therefore, chroma_mode_slice defines the filtering mode for the NN model for both the first color component and the second color component.
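Purely as an illustration of the third-example slice-level derivation above (this C++ sketch is not part of the defined syntax, and the readSymbol hook is a hypothetical stand-in for the entropy decoder), the parsing and derivation could be implemented as follows:
#include <cstdint>
#include <functional>

struct SliceChromaNnControl {
  uint32_t firstChromaModeSlice = 0;   // derived, range [0, 4]; 0 means NN filtering off
  uint32_t secondChromaModeSlice = 0;  // derived, range [0, 4]; 0 means NN filtering off
};

// readSymbol stands in for the entropy decoder: it is called once per coded
// syntax element, in parsing order, and returns that element's decoded value.
SliceChromaNnControl deriveSliceChromaNnControl(const std::function<uint32_t()>& readSymbol) {
  SliceChromaNnControl c;  // all derived values start at 0, as in the pseudocode above
  const uint32_t firstChromaFlagSlice = readSymbol();   // range [0, 1]
  const uint32_t secondChromaFlagSlice = readSymbol();  // range [0, 1]
  if (firstChromaFlagSlice || secondChromaFlagSlice) {
    const uint32_t chromaModeSlice = readSymbol();      // range [0, 3]
    if (firstChromaFlagSlice == 1)
      c.firstChromaModeSlice = chromaModeSlice + 1;
    if (secondChromaFlagSlice == 1)
      c.secondChromaModeSlice = chromaModeSlice + 1;
  }
  return c;
}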

An equivalent of this example can be constructed by modifying the way the syntax elements are derived and interpreted. An example of such an equivalent design is described in a later example.

In this example, if first_chroma_mode_slice or second_chroma_mode_slice is equal to 4, additional syntax elements (e.g., similar to the second-level filtering mode syntax element) are signaled for the NN filtering selection of each processing component separately. The decoding/derivation logic can be summarized as the following pseudocode. The syntax elements coded in the bitstream are shown in bold, and all syntax values are assumed to be 0 before the decoding/derivation logic begins.
if (first_chroma_mode_slice == 4 || second_chroma_mode_slice == 4) {
    if (first_chroma_mode_slice == 0)
        first_chroma_mode_processing_unit = 0
    else
        first_chroma_flag_processing_unit
    if (second_chroma_mode_slice == 0)
        second_chroma_mode_processing_unit = 0
    else
        second_chroma_flag_processing_unit
    if (first_chroma_flag_processing_unit != 0 || second_chroma_flag_processing_unit != 0)
        chroma_mode_processing_unit
    if (first_chroma_flag_processing_unit == 1)
        first_chroma_mode_processing_unit = chroma_mode_processing_unit + 1
    else
        first_chroma_mode_processing_unit = 0
    if (second_chroma_flag_processing_unit == 1)
        second_chroma_mode_processing_unit = chroma_mode_processing_unit + 1
    else
        second_chroma_mode_processing_unit = 0
}

In this example, first_chroma_flag_processing_unit and second_chroma_flag_processing_unit have a value range of [0, 1], and chroma_mode_processing_unit has a value range of [0, 2], so the value ranges of first_chroma_mode_processing_unit and second_chroma_mode_processing_unit remain the same as in the second example. The range [0, 3] and the interpretation of the values are also consistent. An equivalent of this example can be constructed by modifying the way the syntax elements are derived and interpreted. An example of such an equivalent design is described in a later example.

As a fourth example, the fourth example is equivalent to the third example; the derivation/interpretation of the syntax values differs, but the actual NN control is the same. The slice-level decoding can be summarized as follows. The syntax elements coded in the bitstream are shown in bold, and all syntax values are assumed to be initialized to 0 before the decoding/derivation logic begins.
first_chroma_flag_slice
second_chroma_flag_slice
if (first_chroma_flag_slice || second_chroma_flag_slice) {
    chroma_mode_slice
    if (first_chroma_flag_slice == 1)
        first_chroma_mode_slice = chroma_mode_slice
    if (second_chroma_flag_slice == 1)
        second_chroma_mode_slice = chroma_mode_slice
}

In this case, the range of first_chroma_flag_slice and second_chroma_flag_slice is [0, 1], and the range of chroma_mode_slice is [0, 3]. As a result, the value range of first_chroma_mode_slice and second_chroma_mode_slice becomes [0, 3]. This example is equivalent to the third example under the following interpretation:
a. If first_chroma_flag_slice == 0, NN filtering is turned off for the first chroma component.
b. If first_chroma_flag_slice == 1, the NN filtering selection is further signaled/derived as follows:
 i. If first_chroma_mode_slice == 0, the first QP of the predefined or signaled set is used in the process of building the input of the NN filter.
 ii. If first_chroma_mode_slice == 1, the second QP of the predefined or signaled set is used in the process of building the input of the NN filter.
 iii. If first_chroma_mode_slice == 2, the third QP of the predefined or signaled set is used in the process of building the input of the NN filter.
 iv. If first_chroma_mode_slice == 4, additional syntax elements are sent to control the NN filtering for each processing component separately.

Similar to the slice level, when the NN filtering selection is signaled at the processing-component level, first_chroma_mode_processing_unit and second_chroma_mode_processing_unit are assigned chroma_mode_processing_unit when NN filtering is on (as shown in the pseudocode below). The syntax elements coded in the bitstream are shown in bold, and all syntax values are assumed to be initialized to 0 before the decoding/derivation logic begins.
if (first_chroma_mode_slice == 4 || second_chroma_mode_slice == 4) {
    if (first_chroma_mode_slice == 0)
        first_chroma_mode_processing_unit = 0
    else
        first_chroma_flag_processing_unit
    if (second_chroma_mode_slice == 0)
        second_chroma_mode_processing_unit = 0
    else
        second_chroma_flag_processing_unit
    if (first_chroma_flag_processing_unit != 0 || second_chroma_flag_processing_unit != 0)
        chroma_mode_processing_unit
    if (first_chroma_flag_processing_unit)
        first_chroma_mode_processing_unit = chroma_mode_processing_unit
    else
        first_chroma_mode_processing_unit = 0
    if (second_chroma_flag_processing_unit)
        second_chroma_mode_processing_unit = chroma_mode_processing_unit
    else
        second_chroma_mode_processing_unit = 0
}

The interpretation can be the same as at the slice level, except that the mode value 4 may not be present at the processing-component level:
a. If first_chroma_flag_processing_unit == 0, NN filtering is turned off for the first chroma component.
b. If first_chroma_flag_processing_unit == 1, the NN filtering selection is further signaled/derived as follows:
 i. If first_chroma_mode_processing_unit == 0, the first QP of the predefined or signaled set is used in the process of building the input of the NN filter.
 ii. If first_chroma_mode_processing_unit == 1, the second QP of the predefined or signaled set is used in the process of building the input of the NN filter.
 iii. If first_chroma_mode_processing_unit == 2, the third QP of the predefined or signaled set is used in the process of building the input of the NN filter.

The first through fourth examples can be extended to any multi-mode NN filtering design, and the example techniques described in this disclosure still apply. For example, in one or more examples, the syntax element that defines the filtering mode for the NN model for both the first color component and the second color component defines which parameter is selected from a plurality of parameters, and the defined parameter defines the filtering mode. That is, a syntax element defining the filtering mode may mean that the syntax element defines which parameter is to be used, and that parameter defines the filtering mode.

For example, the syntax element may have a value of 1, 2, or 3, where a value of 1 indicates that a first parameter from a predefined set or a signaled set defines the filtering mode. A value of 2 indicates that a second parameter from the predefined set or the signaled set defines the filtering mode, and a value of 3 indicates that a third parameter from the predefined set or the signaled set defines the filtering mode. In some examples, the parameters are quantization parameters.

Some examples of additional parameters, or of sets from which a parameter is selected, include the following (an illustrative sketch of item d follows this list):
a. Instead of selecting the QP value from a predefined/signaled set of size 3, the method can be extended to the case in which a candidate set of N values is used.
b. Rather than using different QP values for different modes, the multi-mode NN filtering method can be designed using any value that is used in the process of building the input of the NN filtering model.
c. The multi-mode design can use different NN models for different modes instead of using different QP values for different modes.
d. The different ways of designing a multi-mode filtering method described above can be combined to build more complex multi-mode designs. For example, these techniques can be combined to build M models, and for each model there are N QP values that can be used when building the input for that model. In total, M*N modes are then created. A 4-mode design can be built by using two models and selecting two QPs for each model.
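As a sketch only of the combined design in item d (the row-major decomposition below is an assumed convention, not a mapping defined by this disclosure), a combined mode index for M models with N QP candidates per model could be decomposed as follows:
#include <cstdint>

struct NnFilterSelection {
  uint32_t modelIndex;  // which of the M NN models to run
  uint32_t qpIndex;     // which of the N candidate QPs to use when building the model input
};

// Decomposes a combined mode index in [0, M*N) into a (model, QP) pair.
NnFilterSelection decomposeMode(uint32_t modeIndex, uint32_t numQpCandidates) {
  NnFilterSelection sel;
  sel.modelIndex = modeIndex / numQpCandidates;
  sel.qpIndex = modeIndex % numQpCandidates;
  return sel;
}

// Example: a 4-mode design built from 2 models and 2 QPs per model maps
// modeIndex 0..3 to (model 0, QP 0), (model 0, QP 1), (model 1, QP 0), (model 1, QP 1).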

As a second implementation, the example techniques of the first implementation can be extended to cover more color components. For example, all of the examples of the first implementation can be designed so that the NN model outputs all three color components (RGB, YUV, etc.), and the filtering control described in this disclosure can be used so that at most one NN filtering operation is performed for all three color components.

FIG. 2 is a block diagram illustrating an example video encoder 200 that may perform the techniques of this disclosure. FIG. 2 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video encoder 200 in the context of the VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265) video coding standards. However, the techniques of this disclosure may be performed by video encoding devices that are configured for other video coding standards and video coding formats, such as AV1 and successors to the AV1 video coding format.

In the example of FIG. 2, video encoder 200 includes video data memory 230, mode selection unit 202, residual generation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, filter unit 216, decoded picture buffer (DPB) 218, and entropy encoding unit 220. Any or all of video data memory 230, mode selection unit 202, residual generation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, filter unit 216, DPB 218, and entropy encoding unit 220 may be implemented in one or more processors or in processing circuitry. For instance, the units of video encoder 200 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, ASIC, or FPGA. Moreover, video encoder 200 may include additional or alternative processors or processing circuitry to perform these and other functions.

Video data memory 230 may store video data to be encoded by the components of video encoder 200. Video encoder 200 may receive the video data stored in video data memory 230 from, for example, video source 104 (FIG. 1). DPB 218 may act as a reference picture memory that stores reference video data for use by video encoder 200 in predicting subsequent video data. Video data memory 230 and DPB 218 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 230 and DPB 218 may be provided by the same memory device or separate memory devices. In various examples, video data memory 230 may be on-chip with other components of video encoder 200, as illustrated, or off-chip relative to those components.

In this disclosure, references to video data memory 230 should not be interpreted as being limited to memory internal to video encoder 200, unless specifically described as such, or to memory external to video encoder 200, unless specifically described as such. Rather, a reference to video data memory 230 should be understood as reference memory that stores video data that video encoder 200 receives for encoding (e.g., video data for a current block that is to be encoded). Memory 106 of FIG. 1 may also provide temporary storage of outputs from the various units of video encoder 200.

The various units of FIG. 2 are illustrated to assist with understanding the operations performed by video encoder 200. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, the one or more units may be integrated circuits.

Video encoder 200 may include arithmetic logic units (ALUs), elementary function units (EFUs), digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of video encoder 200 are performed using software executed by the programmable circuits, memory 106 (FIG. 1) may store the instructions (e.g., object code) of the software that video encoder 200 receives and executes, or another memory within video encoder 200 (not shown) may store such instructions.

Video data memory 230 is configured to store received video data. Video encoder 200 may retrieve a picture of the video data from video data memory 230 and provide the video data to residual generation unit 204 and mode selection unit 202. The video data in video data memory 230 may be raw video data that is to be encoded.

Mode selection unit 202 includes a motion estimation unit 222, a motion compensation unit 224, and an intra-prediction unit 226. Mode selection unit 202 may include additional functional units to perform video prediction in accordance with other prediction modes. As examples, mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of motion estimation unit 222 and/or motion compensation unit 224), an affine unit, a linear model (LM) unit, or the like.

Mode selection unit 202 generally coordinates multiple encoding passes to test combinations of encoding parameters and the resulting rate-distortion values for such combinations. The encoding parameters may include partitioning of CTUs into CUs, prediction modes for the CUs, transform types for residual data of the CUs, quantization parameters for residual data of the CUs, and so on. Mode selection unit 202 may ultimately select the combination of encoding parameters having rate-distortion values that are better than the other tested combinations.

Video encoder 200 may partition a picture retrieved from video data memory 230 into a series of CTUs and encapsulate one or more CTUs within a slice. Mode selection unit 202 may partition a CTU of the picture in accordance with a tree structure, such as the MTT structure, QTBT structure, superblock structure, or quadtree structure described above. As described above, video encoder 200 may form one or more CUs by partitioning a CTU according to the tree structure. Such a CU may also generally be referred to as a "video block" or "block."

In general, mode selection unit 202 also controls its components (e.g., motion estimation unit 222, motion compensation unit 224, and intra-prediction unit 226) to generate a prediction block for the current block (e.g., the current CU, or in HEVC, the overlapping portion of a PU and a TU). For inter-prediction of the current block, motion estimation unit 222 may perform a motion search to identify one or more closely matching reference blocks in one or more reference pictures (e.g., one or more previously coded pictures stored in DPB 218). In particular, motion estimation unit 222 may calculate a value representative of how similar a potential reference block is to the current block, e.g., according to sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared differences (MSD), or the like. Motion estimation unit 222 may generally perform these calculations using sample-by-sample differences between the current block and the reference block being considered. Motion estimation unit 222 may identify the reference block having the lowest value resulting from these calculations, indicating the reference block that most closely matches the current block.
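As a minimal sketch of one of the similarity measures named above (the flat row-major sample layout and 8-bit samples are assumptions made only for this example), the SAD between a current block and a candidate reference block could be computed as follows:
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sum of absolute differences between two equally sized blocks stored row-major.
uint64_t computeSad(const std::vector<uint8_t>& currentBlock,
                    const std::vector<uint8_t>& referenceBlock) {
  uint64_t sad = 0;
  for (std::size_t i = 0; i < currentBlock.size(); ++i)
    sad += static_cast<uint64_t>(std::abs(static_cast<int>(currentBlock[i]) -
                                          static_cast<int>(referenceBlock[i])));
  return sad;
}
// Motion estimation would evaluate computeSad (or SSD/MAD/MSD) for each candidate
// reference block and keep the candidate with the lowest value.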

Motion estimation unit 222 may form one or more motion vectors (MVs) that define the position of a reference block in a reference picture relative to the position of the current block in the current picture. Motion estimation unit 222 may then provide the motion vectors to motion compensation unit 224. For example, for uni-directional inter-prediction, motion estimation unit 222 may provide a single motion vector, whereas for bi-directional inter-prediction, motion estimation unit 222 may provide two motion vectors. Motion compensation unit 224 may then generate a prediction block using the motion vectors. For example, motion compensation unit 224 may retrieve data of the reference block using the motion vector. As another example, if the motion vector has fractional sample precision, motion compensation unit 224 may interpolate values for the prediction block according to one or more interpolation filters. Moreover, for bi-directional inter-prediction, motion compensation unit 224 may retrieve data for the two reference blocks identified by the respective motion vectors and combine the retrieved data, e.g., through sample-by-sample averaging or weighted averaging.
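A minimal sketch of the bi-directional combination step (a plain sample-by-sample average of the two retrieved reference blocks, leaving out weighted prediction and high-precision rounding details) could look like this:
#include <cstdint>
#include <vector>

// Combines two motion-compensated reference blocks by sample-wise averaging.
std::vector<uint8_t> combineBiPrediction(const std::vector<uint8_t>& reference0,
                                         const std::vector<uint8_t>& reference1) {
  std::vector<uint8_t> prediction(reference0.size());
  for (std::size_t i = 0; i < reference0.size(); ++i)
    prediction[i] = static_cast<uint8_t>((reference0[i] + reference1[i] + 1) >> 1);
  return prediction;
}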

When operating according to the AV1 video coding format, motion estimation unit 222 and motion compensation unit 224 may be configured to encode coding blocks of video data (e.g., both luma and chroma coding blocks) using translational motion compensation, affine motion compensation, overlapped block motion compensation (OBMC), and/or compound inter-intra prediction.

As another example, for intra-prediction, or intra-prediction coding, intra-prediction unit 226 may generate the prediction block from samples neighboring the current block. For example, for directional modes, intra-prediction unit 226 may generally mathematically combine values of neighboring samples and populate these calculated values in the defined direction across the current block to produce the prediction block. As another example, for DC mode, intra-prediction unit 226 may calculate an average of the samples neighboring the current block and generate the prediction block to include this resulting average for each sample of the prediction block.
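The DC mode described above can be sketched as follows; the example assumes the neighboring samples have already been gathered into a single array, and the fallback value of 128 when no neighbors are available is an assumption, both simplifications of the actual reference-sample derivation.
#include <cstdint>
#include <numeric>
#include <vector>

// Fills an entire prediction block with the average of the neighboring samples.
std::vector<uint8_t> predictDc(const std::vector<uint8_t>& neighborSamples,
                               std::size_t blockWidth, std::size_t blockHeight) {
  const uint32_t sum = std::accumulate(neighborSamples.begin(), neighborSamples.end(), 0u);
  const uint8_t dcValue =
      neighborSamples.empty() ? 128 : static_cast<uint8_t>(sum / neighborSamples.size());
  return std::vector<uint8_t>(blockWidth * blockHeight, dcValue);
}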

When operating according to the AV1 video coding format, intra-prediction unit 226 may be configured to encode coding blocks of video data (e.g., both luma and chroma coding blocks) using directional intra-prediction, non-directional intra-prediction, recursive filter intra-prediction, chroma-from-luma (CFL) prediction, intra block copy (IBC), and/or palette mode. Mode selection unit 202 may include additional functional units to perform video prediction in accordance with other prediction modes.

Mode selection unit 202 provides the prediction block to residual generation unit 204. Residual generation unit 204 receives a raw, unencoded version of the current block from video data memory 230 and the prediction block from mode selection unit 202. Residual generation unit 204 calculates sample-by-sample differences between the current block and the prediction block. The resulting sample-by-sample differences define a residual block for the current block. In some examples, residual generation unit 204 may also determine differences between sample values in the residual block to generate the residual block using residual differential pulse code modulation (RDPCM). In some examples, residual generation unit 204 may be formed using one or more subtractor circuits that perform binary subtraction.
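A minimal sketch of the sample-by-sample difference that defines the residual block (ignoring RDPCM and any bit-depth handling) might look like the following:
#include <cstdint>
#include <vector>

// Residual = current block minus prediction block, computed sample by sample.
std::vector<int16_t> computeResidual(const std::vector<uint8_t>& currentBlock,
                                     const std::vector<uint8_t>& predictionBlock) {
  std::vector<int16_t> residual(currentBlock.size());
  for (std::size_t i = 0; i < currentBlock.size(); ++i)
    residual[i] = static_cast<int16_t>(currentBlock[i]) -
                  static_cast<int16_t>(predictionBlock[i]);
  return residual;
}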

In examples where mode selection unit 202 partitions CUs into PUs, each PU may be associated with a luma prediction unit and corresponding chroma prediction units. Video encoder 200 and video decoder 300 may support PUs having various sizes. As indicated above, the size of a CU may refer to the size of the luma coding block of the CU, and the size of a PU may refer to the size of a luma prediction unit of the PU. Assuming that the size of a particular CU is 2Nx2N, video encoder 200 may support PU sizes of 2Nx2N or NxN for intra-prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter-prediction. Video encoder 200 and video decoder 300 may also support asymmetric partitioning for PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter-prediction.

In examples where mode selection unit 202 does not further partition a CU into PUs, each CU may be associated with a luma coding block and corresponding chroma coding blocks. As above, the size of a CU may refer to the size of the luma coding block of the CU. Video encoder 200 and video decoder 300 may support CU sizes of 2Nx2N, 2NxN, or Nx2N.

For other video coding techniques, such as intra-block copy mode coding, affine mode coding, and linear model (LM) mode coding, as a few examples, mode selection unit 202, via respective units associated with those coding techniques, generates a prediction block for the current block being encoded. In some examples, such as palette mode coding, mode selection unit 202 may not generate a prediction block, and instead generates syntax elements that indicate the manner in which to reconstruct the block based on a selected palette. In such modes, mode selection unit 202 may provide these syntax elements to entropy encoding unit 220 to be encoded.

As described above, residual generation unit 204 receives the video data for the current block and the corresponding prediction block. Residual generation unit 204 then generates a residual block for the current block. To generate the residual block, residual generation unit 204 calculates sample-by-sample differences between the prediction block and the current block.

Transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). Transform processing unit 206 may apply various transforms to a residual block to form the transform coefficient block. For example, transform processing unit 206 may apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to the residual block. In some examples, transform processing unit 206 may perform multiple transforms on a residual block, e.g., a primary transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to a residual block.

When operating according to AV1, transform processing unit 206 may apply one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). Transform processing unit 206 may apply various transforms to a residual block to form the transform coefficient block. For example, transform processing unit 206 may apply a horizontal/vertical transform combination that may include a discrete cosine transform (DCT), an asymmetric discrete sine transform (ADST), a flipped ADST (e.g., an ADST in reverse order), and an identity transform (IDTX). When an identity transform is used, the transform is skipped in one of the vertical or horizontal directions. In some examples, transform processing may be skipped.

Quantization unit 208 may quantize the transform coefficients in a transform coefficient block to produce a quantized transform coefficient block. Quantization unit 208 may quantize transform coefficients of a transform coefficient block according to a quantization parameter (QP) value associated with the current block. Video encoder 200 (e.g., via mode selection unit 202) may adjust the degree of quantization applied to the transform coefficient blocks associated with the current block by adjusting the QP value associated with the CU. Quantization may introduce loss of information, and thus the quantized transform coefficients may have lower precision than the original transform coefficients produced by transform processing unit 206.
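As a rough illustration only (the actual HEVC/VVC quantizer uses integer scaling tables and rounding offsets not shown here; the step-size formula below is only an approximation of its general behavior), a QP-driven uniform quantizer could be sketched as follows, where the quantization step grows with QP:
#include <cmath>
#include <cstdint>
#include <vector>

// Simplified uniform quantization: the step size roughly doubles every 6 QP units.
std::vector<int32_t> quantize(const std::vector<int32_t>& transformCoefficients, int qp) {
  const double step = std::pow(2.0, (qp - 4) / 6.0);  // approximate step size for this sketch
  std::vector<int32_t> quantized(transformCoefficients.size());
  for (std::size_t i = 0; i < transformCoefficients.size(); ++i)
    quantized[i] = static_cast<int32_t>(std::lround(transformCoefficients[i] / step));
  return quantized;
}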

Inverse quantization unit 210 and inverse transform processing unit 212 may apply inverse quantization and inverse transforms, respectively, to a quantized transform coefficient block to reconstruct a residual block from the transform coefficient block. Reconstruction unit 214 may produce a reconstructed block corresponding to the current block (albeit potentially with some degree of distortion) based on the reconstructed residual block and the prediction block generated by mode selection unit 202. For example, reconstruction unit 214 may add samples of the reconstructed residual block to corresponding samples of the prediction block generated by mode selection unit 202 to produce the reconstructed block.
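A minimal sketch of the reconstruction step (adding reconstructed residual samples to prediction samples and clipping to an 8-bit sample range, which is an assumed bit depth) is shown below:
#include <algorithm>
#include <cstdint>
#include <vector>

// Reconstructed block = clip(prediction + reconstructed residual), sample by sample.
std::vector<uint8_t> reconstructBlock(const std::vector<uint8_t>& predictionBlock,
                                      const std::vector<int16_t>& reconstructedResidual) {
  std::vector<uint8_t> reconstructed(predictionBlock.size());
  for (std::size_t i = 0; i < predictionBlock.size(); ++i) {
    const int value = static_cast<int>(predictionBlock[i]) + reconstructedResidual[i];
    reconstructed[i] = static_cast<uint8_t>(std::clamp(value, 0, 255));
  }
  return reconstructed;
}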

Filter unit 216 may perform one or more filter operations on reconstructed blocks. For example, filter unit 216 may perform deblocking operations to reduce blockiness artifacts along edges of CUs. The operations of filter unit 216 may be skipped in some examples.

In accordance with one or more examples described in this disclosure, filter unit 216 may be configured to perform the neural-network-based filtering techniques. For example, in addition to or instead of other filtering techniques such as ALF and SAO, filter unit 216 may perform the neural-network-based filtering techniques. For instance, filter unit 216 may be part of a reconstruction loop that includes the decoding process performed by video encoder 200. Filter unit 216 may perform the neural-network-based filtering described in this disclosure as part of in-loop filtering in the reconstruction loop of the decoding process of video encoder 200.

When operating according to AV1, filter unit 216 may perform one or more filter operations on reconstructed blocks. For example, filter unit 216 may perform deblocking operations to reduce blockiness artifacts along edges of CUs. In other examples, filter unit 216 may apply a constrained directional enhancement filter (CDEF), which may be applied after deblocking and may include the application of non-separable, non-linear, low-pass directional filters based on estimated edge directions. Filter unit 216 may also include a loop restoration filter, which is applied after CDEF and may include a separable symmetric normalized Wiener filter or a dual self-guided filter.

Video encoder 200 stores reconstructed blocks in DPB 218. For instance, in examples where operations of filter unit 216 are not performed, reconstruction unit 214 may store reconstructed blocks to DPB 218. In examples where operations of filter unit 216 are performed, filter unit 216 may store the filtered reconstructed blocks to DPB 218. Motion estimation unit 222 and motion compensation unit 224 may retrieve a reference picture from DPB 218, formed from the reconstructed (and potentially filtered) blocks, to inter-predict blocks of subsequently encoded pictures. In addition, intra-prediction unit 226 may use reconstructed blocks in DPB 218 of the current picture to intra-predict other blocks in the current picture.

In general, entropy encoding unit 220 may entropy encode syntax elements received from other functional components of video encoder 200. For example, entropy encoding unit 220 may entropy encode quantized transform coefficient blocks from quantization unit 208. As another example, entropy encoding unit 220 may entropy encode prediction syntax elements (e.g., motion information for inter-prediction or intra-mode information for intra-prediction) from mode selection unit 202. Entropy encoding unit 220 may perform one or more entropy encoding operations on the syntax elements, which are another example of video data, to generate entropy-encoded data. For example, entropy encoding unit 220 may perform a context-adaptive variable length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, an Exponential-Golomb encoding operation, or another type of entropy encoding operation on the data. In some examples, entropy encoding unit 220 may operate in a bypass mode in which syntax elements are not entropy encoded.

Video encoder 200 may output a bitstream that includes the entropy-encoded syntax elements needed to reconstruct blocks of a slice or picture. In particular, entropy encoding unit 220 may output the bitstream.

According to AV1, entropy encoding unit 220 may be configured as a symbol-to-symbol adaptive multi-symbol arithmetic coder. A syntax element in AV1 includes an alphabet of N elements, and a context (e.g., probability model) includes a set of N probabilities. Entropy encoding unit 220 may store the probabilities as n-bit (e.g., 15-bit) cumulative distribution functions (CDFs). Entropy encoding unit 220 may perform recursive scaling, with an update factor based on the alphabet size, to update the contexts.

The operations described above are described with respect to a block. Such description should be understood as being operations for a luma coding block and/or chroma coding blocks. As described above, in some examples the luma coding block and the chroma coding blocks are the luma and chroma components of a CU. In some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a PU.

In some examples, operations performed with respect to a luma coding block need not be repeated for the chroma coding blocks. As one example, the operations of identifying a motion vector (MV) and reference picture for a luma coding block need not be repeated to identify an MV and reference picture for the chroma blocks. Rather, the MV for the luma coding block may be scaled to determine the MV for the chroma blocks, and the reference picture may be the same. As another example, the intra-prediction process may be the same for the luma coding block and the chroma coding blocks.

Video encoder 200 represents an example of a device configured to encode video data, the device including a memory configured to store video data, and one or more processing units implemented in circuitry and configured to filter two or more color components of the video data with an NN-based filter from a single neural network (NN) model and to output the two or more color components after a single filtering process with the NN-based filter. In some examples, the filtering includes determining how the NN-based filter is applied, whether the NN-based filter is turned on or off, and how the input data for the single NN model is built, with the values of all input elements determined in the same manner for the two or more color components.

In some examples, video encoder 200 may be configured to signal information indicating whether the NN-based filter is on or off for each of the two or more color components. In such examples, the filtering includes filtering according to whether the NN-based filter is on or off for each of the two or more color components.

For example, video encoder 200 may signal a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component. Examples of the syntax element include the first-level filtering mode syntax element described above, the second-level filtering mode syntax element, first_chroma_mode_slice (e.g., where first_chroma_mode_slice is non-zero and second_chroma_flag_slice is true), and chroma_mode_slice (e.g., where first_chroma_flag_slice and/or second_chroma_flag_slice is true).

Filter unit 216 may be configured to apply, in the defined filtering mode, an instance of the NN model to a first block of the first color component to generate a first filtered block. For example, filter unit 216 may use reconstructed samples of the first color component (e.g., a first chroma block) to generate first residual samples, and add the first residual samples to the reconstructed samples of the first color component to generate the first filtered block.

Filter unit 216 may be configured to apply, in the defined filtering mode, the same instance of the NN model to a second block of the second color component to generate a second filtered block. For example, filter unit 216 may use reconstructed samples of the second color component (e.g., a second chroma block) to generate second residual samples, and add the second residual samples to the reconstructed samples of the second color component to generate the second filtered block.
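A high-level sketch of this single-pass use of one NN model instance for both chroma blocks is shown below; the ChromaInference hook and the residual-style output are assumptions made for illustration only and do not correspond to a specific network architecture.
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical inference hook: one call produces residual corrections for both chroma
// blocks from the shared inputs (reconstructed samples, filtering mode, and so on).
using ChromaInference = std::function<std::pair<std::vector<int16_t>, std::vector<int16_t>>(
    const std::vector<uint8_t>&, const std::vector<uint8_t>&, int)>;

static uint8_t clipToByte(int value) {
  return static_cast<uint8_t>(value < 0 ? 0 : (value > 255 ? 255 : value));
}

// Runs a single instance of the NN model, in the defined filtering mode, for both chroma
// blocks, then adds the residual outputs back onto the reconstructed samples.
std::pair<std::vector<uint8_t>, std::vector<uint8_t>> filterChromaBlocks(
    const ChromaInference& runModel, int filteringMode,
    const std::vector<uint8_t>& reconstructedCb,
    const std::vector<uint8_t>& reconstructedCr) {
  const auto residuals = runModel(reconstructedCb, reconstructedCr, filteringMode);
  std::vector<uint8_t> filteredCb(reconstructedCb.size());
  std::vector<uint8_t> filteredCr(reconstructedCr.size());
  for (std::size_t i = 0; i < reconstructedCb.size(); ++i)
    filteredCb[i] = clipToByte(reconstructedCb[i] + residuals.first[i]);
  for (std::size_t i = 0; i < reconstructedCr.size(); ++i)
    filteredCr[i] = clipToByte(reconstructedCr[i] + residuals.second[i]);
  return {filteredCb, filteredCr};
}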

In one or more examples, filter unit 216 may store the first filtered block and the second filtered block. For example, the sample values that filter unit 216 stores in DPB 218 for the CU may be the first filtered block and the second filtered block. This way, if the CU is used for inter-prediction, the filtered blocks will be used for the inter-prediction.

In some cases, filter unit 216 may store the second filtered block only when filtering is enabled for the second color component. For example, for the same (e.g., single) instance of the NN model, filter unit 216 may generate the first filtered block and the second filtered block. However, in some cases the second filtered block may not be stored (e.g., it may be discarded).

As one example, video encoder 200 may signal the second_chroma_flag_slice flag described above as true to indicate that NN-based filtering is enabled for the second color component. In this example, filter unit 216 may store values for the CU based on the first filtered block (for the first color component) and the second filtered block (for the second color component). In another example, video encoder 200 may signal the second_chroma_flag_slice flag described above as false to indicate that NN-based filtering is disabled for the second color component. In this example, filter unit 216 may store values for the CU based on the first filtered block (for the first color component) and the unfiltered second block (for the second color component). Again, the second filtered block may be generated via application of the instance of the NN model; however, when the second_chroma_flag_slice flag is false, the second filtered block may be discarded.
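Continuing the sketch above (the function and parameter names here are illustrative, not defined syntax), the decision of which blocks end up as the stored CU samples could be expressed as follows; the per-component enable flags select between the filtered and unfiltered blocks, while the filtering itself still runs as a single NN invocation.
#include <cstdint>
#include <vector>

struct ChromaCuSamples {
  std::vector<uint8_t> cb;  // samples stored for the first color component
  std::vector<uint8_t> cr;  // samples stored for the second color component
};

// Chooses, per component, between the NN-filtered block and the unfiltered reconstructed
// block, based on the signaled enable flags (e.g., second_chroma_flag_slice).
ChromaCuSamples selectStoredChromaSamples(const std::vector<uint8_t>& reconstructedCb,
                                          const std::vector<uint8_t>& reconstructedCr,
                                          const std::vector<uint8_t>& filteredCb,
                                          const std::vector<uint8_t>& filteredCr,
                                          bool firstChromaFilteringEnabled,
                                          bool secondChromaFilteringEnabled) {
  ChromaCuSamples stored;
  stored.cb = firstChromaFilteringEnabled ? filteredCb : reconstructedCb;
  stored.cr = secondChromaFilteringEnabled ? filteredCr : reconstructedCr;
  return stored;
}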

FIG. 3 is a block diagram illustrating an example video decoder 300 that may perform the techniques of this disclosure. FIG. 3 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video decoder 300 in the context of the VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265) video coding standards. However, the techniques of this disclosure may be performed by video coding devices that are configured for other video coding standards.

In the example of FIG. 3, video decoder 300 includes coded picture buffer (CPB) memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and decoded picture buffer (DPB) 314. Any or all of CPB memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and DPB 314 may be implemented in one or more processors or in processing circuitry. For instance, the units of video decoder 300 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, ASIC, or FPGA. Moreover, video decoder 300 may include additional or alternative processors or processing circuitry to perform these and other functions.

Prediction processing unit 304 includes motion compensation unit 316 and intra-prediction unit 318. Prediction processing unit 304 may include additional units to perform prediction in accordance with other prediction modes. As examples, prediction processing unit 304 may include a palette unit, an intra-block copy unit (which may form part of motion compensation unit 316), an affine unit, a linear model (LM) unit, or the like. In other examples, video decoder 300 may include more, fewer, or different functional components.

When operating according to AV1, motion compensation unit 316 may be configured to decode coding blocks of video data (e.g., both luma and chroma coding blocks) using translational motion compensation, affine motion compensation, OBMC, and/or compound inter-intra prediction, as described above. Intra-prediction unit 318 may be configured to decode coding blocks of video data (e.g., both luma and chroma coding blocks) using directional intra-prediction, non-directional intra-prediction, recursive filter intra-prediction, CFL, intra block copy (IBC), and/or palette mode, as described above.

CPB memory 320 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 300. The video data stored in CPB memory 320 may be obtained, for example, from computer-readable medium 110 (FIG. 1). CPB memory 320 may include a CPB that stores encoded video data (e.g., syntax elements) from an encoded video bitstream. Also, CPB memory 320 may store video data other than the syntax elements of a coded picture, such as temporary data representing outputs from the various units of video decoder 300. DPB 314 generally stores decoded pictures, which video decoder 300 may output and/or use as reference video data when decoding subsequent data or pictures of the encoded video bitstream. CPB memory 320 and DPB 314 may be formed by any of a variety of memory devices, such as DRAM (including SDRAM), MRAM, RRAM, or other types of memory devices. CPB memory 320 and DPB 314 may be provided by the same memory device or by separate memory devices. In various examples, CPB memory 320 may be on-chip with other components of video decoder 300, or off-chip relative to those components.

Additionally or alternatively, in some examples, video decoder 300 may retrieve coded video data from memory 120 (FIG. 1). That is, memory 120 may store data as discussed above with respect to CPB memory 320. Likewise, memory 120 may store instructions to be executed by video decoder 300 when some or all of the functionality of video decoder 300 is implemented in software executed by processing circuitry of video decoder 300.

The various units of FIG. 3 are illustrated to assist with understanding the operations performed by video decoder 300. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Similar to FIG. 2, fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, the one or more units may be integrated circuits.

Video decoder 300 may include ALUs, EFUs, digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of video decoder 300 are performed by software executing on the programmable circuits, on-chip or off-chip memory may store the instructions (e.g., object code) of the software that video decoder 300 receives and executes.

Entropy decoding unit 302 may receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. Prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, and filtering unit 312 may generate decoded video data based on the syntax elements extracted from the bitstream.

In general, video decoder 300 reconstructs a picture on a block-by-block basis. Video decoder 300 may perform a reconstruction operation on each block individually (where the block currently being reconstructed, i.e., decoded, may be referred to as a "current block").

Entropy decoding unit 302 may entropy decode syntax elements defining the quantized transform coefficients of a quantized transform coefficient block, as well as transform information such as a quantization parameter (QP) and/or transform mode indication(s). Inverse quantization unit 306 may use the QP associated with the quantized transform coefficient block to determine a degree of quantization and, likewise, a degree of inverse quantization for inverse quantization unit 306 to apply. Inverse quantization unit 306 may, for example, perform a bitwise left-shift operation to inverse quantize the quantized transform coefficients. Inverse quantization unit 306 may thereby form a transform coefficient block including transform coefficients.
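The shift-based inverse quantization mentioned above can be pictured with a minimal sketch like the one below. This is an illustrative simplification only; it assumes a step size that roughly doubles every six QP steps and does not reproduce the exact HEVC/VVC dequantization equations.

```python
def inverse_quantize(levels, qp):
    """Illustrative dequantization: scale each quantized level by a factor that
    grows with QP, using a bitwise left shift for the power-of-two part.

    levels -- quantized transform coefficient levels of a block
    qp     -- quantization parameter associated with the block
    """
    shift = qp // 6  # every 6 QP steps roughly doubles the quantization step size
    return [level << shift for level in levels]
```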

After inverse quantization unit 306 forms the transform coefficient block, inverse transform processing unit 308 may apply one or more inverse transforms to the transform coefficient block to generate a residual block associated with the current block. For example, inverse transform processing unit 308 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the transform coefficient block.

Furthermore, prediction processing unit 304 generates a prediction block according to the prediction information syntax elements that were entropy decoded by entropy decoding unit 302. For example, if the prediction information syntax elements indicate that the current block is inter-predicted, motion compensation unit 316 may generate the prediction block. In this case, the prediction information syntax elements may indicate a reference picture in DPB 314 from which to retrieve a reference block, as well as a motion vector identifying a location of the reference block in the reference picture relative to the location of the current block in the current picture. Motion compensation unit 316 may generally perform the inter-prediction process in a manner that is substantially similar to that described with respect to motion compensation unit 224 (FIG. 2).
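As a simplified illustration of the retrieval step described above, the sketch below copies a reference block out of a stored reference picture at the position indicated by an integer motion vector. Fractional-sample interpolation and picture-boundary handling are intentionally omitted, and the function name is hypothetical.

```python
def fetch_reference_block(ref_picture, x, y, mv_x, mv_y, width, height):
    """Copy a width x height block from ref_picture (a list of sample rows),
    displaced from the current block position (x, y) by the integer motion
    vector (mv_x, mv_y)."""
    ref_x, ref_y = x + mv_x, y + mv_y
    return [row[ref_x:ref_x + width]
            for row in ref_picture[ref_y:ref_y + height]]
```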

As another example, if the prediction information syntax elements indicate that the current block is intra-predicted, intra-prediction unit 318 may generate the prediction block according to an intra-prediction mode indicated by the prediction information syntax elements. Again, intra-prediction unit 318 may generally perform the intra-prediction process in a manner that is substantially similar to that described with respect to intra-prediction unit 226 (FIG. 2). Intra-prediction unit 318 may retrieve data of neighboring samples of the current block from DPB 314.

Reconstruction unit 310 may reconstruct the current block using the prediction block and the residual block. For example, reconstruction unit 310 may add the samples of the residual block to the corresponding samples of the prediction block to reconstruct the current block.
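For instance, the sample-wise addition performed by reconstruction unit 310 can be pictured with the following small sketch; the names are hypothetical and the clipping range assumes 8-bit samples.

```python
def reconstruct_block(prediction, residual, bit_depth=8):
    """Add residual samples to co-located prediction samples and clip the sums
    to the valid sample range (a simplified view of the reconstruction step)."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val)
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(prediction, residual)]
```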

Filtering unit 312 may perform one or more filter operations on reconstructed blocks. For example, filtering unit 312 may perform deblocking operations to reduce blockiness artifacts along the edges of reconstructed blocks. The operations of filtering unit 312 are not necessarily performed in all examples.

In accordance with one or more examples described in this disclosure, filtering unit 312 may be configured to perform the neural-network-based filtering techniques. For example, filtering unit 312 may perform neural-network-based filtering techniques in addition to, or in place of, other filtering techniques such as ALF and SAO. In some examples, filtering unit 312 may perform the neural-network-based filtering described in this disclosure as part of in-loop filtering. Although not illustrated, in some examples a filtering unit may be coupled to the output of DPB 314, and the output from that filtering unit may be the decoded video. In some examples, the filtering unit coupled to the output of DPB 314 may be configured to perform the example neural-network-based filtering techniques described in this disclosure as part of post-loop filtering.

Video decoder 300 may store the reconstructed blocks in DPB 314. For instance, in examples where the operations of filtering unit 312 are not performed, reconstruction unit 310 may store the reconstructed blocks to DPB 314. In examples where the operations of filtering unit 312 are performed, filtering unit 312 may store the filtered reconstructed blocks to DPB 314. As discussed above, DPB 314 may provide reference information, such as samples of a current picture for intra prediction and previously decoded pictures for subsequent motion compensation, to prediction processing unit 304. Moreover, video decoder 300 may output decoded pictures (e.g., decoded video) from DPB 314 for subsequent presentation on a display device, such as display device 118 of FIG. 1.

In this manner, video decoder 300 represents an example of a video decoding device that includes a memory configured to store video data, and one or more processing units implemented in circuitry and configured to: filter two or more color components of the video data with an NN-based filter from a single neural network (NN) model, and output, with the NN-based filter, the two or more color components after a single filtering process. In some examples, the filtering includes determining how to apply the NN-based filter, whether to turn the NN-based filter on or off, how to create the input data for the single NN model, and determining the values of all input elements in the same manner for the two or more color components.

In some examples, video decoder 300 may be configured to receive information indicating whether the NN-based filter is on or off for each of the two or more color components. In such examples, the filtering includes filtering based on whether the NN-based filter is on or off for each of the two or more color components.

For example, filtering unit 312 may receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component. Examples of the syntax element include the first-level filtering mode syntax element described above, the second-level filtering mode syntax element, first_chroma_mode_slice (e.g., where first_chroma_mode_slice is non-zero and second_chroma_flag_slice is true), and chroma_mode_slice (e.g., where first_chroma_flag_slice and/or second_chroma_flag_slice is true).

In some examples, the syntax element defines which parameter of a plurality of parameters is selected, and the defined parameter specifies the filtering mode. As one example, the parameter is a quantization parameter, and the plurality of parameters is one of a predefined set or a signaled set. For example, a value of 1 for the first-level filtering mode syntax element indicates that the filtering mode is defined by the first parameter in the predefined or signaled set, a value of 2 for the first-level filtering mode syntax element indicates that the filtering mode is defined by the second parameter in the predefined or signaled set, and a value of 3 for the first-level filtering mode syntax element indicates that the filtering mode is defined by the third parameter in the predefined or signaled set. The values of the second-level filtering mode syntax element, first_chroma_mode_slice, and chroma_mode_slice may similarly define the filtering mode, as described in the examples above.
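The sketch below illustrates, under the assumptions of the example above, how a decoder might map such a filtering mode value onto one QP from a predefined or signaled set. The helper name and the contents of the set are hypothetical and not part of any specified syntax.

```python
def select_filtering_qp(mode_value, qp_set):
    """Map a first-level (or second-level) filtering mode value of 1, 2, or 3
    onto the first, second, or third QP of the set, which specifies the
    filtering mode. A value of 0 means NN-based filtering is disabled."""
    if mode_value == 0:
        return None                       # filtering disabled
    if 1 <= mode_value <= 3:
        return qp_set[mode_value - 1]     # 1 -> first QP, 2 -> second, 3 -> third
    raise ValueError("value handled by lower-level signaling in this sketch")
```

For instance, with a signaled set of [22, 27, 32], a syntax element value of 2 would select the second QP, 27, as the parameter that specifies the filtering mode.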

Filtering unit 312 may apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block. Furthermore, filtering unit 312 may apply the same instance of the NN model, in the defined filtering mode, to a second block of the second color component to generate a second filtered block. Filtering unit 312 may store values for the CU based on the first filtered block. With NN-based filtering enabled for both the first color component and the second color component, filtering unit 312 may store the values for the CU based on the first filtered block and the second filtered block. As discussed above, CU is used as a term to describe a composite block that includes multiple different color components.

In this way, one syntax element defines the filtering mode for two or more color components, rather than having separate syntax elements define the filtering mode for each color component individually. For example, in an execution of the NN model in the defined filtering mode, filtering unit 312 may generate the first filtered block for the first color component and the second filtered block for the second color component. Filtering unit 312 may store the values for the CU based on the first filtered block and, if filtering is enabled for the second color component, store the values for the CU based on the first filtered block and the second filtered block.

Storing the values of the CU based on the first filtered block and/or the second filtered block may mean that the samples of the CU stored in DPB 314 are those of the first filtered block and the second filtered block or, if filtering is not enabled for the second color component, that the stored samples of the CU are those of the first filtered block and the second block (without filtering). As another example, the decoded video data output by DPB 314 may include the first filtered block and the second filtered block (if filtering is enabled for both), or, if filtering is not enabled for the second color component, the decoded video data output by DPB 314 may include the first filtered block and the second block (without filtering).

As one example, the syntax element may be a syntax element that is applicable to a plurality of CUs. For example, the syntax element may be the first-level filtering mode syntax element that defines the filtering mode for a plurality of CUs (e.g., the CUs within a slice). As another example, the syntax element may be the second-level filtering mode syntax element that defines the filtering mode for a subset of the CUs. For instance, video decoder 300 may receive a first syntax element, applicable to a plurality of CUs, that indicates that a second syntax element is to be parsed for a subset of CUs of the plurality of CUs (e.g., in the example above, the first-level filtering mode syntax element is 4). In this example, the subset of CUs includes the one or more CUs for which the values are stored. Video decoder 300 may receive the second syntax element (e.g., the second-level filtering mode syntax element) based on the first syntax element (e.g., the first-level filtering mode syntax element) indicating that the second syntax element is to be parsed for the subset of CUs. That is, in the example above, when the value of the first-level syntax element is 4, video decoder 300 may receive second-level syntax elements with values between 0 and 3.
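To make the two-level signaling concrete, the sketch below shows one possible parsing flow in which a first-level value of 4 defers the decision to per-CTU (or per-CU) second-level syntax elements. The bitstream-reading callable is a placeholder, not an actual parsing API.

```python
def parse_filtering_modes(read_ue, num_ctus):
    """Hypothetical parsing flow for the two-level filtering mode signaling.

    read_ue  -- callable returning the next unsigned syntax element value
    num_ctus -- number of CTUs (or CUs) covered by the slice-level element
    """
    first_level = read_ue()              # 0..4, applies to all CTUs in the slice
    if first_level != 4:
        # One mode (or "disabled" when 0) shared by every CTU in the slice.
        return [first_level] * num_ctus
    # Value 4: a second-level element with value 0..3 is parsed per CTU.
    return [read_ue() for _ in range(num_ctus)]
```

For example, if the slice-level element is 4 and the per-CTU elements are 1, 0, and 3, the call returns [1, 0, 3], meaning NN-based filtering is disabled for the second CTU and the first and third CTUs use the first and third parameters of the set, respectively.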

As discussed above, whether the second filtered block (e.g., generated by the NN-based filtering) is used may be set based on a signaled parameter. For example, the syntax element received by video decoder 300 that defines the filtering mode for the NN model for both the first color component and the second color component may have a value indicating that NN-based filtering is enabled for the first color component. For instance, the value of first_chroma_mode_slice may be non-zero. In this example, if a flag (e.g., second_chroma_flag_slice) indicates that NN-based filtering is enabled for the second color component, filtering unit 312 may generate the second filtered block based on the second block of the second color component by applying the same instance of the NN model, in the defined filtering mode, to the second block.

That is, the value of first_chroma_mode_slice may define the filtering mode for the NN model and, if filtering is enabled for the second color component, the value of first_chroma_mode_slice also defines the filtering mode of the NN model for the second color component. Accordingly, in this example, filtering unit 312 may store the values for the CU using the first filtered block and the second filtered block generated by applying the same instance of the NN model in the defined filtering mode.

However, in examples where second_chroma_flag_slice is false (indicating that NN-based filtering is disabled for the second color component), filtering unit 312 may store the values for the CU based on the first filtered block and the second block of the second color component. That is, filtering unit 312 may discard the second filtered block of the second color component and instead store the first filtered block and the unfiltered second block in DPB 314.

The use of second_chroma_flag_slice is one example. In some examples, such as where the syntax element that defines the filtering mode for both color components is chroma_mode_slice, video decoder 300 may receive a flag indicating that NN-based filtering is enabled for the first color component (e.g., receive first_chroma_flag_slice or second_chroma_flag_slice). In such examples, video decoder 300 may receive the syntax element (e.g., chroma_mode_slice) based on the flag indicating that NN-based filtering is enabled for the first color component.

FIG. 4 is a flowchart illustrating an example method for encoding a current block in accordance with the techniques of this disclosure. The current block may comprise a current CU. Although described with respect to video encoder 200 (FIGS. 1 and 2), it should be understood that other devices may be configured to perform a method similar to that of FIG. 4.

In this example, video encoder 200 initially predicts the current block (350). For example, video encoder 200 may form a prediction block for the current block. Video encoder 200 may then calculate a residual block for the current block (352). To calculate the residual block, video encoder 200 may calculate the difference between the original, uncoded block and the prediction block for the current block. Video encoder 200 may then transform the residual block and quantize the transform coefficients of the residual block (354). Next, video encoder 200 may scan the quantized transform coefficients of the residual block (356). During or after the scan, video encoder 200 may entropy encode the transform coefficients (358). For example, video encoder 200 may encode the transform coefficients using CAVLC or CABAC. Video encoder 200 may then output the entropy-encoded data of the block (360).

FIG. 5 is a flowchart illustrating an example method for decoding a current block of video data in accordance with the techniques of this disclosure. The current block may comprise a current CU. Although described with respect to video decoder 300 (FIGS. 1 and 3), it should be understood that other devices may be configured to perform a method similar to that of FIG. 5.

Video decoder 300 may receive entropy-encoded data for the current block, such as entropy-encoded prediction information and entropy-encoded data for transform coefficients of a residual block corresponding to the current block (370). Video decoder 300 may entropy decode the entropy-encoded data to determine prediction information for the current block and to reproduce the transform coefficients of the residual block (372). Video decoder 300 may predict the current block (374), for example using an intra- or inter-prediction mode as indicated by the prediction information for the current block, to calculate a prediction block for the current block. Video decoder 300 may then inverse scan the reproduced transform coefficients (376) to create a block of quantized transform coefficients. Video decoder 300 may then inverse quantize the transform coefficients and apply an inverse transform to the transform coefficients to produce a residual block (378). Video decoder 300 may ultimately decode the current block by combining the prediction block and the residual block (380).

In one or more examples, video decoder 300 may also be configured to perform neural-network-based filtering in accordance with one or more of the examples described in this disclosure. For example, filtering unit 312 may be configured to perform in-loop filtering using neural-network-based filtering. In some examples, a filtering unit coupled to the output of DPB 314 may be configured to perform filtering using the neural-network-based filtering described in this disclosure as part of post-loop filtering.

Although FIG. 5 is described with respect to video decoder 300, in some examples video encoder 200 also includes a reconstruction process, and video encoder 200 (e.g., via inverse quantization unit 210, inverse transform processing unit 212, and reconstruction unit 214) may be configured to perform the example techniques of FIG. 5. In some examples, filtering unit 216 may be configured to perform the neural-network-based filtering techniques described in this disclosure. For example, filtering unit 216 may perform in-loop filtering as part of the reconstruction loop of video encoder 200.

FIG. 9 is a flowchart illustrating an example method for processing video data in accordance with the techniques of this disclosure. For ease of description, the examples are described with respect to video decoder 300, but video encoder 200 may perform similar operations.

Video decoder 300 may receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component (900). Examples of the syntax element include the first-level filtering mode syntax element described above, the second-level filtering mode syntax element, first_chroma_mode_slice (e.g., where first_chroma_mode_slice is non-zero and second_chroma_flag_slice is true), and chroma_mode_slice (e.g., where first_chroma_flag_slice and/or second_chroma_flag_slice is true). In some examples, the syntax element defines which parameter of a plurality of parameters is selected, and the defined parameter specifies the filtering mode. As one example, the parameter is a quantization parameter, and the plurality of parameters is one of a predefined set or a signaled set.

As discussed above, the value of the first-level filtering mode syntax element may be between 0 and 4. A value of 0 may indicate that NN-based filtering is disabled for the color components of the blocks within the slice (as one example). A value of 1, 2, or 3 may indicate the filtering mode for the NN model for both the first color component and the second color component (e.g., defining whether the first, second, or third QP in the set is used, with the filtering mode defined by the first, second, or third QP in the set). A value of 4 may indicate that the filtering mode for the NN model is defined at a lower level (e.g., at the CTU level or CU level) by the second-level filtering mode syntax element. A value of 0 for the second-level filtering mode syntax element may indicate that NN-based filtering is disabled for the color components (as one example). A value of 1, 2, or 3 may indicate the filtering mode for the NN model for both the first color component and the second color component (e.g., defining whether the first, second, or third QP in the set is used, with the filtering mode defined by the first, second, or third QP in the set).

first_chroma_mode_slice is another example of a syntax element that defines a filtering mode for a neural network (NN) model for both the first color component and the second color component. For example, if first_chroma_mode_slice has a non-zero value and NN-based filtering is enabled for the second color component, then first_chroma_mode_slice defines the filtering mode for both the first color component and the second color component.

chroma_mode_slice is another example of a syntax element that defines a filtering mode for a neural network (NN) model for both the first color component and the second color component. For example, where first_chroma_flag_slice and/or second_chroma_flag_slice is true, the value of chroma_mode_slice may define the filtering mode (e.g., whether the first, second, or third QP value in the set is used to define the filtering mode).

Filtering unit 312 may apply an instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate the first filtered block (902). For example, filtering unit 312 may apply the instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate first residual values, and generate the first filtered block based on the first block and the first residual values (e.g., by summing the first residual values and the first block).
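A minimal sketch of this residual-based filtering is shown below, assuming the NN model outputs one correction (residual) value per sample, which is then added back to the input block. The model here is a stand-in callable, not the NN model of this disclosure.

```python
def nn_filter_block(block, nn_model, qp):
    """Apply an NN model that predicts per-sample residuals in the filtering
    mode given by qp, then add the residuals to the input block to form the
    filtered block."""
    residual = nn_model(block, qp)  # first residual values for the block
    return [[s + r for s, r in zip(block_row, res_row)]
            for block_row, res_row in zip(block, residual)]
```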

In some examples, if NN-based filtering is enabled for the first color component, then NN-based filtering is automatically enabled for the second color component. That is, in some examples, filtering of the second color component cannot be selectively skipped. Accordingly, in one or more examples, filtering unit 312 may apply the same instance of the NN model, in the defined filtering mode, to the second block of the second color component to generate the second filtered block (906). As described above, applying the same instance of the NN model may mean that the NN model receives both the first block of the first color component and the second block of the second color component (e.g., along with the luma component) as inputs, and produces the first filtered block of the first color component and the second filtered block of the second color component as outputs.

That is, in some examples, the NN model for the chroma components (e.g., the first color component and the second color component) may also receive the luma component as an input, but outputs the first filtered block of the first color component and the second filtered block of the second color component. The NN model may not generate a filtered luma block, although in some examples the NN model may generate a filtered luma block.

In this case, filtering unit 312 may store the first filtered block and the second filtered block (908). For example, the sample values that filtering unit 312 stores in DPB 314 for the CU may be the values of the first filtered block and the second filtered block.

However, in some examples, use of the second filtered block may not always apply and may be optional. For example, in some examples, filtering unit 312 may determine whether NN-based filtering is enabled for the second color component (904). For instance, filtering unit 312 may determine the value of first_chroma_flag_slice or second_chroma_flag_slice.

For example, the syntax element that defines the filtering mode may be first_chroma_mode_slice. If the value of first_chroma_mode_slice is non-zero, filtering unit 312 may determine the value of a flag (e.g., second_chroma_flag_slice). That is, filtering unit 312 may receive a flag indicating that NN-based filtering is enabled for the second color component ("YES" of 904). In this example, filtering unit 312 may apply the same instance of the NN model, in the defined filtering mode, to the second block of the second color component to generate the second filtered block (906), and store the first filtered block and the second filtered block (908).

However, in some examples, filtering unit 312 may receive a flag indicating that NN-based filtering is disabled for the second color component ("NO" of 904). In this example, filtering unit 312 may store the first filtered block and the second block (e.g., the block without NN filtering) (910).
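Putting the branches of FIG. 9 together, a decoder-side filtering stage might look roughly like the following sketch. The flag name, the model call (which is assumed here to return the two filtered chroma blocks directly), and the argument layout are all hypothetical.

```python
def filter_cu_chroma(nn_model, qp, first_block, second_block, luma_block,
                     second_enabled):
    """Sketch of blocks 902-910: NN-filter the first chroma block in the mode
    given by qp; keep the second filtered block only when NN-based filtering
    is enabled for the second color component."""
    first_filtered, second_filtered = nn_model(
        luma_block, first_block, second_block, qp)   # (902) and, if kept, (906)
    if second_enabled:                                # (904) "YES"
        return first_filtered, second_filtered        # (908)
    return first_filtered, second_block               # (904) "NO" -> (910)
```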

The following describes example techniques in accordance with one or more examples described in this disclosure.

Clause 1A. A method of processing video data, the method comprising: filtering, with an NN-based filter from a single neural network (NN) model, two or more color components of the video data; and outputting, with the NN-based filter, the two or more color components after a single filtering process.

Clause 2A. The method of clause 1A, wherein filtering comprises determining how to apply the NN-based filter, whether to turn the NN-based filter on or off, how to create the input data for the single NN model, and determining the values of all input elements in the same manner for the two or more color components.

Clause 3A. The method of clause 1A, further comprising receiving or signaling information indicating whether the NN-based filter is on or off for each of the two or more color components, wherein filtering comprises filtering based on whether the NN-based filter is on or off for each of the two or more color components.

Clause 4A. The method of any of clauses 1A-3A, further comprising reconstructing sample values of a picture, wherein filtering comprises filtering the two or more color components of the reconstructed sample values of the picture.

Clause 5A. The method of clause 4A, wherein filtering comprises in-loop filtering in a video encoder or a video decoder.

Clause 6A. The method of clause 4A, wherein filtering comprises post-loop filtering in a video decoder.

Clause 7A. A device for processing video data, the device comprising: memory configured to store the video data; and processing circuitry configured to perform the method of any of clauses 1A-6A.

Clause 8A. The device of clause 7A, further comprising a display configured to display decoded video data.

Clause 9A. The device of any of clauses 7A and 8A, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Clause 10A. The device of any of clauses 7A-9A, wherein the device comprises a video decoder.

Clause 11A. The device of any of clauses 7A-10A, wherein the device comprises a video encoder.

Clause 12A. A computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to perform the method of any of clauses 1A-6A.

Clause 13A. A device for processing video data, the device comprising means for performing the method of any of clauses 1A-6A.

Clause 1B. A method of processing video data, the method comprising: receiving a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; applying an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and storing the first filtered block for a coding unit (CU).

Clause 2B. The method of clause 1B, further comprising applying the same instance of the NN model, in the defined filtering mode, to a second block of the second color component to generate a second filtered block, wherein storing comprises storing the first filtered block and the second filtered block for the CU.

Clause 3B. The method of any of clauses 1B and 2B, wherein receiving the syntax element comprises receiving the syntax element that is applicable to a plurality of CUs, wherein the CU is one of the plurality of CUs.

Clause 4B. The method of any of clauses 1B-3B, wherein the syntax element is a second syntax element, the method further comprising: receiving a first syntax element applicable to a plurality of CUs, the first syntax element indicating that the second syntax element is to be parsed for a subset of CUs of the plurality of CUs, wherein the subset of CUs includes one or more CUs that include the CU, and wherein receiving the syntax element comprises receiving the second syntax element based on the first syntax element indicating that the second syntax element is to be parsed for the subset of CUs.

Clause 5B. The method of any of clauses 1B-4B, wherein the syntax element has a value indicating that NN-based filtering is enabled for the first color component.

Clause 6B. The method of clause 5B, further comprising: receiving a flag indicating that NN-based filtering is enabled for the second color component; and generating a second filtered block based on a second block of the second color component by applying the same instance of the NN model, in the defined filtering mode, to the second block, wherein storing comprises storing the first filtered block and the second filtered block for the CU.

Clause 7B. The method of clause 5B, further comprising receiving a flag indicating that NN-based filtering is disabled for the second color component, wherein storing comprises storing the first filtered block and a second block of the second color component for the CU.

Clause 8B. The method of any of clauses 1B-7B, further comprising receiving a flag indicating that NN-based filtering is enabled for the first color component, wherein receiving the syntax element comprises receiving the syntax element based on the flag indicating that NN-based filtering is enabled for the first color component.

Clause 9B. The method of any of clauses 1B-8B, wherein the syntax element defines which parameter of a plurality of parameters is selected, and wherein the defined parameter specifies the filtering mode.

Clause 10B. The method of clause 9B, wherein the parameter is a quantization parameter, and the plurality of parameters is one of a predefined set or a signaled set.

Clause 11B. The method of any of clauses 1B-10B, wherein the first color component is a first chroma component and the second color component is a second chroma component.

Clause 12B. The method of any of clauses 1B-11B, wherein applying the instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate the first filtered block comprises: applying the instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate first residual values; and generating the first filtered block based on the first block and the first residual values.

Clause 13B. A device for processing video data, the device comprising: memory configured to store the video data; and one or more processors, implemented in circuitry and coupled to the memory, configured to: receive a syntax element that defines a filtering mode for a neural network (NN) model for a first color component and a second color component; apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and store, in the memory, the first filtered block for a coding unit (CU).

Clause 14B. The device of clause 13B, wherein the one or more processors are configured to apply the same instance of the NN model, in the defined filtering mode, to a second block of the second color component to generate a second filtered block, and wherein, to store, the one or more processors are configured to store the first filtered block and the second filtered block for the CU.

Clause 15B. The device of any of clauses 13B and 14B, wherein, to receive the syntax element, the one or more processors are configured to receive the syntax element that is applicable to a plurality of CUs, wherein the CU is one of the plurality of CUs.

Clause 16B. The device of any of clauses 13B-15B, wherein the syntax element is a second syntax element, and wherein the one or more processors are configured to receive a first syntax element applicable to a plurality of CUs, the first syntax element indicating that the second syntax element is to be parsed for a subset of CUs of the plurality of CUs, wherein the subset of CUs includes one or more CUs that include the CU, and wherein, to receive the syntax element, the one or more processors are configured to receive the second syntax element based on the first syntax element indicating that the second syntax element is to be parsed for the subset of CUs.

Clause 17B. The device of any of clauses 13B-16B, wherein the syntax element has a value indicating that NN-based filtering is enabled for the first color component.

Clause 18B. The device of clause 17B, wherein the one or more processors are configured to: receive a flag indicating that NN-based filtering is enabled for the second color component; and generate a second filtered block based on a second block of the second color component by applying the same instance of the NN model, in the defined filtering mode, to the second block, wherein, to store, the one or more processors are configured to store the first filtered block and the second filtered block for the CU.

Clause 19B. The device of clause 17B, wherein the one or more processors are configured to receive a flag indicating that NN-based filtering is disabled for the second color component, and wherein, to store, the one or more processors are configured to store the first filtered block and a second block of the second color component for the CU.

Clause 20B. The device of any of clauses 13B-19B, wherein the one or more processors are configured to receive a flag indicating that NN-based filtering is enabled for the first color component, and wherein, to receive the syntax element, the one or more processors are configured to receive the syntax element based on the flag indicating that NN-based filtering is enabled for the first color component.

Clause 21B. The device of any of clauses 13B-20B, wherein the syntax element defines which parameter of a plurality of parameters is selected, and wherein the defined parameter specifies the filtering mode.

Clause 22B. The device of clause 21B, wherein the parameter is a quantization parameter, and the plurality of parameters is one of a predefined set or a signaled set.

Clause 23B. The device of any of clauses 13B-22B, wherein the first color component is a first chroma component and the second color component is a second chroma component.

Clause 24B. The device of any of clauses 13B-23B, wherein, to apply the instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate the first filtered block, the one or more processors are configured to: apply the instance of the NN model, in the defined filtering mode, to the first block of the first color component to generate first residual values; and generate the first filtered block based on the first block and the first residual values.

Clause 25B. A computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to: receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; apply an instance of the NN model, in the defined filtering mode, to a first block of the first color component to generate a first filtered block; and store the first filtered block for a coding unit (CU).

應當認識到,根據實例,本文所描述的任何技術的某些動作或事件可以以不同的循序執行、可以進行添加、合併或者完全省略(例如,並非所有描述的動作或事件皆是實施該技術所必需的)。此外,在某些實例中,可以例如經由多執行緒處理、中斷處理或多個處理器併發地而不是順序地執行動作或事件。It should be appreciated that, depending on the example, certain actions or events of any technology described herein may be performed in a different order, may be added, combined, or omitted entirely (e.g., not all described actions or events may be required to implement the technology). required). Furthermore, in some instances, actions or events may be performed concurrently rather than sequentially, such as via multi-thread processing, interrupt processing, or multiple processors.

在一或多個實例中,所描述的功能可以利用硬體、軟體、韌體或者其任意組合來實現。當利用軟體實現時,可以將這些功能儲存在電腦可讀取媒體上,或者作為電腦可讀取媒體上的一或多個指令或代碼進行傳輸,並由基於硬體的處理單元來執行。電腦可讀取媒體可以包括電腦可讀取儲存媒體,電腦可讀取儲存媒體對應於諸如資料儲存媒體或通訊媒體之類的有形媒體,其中通訊媒體包括有助於例如根據通訊協定,將電腦程式從一個地方傳送到另一個地方的任何媒體。用此方式,電腦可讀取媒體通常可以對應於:(1)非暫時性的有形電腦可讀取儲存媒體;或者(2)諸如訊號或載波波形之類的通訊媒體。資料儲存媒體可以是一或多個電腦或者一或多個處理器能夠進行存取以獲取用於實現本案內容中描述的技術的指令、代碼及/或資料結構的任何可用媒體。電腦程式產品可以包括電腦可讀取媒體。In one or more examples, the described functions may be implemented using hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which correspond to tangible media such as data storage media or communication media, where communication media include devices that facilitate, for example, a computer program in accordance with a communications protocol. Any media transmitted from one place to another. In this manner, computer-readable media may generally correspond to: (1) non-transitory tangible computer-readable storage media; or (2) communications media such as signals or carrier waveforms. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to obtain instructions, code, and/or data structures for implementing the techniques described in this document. Computer program products may include computer-readable media.

舉例而言,但非做出限制,這種電腦可讀取儲存媒體可以包括RAM、ROM、EEPROM、CD-ROM或者其他光碟記憶體、磁碟記憶體或其他磁存放裝置、快閃記憶體或者能夠用於儲存具有指令或資料結構形式的期望的程式碼並能夠由電腦進行存取的任何其他媒體。此外,可以將任何連接適當地稱作電腦可讀取媒體。舉例而言,若指令是使用同軸電纜、光纖光纜、雙絞線、數位用戶線路(DSL)或者諸如紅外線、無線和微波之類的無線技術,從網站、伺服器或其他遠端源傳輸的,則該同軸電纜、光纖光纜、雙絞線、DSL或者諸如紅外線、無線和微波之類的無線技術包括在該媒體的定義中。但是,應當理解的是,電腦可讀取儲存媒體和資料儲存媒體並不包括連接、載波波形、訊號或者其他臨時媒體,而是針對於非臨時的有形儲存媒體。如本文所使用的,磁碟和光碟包括壓縮光碟CD、鐳射光碟、光碟、數位多功能光碟(DVD)、軟碟和藍光光碟,其中磁碟通常磁性地複製資料,而光碟則用鐳射來光學地複製資料。上述的組合亦應當包括在電腦可讀取媒體的保護範疇之內。By way of example, but not limitation, such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk memory, magnetic disk memory or other magnetic storage devices, flash memory or Any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, wireless, and microwave, Then the coaxial cable, fiber optic cable, twisted pair, DSL or wireless technologies such as infrared, wireless and microwave are included in the definition of media. However, it should be understood that computer-readable storage media and data storage media do not include connections, carrier waveforms, signals or other temporary media, but are directed to non-transitory tangible storage media. As used herein, disks and optical discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray Discs, where disks usually copy data magnetically, while optical discs use lasers to optically copy data. Copy the data. The above combinations should also be included in the scope of protection of computer-readable media.

指令可以由諸如一或多個DSP、通用微處理器、ASIC、FPGA之類的一或多個處理器或者其他等同的整合或個別邏輯電路來執行。因此,如本文所使用的,術語「處理器」和「處理電路」可以代表前述的結構或者適合於實現本文所描述的技術的任何其他結構中的任何一種。此外,在一些態樣,本文所描述的功能可以提供在被配置為實現編碼和解碼的專用硬體及/或軟體模組中,或者併入到組合的轉碼器中。此外,該等技術可以在一或多個電路或邏輯部件中完全實現。Instructions may be executed by one or more processors such as one or more DSPs, general purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or individual logic circuits. Thus, as used herein, the terms "processor" and "processing circuitry" may represent any of the foregoing structures or any other structure suitable for implementing the techniques described herein. Additionally, in some aspects, the functionality described herein may be provided in dedicated hardware and/or software modules configured to implement encoding and decoding, or incorporated into a combined transcoder. Additionally, these techniques may be fully implemented in one or more circuits or logic components.

本案內容的技術可以使用多種多樣的設備或裝置來實現,其包括使用無線手持裝置、積體電路(IC)或者一組IC(例如,晶片集)。本案內容中描述了各種部件、模組或單元,以強調被配置為執行所揭示的技術的設備的功能態樣,但不一定需要由不同的硬體單元來實現。相反,如前述,各個單元可以組合在轉碼器硬體單元中,或者經由協調的硬體單元集合(其包括如前述的一或多個處理器)結合適當的軟體及/或韌體來提供。The technology at issue may be implemented using a variety of devices or devices, including the use of a wireless handheld device, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules or units are described in this text to emphasize the functional aspects of devices configured to perform the disclosed technology, but do not necessarily need to be implemented by different hardware units. Instead, as mentioned above, the individual units may be combined in a transcoder hardware unit, or provided via a coordinated set of hardware units (including one or more processors as mentioned above) in conjunction with appropriate software and/or firmware .

Various examples have been described. These and other examples are within the scope of the following claims.

100: Video encoding and decoding system
102: Source device
104: Video source
106: Memory
108: Output interface
110: Computer-readable medium
112: Storage device
114: File server
116: Destination device
118: Display device
120: Memory
122: Input interface
130: Modern hybrid video coder
132: Input video data
134: Summation unit
136: Transform unit
138: Quantization unit
140: Entropy coding unit
142: Inverse quantization unit
144: Inverse transform unit
146: Summation unit
148: Loop filter unit
150: Decoded picture buffer (DPB)
152: Intra-prediction unit
154: Inter-prediction unit
156: Motion estimation unit
158: Output bitstream
200: Video encoder
202: Mode selection unit
204: Residual generation unit
206: Transform processing unit
208: Quantization unit
210: Inverse quantization unit
212: Inverse transform processing unit
214: Reconstruction unit
216: Filter unit
218: Decoded picture buffer (DPB)
220: Entropy encoding unit
222: Motion estimation unit
224: Motion compensation unit
226: Intra-prediction unit
230: Video data memory
300: Video decoder
302: Entropy decoding unit
304: Prediction processing unit
306: Inverse quantization unit
308: Inverse transform processing unit
310: Reconstruction unit
312: Filter unit
314: Decoded picture buffer (DPB)
316: Motion compensation unit
318: Intra-prediction unit
320: Coded picture buffer (CPB) memory
350: Block
352: Block
354: Block
356: Block
358: Block
360: Block
370: Block
372: Block
374: Block
376: Block
378: Block
380: Block
382: Block
700: Group of pictures (GOP)
800: Reconstructed samples
802: Residual samples
804: Filtered samples
900: Block
902: Block
904: Block
906: Block
908: Block
910: Block

FIG. 1 is a block diagram illustrating an example video encoding and decoding system that may perform the techniques of this disclosure.

FIG. 2 is a block diagram illustrating an example video encoder that may perform the techniques of this disclosure.

FIG. 3 is a block diagram illustrating an example video decoder that may perform the techniques of this disclosure.

FIG. 4 is a flowchart illustrating an example method for encoding a current block in accordance with the techniques of this disclosure.

FIG. 5 is a flowchart illustrating an example method for decoding a current block in accordance with the techniques of this disclosure.

FIG. 6 is a flowchart illustrating an example of a hybrid video coding framework similar to that of FIG. 2.

FIG. 7 is a conceptual diagram illustrating an example of a hierarchical prediction structure with a group of pictures (GOP) size equal to 16.

FIG. 8 is a conceptual diagram illustrating an example of a convolutional neural network (CNN).

FIG. 9 is a flowchart illustrating an example method of processing video data in accordance with the techniques of this disclosure.

Domestic deposit information (please note in order of depository institution, date, and number): None
Foreign deposit information (please note in order of depository country, institution, date, and number): None

900: Block

902: Block

904: Block

906: Block

908: Block

910: Block

Claims (25)

1. A method of processing video data, the method comprising: receiving a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; applying, in the defined filtering mode, an instance of the NN model to a first block of the first color component to generate a first filtered block; and storing the first filtered block for a coding unit (CU).

2. The method of claim 1, further comprising: applying, in the defined filtering mode, the same instance of the NN model to a second block of the second color component to generate a second filtered block, wherein storing comprises storing the first filtered block and the second filtered block for the CU.

3. The method of claim 1, wherein receiving the syntax element comprises receiving the syntax element applicable to a plurality of CUs, wherein the CU is one of the plurality of CUs.

4. The method of claim 1, wherein the syntax element is a second syntax element, the method further comprising: receiving a first syntax element applicable to a plurality of CUs, the first syntax element indicating that the second syntax element is to be parsed for a subset of CUs of the plurality of CUs, wherein the subset of CUs includes one or more CUs including the CU, wherein receiving the syntax element comprises receiving the second syntax element based on the first syntax element indicating that the second syntax element is to be parsed for the subset of CUs.

5. The method of claim 1, wherein the syntax element has a value indicating that NN-based filtering is enabled for the first color component.

6. The method of claim 5, further comprising: receiving a flag indicating that NN-based filtering is enabled for the second color component; and generating a second filtered block based on a second block of the second color component by applying the same instance of the NN model to the second block in the defined filtering mode, wherein storing comprises storing the first filtered block and the second filtered block for the CU.

7. The method of claim 5, further comprising: receiving a flag indicating that NN-based filtering is disabled for the second color component, wherein storing comprises storing the first filtered block and a second block of the second color component for the CU.
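The filtering flow recited in claims 1 and 2 above can be illustrated with a minimal decoder-side sketch. This is an illustration only, not the claimed implementation: the names `CodingUnitStore` and `apply_nn_filtering`, the callable-model interface, and the dummy model are assumptions introduced here and do not appear in the application.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for decoder-side storage; not part of the application.
@dataclass
class CodingUnitStore:
    """Holds the filtered blocks stored for one coding unit (CU)."""
    filtered_blocks: dict = field(default_factory=dict)

def apply_nn_filtering(nn_model, filtering_mode, first_block, second_block=None):
    """Apply one instance of the NN model, in the signaled filtering mode, to the
    first color component block and, optionally (per claim 2), to the second
    color component block, storing the results for the CU."""
    cu_store = CodingUnitStore()

    # Claim 1: one NN model instance, configured by the mode defined in the
    # received syntax element, filters the first color component block.
    model_instance = nn_model(filtering_mode)
    cu_store.filtered_blocks["first"] = model_instance(first_block)

    # Claim 2: the *same* instance filters the second color component block.
    if second_block is not None:
        cu_store.filtered_blocks["second"] = model_instance(second_block)

    return cu_store

# Usage with a dummy model: the "model" simply returns the block unchanged.
if __name__ == "__main__":
    dummy_model = lambda mode: (lambda block: block)
    store = apply_nn_filtering(dummy_model, filtering_mode=1,
                               first_block=[[16] * 4 for _ in range(4)],
                               second_block=[[128] * 4 for _ in range(4)])
    print(sorted(store.filtered_blocks))  # ['first', 'second']
```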
8. The method of claim 1, further comprising: receiving a flag indicating that NN-based filtering is enabled for the first color component, wherein receiving the syntax element comprises receiving the syntax element based on the flag indicating that NN-based filtering is enabled for the first color component.

9. The method of claim 1, wherein the syntax element defines which parameter of a plurality of parameters is selected, and wherein the defined parameter specifies the filtering mode.

10. The method of claim 9, wherein the parameter is a quantization parameter and the plurality of parameters is one of a predefined set or a signaled set.

11. The method of claim 1, wherein the first color component is a first chrominance component and the second color component is a second chrominance component.

12. The method of claim 1, wherein applying, in the defined filtering mode, the instance of the NN model to the first block of the first color component to generate the first filtered block comprises: applying, in the defined filtering mode, the instance of the NN model to the first block of the first color component to generate first residual values; and generating the first filtered block based on the first block and the first residual values.

13. A device for processing video data, the device comprising: a memory configured to store the video data; and one or more processors implemented in circuitry and coupled to the memory, the one or more processors being configured to: receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; apply, in the defined filtering mode, an instance of the NN model to a first block of the first color component to generate a first filtered block; and store, in the memory, the first filtered block for a coding unit (CU).

14. The device of claim 13, wherein the one or more processors are configured to: apply, in the defined filtering mode, the same instance of the NN model to a second block of the second color component to generate a second filtered block, wherein, to store, the one or more processors are configured to store the first filtered block and the second filtered block for the CU.

15. The device of claim 13, wherein, to receive the syntax element, the one or more processors are configured to receive the syntax element applicable to a plurality of CUs, wherein the CU is one of the plurality of CUs.
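Claims 9, 10, and 12 above describe the filtering mode being specified by a parameter (for example, a quantization parameter) selected from a predefined or signaled set, and the NN output taking the form of residual values that are combined with the input block. A minimal sketch under those readings follows; the helper names `select_filter_parameter`, `clip_sample`, and `filter_block_residual`, the example QP set, and the 10-bit clipping are assumptions for illustration and are not taken from the application.

```python
def select_filter_parameter(syntax_element_value, parameter_set):
    """Claims 9-10: the syntax element selects one parameter (e.g., a QP) out of
    a predefined or signaled set; that parameter specifies the filtering mode."""
    return parameter_set[syntax_element_value]

def clip_sample(value, bit_depth=10):
    """Clip a filtered sample to the valid range for the assumed bit depth."""
    return max(0, min((1 << bit_depth) - 1, value))

def filter_block_residual(block, nn_residual, filtering_mode):
    """Claim 12: the NN model instance produces residual values for the first
    block; the filtered block is formed from the input block plus the residuals."""
    residuals = nn_residual(block, filtering_mode)
    return [[clip_sample(s + r) for s, r in zip(row, res_row)]
            for row, res_row in zip(block, residuals)]

# Usage with a dummy residual model that returns all zeros.
if __name__ == "__main__":
    qp_set = [22, 27, 32, 37]                 # assumed signaled set, for illustration
    mode = select_filter_parameter(2, qp_set)
    zero_residual = lambda blk, m: [[0] * len(row) for row in blk]
    out = filter_block_residual([[100, 101], [102, 103]], zero_residual, mode)
    print(out)  # [[100, 101], [102, 103]]
```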
16. The device of claim 13, wherein the syntax element is a second syntax element, and wherein the one or more processors are configured to: receive a first syntax element applicable to a plurality of CUs, the first syntax element indicating that the second syntax element is to be parsed for a subset of CUs of the plurality of CUs, wherein the subset of CUs includes one or more CUs including the CU, wherein, to receive the syntax element, the one or more processors are configured to receive the second syntax element based on the first syntax element indicating that the second syntax element is to be parsed for the subset of CUs.

17. The device of claim 13, wherein the syntax element has a value indicating that NN-based filtering is enabled for the first color component.

18. The device of claim 17, wherein the one or more processors are configured to: receive a flag indicating that NN-based filtering is enabled for the second color component; and generate a second filtered block based on a second block of the second color component by applying the same instance of the NN model to the second block in the defined filtering mode, wherein, to store, the one or more processors are configured to store the first filtered block and the second filtered block for the CU.

19. The device of claim 17, wherein the one or more processors are configured to: receive a flag indicating that NN-based filtering is disabled for the second color component, wherein, to store, the one or more processors are configured to store the first filtered block and a second block of the second color component for the CU.

20. The device of claim 13, wherein the one or more processors are configured to: receive a flag indicating that NN-based filtering is enabled for the first color component, wherein, to receive the syntax element, the one or more processors are configured to receive the syntax element based on the flag indicating that NN-based filtering is enabled for the first color component.

21. The device of claim 13, wherein the syntax element defines which parameter of a plurality of parameters is selected, and wherein the defined parameter specifies the filtering mode.

22. The device of claim 21, wherein the parameter is a quantization parameter and the plurality of parameters is one of a predefined set or a signaled set.

23. The device of claim 13, wherein the first color component is a first chrominance component and the second color component is a second chrominance component.
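The two-level signaling recited in claims 3, 4, and 16 above, in which a first syntax element applicable to a plurality of CUs controls whether the per-CU second syntax element is parsed for a subset of those CUs, can be sketched as follows. The bitstream-reader interface (`read_flag`, `read_ue`), the subset predicate, and the stub reader are assumptions for illustration and do not reflect any particular codec's parsing API.

```python
def parse_nn_filter_syntax(reader, num_cus, in_subset=lambda cu_index: True):
    """Parse NN-filter syntax for a group of CUs.

    First syntax element (applies to the whole group): indicates whether the
    per-CU second syntax element is to be parsed for a subset of the CUs.
    Second syntax element: defines the filtering mode shared by the first and
    second color components of that CU.
    """
    filter_modes = {}
    second_element_present = reader.read_flag()      # first syntax element
    if second_element_present:
        for cu_index in range(num_cus):
            if in_subset(cu_index):
                # Second syntax element, parsed only for CUs in the subset and
                # only because the first syntax element indicates it is present.
                filter_modes[cu_index] = reader.read_ue()
    return filter_modes

# Minimal reader stub so the sketch runs end to end.
class BitReaderStub:
    def __init__(self, values):
        self.values = list(values)
    def read_flag(self):
        return self.values.pop(0)
    def read_ue(self):
        return self.values.pop(0)

if __name__ == "__main__":
    reader = BitReaderStub([1, 0, 2, 1])             # flag=1, then three mode indices
    print(parse_nn_filter_syntax(reader, num_cus=3)) # {0: 0, 1: 2, 2: 1}
```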
24. The device of claim 13, wherein, to apply, in the defined filtering mode, the instance of the NN model to the first block of the first color component to generate the first filtered block, the one or more processors are configured to: apply, in the defined filtering mode, the instance of the NN model to the first block of the first color component to generate first residual values; and generate the first filtered block based on the first block and the first residual values.

25. A computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to: receive a syntax element that defines a filtering mode for a neural network (NN) model for both a first color component and a second color component; apply, in the defined filtering mode, an instance of the NN model to a first block of the first color component to generate a first filtered block; and store the first filtered block for a coding unit (CU).
TW112121642A 2022-07-05 2023-06-09 Neural network based filtering process for multiple color components in video coding TW202404371A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263367713P 2022-07-05 2022-07-05
US63/367,713 2022-07-05
US18/331,674 US20240015312A1 (en) 2022-07-05 2023-06-08 Neural network based filtering process for multiple color components in video coding
US18/331,674 2023-06-08

Publications (1)

Publication Number Publication Date
TW202404371A true TW202404371A (en) 2024-01-16

Family

ID=87158202

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112121642A TW202404371A (en) 2022-07-05 2023-06-09 Neural network based filtering process for multiple color components in video coding

Country Status (2)

Country Link
TW (1) TW202404371A (en)
WO (1) WO2024010672A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11930215B2 (en) * 2020-09-29 2024-03-12 Qualcomm Incorporated Multiple neural network models for filtering during video coding
US11716469B2 (en) * 2020-12-10 2023-08-01 Lemon Inc. Model selection in neural network-based in-loop filter for video coding

Also Published As

Publication number Publication date
WO2024010672A1 (en) 2024-01-11

Similar Documents

Publication Publication Date Title
CN113940069A (en) Transform and last significant coefficient position signaling for low frequency non-separable transforms in video coding
TW202115977A (en) Cross-component adaptive loop filtering for video coding
CN111602395B (en) Quantization groups for video coding
TW202121903A (en) Low-frequency non-separable transform (lfnst) simplifications
TW202218422A (en) Multiple neural network models for filtering during video coding
CN113994694A (en) Simplified intra chroma mode coding in video coding
TW202127886A (en) Picture header signaling for video coding
TW202213996A (en) Filtering process for video coding
CN114223202A (en) Low frequency inseparable transform (LFNST) signaling
TW202118297A (en) Scaling matrices and signaling for video coding
TW202127887A (en) Quantization parameter signaling for joint chroma residual mode in video coding
WO2021041153A1 (en) Chroma quantization parameter (qp) derivation for video coding
KR20230081701A (en) Joint-component neural network-based filtering during video coding
KR20230038709A (en) Multiple adaptive loop filter sets
CN114208199A (en) Chroma intra prediction unit for video coding
KR20230019831A (en) General Constraints of Syntax Elements for Video Coding
CN114424570A (en) Transform unit design for video coding and decoding
KR20220157378A (en) High-level syntax for video mixed with NAL unit types
CA3162708A1 (en) Shared decoder picture buffer for multiple layers
KR20230129015A (en) Multiple neural network models for filtering during video coding
KR20230124571A (en) Using Low Complexity History for Rice Parameter Derivation for High Bit-Deep Video Coding
KR20230078658A Activation Function Design in Neural Network-Based Filtering Process for Video Coding
KR20220159965A (en) Low-frequency inseparable transform index signaling in video coding
TW202133615A (en) Lfnst signaling for chroma based on chroma transform skip
CN114830657A (en) Low frequency non-separable transforms (LFNST) with reduced zeroing in video coding