TW202243480A - Template matching based affine prediction for video coding - Google Patents

Template matching based affine prediction for video coding

Info

Publication number
TW202243480A
Authority
TW
Taiwan
Prior art keywords
block
template
current
video
prediction
Prior art date
Application number
TW111113752A
Other languages
Chinese (zh)
Inventor
陳俊啟
黃翰
張智
張耀仁
張言
瓦迪姆 賽萊金
瑪塔 卡克基維克茲
Original Assignee
美商高通公司 (Qualcomm Incorporated)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/715,571 (US11936877B2)
Application filed by Qualcomm Incorporated (美商高通公司)
Publication of TW202243480A

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/54 Motion estimation other than block-based, using feature points or meshes
    • H04N 19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H04N 19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/52 Processing of motion vectors by encoding, by predictive encoding
    • H04N 19/533 Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video decoder can be configured to determine that a current block in a current picture of the video data is coded in an affine prediction mode; determine one or more control-point motion vectors (CPMVs) for the current block; identify an initial prediction block for the current block in a reference picture using the one or more CPMVs; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement process to determine a modified prediction block based on a comparison of the current template to the initial reference template.

Description

Template-matching-based affine prediction for video coding

This patent application claims the benefit of U.S. Provisional Patent Application No. 63/173,861, filed April 12, 2021, and U.S. Provisional Patent Application No. 63/173,949, filed April 12, 2021, the entire contents of each of which are incorporated herein by reference.

This disclosure relates to video encoding and video decoding.

Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video coding techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 (Advanced Video Coding (AVC)), and ITU-T H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. By implementing such video coding techniques, video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently.

Video coding techniques include spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (e.g., a video picture or a portion of a video picture) may be partitioned into video blocks, which may also be referred to as coding tree units (CTUs), coding units (CUs), and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same picture. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame, and a reference picture may be referred to as a reference frame.

This disclosure describes techniques related to affine prediction mode, a type of inter prediction mode that can potentially account for the rotation of objects that may occur across a series of pictures. An affine motion model for a block may be determined based on the motion vectors of the block's control points, which may be referred to as control point motion vectors (CPMVs). In some implementations, the control points of a block are the top-left and top-right corners of the block. In some implementations, the control points of a block also include the bottom-left corner of the block. A video coder (i.e., a video encoder or video decoder) may calculate motion vectors for sub-blocks of the block based on the block's CPMVs in order to locate prediction sub-blocks in a reference picture. The prediction sub-blocks may form a prediction block.
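The derivation of per-sub-block motion vectors from CPMVs is not spelled out in this excerpt; the following is a minimal sketch of the standard 4-parameter affine model (two control points, as used in VVC-style affine prediction), not code from the patent itself:

```python
# Sketch (assumed, not from the patent): derive the motion vector of a
# sub-block from the CPMVs at the top-left (cpmv0) and top-right (cpmv1)
# corners of a block of width block_w, using the 4-parameter affine model.

def affine_subblock_mv(cpmv0, cpmv1, block_w, x, y):
    """cpmv0, cpmv1: (mvx, mvy) at the two control points.
    (x, y): sub-block center position relative to the top-left corner."""
    ax = (cpmv1[0] - cpmv0[0]) / block_w  # horizontal gradient of mvx
    ay = (cpmv1[1] - cpmv0[1]) / block_w  # horizontal gradient of mvy
    mvx = ax * x - ay * y + cpmv0[0]
    mvy = ay * x + ax * y + cpmv0[1]
    return (mvx, mvy)

# At the top-left control point the model reproduces cpmv0 exactly:
print(affine_subblock_mv((4.0, 2.0), (8.0, 2.0), 16, 0, 0))  # (4.0, 2.0)
```

A 6-parameter variant would add a third CPMV at the bottom-left corner, giving independent vertical gradients; real codecs also use fixed-point fractional-pel precision rather than floats.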

This disclosure describes decoder-side techniques that can refine the prediction sub-blocks, and thus the prediction block. That is, the techniques of this disclosure may cause a video decoder to form the prediction block using sub-blocks different from the sub-blocks initially determined or located using the CPMVs. By performing a motion vector refinement process in the manner described herein to determine a modified prediction block for an affine-coded block, a video decoder can determine a prediction block that is more accurate than with ordinary affine prediction. Determining a more accurate prediction block using the techniques of this disclosure can improve overall coding quality without increasing the signaling overhead.

According to one example of this disclosure, a method of decoding video data includes: determining that a current block in a current picture of the video data is coded in an affine prediction mode; determining one or more control point motion vectors (CPMVs) for the current block; identifying an initial prediction block for the current block in a reference picture using the one or more CPMVs; determining a current template for the current block in the current picture; determining an initial reference template for the initial prediction block in the reference picture; and performing a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

According to another example of this disclosure, a device for decoding video data includes: a memory; and one or more processors, implemented in circuitry, coupled to the memory, and configured to: determine that a current block in a current picture of the video data is coded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; identify an initial prediction block for the current block in a reference picture using the one or more CPMVs; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

According to another example of this disclosure, a computer-readable storage medium stores instructions that, when executed by one or more processors, cause the one or more processors to: determine that a current block in a current picture of the video data is coded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; identify an initial prediction block for the current block in a reference picture using the one or more CPMVs; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

According to another example of this disclosure, an apparatus for coding video data includes: means for determining that a current block in a current picture of the video data is coded in an affine prediction mode; means for determining one or more control point motion vectors (CPMVs) for the current block; means for identifying an initial prediction block for the current block in a reference picture using the one or more CPMVs; means for determining a current template for the current block in the current picture; means for determining an initial reference template for the initial prediction block in the reference picture; and means for performing a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.
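The "comparison of the current template to the initial reference template" in the examples above is typically a template-matching cost. The patent does not give a concrete cost function in this excerpt; the following is a hypothetical sketch using a sum-of-absolute-differences (SAD) cost and a small integer search window around the initial position:

```python
# Hypothetical sketch of template-matching refinement: search a small window
# of integer offsets around the initial prediction block and keep the offset
# whose reference template best matches the current template under a SAD cost.

def sad(a, b):
    """Sum of absolute differences between two equal-length sample lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

def refine_offset(current_template, ref_template_at, search_range=2):
    """ref_template_at(dx, dy) returns the reference template for a candidate
    offset; it is a caller-supplied function (an assumption for this sketch)."""
    best = (0, 0)
    best_cost = sad(current_template, ref_template_at(0, 0))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cost = sad(current_template, ref_template_at(dx, dy))
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best

# Toy example: the synthetic reference matches the current template exactly
# at offset (1, -1), so the search recovers that offset.
cur = [10, 20, 30, 40]
fake_ref = lambda dx, dy: [v + (dx - 1) + 2 * (dy + 1) for v in cur]
print(refine_offset(cur, fake_ref))  # (1, -1)
```

In a real decoder the templates would be the reconstructed neighboring samples above and to the left of the current block, and the refinement could be applied per CPMV or per sub-block rather than as a single translational offset.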

The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.

Video coding (e.g., video encoding and/or video decoding) generally involves predicting a block of video data either from an already-coded block of video data in the same picture (e.g., intra prediction) or from an already-coded block of video data in a different picture (e.g., inter prediction). In some cases, the video encoder also calculates residual data by comparing the prediction block to the original block. The residual data thus represents the difference between the prediction block and the original block. To reduce the number of bits needed to signal the residual data, the video encoder transforms and quantizes the residual data and signals the transformed and quantized residual data in the encoded bitstream. The compression achieved by the transform and quantization processes may be lossy, meaning that the transform and quantization processes may introduce distortion into the decoded video data.

A video decoder decodes the residual data and adds it to the prediction block to produce a reconstructed video block that matches the original video block more closely than the prediction block alone. Due to the loss introduced by transforming and quantizing the residual data, the first reconstructed block may have distortion or artifacts. One common type of artifact or distortion is referred to as blockiness, in which the boundaries of the blocks used to code the video data are visible.
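The residual/reconstruction relationship described in the two paragraphs above can be illustrated with a toy example (made-up sample values; the transform and quantization steps, which are where real codecs introduce loss, are omitted here):

```python
# Toy illustration of residual computation at the encoder and reconstruction
# at the decoder. Without transform/quantization loss, prediction + residual
# reproduces the original block exactly; reconstruction clips to bit depth.

def residual(original, prediction):
    """Encoder side: residual = original minus prediction, per sample."""
    return [o - p for o, p in zip(original, prediction)]

def reconstruct(prediction, resid, bit_depth=8):
    """Decoder side: add residual back and clip to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(prediction, resid)]

orig = [120, 130, 140, 150]
pred = [118, 131, 139, 152]
res = residual(orig, pred)        # [2, -1, 1, -2]
print(reconstruct(pred, res))     # [120, 130, 140, 150] (lossless in this toy)
```

In an actual codec the residual would be transformed, quantized, and inverse-transformed before this addition, so the reconstruction would only approximate the original block.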

To further improve the quality of decoded video, a video decoder may perform one or more filtering operations on the reconstructed video blocks. Examples of such filtering operations include deblocking filtering, sample adaptive offset (SAO) filtering, and adaptive loop filtering (ALF). The parameters for these filtering operations may either be determined by the video encoder and explicitly signaled in the encoded video bitstream, or may be implicitly determined by the video decoder without the parameters being explicitly signaled in the encoded video bitstream.

This disclosure describes techniques related to affine prediction mode, a type of inter prediction mode that can potentially account for the rotation of objects that may occur across a series of pictures. An affine motion model for a block may be determined based on the motion vectors of the block's control points, which may be referred to as control point motion vectors (CPMVs). In some implementations, the control points of a block are the top-left and top-right corners of the block. In some implementations, the control points of a block also include the bottom-left corner of the block. A video coder (i.e., a video encoder or video decoder) may calculate motion vectors for sub-blocks of the block based on the block's CPMVs in order to locate prediction sub-blocks in a reference picture. The prediction sub-blocks may form a prediction block.

This disclosure describes decoder-side techniques that can refine the prediction sub-blocks, and thus the prediction block. That is, the techniques of this disclosure may cause a video decoder to form the prediction block using sub-blocks different from the sub-blocks initially determined or located using the CPMVs. By performing a motion vector refinement process in the manner described herein to determine a modified prediction block for an affine-coded block, a video decoder can determine a prediction block that is more accurate than with ordinary affine prediction. Determining a more accurate prediction block using the techniques of this disclosure can improve overall coding quality without increasing the signaling overhead.

Although the techniques of this disclosure are generally described as being performed by a video decoder, it should be understood that the techniques described herein may also be performed by a video encoder. For example, the techniques of this disclosure may be performed by a video encoder as part of the processes for determining how to encode a block of video data and for generating reference pictures that may be used to encode subsequent pictures of the video.

FIG. 1 is a block diagram illustrating an example video encoding and decoding system 100 that may perform the techniques of this disclosure. In general, the techniques of this disclosure are directed to coding (encoding and/or decoding) video data. In general, video data includes any data for processing video. Thus, video data may include raw, unencoded video, encoded video, decoded (e.g., reconstructed) video, and video metadata (e.g., signaling data).

As shown in FIG. 1, in this example, system 100 includes a source device 102 that provides encoded video data to be decoded and displayed by a destination device 116. In particular, source device 102 provides the video data to destination device 116 via a computer-readable medium 110. Source device 102 and destination device 116 may comprise any of a wide range of devices, including desktop computers, notebook (i.e., laptop) computers, mobile devices, tablet computers, set-top boxes, telephone handsets such as smartphones, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, broadcast receiver devices, and the like. In some cases, source device 102 and destination device 116 may be equipped for wireless communication, and may thus be referred to as wireless communication devices.

In the example of FIG. 1, source device 102 includes a video source 104, a memory 106, a video encoder 200, and an output interface 108. Destination device 116 includes an input interface 122, a video decoder 300, a memory 120, and a display device 118. In accordance with this disclosure, video encoder 200 of source device 102 and video decoder 300 of destination device 116 may be configured to apply the techniques for performing template-based affine prediction. Thus, source device 102 represents an example of a video encoding device, while destination device 116 represents an example of a video decoding device. In other examples, a source device and a destination device may include other components or arrangements. For example, source device 102 may receive video data from an external video source, such as an external camera. Likewise, destination device 116 may interface with an external display device, rather than include an integrated display device.

System 100 as shown in FIG. 1 is merely one example. In general, any digital video encoding and/or decoding device may perform the techniques for performing template-based affine prediction. Source device 102 and destination device 116 are merely examples of such coding devices, in which source device 102 generates coded video data for transmission to destination device 116. This disclosure refers to a "coding" device as a device that performs coding (encoding and/or decoding) of data. Thus, video encoder 200 and video decoder 300 represent examples of coding devices, in particular, a video encoder and a video decoder, respectively. In some examples, source device 102 and destination device 116 may operate in a substantially symmetrical manner, such that each of source device 102 and destination device 116 includes video encoding and decoding components. Hence, system 100 may support one-way or two-way video transmission between source device 102 and destination device 116, e.g., for video streaming, video playback, video broadcasting, or video telephony.

In general, video source 104 represents a source of video data (i.e., raw, unencoded video data) and provides a sequential series of pictures (also referred to as "frames") of the video data to video encoder 200, which encodes data for the pictures. Video source 104 of source device 102 may include a video capture device, such as a video camera, a video archive containing previously captured raw video, and/or a video feed interface to receive video from a video content provider. As a further alternative, video source 104 may generate computer-graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In each case, video encoder 200 encodes the captured, pre-captured, or computer-generated video data. Video encoder 200 may rearrange the pictures from the received order (sometimes referred to as "display order") into a coding order for coding. Video encoder 200 may generate a bitstream including the encoded video data. Source device 102 may then output the encoded video data via output interface 108 onto computer-readable medium 110 for reception and/or retrieval by, e.g., input interface 122 of destination device 116.

Memory 106 of source device 102 and memory 120 of destination device 116 represent general-purpose memories. In some examples, memories 106, 120 may store raw video data, e.g., raw video from video source 104 and raw, decoded video data from video decoder 300. Additionally or alternatively, memories 106, 120 may store software instructions executable by, e.g., video encoder 200 and video decoder 300, respectively. Although memory 106 and memory 120 are shown separately from video encoder 200 and video decoder 300 in this example, it should be understood that video encoder 200 and video decoder 300 may also include internal memories for functionally similar or equivalent purposes. Furthermore, memories 106, 120 may store encoded video data, e.g., output from video encoder 200 and input to video decoder 300. In some examples, portions of memories 106, 120 may be allocated as one or more video buffers, e.g., to store raw, decoded, and/or encoded video data.

Computer-readable medium 110 may represent any type of medium or device capable of transporting the encoded video data from source device 102 to destination device 116. In one example, computer-readable medium 110 represents a communication medium that enables source device 102 to transmit encoded video data directly to destination device 116 in real time, e.g., via a radio-frequency network or a computer-based network. Output interface 108 may modulate a transmission signal including the encoded video data, and input interface 122 may demodulate the received transmission signal, according to a communication standard, such as a wireless communication protocol. The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 102 to destination device 116.

In some examples, source device 102 may output encoded data from output interface 108 to storage device 112. Similarly, destination device 116 may access encoded data from storage device 112 via input interface 122. Storage device 112 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data.

In some examples, source device 102 may output the encoded video data to file server 114, or another intermediate storage device that may store the encoded video data generated by source device 102. Destination device 116 may access the stored video data from file server 114 via streaming or download.

File server 114 may be any type of server device capable of storing encoded video data and transmitting that encoded video data to destination device 116. File server 114 may represent a web server (e.g., for a website), a server configured to provide a file transfer protocol service (such as File Transfer Protocol (FTP) or File Delivery over Unidirectional Transport (FLUTE) protocol), a content delivery network (CDN) device, a hypertext transfer protocol (HTTP) server, a Multimedia Broadcast Multicast Service (MBMS) or Enhanced MBMS (eMBMS) server, and/or a network attached storage (NAS) device. File server 114 may, additionally or alternatively, implement one or more HTTP streaming protocols, such as Dynamic Adaptive Streaming over HTTP (DASH), HTTP Live Streaming (HLS), Real Time Streaming Protocol (RTSP), HTTP Dynamic Streaming, or the like.

Destination device 116 may access encoded video data from file server 114 through any standard data connection, including an Internet connection. This may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., digital subscriber line (DSL), cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on file server 114. Input interface 122 may be configured to operate according to any one or more of the various protocols discussed above for retrieving or receiving media data from file server 114, or other such protocols for retrieving media data.

Output interface 108 and input interface 122 may represent wireless transmitters/receivers, modems, wired networking components (e.g., Ethernet cards), wireless communication components that operate according to any of a variety of IEEE 802.11 standards, or other physical components. In examples where output interface 108 and input interface 122 include wireless components, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to a cellular communication standard, such as 4G, 4G-LTE (Long-Term Evolution), LTE Advanced, 5G, or the like. In some examples where output interface 108 includes a wireless transmitter, output interface 108 and input interface 122 may be configured to transfer data, such as encoded video data, according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (e.g., ZigBee™), a Bluetooth™ standard, or the like. In some examples, source device 102 and/or destination device 116 may include respective system-on-a-chip (SoC) devices. For example, source device 102 may include an SoC device to perform the functionality attributed to video encoder 200 and/or output interface 108, and destination device 116 may include an SoC device to perform the functionality attributed to video decoder 300 and/or input interface 122.

The techniques of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions (such as dynamic adaptive streaming over HTTP (DASH)), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications.

Input interface 122 of destination device 116 receives an encoded video bitstream from computer-readable medium 110 (e.g., a communication medium, storage device 112, file server 114, or the like). The encoded video bitstream may include signaling information defined by video encoder 200, which is also used by video decoder 300, such as syntax elements having values that describe characteristics and/or processing of video blocks or other coded units (e.g., slices, pictures, groups of pictures, sequences, or the like). Display device 118 displays decoded pictures of the decoded video data to a user. Display device 118 may represent any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

Although not shown in FIG. 1, in some examples, video encoder 200 and video decoder 300 may each be integrated with an audio encoder and/or audio decoder, and may include appropriate MUX-DEMUX units, or other hardware and/or software, to handle multiplexed streams including both audio and video in a common data stream. If applicable, MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).

Video encoder 200 and video decoder 300 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 200 and video decoder 300 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device. A device including video encoder 200 and/or video decoder 300 may include an integrated circuit, a microprocessor, and/or a wireless communication device, such as a cellular telephone.

Video encoder 200 and video decoder 300 may operate according to a video coding standard, such as ITU-T H.265, also referred to as the High Efficiency Video Coding (HEVC) standard, or extensions thereto, such as the multi-view and/or scalable video coding extensions. Alternatively or additionally, video encoder 200 and video decoder 300 may operate according to other proprietary or industry standards, such as ITU-T H.266, also referred to as Versatile Video Coding (VVC), and extensions thereto, such as extensions for screen content or high dynamic range. A draft of the VVC standard is described in Bross et al., "Versatile Video Coding Draft 10," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 18th Meeting: by teleconference, 22 June – 1 July 2020, JVET-S2001-v17 (hereinafter "VVC Draft 10"). The techniques of this disclosure, however, are not limited to any particular coding standard.

In general, video encoder 200 and video decoder 300 may perform block-based coding of pictures. The term "block" generally refers to a structure including data to be processed (e.g., encoded, decoded, or otherwise used in the encoding and/or decoding process). For example, a block may include a two-dimensional matrix of samples of luminance and/or chrominance data. In general, video encoder 200 and video decoder 300 may code video data represented in a YUV (e.g., Y, Cb, Cr) format. That is, rather than coding red, green, and blue (RGB) data for samples of a picture, video encoder 200 and video decoder 300 may code luminance and chrominance components, where the chrominance components may include both red-hue and blue-hue chrominance components. In some examples, video encoder 200 converts received RGB-formatted data to a YUV representation prior to encoding, and video decoder 300 converts the YUV representation to the RGB format. Alternatively, pre- and post-processing units (not shown) may perform these conversions.

This disclosure may generally refer to coding (e.g., encoding and decoding) of pictures to include the process of encoding or decoding data of the picture. Similarly, this disclosure may refer to coding of blocks of a picture to include the process of encoding or decoding data for the blocks, e.g., prediction and/or residual coding. An encoded video bitstream generally includes a series of values for syntax elements representative of coding decisions (e.g., coding modes) and partitioning of pictures into blocks. Thus, references to coding a picture or a block should generally be understood as coding values for syntax elements forming the picture or block. In this disclosure, a current block or current picture generally refers to a block or picture that is currently being encoded or decoded, as opposed to a block or picture that has already been decoded or has yet to be decoded.

HEVC defines various blocks, including coding units (CUs), prediction units (PUs), and transform units (TUs). According to HEVC, a video coder (such as video encoder 200) partitions a coding tree unit (CTU) into CUs according to a quadtree structure. That is, the video coder partitions CTUs and CUs into four equal, non-overlapping squares, and each node of the quadtree has either zero or four child nodes. Nodes without child nodes may be referred to as "leaf nodes," and CUs of such leaf nodes may include one or more PUs and/or one or more TUs. The video coder may further partition PUs and TUs. For example, in HEVC, a residual quadtree (RQT) represents partitioning of TUs. In HEVC, PUs represent inter-prediction data, while TUs represent residual data. CUs that are intra-predicted include intra-prediction information, such as an intra-mode indication.
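As a rough illustration (not part of the patent disclosure), the quadtree partitioning described above can be sketched in Python. The function name and the `should_split` predicate are invented for illustration; in a real encoder the split decision would come from a rate-distortion search.

```python
def quadtree_partition(x, y, size, min_size, should_split):
    """Recursively partition a square block at (x, y) of width/height
    `size` into leaf CUs.  Each node either stays whole or splits into
    four equal, non-overlapping quadrants, mirroring the HEVC CTU-to-CU
    quadtree.  `should_split(x, y, size)` is a hypothetical stand-in
    for the encoder's split decision."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]  # leaf node -> one CU
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_partition(x + dx, y + dy, half,
                                         min_size, should_split)
    return leaves

# Example: split any block larger than 32x32 samples.
cus = quadtree_partition(0, 0, 64, 8, lambda x, y, s: s > 32)
print(cus)  # four 32x32 leaf CUs covering the 64x64 CTU
```

Each leaf tuple gives the top-left corner and size of one CU; together the leaves tile the CTU without overlap.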

As another example, video encoder 200 and video decoder 300 may be configured to operate according to VVC. According to VVC, a video coder (such as video encoder 200) partitions a picture into a plurality of CTUs. Video encoder 200 may partition a CTU according to a tree structure, such as a quadtree-binary tree (QTBT) structure or a Multi-Type Tree (MTT) structure. The QTBT structure removes the concepts of multiple partition types, such as the separation between CUs, PUs, and TUs of HEVC. A QTBT structure includes two levels: a first level partitioned according to quadtree partitioning, and a second level partitioned according to binary tree partitioning. A root node of the QTBT structure corresponds to a CTU. Leaf nodes of the binary trees correspond to CUs.

In an MTT partitioning structure, blocks may be partitioned using a quadtree (QT) partition, a binary tree (BT) partition, and one or more types of triple tree (TT) (also called ternary tree (TT)) partitions. A triple or ternary tree partition is a partition where a block is split into three sub-blocks. In some examples, a triple or ternary tree partition divides a block into three sub-blocks without dividing the original block through the center. The partitioning types in MTT (e.g., QT, BT, and TT) may be symmetrical or asymmetrical.

In some examples, video encoder 200 and video decoder 300 may use a single QTBT or MTT structure to represent each of the luminance and chrominance components, while in other examples, video encoder 200 and video decoder 300 may use two or more QTBT or MTT structures, such as one QTBT/MTT structure for the luminance component and another QTBT/MTT structure for both chrominance components (or two QTBT/MTT structures for respective chrominance components).

Video encoder 200 and video decoder 300 may be configured to use quadtree partitioning per HEVC, QTBT partitioning, MTT partitioning, or other partitioning structures. For purposes of explanation, the description of the techniques of this disclosure is presented with respect to QTBT partitioning. However, it should be understood that the techniques of this disclosure may also be applied to video coders configured to use quadtree partitioning, or other types of partitioning as well.

In some examples, a CTU includes a coding tree block (CTB) of luminance samples, two corresponding CTBs of chrominance samples of a picture that has three sample arrays, or a CTB of samples of a monochrome picture or a picture that is coded using three separate color planes, as well as syntax structures used to code the samples. A CTB may be an NxN block of samples for some value of N, such that the division of a component into CTBs is a partitioning. A component is an array or single sample from one of the three arrays (luminance and two chrominance) that compose a picture in 4:2:0, 4:2:2, or 4:4:4 color format, or the array or a single sample of the array that composes a picture in monochrome format. In some examples, a coding block is an MxN block of samples for some values of M and N, such that a division of a CTB into coding blocks is a partitioning.

The blocks (e.g., CTUs or CUs) may be grouped in various ways in a picture. As one example, a brick may refer to a rectangular region of CTU rows within a particular tile in a picture. A tile may be a rectangular region of CTUs within a particular tile column and a particular tile row in a picture. A tile column refers to a rectangular region of CTUs having a height equal to the height of the picture and a width specified by syntax elements (e.g., such as in a picture parameter set). A tile row refers to a rectangular region of CTUs having a height specified by syntax elements (e.g., such as in a picture parameter set) and a width equal to the width of the picture.

In some examples, a tile may be partitioned into multiple bricks, each of which may include one or more CTU rows within the tile. A tile that is not partitioned into multiple bricks may also be referred to as a brick. However, a brick that is a true subset of a tile may not be referred to as a tile.

The bricks in a picture may also be arranged in a slice. A slice may be an integer number of bricks of a picture that may be exclusively contained in a single network abstraction layer (NAL) unit. In some examples, a slice includes either a number of complete tiles or only a consecutive sequence of complete bricks of one tile.

This disclosure may use "NxN" and "N by N" interchangeably to refer to the sample dimensions of a block (such as a CU or other video block) in terms of vertical and horizontal dimensions, e.g., 16x16 samples or 16 by 16 samples. In general, a 16x16 CU will have 16 samples in a vertical direction (y = 16) and 16 samples in a horizontal direction (x = 16). Likewise, an NxN CU generally has N samples in a vertical direction and N samples in a horizontal direction, where N represents a nonnegative integer value. The samples in a CU may be arranged in rows and columns. Moreover, CUs need not necessarily have the same number of samples in the horizontal direction as in the vertical direction. For example, CUs may include NxM samples, where M is not necessarily equal to N.

Video encoder 200 encodes video data for CUs representing prediction and/or residual information, and other information. The prediction information indicates how the CU is to be predicted in order to form a prediction block for the CU. The residual information generally represents sample-by-sample differences between samples of the CU prior to encoding and the prediction block.
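A minimal sketch (not from the patent) of what "sample-by-sample differences" means in practice. The 4x4 values are invented for illustration; the prediction here is an arbitrary flat block.

```python
# The residual block is the sample-by-sample difference between the
# original CU samples and the prediction block formed for the CU.
original = [
    [52, 55, 61, 66],
    [70, 61, 64, 73],
    [63, 59, 55, 90],
    [67, 61, 68, 104],
]
prediction = [[60] * 4 for _ in range(4)]  # e.g., a flat prediction block

residual = [
    [original[r][c] - prediction[r][c] for c in range(4)]
    for r in range(4)
]
print(residual[0])  # [-8, -5, 1, 6]
```

It is this residual, not the original samples, that is subsequently transformed, quantized, and entropy coded.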

To predict a CU, video encoder 200 may generally form a prediction block for the CU through inter-prediction or intra-prediction. Inter-prediction generally refers to predicting the CU from data of a previously coded picture, whereas intra-prediction generally refers to predicting the CU from previously coded data of the same picture. To perform inter-prediction, video encoder 200 may generate the prediction block using one or more motion vectors. Video encoder 200 may generally perform a motion search to identify a reference block that closely matches the CU, e.g., in terms of differences between the CU and the reference block. Video encoder 200 may calculate a difference metric using a sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared differences (MSD), or other such difference calculations to determine whether a reference block closely matches the current CU. In some examples, video encoder 200 may predict the current CU using uni-directional prediction or bi-directional prediction.
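Two of the difference metrics named above, SAD and SSD, are simple enough to state directly. This is an illustrative Python sketch (function names and sample blocks are invented), not code from the disclosure.

```python
def sad(block, ref):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block, ref)
               for a, b in zip(row_a, row_b))

def ssd(block, ref):
    """Sum of squared differences; penalizes large errors more than SAD."""
    return sum((a - b) ** 2 for row_a, row_b in zip(block, ref)
               for a, b in zip(row_a, row_b))

cu  = [[10, 12], [14, 16]]
ref = [[11, 12], [13, 18]]
print(sad(cu, ref))  # |-1| + 0 + 1 + |-2| = 4
print(ssd(cu, ref))  # 1 + 0 + 1 + 4 = 6
```

During a motion search, the encoder would evaluate such a metric for many candidate reference blocks and keep the one with the smallest difference.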

Some examples of VVC also provide an affine motion compensation mode, which may be considered an inter-prediction mode. In affine motion compensation mode, video encoder 200 may determine two or more motion vectors that represent non-translational motion, such as zoom in or out, rotation, perspective motion, or other irregular motion types.
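To make "two or more motion vectors" concrete, the following sketch derives a per-sample motion vector from two control-point motion vectors using a 4-parameter affine model in the style of VVC. This is an illustrative simplification (floating point, invented function name and values); actual codecs operate on subblocks with fixed-point arithmetic.

```python
def affine_mv_4param(mv0, mv1, width, x, y):
    """Derive the motion vector at sample position (x, y) of a block
    from two control-point motion vectors: mv0 at the top-left corner
    and mv1 at the top-right corner (4-parameter affine model).
    Vectors are (mvx, mvy) pairs."""
    a = (mv1[0] - mv0[0]) / width  # combined scale/rotation terms
    b = (mv1[1] - mv0[1]) / width
    mvx = a * x - b * y + mv0[0]
    mvy = b * x + a * y + mv0[1]
    return mvx, mvy

# Rotation-like motion: top-left MV (0, 0), top-right MV (0, 4),
# block width 16.  Unlike a single translational MV, the derived MV
# varies across the block.
print(affine_mv_4param((0, 0), (0, 4), 16, 8, 8))  # (-2.0, 2.0)
```

When both control-point vectors are equal, the model degenerates to ordinary translational motion with the same MV at every position.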

To perform intra-prediction, video encoder 200 may select an intra-prediction mode to generate the prediction block. Some examples of VVC provide sixty-seven intra-prediction modes, including various directional modes, as well as a planar mode and a DC mode. In general, video encoder 200 selects an intra-prediction mode that describes neighboring samples to a current block (e.g., a block of a CU) from which to predict samples of the current block. Such samples may generally be above, above and to the left, or to the left of the current block in the same picture as the current block, assuming video encoder 200 codes CTUs and CUs in raster scan order (left to right, top to bottom).
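Of the modes mentioned above, DC mode is the simplest to illustrate: the block is predicted from the average of its reconstructed neighbors above and to the left. This is a hypothetical sketch (invented name, simplified rounding), not the normative DC derivation of any standard.

```python
def dc_intra_prediction(above, left, size):
    """Sketch of DC intra prediction: every sample of the size x size
    prediction block is the rounded average of the reconstructed
    neighboring samples above and to the left of the current block.
    Directional and planar modes are omitted."""
    neighbors = list(above) + list(left)
    dc = (sum(neighbors) + len(neighbors) // 2) // len(neighbors)
    return [[dc] * size for _ in range(size)]

above = [100, 102, 104, 106]  # row of samples above the block
left = [98, 100, 102, 104]    # column of samples left of the block
pred = dc_intra_prediction(above, left, 4)
print(pred[0])  # [102, 102, 102, 102]
```

Directional modes would instead project the same neighboring samples into the block along a signaled angle.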

Video encoder 200 encodes data representing the prediction mode for a current block. For example, for inter-prediction modes, video encoder 200 may encode data representing which of the various available inter-prediction modes is used, as well as motion information for the corresponding mode. For uni-directional or bi-directional inter-prediction, for example, video encoder 200 may encode motion vectors using advanced motion vector prediction (AMVP) or merge mode. Video encoder 200 may use similar modes to encode motion vectors for affine motion compensation mode.

Following prediction, such as intra-prediction or inter-prediction of a block, video encoder 200 may calculate residual data for the block. The residual data, such as a residual block, represents sample-by-sample differences between the block and a prediction block for the block, formed using the corresponding prediction mode. Video encoder 200 may apply one or more transforms to the residual block, to produce transformed data in a transform domain instead of the sample domain. For example, video encoder 200 may apply a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform to the residual video data. Additionally, video encoder 200 may apply a secondary transform following the first transform, such as a mode-dependent non-separable secondary transform (MDNSST), a signal dependent transform, a Karhunen-Loeve transform (KLT), or the like. Video encoder 200 produces transform coefficients following application of the one or more transforms.

As noted above, following any transforms to produce transform coefficients, video encoder 200 may perform quantization of the transform coefficients. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the coefficients, providing further compression. By performing the quantization process, video encoder 200 may reduce the bit depth associated with some or all of the coefficients. For example, video encoder 200 may round an n-bit value down to an m-bit value during quantization, where n is greater than m. In some examples, to perform quantization, video encoder 200 may perform a bitwise right-shift of the value to be quantized.
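The bitwise right-shift described above can be sketched directly. This is an illustrative simplification (invented function name, non-negative coefficients only); real quantizers derive the effective shift from a quantization parameter and handle sign separately.

```python
def quantize_right_shift(coeff, shift):
    """Quantize a transform coefficient down to fewer bits by a
    bitwise right-shift.  Adding half of the quantization step
    (1 << (shift - 1)) before shifting rounds to nearest rather
    than truncating."""
    offset = 1 << (shift - 1) if shift > 0 else 0
    return (coeff + offset) >> shift

coeffs = [250, 37, 12, 3]
print([quantize_right_shift(c, 4) for c in coeffs])  # [16, 2, 1, 0]
```

Note that small coefficients quantize to zero, which is what makes the subsequent scan and entropy coding effective.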

Following quantization, video encoder 200 may scan the transform coefficients, producing a one-dimensional vector from the two-dimensional matrix including the quantized transform coefficients. The scan may be designed to place higher energy (and therefore lower frequency) transform coefficients at the front of the vector and lower energy (and therefore higher frequency) transform coefficients at the back of the vector. In some examples, video encoder 200 may utilize a predefined scan order to scan the quantized transform coefficients to produce a serialized vector, and then entropy encode the quantized transform coefficients of the vector. In other examples, video encoder 200 may perform an adaptive scan. After scanning the quantized transform coefficients to form the one-dimensional vector, video encoder 200 may entropy encode the one-dimensional vector, e.g., according to context-adaptive binary arithmetic coding (CABAC). Video encoder 200 may also entropy encode values for syntax elements describing metadata associated with the encoded video data for use by video decoder 300 in decoding the video data.
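One common family of predefined scan orders traverses the coefficient matrix along anti-diagonals so that top-left (low-frequency, typically higher-energy) coefficients come first. The sketch below is illustrative (invented name, zig-zag variant); the exact pattern differs between codecs.

```python
def zigzag_scan(block):
    """Serialize an NxN coefficient matrix along anti-diagonals so
    that low-frequency (top-left) coefficients come first."""
    n = len(block)
    out = []
    for d in range(2 * n - 1):  # d = row + col indexes each anti-diagonal
        cells = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        if d % 2 == 0:
            cells.reverse()     # alternate direction for a zig-zag
        out.extend(block[r][c] for r, c in cells)
    return out

block = [
    [16, 2, 1, 0],
    [3, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(zigzag_scan(block))  # nonzero coefficients first, then a run of zeros
```

The long trailing run of zeros is exactly what CABAC (or run-length style signaling) then compresses efficiently.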

To perform CABAC, video encoder 200 may assign a context within a context model to a symbol to be transmitted. The context may relate to, for example, whether neighboring values of the symbol are zero-valued or not. The probability determination may be based on the context assigned to the symbol.

Video encoder 200 may further generate syntax data, such as block-based syntax data, picture-based syntax data, and sequence-based syntax data, for video decoder 300, e.g., in picture headers, block headers, slice headers, or other syntax data, such as a sequence parameter set (SPS), picture parameter set (PPS), or video parameter set (VPS). Video decoder 300 may likewise decode such syntax data to determine how to decode corresponding video data.

In this manner, video encoder 200 may generate a bitstream including encoded video data, e.g., syntax elements describing partitioning of a picture into blocks (e.g., CUs) and prediction and/or residual information for the blocks. Ultimately, video decoder 300 may receive the bitstream and decode the encoded video data.

In general, video decoder 300 performs a reciprocal process to that performed by video encoder 200 to decode the encoded video data of the bitstream. For example, video decoder 300 may decode values for syntax elements of the bitstream using CABAC in a manner substantially similar to, albeit reciprocal to, the CABAC encoding process of video encoder 200. The syntax elements may define partitioning information for partitioning of a picture into CTUs, and partitioning of each CTU according to a corresponding partition structure, such as a QTBT structure, to define CUs of the CTU. The syntax elements may further define prediction and residual information for blocks (e.g., CUs) of video data.

The residual information may be represented by, for example, quantized transform coefficients. Video decoder 300 may inverse quantize and inverse transform the quantized transform coefficients of a block to reproduce a residual block for the block. Video decoder 300 uses a signaled prediction mode (intra- or inter-prediction) and related prediction information (e.g., motion information for inter-prediction) to form a prediction block for the block. Video decoder 300 may then combine the prediction block and the residual block (on a sample-by-sample basis) to reproduce the original block. Video decoder 300 may perform additional processing, such as performing a deblocking process to reduce visual artifacts along boundaries of the block.
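The decoder-side combination of prediction block and residual block can be sketched as below. This is an illustrative simplification (invented name and values); reconstructed samples are clipped to the valid range for the bit depth, and in-loop filtering such as deblocking is omitted.

```python
def reconstruct(prediction, residual, bit_depth=8):
    """Add the residual to the prediction block sample by sample,
    then clip each reconstructed sample to [0, 2**bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [
        [min(max(p + r, 0), max_val) for p, r in zip(prow, rrow)]
        for prow, rrow in zip(prediction, residual)
    ]

prediction = [[60, 60], [60, 60]]
residual = [[-8, 5], [200, -70]]  # large values included to show clipping
print(reconstruct(prediction, residual))  # [[52, 65], [255, 0]]
```

Because the decoder adds the same residual to the same prediction the encoder formed, both sides arrive at identical reconstructed blocks, which is what makes them usable as future references.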

This disclosure may generally refer to "signaling" certain information, such as syntax elements. The term "signaling" may generally refer to the communication of values for syntax elements and/or other data used to decode encoded video data. That is, video encoder 200 may signal values for syntax elements in the bitstream. In general, signaling refers to generating a value in the bitstream. As noted above, source device 102 may transport the bitstream to destination device 116 substantially in real time, or not in real time, such as might occur when storing syntax elements to storage device 112 for later retrieval by destination device 116.

FIGS. 2A and 2B are conceptual diagrams illustrating an example quadtree binary tree (QTBT) structure 130 and a corresponding CTU 132. The solid lines represent quadtree splitting, and the dotted lines indicate binary tree splitting. In each split (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where, in this example, 0 indicates horizontal splitting and 1 indicates vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type, because quadtree nodes split a block horizontally and vertically into 4 sub-blocks of equal size. Accordingly, video encoder 200 may encode, and video decoder 300 may decode, syntax elements (such as splitting information) for the region tree level (i.e., the solid lines) of QTBT structure 130 and syntax elements (such as splitting information) for the prediction tree level (i.e., the dashed lines) of QTBT structure 130. Video encoder 200 may encode, and video decoder 300 may decode, video data (such as prediction and transform data) for CUs represented by terminal leaf nodes of QTBT structure 130.

In general, CTU 132 of FIG. 2B may be associated with parameters defining sizes of blocks corresponding to nodes of QTBT structure 130 at the first and second levels. These parameters may include a CTU size (representing the size of CTU 132 in samples), a minimum quadtree size (MinQTSize, representing the minimum allowed quadtree leaf node size), a maximum binary tree size (MaxBTSize, representing the maximum allowed binary tree root node size), a maximum binary tree depth (MaxBTDepth, representing the maximum allowed binary tree depth), and a minimum binary tree size (MinBTSize, representing the minimum allowed binary tree leaf node size).

The root node of a QTBT structure corresponding to a CTU may have four child nodes at the first level of the QTBT structure, each of which may be partitioned according to quadtree partitioning. That is, nodes of the first level are either leaf nodes (having no child nodes) or have four child nodes. The example of QTBT structure 130 represents such nodes as including the parent node and child nodes with solid lines for branches. If nodes of the first level are not larger than the maximum allowed binary tree root node size (MaxBTSize), then the nodes can be further partitioned by respective binary trees. The binary tree splitting of one node can be iterated until the nodes resulting from the split reach the minimum allowed binary tree leaf node size (MinBTSize) or the maximum allowed binary tree depth (MaxBTDepth). The example of QTBT structure 130 represents such nodes as having dashed lines for branches. A binary tree leaf node is referred to as a CU, which is used for prediction (e.g., intra-picture or inter-picture prediction) and transform, without any further partitioning. As discussed above, CUs may also be referred to as "video blocks" or "blocks."

In one example of the QTBT partitioning structure, the CTU size is set as 128x128 (luma samples and two corresponding 64x64 chroma sample blocks), MinQTSize is set as 16x16, MaxBTSize is set as 64x64, MinBTSize (for both width and height) is set as 4, and MaxBTDepth is set as 4. The quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16x16 (i.e., the MinQTSize) to 128x128 (i.e., the CTU size). If a quadtree leaf node is 128x128, it will not be further split by the binary tree, because its size exceeds the MaxBTSize (i.e., 64x64 in this example). Otherwise, the quadtree leaf node may be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node for the binary tree and has a binary tree depth of 0. When the binary tree depth reaches MaxBTDepth (4 in this example), no further splitting is permitted. A binary tree node having a width equal to MinBTSize (4 in this example) implies that no further vertical splitting (that is, dividing of the width) is permitted for that binary tree node. Similarly, a binary tree node having a height equal to MinBTSize implies that no further horizontal splitting (that is, dividing of the height) is permitted for that binary tree node. As noted above, leaf nodes of the binary tree are referred to as CUs, and are further processed according to prediction and transform without further partitioning.
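The splitting constraints in the example above can be sketched as follows. This is a minimal Python illustration using the parameter names from the text; the helper function itself is hypothetical and not part of any codec implementation.

```python
# Hypothetical sketch of the QTBT split constraints described above.
# Parameter values follow the example in the text.
MIN_QT_SIZE = 16   # MinQTSize: minimum quadtree leaf node size
MAX_BT_SIZE = 64   # MaxBTSize: maximum binary tree root node size
MIN_BT_SIZE = 4    # MinBTSize: minimum binary tree leaf node size
MAX_BT_DEPTH = 4   # MaxBTDepth: maximum binary tree depth

def allowed_splits(width, height, bt_depth, is_qt_node):
    """Return the set of splits permitted for a node of the given size."""
    splits = set()
    if is_qt_node and width > MIN_QT_SIZE:          # quadtree halves both dims
        splits.add("QT")
    bt_root_ok = max(width, height) <= MAX_BT_SIZE  # may enter the binary tree
    if bt_root_ok and bt_depth < MAX_BT_DEPTH:
        if width > MIN_BT_SIZE:                     # vertical split divides width
            splits.add("BT_VER")
        if height > MIN_BT_SIZE:                    # horizontal split divides height
            splits.add("BT_HOR")
    return splits
```

For instance, a 128x128 quadtree leaf node admits only a further quadtree split (it exceeds MaxBTSize), and a node at binary tree depth 4 admits no split at all, matching the text.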

As noted above, video encoder 200 and video decoder 300 may be configured to perform motion vector prediction. In HEVC, for a prediction unit (PU), there are two inter-prediction modes, named merge (skip is considered a special case of merge) and AMVP mode, respectively. In either AMVP or merge mode, video encoder 200 and video decoder 300 maintain a motion vector (MV) candidate list of multiple motion vector predictors. The motion vector(s) of the current PU, as well as the reference indices in merge mode, are generated by selecting one candidate from the MV candidate list.

In the HEVC implementation, the MV candidate list contains up to five candidates for the merge mode and two candidates for the AMVP mode. A merge candidate may contain a set of motion information, e.g., motion vectors corresponding to both reference picture lists (list 0 and list 1) and the reference indices. By receiving a merge candidate identified by a merge index, video decoder 300 determines the reference pictures used for the prediction of the current block, as well as the associated motion vectors. On the other hand, in AMVP mode, for each potential prediction direction from either list 0 or list 1, video decoder 300 receives an MV predictor (MVP) index into the MV candidate list, because an AMVP candidate contains only a motion vector. Video decoder 300 additionally receives a motion vector difference (MVD) and a reference index to explicitly identify the reference picture. In AMVP mode, the predicted motion vectors can be further refined.

The candidates for both modes may be derived similarly from the same spatial and temporal neighboring blocks. In HEVC, as shown in FIGS. 3A and 3B, for a specific PU (PU 0), video encoder 200 and video decoder 300 may derive spatial MV candidates from the neighboring blocks, although the techniques for generating the candidates from the blocks differ for merge and AMVP modes.

FIG. 3A is a conceptual diagram illustrating spatial neighboring candidates of block 140 for merge mode. FIG. 3B is a conceptual diagram illustrating spatial neighboring candidates of block 142 for AMVP mode. In merge mode, video encoder 200 and video decoder 300 may derive up to four spatial MV candidates in the order shown in FIG. 3A. The order is as follows: left block (0, A1), above block (1, B1), above-right block (2, B0), below-left block (3, A0), and above-left block (4, B2).

In AMVP mode, video encoder 200 and video decoder 300 may divide the neighboring blocks into two groups: a left group including blocks 0 and 1, and an above group including blocks 2, 3, and 4, as shown in FIG. 3B. For each group, the potential candidate in a neighboring block referring to the same reference picture as that indicated by the signaled reference index has the highest priority to be chosen to form the final candidate of the group. It is possible that none of the neighboring blocks contains a motion vector pointing to the same reference picture. Therefore, if such a candidate cannot be found, video encoder 200 and video decoder 300 may scale the first available candidate to form the final candidate; thus, the temporal distance differences can be compensated.

Temporal motion vector prediction in HEVC will now be discussed. Video encoder 200 and video decoder 300 may be configured to add a temporal motion vector predictor (TMVP) candidate, if enabled and available, into the MV candidate list after the spatial motion vector candidates. The process of motion vector derivation for a TMVP candidate is the same for both merge and AMVP modes. However, in HEVC, the target reference index for the TMVP candidate in merge mode is set to 0.

FIG. 4A illustrates an example TMVP candidate for block 154 (PU0), and FIG. 4B illustrates motion vector scaling process 156. The primary block location for TMVP candidate derivation is the bottom-right block outside of the co-located PU. This candidate is shown as block "T" in FIG. 4A. The location of block T is used to compensate for the bias toward the above and left blocks used to generate the spatial neighboring candidates. However, if that block is located outside of the current CTB row, or motion information is not available, the block is substituted with a center block of the PU.

Video encoder 200 and video decoder 300 may derive the motion vector for the TMVP candidate from the co-located PU of the co-located picture, as indicated at the slice level. The motion vector of the co-located PU is called the co-located MV. Similar to the temporal direct mode in AVC, to derive the TMVP candidate motion vector, the co-located MV may be scaled to compensate for the temporal distance differences, as shown in FIG. 4B.

Other aspects of motion prediction in HEVC that relate to the techniques described herein will now be described. Video encoder 200 and video decoder 300 may be configured to perform motion vector scaling. It is assumed that the value of a motion vector is proportional to the distance between pictures in presentation time. A motion vector associates two pictures: the reference picture, and the picture containing the motion vector (namely, the containing picture). When a motion vector is utilized to predict another motion vector, the distance between the containing picture and the reference picture is calculated based on picture order count (POC) values.

For the motion vector to be predicted, its associated containing picture may be different from the reference picture. Therefore, video encoder 200 and video decoder 300 may calculate a new distance based on POC, and may scale the motion vector based on these two POC distances. For a spatial neighboring candidate, the containing pictures for the two motion vectors are the same, while the reference pictures are different. In HEVC, motion vector scaling applies to both TMVP and AMVP for spatial and temporal neighboring candidates.
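The POC-based scaling described above can be sketched as follows. This is a simplified floating-point illustration; HEVC itself performs this scaling with clipped fixed-point arithmetic, which the sketch deliberately omits.

```python
def scale_mv(mv, poc_cur, poc_ref, poc_col, poc_col_ref):
    """Scale a candidate MV by the ratio of two POC distances.

    mv:          (x, y) motion vector of the candidate (e.g., co-located) block
    poc_cur/ref: POCs of the current picture and the target reference picture
    poc_col/col_ref: POCs of the candidate's containing picture and its reference
    """
    tb = poc_cur - poc_ref        # distance for the MV being predicted
    td = poc_col - poc_col_ref    # distance of the candidate MV
    scale = tb / td               # simplified; HEVC uses clipped fixed point
    return (round(mv[0] * scale), round(mv[1] * scale))
```

For example, a candidate MV spanning a POC distance of 4 is halved when the current block's reference lies at a POC distance of 2.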

Video encoder 200 and video decoder 300 may be configured to perform artificial motion vector candidate generation. If a motion vector candidate list is not complete, artificial motion vector candidates are generated and inserted at the end of the list until the list is full.

In merge mode, there are two types of artificial MV candidates: combined candidates, derived only for B-slices, and zero candidates, used only for AMVP if the first type does not provide enough artificial candidates. For each pair of candidates that are already in the candidate list and have the necessary motion information, bi-directional combined motion vector candidates are derived by combining the motion vector of a first candidate referring to a picture in list 0 with the motion vector of a second candidate referring to a picture in list 1.
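The pairing rule for combined bi-directional candidates can be sketched as follows. The data layout (a dict with "l0"/"l1" entries holding an (MV, reference-index) pair or None) is purely illustrative, not HEVC's internal representation.

```python
def combined_bi_candidates(cand_list, max_cands):
    """Derive combined bi-directional candidates from existing ones.

    Each new candidate takes its list-0 motion from one existing candidate
    and its list-1 motion from another (illustrative sketch).
    """
    out = []
    for i, a in enumerate(cand_list):
        for j, b in enumerate(cand_list):
            if i == j or len(cand_list) + len(out) >= max_cands:
                continue
            if a.get("l0") and b.get("l1"):   # both halves must exist
                out.append({"l0": a["l0"], "l1": b["l1"]})
    return out
```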

Video encoder 200 and video decoder 300 may be configured to perform a pruning process for candidate insertion. Candidates from different blocks may happen to be identical, which decreases the efficiency of the merge/AMVP candidate list. A pruning process is applied to solve this problem. When implementing the pruning process, video encoder 200 or video decoder 300 compares one candidate against the others in the current candidate list to avoid, to a certain extent, inserting an identical candidate. To reduce the complexity, only a limited number of pruning comparisons is applied, instead of comparing each potential candidate with all of the other existing candidates.
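The limited-comparison pruning described above can be sketched as follows; the cap on comparisons (`max_prune_checks`) is an illustrative parameter, not a value taken from the HEVC specification.

```python
def insert_with_pruning(cand_list, cand, max_prune_checks):
    """Insert cand into cand_list unless it duplicates one of the first
    few existing candidates (limited number of comparisons)."""
    for existing in cand_list[:max_prune_checks]:
        if existing == cand:      # duplicate found: skip insertion
            return False
    cand_list.append(cand)
    return True
```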

Video encoder 200 and video decoder 300 may be configured to perform template matching prediction. Template matching prediction is a special merge mode based on frame rate up-conversion (FRUC) techniques. With this mode, some of the motion information of a block is not signaled but derived at the decoder side. Template matching may be applied to both AMVP mode and regular merge mode. In AMVP mode, the MVP candidate selection is determined based on template matching, to pick the MVP candidate that reaches the minimal difference between the current block template and the reference block template. In regular merge mode, a template matching mode flag is signaled to indicate the use of template matching; video encoder 200 and video decoder 300 may then apply template matching to the merge candidate indicated by the merge index for MV refinement.

As shown in FIG. 5, template matching is used to derive motion information of current CU 160 by finding the closest match between current template 162 in the current picture and reference template 164 (the same size as the template) in a reference picture. With an AMVP candidate selected based on the initial matching error, video encoder 200 and video decoder 300 may refine the MVP by template matching. With a merge candidate indicated by the signaled merge index, video encoder 200 and video decoder 300 may be configured to independently refine the MVs corresponding to L0 and L1 by template matching, and then further refine the less accurate MV based on the more accurate one.

Video encoder 200 and video decoder 300 may be configured to determine a cost function. When a motion vector points to a fractional sample position, motion compensated interpolation is needed. To reduce complexity, bilinear interpolation, instead of the regular 8-tap DCT-IF interpolation, may be used for template matching to generate the template on the reference picture. The matching cost C of template matching is calculated as follows:

C = SAD + w * ( | MV_x − MV_x^s | + | MV_y − MV_y^s | ),

where w is a weighting factor which may be set to an integer such as 0, 1, 2, 3, or 4, and MV = (MV_x, MV_y) and MV^s = (MV_x^s, MV_y^s) indicate the currently tested MV and the initial MV (i.e., the MVP candidate in AMVP mode or the merged motion in merge mode), respectively. SAD is used as the matching cost of template matching.
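The matching cost above can be written directly in code. The SAD term is assumed to be computed elsewhere over the template samples; only the cost combination is shown.

```python
def tm_cost(sad, test_mv, init_mv, w=4):
    """Template matching cost C = SAD + w * (|MVx - MVx_s| + |MVy - MVy_s|).

    sad:     SAD between current template and reference template (precomputed)
    test_mv: currently tested MV (MVx, MVy)
    init_mv: initial MV (MVx_s, MVy_s), e.g., the MVP candidate
    w:       integer weighting factor (0, 1, 2, 3, or 4)
    """
    return sad + w * (abs(test_mv[0] - init_mv[0]) +
                      abs(test_mv[1] - init_mv[1]))
```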

When template matching is used, video encoder 200 and video decoder 300 may be configured to refine motion using luma samples only. The derived motion may be used for both luma and chroma in motion compensated (MC) inter prediction. After the MV is decided, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.

Video encoder 200 and video decoder 300 may be configured to determine and implement a search process. MV refinement is a pattern-based MV search with the criterion of template matching cost and a hierarchical structure. Two search patterns are supported for MV refinement: diamond search and cross search. The hierarchical structure specifies an iterative process to refine an MV, starting at a coarse MVD precision (e.g., quarter-pel) and ending at a fine MVD precision (e.g., 1/8-pel). The MV is directly searched at quarter-luma-sample MVD precision with the diamond pattern, followed by quarter-luma-sample MVD precision with the cross pattern, and then followed by one-eighth-luma-sample MVD refinement with the cross pattern. The search range of MV refinement may be set equal to (−8, +8) luma samples around the initial MV. When the current block is bi-predicted, both MVs are refined independently, and then the best of them (in terms of matching cost) is set as a prior to further refine the other MV with the BCW weight values.
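One pass of the cross search pattern described above can be sketched as follows. This is an illustrative greedy search at a fixed MVD step; the actual precision hierarchy, the diamond pattern, and the (−8, +8) range clamp are omitted for brevity.

```python
def cross_search(cost_fn, mv, step):
    """One cross-search stage: repeatedly try the four axis-aligned
    neighbours at the given step size and keep the best, until no
    neighbour improves the matching cost (illustrative sketch)."""
    best_cost = cost_fn(mv)
    improved = True
    while improved:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (mv[0] + dx, mv[1] + dy)
            c = cost_fn(cand)
            if c < best_cost:
                best_cost, mv, improved = c, cand, True
    return mv
```

A real cost function would evaluate the template matching cost of the previous section; here any callable works.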

Video encoder 200 and video decoder 300 may be configured to perform affine prediction. In HEVC, only a translational motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g., zoom in/out, rotation, perspective motions, and other irregular motions. In VTM-6, a block-based affine transform motion compensation prediction is applied. As shown in FIG. 6A, the affine motion field of a block is described by motion information of two control points (170A and 170B), also referred to as the 4-parameter model. As shown in FIG. 6B, the affine motion field of a block is described by motion information of three control points (172A-172C), i.e., three control point motion vectors, also referred to as the 6-parameter model.

For the 4-parameter affine motion model, the motion vector at sample location ( x, y) in a block is derived as:

mv_x = (( mv 1x − mv 0x ) / W) x − (( mv 1y − mv 0y ) / W) y + mv 0x ,
mv_y = (( mv 1y − mv 0y ) / W) x + (( mv 1x − mv 0x ) / W) y + mv 0y ,    (2-1)

where W is the width of the block.

For the 6-parameter affine motion model, the motion vector at sample location ( x, y) in a block is derived as:

mv_x = (( mv 1x − mv 0x ) / W) x + (( mv 2x − mv 0x ) / H) y + mv 0x ,
mv_y = (( mv 1y − mv 0y ) / W) x + (( mv 2y − mv 0y ) / H) y + mv 0y ,    (2-2)

where W and H are the width and height of the block.

In the above equations, ( mv 0x , mv 0y ) denotes the CPMV of the top-left corner control point, and ( mv 1x , mv 1y ) and ( mv 2x , mv 2y ) denote the CPMVs of the top-right and bottom-left corner control points, respectively.
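The two control-point models can be sketched together as follows. This is a floating-point illustration of the sub-block MV derivation; a real codec evaluates these equations in fixed-point 1/16-sample precision.

```python
def affine_mv(x, y, w, h, cpmvs):
    """MV at sample location (x, y) of a w x h block.

    cpmvs: list of control point MVs as (mvx, mvy) tuples --
    two CPMVs select the 4-parameter model, three the 6-parameter model.
    """
    (v0x, v0y), (v1x, v1y) = cpmvs[0], cpmvs[1]
    if len(cpmvs) == 2:                      # 4-parameter (rotation + scaling)
        mvx = (v1x - v0x) / w * x - (v1y - v0y) / w * y + v0x
        mvy = (v1y - v0y) / w * x + (v1x - v0x) / w * y + v0y
    else:                                    # 6-parameter model
        v2x, v2y = cpmvs[2]
        mvx = (v1x - v0x) / w * x + (v2x - v0x) / h * y + v0x
        mvy = (v1y - v0y) / w * x + (v2y - v0y) / h * y + v0y
    return mvx, mvy
```

When all CPMVs are equal, the field degenerates to pure translation: every sample location yields the same MV.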

To simplify the motion compensation prediction, video encoder 200 and video decoder 300 may be configured to apply block-based affine transform prediction. FIG. 7 illustrates block 170, which is a 16x16 luma block including sixteen 4x4 luma sub-blocks. To derive the motion vector of each 4x4 luma sub-block, video encoder 200 and video decoder 300 calculate the motion vector of the center sample of each sub-block according to the above equations, as shown in FIG. 7, and round it to 1/16 fractional accuracy. Arrows 172A and 172B identify two of the sixteen motion vectors of the sub-blocks; the other fourteen arrows also correspond to motion vectors but are not labeled in FIG. 7. Motion compensation interpolation filters are applied to generate the prediction of each sub-block with the derived motion vector. The sub-block size of the chroma components is also set to 4x4. The MV of a 4x4 chroma sub-block is calculated as the average of the MVs of the four corresponding 4x4 luma sub-blocks.
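The chroma rule in the last sentence above, averaging the MVs of the four corresponding luma sub-blocks, can be sketched as:

```python
def chroma_subblock_mv(luma_mvs):
    """MV of a 4x4 chroma sub-block: the average of the four co-located
    4x4 luma sub-block MVs (floating-point illustration)."""
    n = len(luma_mvs)   # four (mvx, mvy) tuples in the case described above
    return (sum(mv[0] for mv in luma_mvs) / n,
            sum(mv[1] for mv in luma_mvs) / n)
```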

Video encoder 200 and video decoder 300 may be configured to perform prediction refinement with optical flow for affine mode. Prediction refinement with optical flow (PROF) is used to refine the sub-block-based affine motion compensated prediction without increasing the memory access bandwidth for motion compensation. In VVC, after the sub-block-based affine motion compensation is performed, the luma prediction samples are refined by adding a difference derived by the optical flow equation.

In one example implementation of PROF, video decoder 300 may be configured to perform the following four steps:

Step 1) The sub-block-based affine MC is performed to generate the sub-block prediction I( i, j).

Step 2) The spatial gradients g x ( i, j) and g y ( i, j) of the sub-block prediction are calculated at each sample location using a 3-tap filter [−1, 0, 1]. The gradient calculation is exactly the same as the gradient calculation in BDOF:

g x ( i, j) = ( I( i+ 1, j) >> shift1) − ( I( i− 1, j) >> shift1),
g y ( i, j) = ( I( i, j+ 1) >> shift1) − ( I( i, j− 1) >> shift1),

where shift1 is used to control the precision of the gradients. The sub-block (e.g., 4x4) prediction is extended by one sample on each side for the gradient calculation. To avoid additional memory bandwidth and additional interpolation computation, those extended samples on the extended borders are copied from the nearest integer pixel position in the reference picture.

Step 3) The luma prediction refinement is calculated by the following optical flow equation:

∆ I( i, j) = g x ( i, j) * ∆ v x ( i, j) + g y ( i, j) * ∆ v y ( i, j),

where ∆ v( i, j) is the difference between the sample MV computed for sample location ( i, j), denoted by v( i, j), and the sub-block MV of the sub-block to which sample ( i, j) belongs, as shown in FIG. 8. ∆ v( i, j) is quantized in units of 1/32 luma sample precision. FIG. 8 illustrates the sub-block MV V SB and the per-sample ∆ v( i, j) (arrow 190).

Since the affine model parameters and the sample locations relative to the sub-block center are not changed from sub-block to sub-block, ∆ v( i, j) can be calculated for the first sub-block and reused for the other sub-blocks in the same CU. Let dx( i, j) and dy( i, j) be the horizontal and vertical offsets from the sample location ( i, j) to the center of the sub-block ( x SB , y SB ); ∆ v( i, j) can then be derived by the following equations:

dx( i, j) = i − x SB ,
dy( i, j) = j − y SB ,
∆ v x ( i, j) = C * dx( i, j) + D * dy( i, j),
∆ v y ( i, j) = E * dx( i, j) + F * dy( i, j).

To keep accuracy, the center of the sub-block ( x SB , y SB ) is calculated as (( W SB − 1) / 2, ( H SB − 1) / 2), where W SB and H SB are the sub-block width and height, respectively.

For the 4-parameter affine model, C = F = ( v 1 x − v 0 x ) / w and E = − D = ( v 1 y − v 0 y ) / w.

For the 6-parameter affine model, C = ( v 1 x − v 0 x ) / w, D = ( v 2 x − v 0 x ) / h, E = ( v 1 y − v 0 y ) / w, and F = ( v 2 y − v 0 y ) / h, where ( v 0 x , v 0 y ), ( v 1 x , v 1 y ), and ( v 2 x , v 2 y ) are the top-left, top-right, and bottom-left control point motion vectors, and w and h are the width and height of the CU.

Step 4) Finally, the luma prediction refinement ∆ I( i, j) is added to the sub-block prediction I( i, j). The final prediction I' is generated as: I'( i, j) = I( i, j) + ∆ I( i, j).
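Step 3 for the 4-parameter case can be sketched as follows. This is a floating-point illustration: the gradients g_x, g_y are assumed to have been computed as in step 2, and the 1/32-precision quantization of ∆v is omitted.

```python
def prof_refinement(gx, gy, dx, dy, cpmvs, w):
    """PROF sample refinement dI = gx*dvx + gy*dvy for the 4-parameter model.

    gx, gy: spatial gradients at the sample (from step 2)
    dx, dy: offsets from the sample to the sub-block center
    cpmvs:  [(v0x, v0y), (v1x, v1y)] top-left and top-right CPMVs
    w:      CU width
    """
    (v0x, v0y), (v1x, v1y) = cpmvs
    C = F = (v1x - v0x) / w      # 4-parameter model: C = F
    E = (v1y - v0y) / w
    D = -E                       # and E = -D
    dvx = C * dx + D * dy
    dvy = E * dx + F * dy
    return gx * dvx + gy * dvy   # dI(i, j), added to I(i, j) in step 4
```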

For an affine-coded CU, PROF is not applied in two cases: 1) all the control point MVs are the same, which indicates that the CU only has translational motion; or 2) the affine motion parameters are greater than a specified limit, because in that case the sub-block-based affine MC is degraded to CU-based MC to avoid a large memory access bandwidth requirement.

Video encoder 200 and video decoder 300 may be configured to apply a fast encoding method to reduce the encoding complexity of affine motion estimation with PROF. PROF is not applied at the affine motion estimation stage in two situations: a) if this CU is not the root block and its parent block does not select affine mode as its best mode, PROF is not applied, because the possibility of the current CU selecting affine mode as the best mode is low; and b) if the magnitudes of all four affine parameters (C, D, E, F) are smaller than a predefined threshold and the current picture is not a low-delay picture, PROF is not applied, because the improvement introduced by PROF is small in this case. In this way, affine motion estimation with PROF can be accelerated.

Existing techniques have several potential problems. The signaling overhead of the CPMVs for a block may be significantly increased when compared with the signaling overhead of the translational model of inter prediction. Therefore, decoder-side refinement of the CPMVs can improve the precision of the CPMVs and reduce the signaling overhead. This disclosure describes techniques that may address some of these problems.

Template matching based affine prediction (referred to hereinafter as AffTM) is a decoder-side inter-prediction mode for refining the CPMVs of an affine-coded block. Similar to template matching, as described above, video decoder 300 may determine an initial reference template block based on the initially determined CPMVs, and then search within a search region for other reference templates with reduced matching cost. Video decoder 300 may then determine the best set of CPMVs to replace the initial CPMVs.

Video encoder 200 and video decoder 300 may be configured to determine a reference template block. The samples of the reference template block are generated on a sub-block basis, based on the motion field derived using the CPMVs. Under the assumption that the current block and the corresponding current template block 192 are located within the same affine motion field, video encoder 200 and video decoder 300 may use equation (2-1) or (2-2) to determine the MVs of the sub-blocks (e.g., A 0 , A 1 , …, A n-1 and L 0 , L 1 , …, L n-1 in FIG. 9A on current template block 192), where the sample location ( x, y) is the centroid of each respective sub-block. Video encoder 200 and video decoder 300 then fetch or interpolate the samples of the sub-blocks of the reference template block based on the respective sub-block MVs. As shown by reference template 194A in the example of FIG. 9B, the reference template sub-blocks need not be immediately adjacent to any boundary sub-blocks of the prediction block. In addition, the interpolation filters used to generate the sub-block samples of the reference template block may be one or more of the following: no filter (thus clipping or rounding the sub-block MVs to integer precision before fetching reference samples), a 2-tap bilinear filter, a 6-tap DCTIF (as in AVC), an 8-tap DCTIF (as in HEVC or VVC), or switchable filters (as in VVC).
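The centroid sample locations at which the template sub-block MVs are evaluated can be sketched as follows. This illustrative layout assumes 4x4 sub-blocks and a template thickness of 4 samples; coordinates are relative to the top-left sample of the current block, so negative values lie in the above/left template region.

```python
def template_subblock_centroids(block_w, block_h, sub=4, tpl=4):
    """Centroid sample positions (x, y) of the above (A_i) and left (L_i)
    template sub-blocks, relative to the block's top-left sample
    (hypothetical layout for illustration)."""
    above = [(i * sub + sub / 2, -tpl / 2) for i in range(block_w // sub)]
    left = [(-tpl / 2, i * sub + sub / 2) for i in range(block_h // sub)]
    return above, left
```

Evaluating equation (2-1) or (2-2) at each of these positions yields one MV per template sub-block, from which the reference template samples are fetched or interpolated.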

In another example, as shown by reference template 194B in FIG. 9C, the reference template sub-blocks may be immediately adjacent to the boundary sub-blocks of the corresponding prediction block. Hence, the MV of each sub-block (A_0, …, A_(n-1) and L_0, …, L_(n-1)) is the same as that of the corresponding immediately adjacent sub-block located on the boundary of the current block.

In another example, the MVs of the sub-blocks on the current template block, other than A_0 and L_0, may be computed via equation (2-1) or (2-2) at the sample position (x, y) lying at the centroid between a sub-block located on the boundary of the current block and its immediately adjacent template sub-block. For A_0 and L_0, the sample position (x, y) may be (0, 0) if both A_0 and L_0 exist; if only A_0 exists, the sample position (x, y) may be the centroid between A_0 and the first sub-block on the current block; if only L_0 exists, the sample position (x, y) may be the centroid between L_0 and the first sub-block on the current block.

In another example, the video encoder 200 and the video decoder 300 may be configured to apply PROF to the reference template block.

In another example, when all the CPMVs are identical to one another, the video encoder 200 and the video decoder 300 may replace the prediction process of AffTM with the regular block-based template matching prediction process described above. One of the CPMVs may be taken as the initial MV and used for block-based template matching.

In another example, when all the initial CPMVs are identical to one another, the video encoder 200 and the video decoder 300 may be configured to perform the regular block-based template matching described above before AffTM, so as to refine the initial CPMVs. One of the CPMVs may be taken as the initial MV for the regular template matching process. This example may further extend to the translational model search described in this disclosure.

The video encoder 200 and the video decoder 300 may be configured to perform a search process. This section provides several search processes for AffTM. Without loss of generality, all the algorithms are presented with the 6-parameter affine model. These algorithms can be converted directly for the 4-parameter affine mode simply by removing the bottom-left CPMV from the description. The search range may be predefined or signaled, e.g., ±2, ±4, ±6, or ±8 pels. The initial search point of the CPMVs may be any of the following: an AMVP candidate, the CPMVs corresponding to a reference picture list of a merge candidate, or the CPMVs corresponding to a reference picture list of the block.

The video encoder 200 and the video decoder 300 may be configured to perform a square search. The square search applies a square pattern to refine the CPMVs of a CU sequentially, one vector at a time, starting from the top-left CPMV, followed by the top-right CPMV, and finally ending at the bottom-left CPMV. Note that the bottom-left CPMV is searched only when the CU has a 6-parameter model, so as to have this third CPMV. The square search pattern may be specified as a series of delta motion vectors, dMv = {(0,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1), (-1,-1), (-1,0)}, or any other ordering of these nine delta motion vectors. Without loss of generality, this section takes the above dMv as an example, where the initial search step size s_0 and the minimum search step size s_min are determined according to the indication of the AMVR index or to 1/16, depending on whether the CU is coded via affine AMVP mode or affine merge mode, respectively. The value of s_0 may be set equal to or greater than s_min, and s_(i+1) is set equal to s_i for all i ∈ {0, 1, …}. The square search process is a 7-step process specified as follows:
1. Given the set of search step sizes {s_0, s_1, …, s_min}, dMv, and the top-left, top-right and bottom-left CPMVs, denoted mv_0^(0), mv_1^(0) and mv_2^(0), respectively, the search process starts at iteration i = 0.
2. For s_i, dMv and {mv_0^(i), mv_1^(i), mv_2^(i)}, the search subroutine starts a sequential process that begins by searching mv_0^(i), followed by mv_1^(i), and finally ends with mv_2^(i) (note that, in some examples, the order may be {mv_2^(i), mv_1^(i), mv_0^(i)}).
3. For s_i, dMv and mv_0^(i), the search subroutine computes the corresponding template matching cost individually for each of the CPMV sets S = {mv_0^(i) + d * s_i, mv_1^(i), mv_2^(i)}, for all d ∈ dMv. This search subroutine can be expressed as mv_0^(i)* = mv_0^(i) + argmin_d {cost(S_0), cost(S_1), …, cost(S_8)} * s_i.
4. Similar to step 3, the search subroutine computes the corresponding template matching costs for S = {mv_0^(i)*, mv_1^(i) + d * s_i, mv_2^(i)}, for all d ∈ dMv, and the best result is denoted mv_1^(i)*.
5. Similar to step 3, the search subroutine computes the corresponding template matching costs for S = {mv_0^(i)*, mv_1^(i)*, mv_2^(i) + d * s_i}, for all d ∈ dMv, and the best result is denoted mv_2^(i)*.
6. After all the CPMVs have been searched through steps 3-5, the output of the search subroutine is {mv_0^(i)*, mv_1^(i)*, mv_2^(i)*}.
- If, while the search step size is s_i, the search process has visited step 6 more times than a predefined threshold, the search process sets {mv_0^(i+1), mv_1^(i+1), mv_2^(i+1)} equal to the subroutine output and goes to step 7.
- Otherwise, if the subroutine output is identical to {mv_0^(i), mv_1^(i), mv_2^(i)}, the search process sets {mv_0^(i+1), mv_1^(i+1), mv_2^(i+1)} equal to the subroutine output and goes to step 7.
- Otherwise (if the subroutine output is not identical to {mv_0^(i), mv_1^(i), mv_2^(i)}), {mv_0^(i), mv_1^(i), mv_2^(i)} is set equal to {mv_0^(i)*, mv_1^(i)*, mv_2^(i)*}, and the search process continues at step 2.
7. If s_i is not equal to s_min, the search process sets i to i+1 and returns to step 2. Otherwise, the search process terminates with the output {mv_0^(i+1), mv_1^(i+1), mv_2^(i+1)}.
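The square search above can be sketched as nested loops: per step size, repeat a sequential per-CPMV refinement until the CPMVs stop changing (or a visit bound is hit). A minimal sketch, assuming a caller-supplied template matching cost function over a CPMV set; the threshold handling and the step-size schedule are simplified, and all names are illustrative.

```python
# Sketch of the 7-step square search; `cost` maps a list of CPMVs to a
# template matching cost (an assumption -- the real cost is computed from
# the current and reference template blocks).

SQUARE_DMV = [(0, 0), (-1, 1), (0, 1), (1, 1), (1, 0),
              (1, -1), (0, -1), (-1, -1), (-1, 0)]

def refine_one(cpmvs, idx, step, dmv, cost):
    """Steps 3-5: refine CPMV `idx` over the pattern, others fixed."""
    def moved(d):
        out = list(cpmvs)
        out[idx] = (cpmvs[idx][0] + d[0] * step, cpmvs[idx][1] + d[1] * step)
        return out
    return min((moved(d) for d in dmv), key=cost)

def square_search(cpmvs, steps, cost, dmv=SQUARE_DMV, max_visits=32):
    """Steps 1-7: sequentially refine mv0, mv1, mv2 for each step size."""
    cpmvs = list(cpmvs)
    for step in steps:                      # step 7: walk the step sizes
        for _ in range(max_visits):         # step 6: bounded repetitions
            out = cpmvs
            for idx in range(len(cpmvs)):   # step 2: mv0 -> mv1 -> mv2
                out = refine_one(out, idx, step, dmv, cost)
            if out == cpmvs:                # converged at this step size
                break
            cpmvs = out
    return cpmvs
```

For a 4-parameter model, the same routine would simply be called with two CPMVs instead of three, mirroring the removal of the bottom-left CPMV from the description.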

The video encoder 200 and the video decoder 300 may be configured to perform a cross search. The cross search utilizes a cross pattern to refine the CPMVs. Its search process is the same as the square search, except that the delta motion vectors are defined differently. The delta motion vectors of this search pattern are defined as: dMv = {(0,0), (-1,0), (0,-1), (0,1), (1,0)}.

The video encoder 200 and the video decoder 300 may be configured to perform a diagonal search. The diagonal search utilizes a diagonal pattern to refine the CPMVs. Its search process is the same as the square search, except that the delta motion vectors are defined differently, as follows: dMv = {(0,0), (-1,-1), (-1,1), (1,1), (1,-1)}.

The video encoder 200 and the video decoder 300 may be configured to perform a diamond search. The diamond search utilizes a diamond pattern to refine the CPMVs. Its search process is the same as the square search, except that the delta motion vectors are defined differently, as follows: dMv = {(0,0), (0,2), (1,1), (2,0), (1,-1), (0,-2), (-1,-1), (-2,0), (-1,1)}.

In another example, the output of the diamond search may be used as the input of the cross search, and the output of the cross search is taken as the final output of the combined search process.

The video encoder 200 and the video decoder 300 may be configured to perform a two-pass eight-point search. The two-pass eight-point search is a search process in which two search patterns (namely, a cross pattern and a diagonal pattern) are used conditionally during the search process. Its search process is the same as the square search, except for steps 3-5. In the two-pass eight-point search, dMv comprises two sets of delta motion vectors: dMv_0 = {(0,0), (-1,0), (0,-1), (0,1), (1,0)} and dMv_1 = {(-1,-1), (-1,1), (1,1), (1,-1)}. The differences relative to the square search are illustrated below.
1-2. These steps are the same as in the square search.
3. For s_i, dMv and mv_0^(i), the search subroutine computes the corresponding template matching cost individually for each of the CPMV sets S = {mv_0^(i) + d * s_i, mv_1^(i), mv_2^(i)}, for all d ∈ dMv_0. This search subroutine can be expressed as d_0* = argmin_d {cost(S_0), cost(S_1), …, cost(S_4)}. Then, if d_0* is equal to (0,0), mv_0^(i)* is set equal to mv_0^(i). Otherwise, the subroutine computes the corresponding template matching costs for S = {mv_0^(i) + d * s_i, mv_1^(i), mv_2^(i)}, for all d ∈ dMv_1 ∪ {d_0*}, and its best delta motion vector is denoted d_1*. The result is mv_0^(i)* = mv_0^(i) + d_1* * s_i.
4. Similar to step 3, the search subroutine computes the corresponding template matching costs for S = {mv_0^(i)*, mv_1^(i) + d * s_i, mv_2^(i)}, for all d ∈ dMv_0, and, when necessary, for another S = {mv_0^(i)*, mv_1^(i) + d * s_i, mv_2^(i)}, for all d ∈ dMv_1 ∪ {d_0*}. The best search result is denoted mv_1^(i)* = mv_1^(i) + d_1* * s_i (if d_0* ≠ (0,0)) or mv_1^(i) (if d_0* = (0,0)).
5. Similar to step 3, the search subroutine computes the corresponding template matching costs for S = {mv_0^(i)*, mv_1^(i)*, mv_2^(i) + d * s_i}, for all d ∈ dMv_0, and, when necessary, for another S = {mv_0^(i)*, mv_1^(i)*, mv_2^(i) + d * s_i}, for all d ∈ dMv_1 ∪ {d_0*}. The best search result is denoted mv_2^(i)* = mv_2^(i) + d_1* * s_i (if d_0* ≠ (0,0)) or mv_2^(i) (if d_0* = (0,0)).
6-7. These steps are the same as in the square search.

The video encoder 200 and the video decoder 300 may be configured to perform a gradient-based search, which updates all the CPMVs at the same time. Given initial CPMVs {mv_0^(0), mv_1^(0), mv_2^(0)}, the CPMVs are used to generate a reference template block, which is used to compute sample-domain gradient values in the horizontal and vertical directions and a prediction residual (that is, the delta between the current template block and the reference template block). These values are then used in the gradient-based search to update the given CPMVs. The new CPMVs (denoted {mv_0^(1), mv_1^(1), mv_2^(1)}) are then used as the input of another iteration of the gradient-based search. The iterative process may terminate when a condition is met. The condition may be, for example, that the number of iterations exceeds a predefined (or signaled) threshold, or that the CPMVs do not change between two iterations.

The video encoder 200 and the video decoder 300 may be configured to perform a translational model search. When all the CPMVs happen to be identical before, during, or after the above search processes are applied, all the search processes of AffTM terminate, and one of the best CPMVs (e.g., an arbitrary one of the CPMVs, since they are all identical) is used in the regular block-based template matching, as described above for template matching prediction, as the initial search point for its further motion vector refinement.

The video encoder 200 and the video decoder 300 may be configured to compute the template matching cost. The template matching cost may be defined (or signaled) as one of the following metrics: sum of absolute differences (SAD), sum of absolute transformed differences (SATD), sum of squared errors (SSE), mean-removed SAD (MRSAD), or mean-removed SATD (MRSATD). MRSAD may be used conditionally if illumination compensation is used for the current processing block.
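Illustrative reference implementations of three of the listed metrics are sketched below on flat lists of template samples; SATD and MRSATD would additionally apply a small transform (e.g., a Hadamard transform) to the difference block first, which is omitted here.

```python
# Sketch of SAD, SSE and MRSAD between a current template `cur` and a
# reference template `ref` (equal-length sample lists).

def sad(cur, ref):
    return sum(abs(c - p) for c, p in zip(cur, ref))

def sse(cur, ref):
    return sum((c - p) ** 2 for c, p in zip(cur, ref))

def mrsad(cur, ref):
    # Mean-removed SAD: subtract the mean difference first, making the cost
    # insensitive to a uniform illumination offset between the templates.
    delta = (sum(ref) - sum(cur)) / len(cur)
    return sum(abs(c - (p - delta)) for c, p in zip(cur, ref))
```

This is why MRSAD pairs naturally with illumination compensation: a constant brightness shift between templates contributes nothing to the cost.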

In another example, the video encoder 200 and the video decoder 300 may assign a per-sample weight value to each sample on the template block. For example, for a W×H template block, the per-sample weight value may be expressed as N * w_(x,y) and applied to the corresponding samples c_(x,y) and p_(x,y) of the current block template and the reference block template, where N may be a positive integer (e.g., 1, 2, 3, 4, 5, and so on). For simplification, the template matching cost may be defined as:
N^(-1) * Σ_(x,y ∈ template) (N * w_(x,y) * |c_(x,y) - p_(x,y)|)
or
Σ_(x,y ∈ template) (N * w_(x,y) * |c_(x,y) - p_(x,y)|)

When local illumination compensation (LIC) or MRSAD is used, for simplification, the equations may be:
N^(-1) * Σ_(x,y ∈ template) (N * w_(x,y) * |c_(x,y) - p_(x,y) - Δ_(x,y)|)
or
Σ_(x,y ∈ template) (N * w_(x,y) * |c_(x,y) - p_(x,y) - Δ_(x,y)|)
In these equations, Δ_(x,y) is the mean of p_(x,y) minus the mean of c_(x,y) (in short, mean(p_(x,y)) - mean(c_(x,y))). Since the assignment of weight values for the left template is the transpose of the assignment of weight values for the above template, only the assignment of weight values for the above template needs to be determined.
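The weighted cost above can be sketched as follows. This is a minimal sketch with illustrative names; the mean term is applied here with the sign under which a uniform illumination offset between the two templates cancels (the usual MRSAD/LIC behavior).

```python
# Sketch of the per-sample-weighted template matching cost; `cur`, `ref`
# and `w` are equally sized 2-D lists (template samples and weights), and
# `n` is the positive integer scale N from the text.

def weighted_tm_cost(cur, ref, w, n=4, mean_removed=False):
    flat_c = [v for row in cur for v in row]
    flat_p = [v for row in ref for v in row]
    delta = 0.0
    if mean_removed:
        # Mean term chosen so that (c - p - delta) is the mean-removed
        # difference (c - mean(c)) - (p - mean(p)).
        delta = sum(flat_c) / len(flat_c) - sum(flat_p) / len(flat_p)
    total = 0.0
    for row_c, row_p, row_w in zip(cur, ref, w):
        for c, p, wt in zip(row_c, row_p, row_w):
            total += n * wt * abs(c - p - delta)
    return total / n  # the N^(-1) normalization of the first form
```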

In another example, the per-sample weight values may be region-based, where the template block is split equally into 16 regions and the template samples within a region share a single weight value.

FIG. 10 is a conceptual diagram illustrating examples of per-sample weights that may be assigned to samples of neighboring blocks to compute the template matching cost. In some examples, the video encoder 200 and the video decoder 300 may assign larger weight values to regions closer to the current block and/or smaller weight values to regions closer to the top-left corner of the current block. FIG. 10 illustrates two examples. In both examples, for current CUs 198A and 198B, the video encoder 200 and the video decoder 300 may assign larger weight values to the regions closer to the current block, while in the example of 198A, the video encoder 200 and the video decoder 300 additionally lower the weight values of the regions closer to the top-left corner of the current block.

In another example, the above metrics may be added in a weighted manner, as described above for template matching prediction, where the delta motion vectors of all the CPMVs are derived via AffTM.

The video encoder 200 and the video decoder 300 may be configured to perform a bi-prediction search process. In some examples, the video encoder 200 and the video decoder 300 may be configured to refine, via AffTM, the CPMVs corresponding to each reference picture list of the bi-predicted block, respectively.

In some examples, the video encoder 200 and the video decoder 300 may be configured to first refine, using AffTM, the CPMVs corresponding to each reference picture list of the bi-predicted block, respectively. The video encoder 200 and the video decoder 300 may then additionally refine the CPMVs corresponding to one reference picture list, with the CPMVs corresponding to the other reference picture list taken as a prior. For example, the video encoder 200 and the video decoder 300 may select the bi-prediction weight w and the CPMVs of L1 to be refined further. First, the current template block used during the refinement becomes a weighted delta between the original current template block C and the reference template block R_0 corresponding to L0:
C' = (C - (1 - w) * R_0) / w

This subtraction process is also referred to as high frequency removal, and C' is used in the same way as the current template block used during the L1 CPMV search process. Note that such high frequency removal may be performed in another way, namely C' = (C - w * R_0) / (1 - w).
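The intent of the identity above can be checked numerically: if the current template C is (approximately) the bi-prediction w*R_1 + (1-w)*R_0, then C' = (C - (1-w)*R_0) / w recovers R_1, i.e., the ideal target for the L1-side search. The sample values below are illustrative.

```python
# Quick numeric check of high frequency removal on 1-D template samples.

def high_freq_removal(c, r0, w):
    """C' = (C - (1 - w) * R0) / w, element-wise."""
    return [(cv - (1.0 - w) * r0v) / w for cv, r0v in zip(c, r0)]

r0 = [100.0, 102.0, 98.0, 101.0]   # L0 reference template
r1 = [110.0, 108.0, 112.0, 109.0]  # L1 reference template
w = 0.5                            # bi-prediction weight on L1
c = [w * b + (1.0 - w) * a for a, b in zip(r0, r1)]  # bi-predicted template
c_prime = high_freq_removal(c, r0, w)                # recovers r1
```

The alternative form C' = (C - w*R_0) / (1 - w) is the symmetric case in which the roles of the two weights are swapped.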

In some examples, the video encoder 200 and the video decoder 300 may be configured to apply high frequency removal when the CPMVs of a reference picture list Lx are to be refined, where x may be 0 or 1. After AffTM is performed on the CPMVs of Lx, high frequency removal is applied based on the CPMVs of Lx, and AffTM may then be performed on the CPMVs of the other reference picture list. This iterative process terminates when none of the CPMVs changes during the search process of AffTM.

In some examples, the video encoder 200 and the video decoder 300 may be configured to first apply high frequency removal to the CPMVs of Lx according to the BCW weight values. Rules may apply. A rule may be, for example: when the BCW weight of L0 is larger, the CPMVs of L0 are refined first, or when the BCW weight of L0 is smaller, the CPMVs of L0 are refined first.

In some examples, the video encoder 200 and the video decoder 300 may be configured to first apply high frequency removal to the CPMVs of Lx according to ph_mvd_l1_zero_flag (which indicates that the MVD of the L1 CPMVs is always zero, and which may be named differently across video coding standards). Rules may apply. A rule may be, for example: when the flag is true, the CPMVs of L0 are refined first, or when the flag is false, the CPMVs of L0 are refined first.

In some examples, the video encoder 200 and the video decoder 300 may be configured to first refine the CPMVs of the reference picture list Lx whose template matching cost is higher than the template matching cost of the other reference picture list, as described in the above examples.

In some examples, the video encoder 200 and the video decoder 300 may be configured to first refine the CPMVs of the reference picture list Lx that achieve a template matching cost higher than the template matching cost of the other reference picture list, after the cost based on the initial CPMVs is computed separately for each reference picture list.

In some examples, the video encoder 200 and the video decoder 300 may be configured to convert bi-prediction CPMVs into uni-prediction CPMVs. After AffTM is performed, and before high frequency removal is applied, there should be two template matching cost values, namely cost_0 for the CPMVs corresponding to L0 and cost_1 for the CPMVs corresponding to L1. A third cost value comes from the cost value produced by AffTM after high frequency removal is applied. If the third cost value is higher than one of the other two cost values, the CPMVs corresponding to reference list L0 or L1 are discarded, depending on which of cost_0 and cost_1 is larger.
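The decision logic above can be sketched as a small comparison; the function name and return convention are illustrative assumptions, not the patent's exact formulation.

```python
# Sketch of the bi-to-uni fallback: keep bi-prediction only when the
# post-high-frequency-removal cost is no worse than both per-list costs;
# otherwise discard the CPMVs of the list whose cost is larger.

def choose_prediction(cost0, cost1, cost_bi):
    """Return 'bi', 'L0' (L1 CPMVs discarded) or 'L1' (L0 CPMVs discarded)."""
    if cost_bi <= min(cost0, cost1):
        return "bi"
    return "L1" if cost0 > cost1 else "L0"
```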

The video encoder 200 and the video decoder 300 may be configured to perform a model conversion from a 4-parameter model to a 6-parameter model. An affine model may be converted from a 4-parameter affine model to a 6-parameter affine model. Using the coordinate position of the bottom-left corner on the current block (that is, (0, block height)), the CPMV of the bottom-left corner can be computed based on equation (2-1). The motion model of the current block is then treated as a 6-parameter affine model for AffTM.
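Since equation (2-1) is not reproduced in this excerpt, the conversion can be sketched under the assumption of the standard 4-parameter affine motion field (as in VVC), evaluated at (x, y) = (0, block height); all names are illustrative.

```python
# Sketch: derive the bottom-left CPMV so a 4-parameter affine model can be
# treated as a 6-parameter model. Assumes the standard VVC-style
# 4-parameter field mv(x, y) = (a*x - b*y + mv0x, b*x + a*y + mv0y).

def bottom_left_cpmv(mv0, mv1, width, height):
    """mv0/mv1: top-left and top-right CPMVs as (mvx, mvy) tuples."""
    a = (mv1[0] - mv0[0]) / width   # zoom/rotation terms derived from the
    b = (mv1[1] - mv0[1]) / width   # two known CPMVs
    mv2x = mv0[0] - b * height      # field evaluated at (0, height)
    mv2y = mv0[1] + a * height
    return (mv2x, mv2y)
```

A purely translational block (mv0 == mv1) yields mv2 == mv0, as expected, while any rotation component of the 4-parameter model is carried into the derived bottom-left CPMV.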

The video encoder 200 and the video decoder 300 may be configured to perform a model conversion from a translational model to an affine model. In some examples, in template matching merge mode, the previously described AffTM process may be applied on top of the regular block-based template matching, with the initial CPMVs all set equal to the translational MV produced via the template matching process. If the template matching cost of applying the additional AffTM process is less than the cost of the regular template matching, the CPMVs of the AffTM process are used for affine motion compensation of the current block instead of the translational motion model from the original template matching process.

In some examples, the conversion is applied only when neither bilateral matching (or decoder-side motion vector refinement (DMVR) in VVC) nor bi-directional optical flow (BDOF) is applied to the current block.

In some examples, the video decoder 300 may be configured to always use the 4-parameter affine model as the target conversion model. In some examples, the video decoder 300 may be configured to use the 6-parameter affine model as the target conversion model. In some examples, the video decoder 300 may determine the final motion model by minimizing the template matching cost.

FIG. 11 is a block diagram illustrating an example video encoder 200 that may perform the techniques of this disclosure. FIG. 11 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes the video encoder 200 according to the techniques of VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265). However, the techniques of this disclosure may be performed by video encoding devices that are configured to other video coding standards.

In the example of FIG. 11, the video encoder 200 includes a video data memory 230, a mode selection unit 202, a residual generation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a filter unit 216, a decoded picture buffer (DPB) 218, and an entropy encoding unit 220. Any or all of the video data memory 230, the mode selection unit 202, the residual generation unit 204, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the filter unit 216, the DPB 218, and the entropy encoding unit 220 may be implemented in one or more processors or in processing circuitry. For instance, the units of the video encoder 200 may be implemented as one or more circuits or logic elements, as part of a hardware circuit, or as part of a processor, ASIC, or FPGA. Moreover, the video encoder 200 may include additional or alternative processors or processing circuitry to perform these and other functions.

The video data memory 230 may store video data to be encoded by the components of the video encoder 200. The video encoder 200 may receive the video data stored in the video data memory 230 from, for example, the video source 104 (FIG. 1). The DPB 218 may act as a reference picture memory that stores reference video data for use in the prediction of subsequent video data by the video encoder 200. The video data memory 230 and the DPB 218 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. The video data memory 230 and the DPB 218 may be provided by the same memory device or separate memory devices. In various examples, the video data memory 230 may be on-chip with other components of the video encoder 200, as illustrated, or off-chip relative to those components.

In this disclosure, references to the video data memory 230 should not be interpreted as being limited to memory internal to the video encoder 200, unless specifically described as such, or to memory external to the video encoder 200, unless specifically described as such. Rather, a reference to the video data memory 230 should be understood as a reference memory that stores video data that the video encoder 200 receives for encoding (e.g., video data for a current block that is to be encoded). The memory 106 of FIG. 1 may also provide temporary storage of outputs from the various units of the video encoder 200.

The various units of FIG. 11 are illustrated to assist with understanding the operations performed by the video encoder 200. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

Video encoder 200 may include arithmetic logic units (ALUs), elementary function units (EFUs), digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of video encoder 200 are performed using software executed by the programmable circuits, memory 106 (FIG. 1) may store the instructions (e.g., object code) of the software that video encoder 200 receives and executes, or another memory within video encoder 200 (not shown) may store such instructions.

Video data memory 230 is configured to store received video data. Video encoder 200 may retrieve a picture of the video data from video data memory 230 and provide the video data to residual generation unit 204 and mode selection unit 202. The video data in video data memory 230 may be raw video data that is to be encoded.

Mode selection unit 202 includes a motion estimation unit 222, a motion compensation unit 224, and an intra-prediction unit 226. Mode selection unit 202 may include additional functional units to perform video prediction in accordance with other prediction modes. As examples, mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of motion estimation unit 222 and/or motion compensation unit 224), an affine unit, a linear model (LM) unit, or the like.

Mode selection unit 202 generally coordinates multiple encoding passes to test combinations of encoding parameters and the resulting rate-distortion values for such combinations. The encoding parameters may include partitioning of CTUs into CUs, prediction modes for the CUs, transform types for residual data of the CUs, quantization parameters for residual data of the CUs, and so on. Mode selection unit 202 may ultimately select the combination of encoding parameters having rate-distortion values that are better than those of the other tested combinations.

Video encoder 200 may partition a picture retrieved from video data memory 230 into a series of CTUs, and encapsulate one or more CTUs within a slice. Mode selection unit 202 may partition the CTUs of the picture in accordance with a tree structure, such as the QTBT structure described above or the quadtree structure of HEVC. As described above, video encoder 200 may form one or more CUs by partitioning a CTU according to the tree structure. Such a CU may also generally be referred to as a "video block" or "block."

In general, mode selection unit 202 also controls its components (e.g., motion estimation unit 222, motion compensation unit 224, and intra-prediction unit 226) to generate a prediction block for a current block (e.g., a current CU or, in HEVC, the overlapping portion of a PU and a TU). For inter-prediction of a current block, motion estimation unit 222 may perform a motion search to identify one or more closely matching reference blocks in one or more reference pictures (e.g., one or more previously coded pictures stored in DPB 218). In particular, motion estimation unit 222 may calculate a value representative of how similar a potential reference block would be to the current block, e.g., according to sum of absolute differences (SAD), sum of squared differences (SSD), mean absolute difference (MAD), mean squared differences (MSD), or the like. Motion estimation unit 222 may generally perform these calculations using sample-by-sample differences between the current block and the reference block being considered. Motion estimation unit 222 may identify the reference block having the lowest value resulting from these calculations, indicating the reference block that most closely matches the current block.
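The cost measures named above can be sketched in a few lines. This is an illustrative sketch only; the function names (`sad`, `ssd`, `best_match`) are hypothetical and blocks are represented as flat lists of samples rather than the 2D arrays a real implementation would use:

```python
def sad(cur, ref):
    # Sum of absolute differences between co-located samples.
    return sum(abs(c - r) for c, r in zip(cur, ref))

def ssd(cur, ref):
    # Sum of squared differences; penalizes large errors more heavily than SAD.
    return sum((c - r) ** 2 for c, r in zip(cur, ref))

def best_match(cur, candidates):
    # Return the index of the candidate reference block with the lowest SAD,
    # i.e., the candidate that most closely matches the current block.
    costs = [sad(cur, ref) for ref in candidates]
    return costs.index(min(costs))
```

A motion search would evaluate such a cost for every candidate displacement inside the search window and keep the displacement with the minimum cost.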

Motion estimation unit 222 may form one or more motion vectors (MVs) that define the position of a reference block in a reference picture relative to the position of the current block in the current picture. Motion estimation unit 222 may then provide the motion vectors to motion compensation unit 224. For example, for uni-directional inter-prediction, motion estimation unit 222 may provide a single motion vector, whereas for bi-directional inter-prediction, motion estimation unit 222 may provide two motion vectors. Motion compensation unit 224 may then generate a prediction block using the motion vectors. For example, motion compensation unit 224 may retrieve data of the reference block using the motion vector. As another example, if the motion vector has fractional sample precision, motion compensation unit 224 may interpolate values for the prediction block according to one or more interpolation filters. Moreover, for bi-directional inter-prediction, motion compensation unit 224 may retrieve data for the two reference blocks identified by the respective motion vectors and combine the retrieved data, e.g., through sample-by-sample averaging or weighted averaging.
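The sample-by-sample combination of the two reference blocks in bi-directional prediction can be sketched as below. The helper name and the integer rounding convention are illustrative assumptions, not the exact arithmetic of any particular standard:

```python
def combine_biprediction(ref0, ref1, w0=1, w1=1):
    # Combine two motion-compensated reference blocks sample by sample.
    # With w0 == w1 this is a plain average; unequal weights give a
    # weighted average, with rounding toward the nearest integer.
    total = w0 + w1
    offset = total // 2
    return [(w0 * a + w1 * b + offset) // total for a, b in zip(ref0, ref1)]
```

For example, equal weights average the two references, while weights like (3, 1) bias the prediction toward the first reference block.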

In accordance with the techniques described herein, motion estimation unit 222 and motion compensation unit 224 may be configured to encode and decode blocks of video data using an affine prediction mode. In addition, motion estimation unit 222 and motion compensation unit 224 may be configured to perform the motion vector refinement process described herein.

As another example, for intra-prediction or intra-prediction coding, intra-prediction unit 226 may generate the prediction block from samples neighboring the current block. For example, for directional modes, intra-prediction unit 226 may generally mathematically combine values of neighboring samples and populate these calculated values in the defined direction across the current block to produce the prediction block. As another example, for the DC mode, intra-prediction unit 226 may calculate an average of the samples neighboring the current block and generate the prediction block to include this resulting average for each sample of the prediction block.
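The DC mode described above can be sketched as follows; this is a simplified illustration with a hypothetical function name, ignoring the reference-sample availability checks and boundary filtering a real intra predictor performs:

```python
def dc_prediction(left, above, width, height):
    # DC mode: every sample of the prediction block is set to the (rounded)
    # average of the reconstructed neighboring reference samples.
    neighbors = left + above
    dc = (sum(neighbors) + len(neighbors) // 2) // len(neighbors)
    return [[dc] * width for _ in range(height)]
```

Directional modes differ only in how the neighbor values are propagated: instead of one average, each prediction sample copies (or interpolates between) the neighbors that lie along the mode's direction.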

Mode selection unit 202 provides the prediction block to residual generation unit 204. Residual generation unit 204 receives a raw, uncoded version of the current block from video data memory 230 and the prediction block from mode selection unit 202. Residual generation unit 204 calculates sample-by-sample differences between the current block and the prediction block. The resulting sample-by-sample differences define a residual block for the current block. In some examples, residual generation unit 204 may determine differences between sample values in the residual block to generate the residual block using residual differential pulse code modulation (RDPCM). In some examples, residual generation unit 204 may be formed using one or more subtractor circuits that perform binary subtraction.
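The residual computation is a plain element-wise subtraction; a minimal sketch (hypothetical helper name, blocks as flat lists):

```python
def residual_block(current, prediction):
    # Sample-by-sample difference between the current block and its
    # prediction; this difference is what is subsequently transformed
    # and quantized.
    return [c - p for c, p in zip(current, prediction)]
```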

In examples where mode selection unit 202 partitions CUs into PUs, each PU may be associated with a luma prediction unit and corresponding chroma prediction units. Video encoder 200 and video decoder 300 may support PUs having various sizes. As indicated above, the size of a CU may refer to the size of the luma coding block of the CU, and the size of a PU may refer to the size of a luma prediction unit of the PU. Assuming that the size of a particular CU is 2Nx2N, video encoder 200 may support PU sizes of 2Nx2N or NxN for intra-prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter-prediction. Video encoder 200 and video decoder 300 may also support asymmetric partitioning for PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter-prediction.

In examples where mode selection unit 202 does not further partition a CU into PUs, each CU may be associated with a luma coding block and corresponding chroma coding blocks. As above, the size of a CU may refer to the size of the luma coding block of the CU. Video encoder 200 and video decoder 300 may support CU sizes of 2Nx2N, 2NxN, or Nx2N.

For other video coding techniques, such as intra-block copy mode coding, affine-mode coding, and linear model (LM) mode coding, to provide a few examples, mode selection unit 202 generates a prediction block for the current block being encoded via the respective units associated with those coding techniques. In some examples, such as palette mode coding, mode selection unit 202 may not generate a prediction block, and instead may generate syntax elements that indicate the manner in which to reconstruct the block based on a selected palette. In such modes, mode selection unit 202 may provide these syntax elements to entropy encoding unit 220 to be encoded.

As described above, residual generation unit 204 receives the video data for the current block and the corresponding prediction block. Residual generation unit 204 then generates a residual block for the current block. To generate the residual block, residual generation unit 204 calculates sample-by-sample differences between the prediction block and the current block.

Transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to herein as a "transform coefficient block"). Transform processing unit 206 may apply various transforms to a residual block to form the transform coefficient block. For example, transform processing unit 206 may apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to a residual block. In some examples, transform processing unit 206 may perform multiple transforms on a residual block, e.g., a primary transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to a residual block.
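A floating-point sketch of a separable 2D DCT-II, for illustration only: real codecs such as HEVC/VVC use integer approximations of these basis functions, and the function names here are hypothetical:

```python
import math

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix C, one basis vector per row.
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def dct2d(block):
    # Separable 2D transform: apply the 1D DCT along rows, then along
    # columns (coeffs = C * block * C^T).
    n = len(block)
    c = dct_matrix(n)
    rows = [[sum(c[k][i] * row[i] for i in range(n)) for k in range(n)]
            for row in block]
    return [[sum(c[k][j] * rows[j][u] for j in range(n)) for u in range(n)]
            for k in range(n)]
```

For a flat residual block the energy compacts entirely into the DC coefficient, which is why transform coding followed by quantization is effective on smooth residuals.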

Quantization unit 208 may quantize the transform coefficients in a transform coefficient block to produce a quantized transform coefficient block. Quantization unit 208 may quantize transform coefficients of a transform coefficient block according to a quantization parameter (QP) value associated with the current block. Video encoder 200 (e.g., via mode selection unit 202) may adjust the degree of quantization applied to the transform coefficient blocks associated with the current block by adjusting the QP value associated with the CU. Quantization may introduce loss of information, and thus quantized transform coefficients may have lower precision than the original transform coefficients produced by transform processing unit 206.
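A simplified scalar quantizer illustrating the QP relationship: the step size here roughly doubles every 6 QP values (an HEVC-style convention), but the function names and the floating-point arithmetic are illustrative assumptions; real codecs use integer scaling tables:

```python
def qstep(qp):
    # Quantization step size; doubles approximately every 6 QP values.
    return 2 ** ((qp - 4) / 6.0)

def quantize(coeffs, qp):
    # Round-to-nearest scalar quantization of each transform coefficient.
    step = qstep(qp)
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, qp):
    # Inverse quantization scales levels back up; the precision lost to
    # rounding is not recovered, which is the source of quantization loss.
    step = qstep(qp)
    return [l * step for l in levels]
```

Note how the small coefficient below quantizes to zero and cannot be recovered after dequantization, illustrating the information loss the paragraph above describes.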

Inverse quantization unit 210 and inverse transform processing unit 212 may apply inverse quantization and inverse transforms, respectively, to a quantized transform coefficient block to reconstruct a residual block from the transform coefficient block. Reconstruction unit 214 may produce a reconstructed block corresponding to the current block (albeit potentially with some degree of distortion) based on the reconstructed residual block and the prediction block generated by mode selection unit 202. For example, reconstruction unit 214 may add samples of the reconstructed residual block to corresponding samples from the prediction block generated by mode selection unit 202 to produce the reconstructed block.
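The reconstruction step is the inverse of residual generation, with clipping to the valid sample range; a minimal sketch under the assumption of 8-bit samples and flat-list blocks (names hypothetical):

```python
def reconstruct(prediction, residual, bit_depth=8):
    # Add each reconstructed residual sample to the co-located prediction
    # sample, clipping to the valid range [0, 2^bit_depth - 1].
    max_val = (1 << bit_depth) - 1
    return [min(max(p + r, 0), max_val) for p, r in zip(prediction, residual)]
```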

Filter unit 216 may perform one or more filter operations on reconstructed blocks. For example, filter unit 216 may perform deblocking operations to reduce blockiness artifacts along edges of CUs. In some examples, operations of filter unit 216 may be skipped.

Video encoder 200 stores reconstructed blocks in DPB 218. For instance, in examples where operations of filter unit 216 are not performed, reconstruction unit 214 may store reconstructed blocks to DPB 218. In examples where operations of filter unit 216 are performed, filter unit 216 may store the filtered reconstructed blocks to DPB 218. Motion estimation unit 222 and motion compensation unit 224 may retrieve from DPB 218 a reference picture, formed from the reconstructed (and potentially filtered) blocks, to inter-predict blocks of subsequently encoded pictures. In addition, intra-prediction unit 226 may use reconstructed blocks of the current picture in DPB 218 to intra-predict other blocks in the current picture.

In general, entropy encoding unit 220 may entropy encode syntax elements received from other functional components of video encoder 200. For example, entropy encoding unit 220 may entropy encode quantized transform coefficient blocks from quantization unit 208. As another example, entropy encoding unit 220 may entropy encode prediction syntax elements (e.g., motion information for inter-prediction or intra-mode information for intra-prediction) from mode selection unit 202. Entropy encoding unit 220 may perform one or more entropy encoding operations on the syntax elements, which are another example of video data, to generate entropy-encoded data. For example, entropy encoding unit 220 may perform a context-adaptive variable length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a probability interval partitioning entropy (PIPE) coding operation, an exponential-Golomb encoding operation, or another type of entropy encoding operation on the data. In some examples, entropy encoding unit 220 may operate in a bypass mode where syntax elements are not entropy encoded.

Video encoder 200 may output a bitstream that includes the entropy-encoded syntax elements needed to reconstruct blocks of a slice or picture. In particular, entropy encoding unit 220 may output the bitstream.

The operations described above are described with respect to a block. Such description should be understood as being operations for a luma coding block and/or chroma coding blocks. As described above, in some examples, the luma coding block and chroma coding blocks are the luma and chroma components of a CU. In some examples, the luma coding block and the chroma coding blocks are the luma and chroma components of a PU.

In some examples, operations performed with respect to a luma coding block need not be repeated for the chroma coding blocks. As one example, the operations performed to identify a motion vector (MV) and reference picture for a luma coding block need not be repeated to identify an MV and reference picture for the chroma blocks. Rather, the MV for the luma coding block may be scaled to determine the MV for the chroma blocks, and the reference picture may be the same. As another example, the intra-prediction process may be the same for the luma coding block and the chroma coding blocks.
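The luma-to-chroma MV scaling can be illustrated with a deliberately simplified sketch. This assumes 4:2:0 subsampling and an MV stored in quarter-luma-sample units, and it returns only the integer chroma-sample displacement; the function name, unit conventions, and the truncation of the fractional part are all illustrative assumptions, not the derivation any specific standard mandates:

```python
def scale_mv_for_chroma(luma_mv_qpel, subsample=2):
    # luma_mv_qpel: (mvx, mvy) in quarter-luma-sample units.
    # With 4:2:0 content (subsample=2), a chroma block covers half the
    # luma width/height, so the same physical displacement corresponds
    # to half as many chroma samples. Here we convert to whole chroma
    # samples by dividing by 4 (quarter-sample units) * subsample.
    mvx, mvy = luma_mv_qpel
    return (mvx // (4 * subsample), mvy // (4 * subsample))
```

In practice codecs keep the fractional part as well (effectively interpreting the same integer MV at finer chroma precision), which is what makes sharing the luma MV cheap.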

FIG. 12 is a block diagram illustrating an example video decoder 300 that may perform the techniques of this disclosure. FIG. 12 is provided for purposes of explanation and is not limiting on the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video decoder 300 according to the techniques of VVC (ITU-T H.266, under development) and HEVC (ITU-T H.265). However, the techniques of this disclosure may be performed by video coding devices that are configured to other video coding standards.

In the example of FIG. 12, video decoder 300 includes coded picture buffer (CPB) memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and decoded picture buffer (DPB) 314. Any or all of CPB memory 320, entropy decoding unit 302, prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, filter unit 312, and DPB 314 may be implemented in one or more processors or in processing circuitry. For instance, the units of video decoder 300 may be implemented as one or more circuits or logic elements as part of hardware circuitry, or as part of a processor, ASIC, or FPGA. Moreover, video decoder 300 may include additional or alternative processors or processing circuitry to perform these and other functions.

Prediction processing unit 304 includes motion compensation unit 316 and intra-prediction unit 318. Prediction processing unit 304 may include additional units to perform prediction in accordance with other prediction modes. As examples, prediction processing unit 304 may include a palette unit, an intra-block copy unit (which may form part of motion compensation unit 316), an affine unit, a linear model (LM) unit, or the like. In other examples, video decoder 300 may include more, fewer, or different functional components.

CPB memory 320 may store video data, such as an encoded video bitstream, to be decoded by the components of video decoder 300. The video data stored in CPB memory 320 may be obtained, for example, from computer-readable medium 110 (FIG. 1). CPB memory 320 may include a CPB that stores encoded video data (e.g., syntax elements) from an encoded video bitstream. Also, CPB memory 320 may store video data other than syntax elements of a coded picture, such as temporary data representing outputs from the various units of video decoder 300. DPB 314 generally stores decoded pictures, which video decoder 300 may output and/or use as reference video data when decoding subsequent data or pictures of the encoded video bitstream. CPB memory 320 and DPB 314 may be formed by any of a variety of memory devices, such as DRAM, including SDRAM, MRAM, RRAM, or other types of memory devices. CPB memory 320 and DPB 314 may be provided by the same memory device or by separate memory devices. In various examples, CPB memory 320 may be on-chip with other components of video decoder 300, or off-chip relative to those components.

Additionally or alternatively, in some examples, video decoder 300 may retrieve coded video data from memory 120 (FIG. 1). That is, memory 120 may store data as discussed above with respect to CPB memory 320. Likewise, memory 120 may store instructions to be executed by video decoder 300, when some or all of the functionality of video decoder 300 is implemented in software to be executed by processing circuitry of video decoder 300.

The various units shown in FIG. 12 are illustrated to assist with understanding the operations performed by video decoder 300. The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof. Similar to FIG. 11, fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For instance, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, one or more of the units may be integrated circuits.

Video decoder 300 may include ALUs, EFUs, digital circuits, analog circuits, and/or programmable cores formed from programmable circuits. In examples where the operations of video decoder 300 are performed by software executing on the programmable circuits, on-chip or off-chip memory may store instructions (e.g., object code) of the software that video decoder 300 receives and executes.

Entropy decoding unit 302 may receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. Prediction processing unit 304, inverse quantization unit 306, inverse transform processing unit 308, reconstruction unit 310, and filter unit 312 may generate decoded video data based on the syntax elements extracted from the bitstream.

In general, video decoder 300 reconstructs a picture on a block-by-block basis. Video decoder 300 may perform a reconstruction operation on each block individually (where the block currently being reconstructed, i.e., decoded, may be referred to as a "current block").

Entropy decoding unit 302 may entropy decode syntax elements defining quantized transform coefficients of a quantized transform coefficient block, as well as transform information, such as a quantization parameter (QP) and/or transform mode indication(s). Inverse quantization unit 306 may use the QP associated with the quantized transform coefficient block to determine a degree of quantization and, likewise, a degree of inverse quantization for inverse quantization unit 306 to apply. Inverse quantization unit 306 may, for example, perform a bitwise left-shift operation to inverse quantize the quantized transform coefficients. Inverse quantization unit 306 may thereby form a transform coefficient block including transform coefficients.
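The bitwise left-shift mentioned above corresponds to the special case where the quantization step is a power of two; a minimal sketch (hypothetical name, one shift amount for the whole block):

```python
def inverse_quantize_shift(levels, shift):
    # When the quantization step size is 2**shift, inverse quantization
    # reduces to a left shift of each quantized level's magnitude.
    return [level << shift if level >= 0 else -((-level) << shift)
            for level in levels]
```

In the general case the shift is replaced by multiplication with a QP-dependent scaling factor, as sketched for the encoder side above.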

After inverse quantization unit 306 forms the transform coefficient block, inverse transform processing unit 308 may apply one or more inverse transforms to the transform coefficient block to generate a residual block associated with the current block. For example, inverse transform processing unit 308 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the transform coefficient block.

Furthermore, prediction processing unit 304 generates a prediction block according to prediction information syntax elements that were entropy decoded by entropy decoding unit 302. For example, if the prediction information syntax elements indicate that the current block is inter-predicted, motion compensation unit 316 may generate the prediction block. In this case, the prediction information syntax elements may indicate a reference picture in DPB 314 from which to retrieve a reference block, as well as a motion vector identifying the location of the reference block in the reference picture relative to the location of the current block in the current picture. Motion compensation unit 316 may generally perform the inter-prediction process in a manner that is substantially similar to that described with respect to motion compensation unit 224 (FIG. 11). In accordance with the techniques described herein, motion compensation unit 316 may be configured to decode blocks of video data using an affine prediction mode, and may be configured to perform the motion vector refinement process described herein.

As another example, if the prediction information syntax elements indicate that the current block is intra-predicted, intra-prediction unit 318 may generate the prediction block according to an intra-prediction mode indicated by the prediction information syntax elements. Again, intra-prediction unit 318 may generally perform the intra-prediction process in a manner that is substantially similar to that described with respect to intra-prediction unit 226 (FIG. 11). Intra-prediction unit 318 may retrieve data of samples neighboring the current block from DPB 314.

Reconstruction unit 310 may reconstruct the current block using the prediction block and the residual block. For example, reconstruction unit 310 may add samples of the residual block to corresponding samples of the prediction block to reconstruct the current block.

Filter unit 312 may perform one or more filter operations on reconstructed blocks. For example, filter unit 312 may perform deblocking operations to reduce blockiness artifacts along edges of the reconstructed blocks. Operations of filter unit 312 are not necessarily performed in all examples.

Video decoder 300 may store the reconstructed blocks in DPB 314. For instance, in examples where operations of filter unit 312 are not performed, reconstruction unit 310 may store reconstructed blocks to DPB 314. In examples where operations of filter unit 312 are performed, filter unit 312 may store the filtered reconstructed blocks to DPB 314. As discussed above, DPB 314 may provide reference information, such as samples of a current picture for intra-prediction and previously decoded pictures for subsequent motion compensation, to prediction processing unit 304. Moreover, video decoder 300 may output decoded pictures (e.g., decoded video) from DPB 314 for subsequent presentation on a display device, such as display device 118 of FIG. 1.

FIG. 13 is a flowchart illustrating an example process for encoding a current block in accordance with the techniques of this disclosure. The current block may comprise a current CU. Although described with respect to video encoder 200 (FIGS. 1 and 11), it should be understood that other devices may be configured to perform a process similar to that of FIG. 13.

In this example, video encoder 200 initially predicts the current block (350). For example, video encoder 200 may use template-based affine prediction to form a prediction block for the current block, as described in this disclosure. Video encoder 200 may then calculate a residual block for the current block (352). To calculate the residual block, video encoder 200 may calculate a difference between the original, unencoded block and the prediction block for the current block. Video encoder 200 may then transform the residual block and quantize transform coefficients of the residual block (354). Next, video encoder 200 may scan the quantized transform coefficients of the residual block (356). During the scan, or following the scan, video encoder 200 may entropy encode the transform coefficients (358). For example, video encoder 200 may encode the transform coefficients using CAVLC or CABAC. Video encoder 200 may then output the entropy encoded data of the block (360).
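For purposes of illustration only, the quantization of (354) and the coefficient scan of (356) may be sketched as follows. The uniform quantizer and the zigzag scan order below are simplified assumptions, not the actual quantizer or coefficient scan of any particular video coding standard:

```python
import numpy as np

def quantize(coeffs, qstep):
    # Uniform quantization: map each transform coefficient to the
    # nearest integer multiple of the quantization step size.
    return np.round(coeffs / qstep).astype(np.int32)

def zigzag_scan(block):
    """Scan a 2-D coefficient block into a 1-D array along
    anti-diagonals, so low-frequency coefficients come first."""
    h, w = block.shape
    order = sorted(((r, c) for r in range(h) for c in range(w)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))
    return np.array([block[r, c] for r, c in order])
```

The entropy coder (CAVLC or CABAC) would then consume the scanned 1-D coefficient array.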

FIG. 14 is a flowchart illustrating an example process for decoding a current block of video data in accordance with the techniques of this disclosure. The current block may comprise a current CU. Although described with respect to video decoder 300 (FIGS. 1 and 12), it should be understood that other devices may be configured to perform a process similar to that of FIG. 14.

Video decoder 300 may receive entropy encoded data for the current block, such as entropy encoded prediction information and entropy encoded data for transform coefficients of a residual block corresponding to the current block (370). Video decoder 300 may entropy decode the entropy encoded data to determine prediction information for the current block and to reproduce the transform coefficients of the residual block (372). Video decoder 300 may predict the current block (374), e.g., using an intra or inter prediction mode as indicated by the prediction information for the current block, to calculate a prediction block for the current block. For example, video decoder 300 may use template-based affine prediction, as described in this disclosure, to predict the current block. Video decoder 300 may then inverse scan the reproduced transform coefficients (376) to create a block of quantized transform coefficients. Video decoder 300 may then inverse quantize the transform coefficients and apply an inverse transform to the transform coefficients to produce a residual block (378). Finally, video decoder 300 may decode the current block by combining the prediction block and the residual block (380).
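For purposes of illustration only, the inverse scan of (376) and the inverse quantization of (378) may be sketched as mirrors of the corresponding encoder-side operations. The zigzag order and uniform quantizer below are simplified assumptions, not the routines of any particular video coding standard:

```python
import numpy as np

def inverse_zigzag(scanned, h, w):
    """Reorder a 1-D coefficient array produced by an anti-diagonal
    (zigzag) scan back into an h x w block of quantized coefficients."""
    order = sorted(((r, c) for r in range(h) for c in range(w)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))
    block = np.zeros((h, w), dtype=np.int64)
    for value, (r, c) in zip(scanned, order):
        block[r, c] = value
    return block

def dequantize(levels, qstep):
    # Inverse of uniform quantization: scale levels back by the step size.
    return levels * qstep
```

The dequantized coefficients would then be fed to the inverse transform to produce the residual block.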

FIG. 15 is a flowchart illustrating an example process for decoding a current block of video data in accordance with the techniques of this disclosure. The current block may comprise a current CU. Although described with respect to video decoder 300 (FIGS. 1 and 12), it should be understood that other devices may be configured to perform a process similar to that of FIG. 15.

Video decoder 300 may determine that a current block in a current picture of the video data is coded in an affine prediction mode (400). The affine prediction mode may, for example, be a 4-parameter affine prediction mode, a 6-parameter affine prediction mode, or some other such affine prediction mode.

Video decoder 300 may determine one or more control point motion vectors (CPMVs) for the current block (402). Video decoder 300 may use the one or more CPMVs to identify an initial prediction block for the current block in a reference picture (404). To identify the initial prediction block for the current block, video decoder 300 may, for example, use the CPMVs to locate a plurality of sub-blocks in the reference frame.
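For purposes of illustration only, a 4-parameter affine model derives the motion vector at a position (x, y) inside the current block, typically evaluated at the center of each sub-block, from the top-left and top-right CPMVs. The sketch below follows the commonly used 4-parameter formulation; the function name and the floating-point arithmetic are assumptions:

```python
def affine_subblock_mv(cpmv0, cpmv1, block_w, x, y):
    """Derive the motion vector at position (x, y) of a block of
    width block_w under the 4-parameter affine model, where cpmv0 is
    the top-left control point motion vector and cpmv1 is the
    top-right control point motion vector (each an (mvx, mvy) pair)."""
    a = (cpmv1[0] - cpmv0[0]) / block_w  # scale component
    b = (cpmv1[1] - cpmv0[1]) / block_w  # rotation component
    mvx = a * x - b * y + cpmv0[0]
    mvy = b * x + a * y + cpmv0[1]
    return (mvx, mvy)
```

Each derived sub-block motion vector then locates one sub-block of the initial prediction block in the reference picture.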

Video decoder 300 may determine a current template for the current block in the current picture (406). The current template may include a plurality of sub-blocks located above or to the left of the current block, e.g., as shown in FIG. 9A.

Video decoder 300 may determine an initial reference template for the initial prediction block in the reference picture (408). The initial reference template may include a plurality of sub-blocks located above or to the left of the initial prediction block, e.g., as shown in FIGS. 9B and 9C.

Video decoder 300 may perform a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block (410). To perform the motion vector refinement process to determine the modified prediction block, video decoder 300 may, for example, search, within a search area around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template. For example, the comparison of the current template to the initial reference template may comprise a template matching cost, and video decoder 300 may determine the template matching cost based on a weighted per-sample comparison of samples in the current template to samples in the initial reference template.
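For purposes of illustration only, the weighted per-sample comparison used as a template matching cost may be sketched as a weighted sum of absolute differences. The particular weighting scheme (e.g., emphasizing samples nearer the block boundary) is an assumption; this disclosure only requires some weighted per-sample comparison:

```python
import numpy as np

def template_matching_cost(cur_template, ref_template, weights=None):
    """Weighted sum of absolute differences between samples of the
    current template and samples of a candidate reference template.
    With weights of all ones this reduces to a plain SAD."""
    cur = cur_template.astype(np.int64)
    ref = ref_template.astype(np.int64)
    if weights is None:
        weights = np.ones_like(cur)
    return int(np.sum(weights * np.abs(cur - ref)))
```

During the refinement search, video decoder 300 would evaluate such a cost for each candidate reference template in the search area and keep the candidate with the lowest cost.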

Video decoder 300 may further determine a prediction block based on the modified prediction block; add the prediction block to a residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data that includes the filtered reconstructed block.

The following numbered clauses illustrate one or more aspects of the devices and techniques described in this disclosure.

Clause 1A. A method of decoding video data, the method comprising: determining one or more control point motion vectors (CPMVs) for a current block, wherein the one or more CPMVs correspond to an initial prediction block for the current block; and performing a motion vector refinement process to determine a modified prediction block.

Clause 2A. The method of clause 1A, wherein the motion vector refinement process comprises performing a template matching process.

Clause 3A. The method of clause 2A, wherein the one or more CPMVs comprise an initial set of CPMVs, and the template matching process comprises determining a refined set of CPMVs.

Clause 4A. The method of clause 3A, wherein determining the refined set of CPMVs comprises adding one or more delta motion vector values to the one or more CPMVs to determine the refined set of CPMVs.

Clause 5A. The method of clause 3A or 4A, further comprising: determining a search area based on the one or more CPMVs, wherein determining the refined set of CPMVs comprises limiting the refined CPMVs to within the search area.

Clause 6A. The method of any of clauses 3A-5A, further comprising: determining a search pattern; and determining the refined set of CPMVs based on the search pattern.

Clause 7A. The method of any of clauses 1A-6A, wherein performing the motion vector refinement process to determine the modified prediction block comprises performing one or more template matching cost calculations.

Clause 8A. The method of any of clauses 1A-7A, wherein the method of decoding is performed as part of an encoding process.

Clause 9A. A device for decoding video data, the device comprising one or more means for performing the method of any of clauses 1A-8A.

Clause 10A. The device of clause 9A, wherein the one or more means comprise one or more processors implemented in circuitry.

Clause 11A. The device of any of clauses 9A and 10A, further comprising: a memory to store the video data.

Clause 12A. The device of any of clauses 9A-11A, further comprising: a display configured to display decoded video data.

Clause 13A. The device of any of clauses 9A-12A, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Clause 14A. The device of any of clauses 9A-13A, wherein the device comprises a video decoder.

Clause 15A. The device of any of clauses 9A-14A, wherein the device comprises a video encoder.

Clause 16A. A computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to perform the method of any of clauses 1A-8A.

Clause 1B. A method of decoding video data, the method comprising: determining that a current block in a current picture of the video data is coded in an affine prediction mode; determining one or more control point motion vectors (CPMVs) for the current block; using the one or more CPMVs to identify an initial prediction block for the current block in a reference picture; determining a current template for the current block in the current picture; determining an initial reference template for the initial prediction block in the reference picture; and performing a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

Clause 2B. The method of clause 1B, wherein performing the motion vector refinement process to determine the modified prediction block further comprises: searching, within a search area around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template.

Clause 3B. The method of clause 1B, wherein the comparison of the current template to the initial reference template comprises a template matching cost.

Clause 4B. The method of clause 3B, further comprising: determining the template matching cost based on a weighted per-sample comparison of samples in the current template to samples in the initial reference template.

Clause 5B. The method of clause 1B, wherein the initial reference template comprises a plurality of sub-blocks located above or to the left of the initial prediction block.

Clause 6B. The method of clause 1B, wherein the affine prediction mode comprises a 4-parameter affine prediction mode.

Clause 7B. The method of clause 1B, wherein the affine prediction mode comprises a 6-parameter affine prediction mode.

Clause 8B. The method of clause 1B, further comprising: determining a prediction block based on the modified prediction block; adding the prediction block to a residual block to determine a reconstructed block; applying one or more filtering operations to the reconstructed block; and outputting a picture of decoded video data that includes the filtered reconstructed block.

Clause 9B. The method of clause 1B, wherein the method of decoding is performed as part of a video encoding process.

Clause 10B. A device for coding video data, the device comprising: a memory; and one or more processors implemented in circuitry and coupled to the memory, the one or more processors being configured to: determine that a current block in a current picture of the video data is to be decoded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; use the one or more CPMVs to identify an initial prediction block for the current block in a reference picture; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

Clause 11B. The device of clause 10B, wherein, to perform the motion vector refinement process to determine the modified prediction block, the one or more processors are further configured to: search, within a search area around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template.

Clause 12B. The device of clause 10B, wherein the comparison of the current template to the initial reference template comprises a template matching cost.

Clause 13B. The device of clause 12B, wherein the one or more processors are further configured to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template to samples in the initial reference template.

Clause 14B. The device of clause 10B, wherein the initial reference template comprises a plurality of sub-blocks located above or to the left of the initial prediction block.

Clause 15B. The device of clause 10B, wherein the affine prediction mode comprises a 4-parameter affine prediction mode.

Clause 16B. The device of clause 10B, wherein the affine prediction mode comprises a 6-parameter affine prediction mode.

Clause 17B. The device of clause 10B, wherein the one or more processors are further configured to: determine a prediction block based on the modified prediction block; add the prediction block to a residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data that includes the filtered reconstructed block.

Clause 18B. The device of clause 10B, wherein the device comprises a wireless communication device, further comprising a receiver configured to receive encoded video data.

Clause 19B. The device of clause 18B, wherein the wireless communication device comprises a telephone handset, and wherein the receiver is configured to demodulate, in accordance with a wireless communication standard, a signal comprising the encoded video data.

Clause 20B. The device of clause 10B, further comprising: a display configured to display decoded video data.

Clause 21B. The device of clause 10B, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

Clause 22B. The device of clause 10B, wherein the device comprises a video encoding device.

Clause 23B. A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to: determine that a current block in a current picture of video data is coded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; use the one or more CPMVs to identify an initial prediction block for the current block in a reference picture; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

Clause 24B. The computer-readable storage medium of clause 23B, wherein, to perform the motion vector refinement process to determine the modified prediction block, the instructions further cause the one or more processors to: search, within a search area around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template.

Clause 25B. The computer-readable storage medium of clause 23B, wherein the comparison of the current template to the initial reference template comprises a template matching cost.

Clause 26B. The computer-readable storage medium of clause 25B, wherein the instructions cause the one or more processors to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template to samples in the initial reference template.

Clause 27B. The computer-readable storage medium of clause 23B, wherein the initial reference template comprises a plurality of sub-blocks located above or to the left of the initial prediction block.

Clause 28B. The computer-readable storage medium of clause 23B, wherein the instructions cause the one or more processors to: determine a prediction block based on the modified prediction block; add the prediction block to a residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data that includes the filtered reconstructed block.

Clause 29B. An apparatus for decoding video data, the apparatus comprising: means for determining that a current block in a current picture of the video data is coded in an affine prediction mode; means for determining one or more control point motion vectors (CPMVs) for the current block; means for using the one or more CPMVs to identify an initial prediction block for the current block in a reference picture; means for determining a current template for the current block in the current picture; means for determining an initial reference template for the initial prediction block in the reference picture; and means for performing a motion vector refinement process based on a comparison of the current template to the initial reference template to determine a modified prediction block.

Clause 30B. The apparatus of clause 29B, wherein the comparison of the current template to the initial reference template comprises a template matching cost, the apparatus further comprising: means for determining the template matching cost based on a weighted per-sample comparison of samples in the current template to samples in the initial reference template.

條款1C、一種對視訊資料進行解碼的方法,該方法包括:決定以仿射預測模式對該視訊資料的當前圖片中的當前塊進行譯碼;決定用於該當前塊的一或多個控制點運動向量(CPMV);使用該一或多個CPMV來辨識用於參考圖片中的該當前塊的初始預測塊;決定用於該當前圖片中的該當前塊的當前範本;決定用於該參考圖片中的該初始預測塊的初始參考範本;及基於該當前範本與該初始參考範本的比較來執行運動向量細化程序,以決定經修改的預測塊。Clause 1C. A method of decoding video data, the method comprising: determining an affine prediction mode to decode a current block in a current picture of the video data; determining one or more control points for the current block motion vector (CPMV); use the one or more CPMVs to identify an initial prediction block for the current block in a reference picture; determine a current template for the current block in the current picture; determine a current template for the reference picture an initial reference template for the initial prediction block; and performing a motion vector refinement procedure based on a comparison between the current template and the initial reference template to determine a modified prediction block.

條款2C、根據條款1C所述的方法,其中執行該運動向量細化程序以決定該經修改的預測塊亦包括:在該初始參考範本周圍的搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的後續參考範本。Clause 2C. The method of Clause 1C, wherein performing the motion vector refinement procedure to determine the modified prediction block further comprises: searching a search area around the initial reference template that is more closely related to the initial reference template than the initial reference template The subsequent reference template that this current template matches.

條款3C、根據條款1C或2C所述的方法,其中該當前範本與該初始參考範本的該比較包括範本匹配成本。Clause 3C. The method of Clause 1C or 2C, wherein the comparison of the current template and the initial reference template includes a template matching cost.

條款4C、根據條款3C所述的方法,亦包括:基於該當前範本中的取樣與該初始參考範本中的取樣的加權每取樣比較來決定該範本匹配成本。Clause 4C. The method of Clause 3C, further comprising: determining the template matching cost based on a weighted per-sample comparison of samples in the current template and samples in the initial reference template.

條款5C、根據條款1C-4C中任一項所述的方法,其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。Clause 5C. The method of any one of clauses 1C-4C, wherein the initial reference example comprises a plurality of sub-blocks located above or to the left of the initial predicted block.

條款6C、根據條款1C-5C中任一項所述的方法,其中該仿射預測模式包括4參數仿射預測模式。Clause 6C. The method of any one of clauses 1C-5C, wherein the affine prediction mode comprises a 4-parameter affine prediction mode.

條款7C、根據條款1C-5C中任一項所述的方法,其中該仿射預測模式包括6參數仿射預測模式。Clause 7C. The method of any one of clauses 1C-5C, wherein the affine prediction mode comprises a 6-parameter affine prediction mode.

條款8C、根據條款1C-7C中任一項所述的方法,亦包括:基於該經修改的預測塊來決定預測塊;將該預測塊添加到殘差塊以決定經重構的塊;向該經重構的塊應用一或多個濾波操作;及輸出包括經濾波的經重構的塊的經解碼的視訊資料的圖片。Clause 8C. The method of any one of clauses 1C-7C, further comprising: determining a prediction block based on the modified prediction block; adding the prediction block to the residual block to determine a reconstructed block; Applying one or more filtering operations to the reconstructed block; and outputting a picture of decoded video data comprising the filtered reconstructed block.

條款9C、根據條款1C-8C中任一項所述的方法,其中該解碼的方法是作為視訊編碼程序的一部分來執行的。Clause 9C. The method of any one of clauses 1C-8C, wherein the decoding is performed as part of a video encoding process.

條款10C、一種用於對視訊資料進行解碼的設備,該設備包括:記憶體;及一或多個處理器,其在電路中實現、耦合到該記憶體並且被配置為:決定以仿射預測模式對該視訊資料的當前圖片中的當前塊進行譯碼;決定用於該當前塊的一或多個控制點運動向量(CPMV);使用該一或多個CPMV來辨識用於參考圖片中的該當前塊的初始預測塊;決定用於該當前圖片中的該當前塊的當前範本;決定用於該參考圖片中的該初始預測塊的初始參考範本;及基於該當前範本與該初始參考範本的比較來執行運動向量細化程序,以決定經修改的預測塊。Clause 10C. An apparatus for decoding video data, the apparatus comprising: a memory; and one or more processors implemented in a circuit, coupled to the memory, and configured to: determine an affine prediction mode to decode a current block in a current picture of the video data; determine one or more control point motion vectors (CPMVs) for the current block; use the one or more CPMVs to identify a control point motion vector (CPMV) for a reference picture an initial prediction block of the current block; determining a current template for the current block in the current picture; determining an initial reference template for the initial prediction block in the reference picture; and based on the current template and the initial reference template The comparison to perform the motion vector refinement procedure to determine the modified prediction block.

條款11C、根據條款10C所述的設備,其中為了執行該運動向量細化程序以決定該經修改的預測塊,該一或多個處理器亦被配置為:在該初始參考範本周圍的搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的後續參考範本。Clause 11C. The apparatus of Clause 10C, wherein for performing the motion vector refinement procedure to determine the modified prediction block, the one or more processors are also configured to: search regions around the initial reference template A subsequent reference template that more closely matches the current template than the initial reference template is searched within.

條款12C、根據條款10C或11C所述的設備,其中該當前範本與該初始參考範本的該比較包括範本匹配成本。Clause 12C. The apparatus of Clause 10C or 11C, wherein the comparison of the current template and the initial reference template includes a template matching cost.

條款13C、根據條款12C所述的設備,其中該一或多個處理器亦被配置為:基於該當前範本中的取樣與該初始參考範本中的取樣的加權每取樣比較來決定該範本匹配成本。Clause 13C. The apparatus of Clause 12C, wherein the one or more processors are further configured to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template .

條款14C、根據條款10C-13C中任一項所述的設備,其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。Clause 14C. The apparatus of any one of clauses 10C-13C, wherein the initial reference example comprises a plurality of sub-blocks located above or to the left of the initial prediction block.

條款15C、根據條款10C-14C中任一項所述的設備,其中該仿射預測模式包括4參數仿射預測模式。Clause 15C. The apparatus of any one of clauses 10C-14C, wherein the affine prediction mode comprises a 4-parameter affine prediction mode.

條款16C、根據條款10C-14C中任一項所述的設備,其中該仿射預測模式包括6參數仿射預測模式。Clause 16C. The apparatus of any one of clauses 10C-14C, wherein the affine prediction mode comprises a 6-parameter affine prediction mode.

條款17C、根據條款10C-16C中任一項所述的設備,其中該一或多個處理器亦被配置為:基於該經修改的預測塊來決定預測塊;將該預測塊添加到殘差塊以決定經重構的塊;向該經重構的塊應用一或多個濾波操作;及輸出包括經濾波的經重構的塊的經解碼的視訊資料的圖片。Clause 17C. The apparatus of any one of clauses 10C-16C, wherein the one or more processors are also configured to: determine a prediction block based on the modified prediction block; add the prediction block to the residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data including the filtered reconstructed block.

條款18C:根據條款10C-17C中任一項所述的設備,其中該設備包括無線通訊設備,亦包括被配置為接收經編碼的視訊資料的接收器。Clause 18C: The device according to any one of clauses 10C-17C, wherein the device comprises a wireless communication device and also comprises a receiver configured to receive the encoded video data.

條款19C:根據條款18C所述的設備,其中該無線通訊設備包括電話手機,並且其中該接收器被配置為根據無線通訊標準來對包括該經編碼的視訊資料的訊號進行解調。Clause 19C: The device of Clause 18C, wherein the wireless communication device comprises a telephone handset, and wherein the receiver is configured to demodulate a signal comprising the encoded video data according to a wireless communication standard.

條款20C、根據條款10C-19C中任一項所述的設備,亦包括:被配置為顯示經解碼的視訊資料的顯示器。Clause 20C. The apparatus according to any one of clauses 10C-19C, further comprising: a display configured to display the decoded video data.

條款21C、根據條款10C-20C中任一項所述的設備,其中該設備包括相機、電腦、行動設備、廣播接收器設備或機上盒中的一者或多者。Clause 21C. The device of any one of clauses 10C-20C, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set top box.

條款22C、根據條款10C-20C中任一項所述的設備,其中該設備包括視訊編碼設備。Clause 22C. The apparatus of any one of clauses 10C-20C, wherein the apparatus comprises a video encoding apparatus.

條款23C、一種儲存指令的電腦可讀取儲存媒體,該等指令在由一或多個處理器執行時使得該一或多個處理器進行以下操作:決定以仿射預測模式對該視訊資料的當前圖片中的當前塊進行譯碼;決定用於該當前塊的一或多個控制點運動向量(CPMV);使用該一或多個CPMV來辨識用於參考圖片中的該當前塊的初始預測塊;決定用於該當前圖片中的該當前塊的當前範本;決定用於該參考圖片中的該初始預測塊的初始參考範本;及基於該當前範本與該初始參考範本的比較來執行運動向量細化程序,以決定經修改的預測塊。Clause 23C. A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to: determine an affine prediction mode for the video data Decoding the current block in the current picture; determining one or more control point motion vectors (CPMVs) for the current block; using the one or more CPMVs to identify an initial prediction for the current block in a reference picture block; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform motion vectoring based on a comparison between the current template and the initial reference template Refinement procedure to determine modified prediction blocks.

條款24C、根據條款23C所述的電腦可讀取儲存媒體,其中為了執行該運動向量細化程序以決定該經修改的預測塊,該等指令使得該一或多個處理器進行以下操作:在該初始參考範本周圍的搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的後續參考範本。Clause 24C. The computer-readable storage medium of Clause 23C, wherein to execute the motion vector refinement procedure to determine the modified prediction block, the instructions cause the one or more processors to: A search region around the initial reference template is searched for subsequent reference templates that more closely match the current template than the initial reference template.

條款25C、根據條款23C所述的電腦可讀取儲存媒體,其中該當前範本與該初始參考範本的該比較包括範本匹配成本。Clause 25C. The computer-readable storage medium of Clause 23C, wherein the comparison of the current template and the initial reference template includes a template matching cost.

條款26C、根據條款25C所述的電腦可讀取儲存媒體，其中該等指令使得該一或多個處理器進行以下操作：基於該當前範本中的取樣與該初始參考範本中的取樣的加權每取樣比較來決定該範本匹配成本。Clause 26C. The computer-readable storage medium of Clause 25C, wherein the instructions cause the one or more processors to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template.
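Clause 26C's weighted per-sample comparison can be sketched as a weighted sum of absolute differences, where each template position carries its own weight (for instance, weighting samples nearer the current block more heavily is one plausible choice; the clause does not mandate a particular weight pattern):

```python
def weighted_template_cost(current_tmpl, ref_tmpl, weights):
    """Weighted per-sample template matching cost: each |current - reference|
    difference is scaled by the weight assigned to that sample position."""
    cost = 0
    for cur_row, ref_row, w_row in zip(current_tmpl, ref_tmpl, weights):
        for c, r, w in zip(cur_row, ref_row, w_row):
            cost += w * abs(c - r)
    return cost
```

With all weights equal to 1 this reduces to a plain SAD cost.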

條款27C、根據條款23C所述的電腦可讀取儲存媒體，其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。Clause 27C. The computer-readable storage medium of Clause 23C, wherein the initial reference template includes a plurality of sub-blocks located above or to the left of the initial prediction block.
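Clause 27C places the template above and to the left of the prediction block. As an illustration, a template of that shape can be gathered from previously reconstructed samples as follows (a hedged sketch; the one-sample thickness and the helper's name are assumptions, not taken from the disclosure):

```python
def build_template(picture, x, y, w, h, thickness=1):
    """Gather the template for the block whose top-left corner is (x, y) and
    size is w x h: `thickness` rows of samples above the block and
    `thickness` columns to its left, mirroring a template made of sub-blocks
    above and to the left of the prediction block.  Parts that fall outside
    the picture are simply omitted."""
    above = ([picture[y - t][x:x + w] for t in range(thickness, 0, -1)]
             if y >= thickness else [])
    left = ([[picture[y + r][x - t] for t in range(thickness, 0, -1)]
             for r in range(h)]
            if x >= thickness else [])
    return above, left
```

The same routine can serve for both the current template (around the current block) and the reference template (around the prediction block in the reference picture).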

條款28C、根據條款23C所述的電腦可讀取儲存媒體,其中該等指令使得該一或多個處理器進行以下操作:基於該經修改的預測塊來決定預測塊;將該預測塊添加到殘差塊以決定經重構的塊;向該經重構的塊應用一或多個濾波操作;及輸出包括經濾波的經重構的塊的經解碼的視訊資料的圖片。Clause 28C. The computer-readable storage medium of clause 23C, wherein the instructions cause the one or more processors to: determine a prediction block based on the modified prediction block; add the prediction block to the residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data including the filtered reconstructed block.
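The reconstruction and filtering steps of Clause 28C can be sketched as adding the residual to the (refined) prediction, clipping to the valid sample range, and then filtering. The filter below is a deliberately simple stand-in for the loop filters (deblocking, SAO, ALF) a real decoder would apply; names are illustrative:

```python
def reconstruct_block(pred, resid, bit_depth=8):
    """Add the residual block to the prediction block and clip each sample
    to the valid range for the given bit depth."""
    hi = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]

def filter_block(recon):
    """Stand-in for the 'one or more filtering operations': a simple
    horizontal [1, 2, 1]/4 smoothing of interior samples only."""
    out = [row[:] for row in recon]
    for row_in, row_out in zip(recon, out):
        for i in range(1, len(row_in) - 1):
            row_out[i] = (row_in[i - 1] + 2 * row_in[i] + row_in[i + 1] + 2) // 4
    return out
```

The filtered reconstructed block would then be placed into the output picture of decoded video data.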

條款29C、一種用於對視訊資料進行解碼的裝置，該裝置包括：用於決定以仿射預測模式對該視訊資料的當前圖片中的當前塊進行譯碼的單元；用於決定用於該當前塊的一或多個控制點運動向量(CPMV)的單元；用於使用該一或多個CPMV來辨識用於參考圖片中的該當前塊的初始預測塊的單元；用於決定用於該當前圖片中的該當前塊的當前範本的單元；用於決定用於該參考圖片中的該初始預測塊的初始參考範本的單元；及用於基於該當前範本與該初始參考範本的比較來執行運動向量細化程序，以決定經修改的預測塊的單元。Clause 29C. An apparatus for decoding video data, the apparatus comprising: means for determining that a current block in a current picture of the video data is coded in an affine prediction mode; means for determining one or more control point motion vectors (CPMVs) for the current block; means for identifying, using the one or more CPMVs, an initial prediction block for the current block in a reference picture; means for determining a current template for the current block in the current picture; means for determining an initial reference template for the initial prediction block in the reference picture; and means for performing a motion vector refinement procedure, based on a comparison of the current template and the initial reference template, to determine a modified prediction block.

條款30C、根據條款29C所述的裝置，其中該當前範本與該初始參考範本的該比較包括範本匹配成本，該裝置亦包括：用於基於該當前範本中的取樣與該初始參考範本中的取樣的加權每取樣比較來決定該範本匹配成本的單元。Clause 30C. The apparatus of Clause 29C, wherein the comparison of the current template and the initial reference template includes a template matching cost, the apparatus further comprising: means for determining the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template.

要認識到的是，根據實例，本文描述的任何技術的某些動作或事件可以以不同的循序執行，可以被添加、合併或完全省略(例如，並非所有描述的動作或事件是對於實施該等技術皆是必要的)。此外，在某些實例中，動作或事件可以例如經由多執行緒處理、中斷處理或多個處理器併發地而不是順序地執行。It is to be recognized that, depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.

在一或多個實例中，所描述的功能可以用硬體、軟體、韌體或其任何組合來實現。若用軟體來實現，則該等功能可以作為一或多個指令或代碼儲存在電腦可讀取媒體上或者經由其進行傳輸並且由基於硬體的處理單元執行。電腦可讀取媒體可以包括電腦可讀取儲存媒體，其對應於諸如資料儲存媒體之類的有形媒體或者通訊媒體，該通訊媒體包括例如根據通訊協定來促進電腦程式從一個地方傳送到另一個地方的任何媒體。以這種方式，電腦可讀取媒體通常可以對應於(1)非暫時性的有形電腦可讀取儲存媒體、或者(2)諸如訊號或載波之類的通訊媒體。資料儲存媒體可以是可以由一或多個電腦或者一或多個處理器存取以取得用於實現在本案內容中描述的技術的指令、代碼及/或資料結構的任何可用的媒體。電腦程式產品可以包括電腦可讀取媒體。In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, which includes any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) non-transitory, tangible computer-readable storage media or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

舉例而言而非進行限制，此類電腦可讀取儲存媒體可以包括RAM、ROM、EEPROM、CD-ROM或其他光碟儲存、磁碟儲存或其他磁存放設備、快閃記憶體、或者能夠用於以指令或資料結構形式儲存期望的程式碼以及能夠由電腦存取的任何其他媒體。此外，任何連接被適當地稱為電腦可讀取媒體。例如，若使用同軸電纜、光纖光纜、雙絞線、數位用戶線路(DSL)或者無線技術(例如，紅外線、無線電和微波)從網站、伺服器或其他遠端源傳輸指令，則同軸電纜、光纖光纜、雙絞線、DSL或者無線技術(例如，紅外線、無線電和微波)被包括在媒體的定義中。然而，應當理解的是，電腦可讀取儲存媒體和資料儲存媒體不包括連接、載波、訊號或其他臨時性媒體，而是替代地針對非暫時性的有形儲存媒體。如本文所使用的，磁碟和光碟包括壓縮光碟(CD)、鐳射光碟、光碟、數位多功能光碟(DVD)、軟碟和藍光光碟，其中磁碟通常磁性地複製資料，而光碟利用鐳射來光學地複製資料。上述各項的組合亦應當被包括在電腦可讀取媒體的範疇之內。By way of example, and not limitation, such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

指令可以由一或多個處理器來執行，諸如一或多個DSP、通用微處理器、ASIC、FPGA、或其他等效的整合或個別邏輯電路。因此，如本文所使用的術語「處理器」和「處理電路」可以代表前述結構中的任何一者或者適於實現本文描述的技術的任何其他結構。另外，在一些態樣中，本文描述的功能可以在被配置用於編碼和解碼的專用硬體及/或軟體模組內提供，或者被併入經組合的轉碼器中。此外，該等技術可以完全在一或多個電路或邏輯部件中實現。Instructions may be executed by one or more processors, such as one or more DSPs, general-purpose microprocessors, ASICs, FPGAs, or other equivalent integrated or discrete logic circuitry. Accordingly, the terms "processor" and "processing circuitry," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

本案內容的技術可以在多種多樣的設備或裝置中實現，包括無線手機、積體電路(IC)或一組IC(例如，晶片組)。在本案內容中描述了各種部件、模組或單元以強調被配置為執行所揭示的技術的設備的功能性態樣，但是不一定需要經由不同的硬體單元來實現。確切而言，如前述，各種單元可以被組合在轉碼器硬體單元中，或者由可交互動操作的硬體單元的集合(包括如前述的一或多個處理器)結合適當的軟體及/或韌體來提供。The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

已經描述了各個實例。這些和其他實例在所附的請求項的範疇內。Various examples have been described. These and other examples are within the scope of the appended claims.

100:系統 102:源設備 104:視訊源 106:記憶體 108:輸出介面 110:電腦可讀取媒體 112:存放設備 114:檔案伺服器 116:目的地設備 118:顯示設備 120:記憶體 122:輸入介面 130:QTBT結構 132:CTU 140:塊 142:塊 154:塊 156:運動向量縮放程序 160:當前CU 162:當前範本 164:參考範本 170A:控制點 170B:控制點 172A:控制點 172B:控制點 172C:控制點 190:箭頭 192:當前範本塊 194A:參考範本 194B:參考範本 198A:當前CU 198B:當前CU 200:視訊轉碼器 202:模式選擇單元 204:殘差產生單元 206:變換處理單元 208:量化單元 210:逆量化單元 212:逆變換處理單元 214:重構單元 216:濾波器單元 218:解碼圖片緩衝器(DPB) 220:熵編碼單元 222:運動估計單元 224:運動補償單元 226:訊框內預測單元 230:視訊資料記憶體 300:視訊解碼器 302:熵解碼單元 304:預測處理單元 306:逆量化單元 308:逆變換處理單元 310:重構單元 312:濾波器單元 314:解碼圖片緩衝器(DPB) 316:運動補償單元 318:訊框內預測單元 320:譯碼圖片緩衝器(CPB)記憶體 350:方塊 352:方塊 354:方塊 356:方塊 358:方塊 360:方塊 370:方塊 372:方塊 374:方塊 376:方塊 378:方塊 380:方塊 400:方塊 402:方塊 404:方塊 406:方塊 408:方塊 410:方塊 A 0:子塊 A 1:子塊 A 2:子塊 A 3:子塊 L 0:子塊 L 1:子塊 L 2:子塊 L 3:子塊 PU0:預測單元 PU1:預測單元 TMVP:時間運動向量預測器 V 0:向量 V 1:向量 V SB:向量 v 2:向量 100: system 102: source device 104: video source 106: memory 108: output interface 110: computer-readable medium 112: storage device 114: file server 116: destination device 118: display device 120: memory 122: input interface 130: QTBT structure 132: CTU 140: block 142: block 154: block 156: motion vector scaling process 160: current CU 162: current template 164: reference template 170A: control point 170B: control point 172A: control point 172B: control point 172C: control point 190: arrow 192: current template block 194A: reference template 194B: reference template 198A: current CU 198B: current CU 200: video transcoder 202: mode selection unit 204: residual generation unit 206: transform processing unit 208: quantization unit 210: inverse quantization unit 212: inverse transform processing unit 214: reconstruction unit 216: filter unit 218: decoded picture buffer (DPB) 220: entropy encoding unit 222: motion estimation unit 224: motion compensation unit 226: intra-frame prediction unit 230: video data memory 300: video decoder 302: entropy decoding unit 304: prediction processing unit 306: inverse quantization unit 308: inverse transform processing unit 310: reconstruction unit 312: filter unit 314: decoded picture buffer (DPB) 316: motion compensation unit 318: intra-frame prediction unit 320: coded picture buffer (CPB) memory 350: block 352: block 354: block 356: block 358: block 360: block 370: block 372: block 374: block 376: block 378: block 380: block 400: block 402: block 404: block 406: block 408: block 410: block A 0: sub-block A 1: sub-block A 2: sub-block A 3: sub-block L 0: sub-block L 1: sub-block L 2: sub-block L 3: sub-block PU0: prediction unit PU1: prediction unit TMVP: temporal motion vector predictor V 0: vector V 1: vector V SB: vector v 2: vector

圖1是示出可以執行本案內容的技術的實例視訊編碼和解碼系統的方塊圖。1 is a block diagram illustrating an example video encoding and decoding system that may implement the techniques of this disclosure.

圖2A和圖2B是示出實例四叉樹二叉樹(QTBT)結構以及對應的譯碼樹單元(CTU)的概念圖。2A and 2B are conceptual diagrams illustrating example quadtree binary tree (QTBT) structures and corresponding coding tree units (CTUs).

圖3A是示出用於合併模式的空間相鄰運動向量候選的概念圖。FIG. 3A is a conceptual diagram illustrating spatially adjacent motion vector candidates for merge mode.

圖3B是示出用於高級運動向量預測(AMVP)模式的空間相鄰運動向量候選的概念圖。FIG. 3B is a conceptual diagram illustrating spatially adjacent motion vector candidates for an advanced motion vector prediction (AMVP) mode.

圖4A是示出時間運動向量候選者的概念圖。FIG. 4A is a conceptual diagram showing temporal motion vector candidates.

圖4B是示出運動向量縮放的概念圖。FIG. 4B is a conceptual diagram illustrating motion vector scaling.

圖5圖示在初始運動向量周圍的搜尋區域上執行的範本匹配的實例。FIG. 5 illustrates an example of template matching performed on a search area around an initial motion vector.

圖6A是示出基於控制點的6參數仿射運動模型的概念圖。FIG. 6A is a conceptual diagram illustrating a control point-based 6-parameter affine motion model.

圖6B是示出基於控制點的4參數仿射運動模型的概念圖。FIG. 6B is a conceptual diagram illustrating a control point-based 4-parameter affine motion model.

圖7圖示每個子塊的仿射運動向量場的實例。FIG. 7 illustrates an example of an affine motion vector field for each sub-block.

圖8圖示子塊運動向量的實例。FIG. 8 illustrates an example of a sub-block motion vector.

圖9A-9C圖示當前範本塊和參考範本塊。9A-9C illustrate a current template block and a reference template block.

圖10是示出可以被指派給相鄰塊的取樣以計算範本匹配成本的每取樣權重的實例的概念圖。10 is a conceptual diagram illustrating an example of per-sample weights that may be assigned to samples of neighboring blocks to compute template matching costs.

圖11是示出可以執行本案內容的技術的實例視訊轉碼器的方塊圖。11 is a block diagram illustrating an example video transcoder that may implement the techniques of this disclosure.

圖12是示出可以執行本案內容的技術的實例視訊解碼器的方塊圖。12 is a block diagram illustrating an example video decoder that may perform techniques of this disclosure.

圖13是示出根據本案內容的技術的用於對當前塊進行編碼的實例程序的流程圖。13 is a flowchart illustrating an example procedure for encoding a current block in accordance with the techniques of this disclosure.

圖14是示出根據本案內容的技術的用於對當前塊進行解碼的實例程序的流程圖。14 is a flowchart illustrating an example procedure for decoding a current block in accordance with the techniques of this disclosure.

圖15是示出根據本案內容的技術的用於對當前塊進行解碼的實例程序的流程圖。15 is a flowchart illustrating an example procedure for decoding a current block in accordance with the techniques of this disclosure.

國內寄存資訊(請依寄存機構、日期、號碼順序註記) 無 國外寄存資訊(請依寄存國家、機構、日期、號碼順序註記) 無 Domestic deposit information (please record in order of depository institution, date, and number): None Foreign deposit information (please record in order of deposit country, institution, date, and number): None


Claims (30)

一種對視訊資料進行解碼的方法，該方法包括以下步驟： 決定以一仿射預測模式對該視訊資料的一當前圖片中的一當前塊進行譯碼； 決定用於該當前塊的一或多個控制點運動向量(CPMV)； 使用該一或多個CPMV來辨識用於一參考圖片中的該當前塊的一初始預測塊； 決定用於該當前圖片中的該當前塊的一當前範本； 決定用於該參考圖片中的該初始預測塊的一初始參考範本；及 基於該當前範本與該初始參考範本的一比較來執行一運動向量細化程序，以決定一經修改的預測塊。 A method of decoding video data, the method comprising: determining that a current block in a current picture of the video data is coded in an affine prediction mode; determining one or more control point motion vectors (CPMVs) for the current block; identifying, using the one or more CPMVs, an initial prediction block for the current block in a reference picture; determining a current template for the current block in the current picture; determining an initial reference template for the initial prediction block in the reference picture; and performing a motion vector refinement procedure, based on a comparison of the current template and the initial reference template, to determine a modified prediction block. 根據請求項1之方法，其中執行該運動向量細化程序以決定該經修改的預測塊亦包括以下步驟： 在該初始參考範本周圍的一搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的一後續參考範本。 The method according to claim 1, wherein performing the motion vector refinement procedure to determine the modified prediction block further comprises: searching, within a search region around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template. 根據請求項1之方法，其中該當前範本與該初始參考範本的該比較包括一範本匹配成本。The method according to claim 1, wherein the comparison of the current template and the initial reference template includes a template matching cost. 根據請求項3之方法，亦包括以下步驟： 基於該當前範本中的取樣與該初始參考範本中的取樣的一加權每取樣比較來決定該範本匹配成本。 The method according to claim 3, further comprising: determining the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template.
根據請求項1之方法，其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。The method according to claim 1, wherein the initial reference template includes a plurality of sub-blocks located above or to the left of the initial prediction block. 根據請求項1之方法，其中該仿射預測模式包括一4參數仿射預測模式。The method according to claim 1, wherein the affine prediction mode comprises a 4-parameter affine prediction mode. 根據請求項1之方法，其中該仿射預測模式包括一6參數仿射預測模式。The method according to claim 1, wherein the affine prediction mode comprises a 6-parameter affine prediction mode. 根據請求項1之方法，亦包括以下步驟： 基於該經修改的預測塊來決定一預測塊； 將該預測塊添加到一殘差塊以決定一經重構的塊； 向該經重構的塊應用一或多個濾波操作；及 輸出包括經濾波的經重構的塊的經解碼的視訊資料的一圖片。 The method according to claim 1, further comprising: determining a prediction block based on the modified prediction block; adding the prediction block to a residual block to determine a reconstructed block; applying one or more filtering operations to the reconstructed block; and outputting a picture of decoded video data that includes the filtered reconstructed block. 根據請求項1之方法，其中該解碼的方法是作為一視訊編碼程序的一部分來執行的。The method according to claim 1, wherein the method of decoding is performed as part of a video encoding process.
一種用於對視訊資料進行解碼的設備，該設備包括： 一記憶體；及 一或多個處理器，其在電路中實現、耦合到該記憶體並且被配置為： 決定以一仿射預測模式對該視訊資料的一當前圖片中的一當前塊進行解碼； 決定用於該當前塊的一或多個控制點運動向量(CPMV)； 使用該一或多個CPMV來辨識用於一參考圖片中的該當前塊的一初始預測塊； 決定用於該當前圖片中的該當前塊的一當前範本； 決定用於該參考圖片中的該初始預測塊的初始參考範本；及 基於該當前範本與該初始參考範本的一比較來執行一運動向量細化程序，以決定一經修改的預測塊。 A device for decoding video data, the device comprising: a memory; and one or more processors implemented in circuitry, coupled to the memory, and configured to: determine that a current block in a current picture of the video data is to be decoded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; identify, using the one or more CPMVs, an initial prediction block for the current block in a reference picture; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement procedure, based on a comparison of the current template and the initial reference template, to determine a modified prediction block. 根據請求項10之設備，其中為了執行該運動向量細化程序以決定該經修改的預測塊，該一或多個處理器亦被配置為： 在該初始參考範本周圍的一搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的一後續參考範本。 The device according to claim 10, wherein, to perform the motion vector refinement procedure to determine the modified prediction block, the one or more processors are further configured to: search, within a search region around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template. 根據請求項10之設備，其中該當前範本與該初始參考範本的該比較包括一範本匹配成本。The device according to claim 10, wherein the comparison of the current template and the initial reference template includes a template matching cost.
根據請求項12之設備，其中該一或多個處理器亦被配置為： 基於該當前範本中的取樣與該初始參考範本中的取樣的一加權每取樣比較來決定該範本匹配成本。 The device according to claim 12, wherein the one or more processors are further configured to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template. 根據請求項10之設備，其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。The device according to claim 10, wherein the initial reference template includes a plurality of sub-blocks located above or to the left of the initial prediction block. 根據請求項10之設備，其中該仿射預測模式包括一4參數仿射預測模式。The device according to claim 10, wherein the affine prediction mode comprises a 4-parameter affine prediction mode. 根據請求項10之設備，其中該仿射預測模式包括一6參數仿射預測模式。The device according to claim 10, wherein the affine prediction mode comprises a 6-parameter affine prediction mode. 根據請求項10之設備，其中該一或多個處理器亦被配置為： 基於該經修改的預測塊來決定一預測塊； 將該預測塊添加到一殘差塊以決定一經重構的塊； 向該經重構的塊應用一或多個濾波操作；及 輸出包括經濾波的經重構的塊的經解碼的視訊資料的一圖片。 The device according to claim 10, wherein the one or more processors are further configured to: determine a prediction block based on the modified prediction block; add the prediction block to a residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data that includes the filtered reconstructed block. 根據請求項10之設備，其中該設備包括一無線通訊設備，亦包括被配置為接收經編碼的視訊資料的一接收器。The device according to claim 10, wherein the device comprises a wireless communication device, further comprising a receiver configured to receive encoded video data. 根據請求項18之設備，其中該無線通訊設備包括一電話手機，並且其中該接收器被配置為根據一無線通訊標準來對包括該經編碼的視訊資料的一訊號進行解調。The device according to claim 18, wherein the wireless communication device comprises a telephone handset, and wherein the receiver is configured to demodulate a signal including the encoded video data according to a wireless communication standard.
根據請求項10之設備，亦包括： 被配置為顯示經解碼的視訊資料的一顯示器。 The device according to claim 10, further comprising: a display configured to display decoded video data. 根據請求項10之設備，其中該設備包括一相機、一電腦、一行動設備、一廣播接收器設備或一機上盒中的一者或多者。The device according to claim 10, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box. 根據請求項10之設備，其中該設備包括一視訊編碼設備。The device according to claim 10, wherein the device comprises a video encoding device. 一種儲存指令的電腦可讀取儲存媒體，該等指令在由一或多個處理器執行時使得該一或多個處理器進行以下操作： 決定以一仿射預測模式對該視訊資料的一當前圖片中的一當前塊進行譯碼； 決定用於該當前塊的一或多個控制點運動向量(CPMV)； 使用該一或多個CPMV來辨識用於一參考圖片中的該當前塊的一初始預測塊； 決定用於該當前圖片中的該當前塊的一當前範本； 決定用於該參考圖片中的該初始預測塊的一初始參考範本；及 基於該當前範本與該初始參考範本的一比較來執行一運動向量細化程序，以決定一經修改的預測塊。 A computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to: determine that a current block in a current picture of the video data is coded in an affine prediction mode; determine one or more control point motion vectors (CPMVs) for the current block; identify, using the one or more CPMVs, an initial prediction block for the current block in a reference picture; determine a current template for the current block in the current picture; determine an initial reference template for the initial prediction block in the reference picture; and perform a motion vector refinement procedure, based on a comparison of the current template and the initial reference template, to determine a modified prediction block.
根據請求項23之電腦可讀取儲存媒體，其中為了執行該運動向量細化程序以決定該經修改的預測塊，該等指令亦使得該一或多個處理器進行以下操作： 在該初始參考範本周圍的一搜尋區域內搜尋比該初始參考範本更緊密地與該當前範本匹配的一後續參考範本。 The computer-readable storage medium according to claim 23, wherein, to perform the motion vector refinement procedure to determine the modified prediction block, the instructions further cause the one or more processors to: search, within a search region around the initial reference template, for a subsequent reference template that more closely matches the current template than the initial reference template. 根據請求項23之電腦可讀取儲存媒體，其中該當前範本與該初始參考範本的該比較包括一範本匹配成本。The computer-readable storage medium according to claim 23, wherein the comparison of the current template and the initial reference template includes a template matching cost. 根據請求項25之電腦可讀取儲存媒體，其中該等指令使得該一或多個處理器進行以下操作： 基於該當前範本中的取樣與該初始參考範本中的取樣的一加權每取樣比較來決定該範本匹配成本。 The computer-readable storage medium according to claim 25, wherein the instructions cause the one or more processors to: determine the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template. 根據請求項23之電腦可讀取儲存媒體，其中該初始參考範本包括位於該初始預測塊上方或該初始預測塊左側的複數個子塊。The computer-readable storage medium according to claim 23, wherein the initial reference template includes a plurality of sub-blocks located above or to the left of the initial prediction block.
根據請求項23之電腦可讀取儲存媒體，其中該等指令使得該一或多個處理器進行以下操作： 基於該經修改的預測塊來決定一預測塊； 將該預測塊添加到一殘差塊以決定一經重構的塊； 向該經重構的塊應用一或多個濾波操作；及 輸出包括經濾波的經重構的塊的經解碼的視訊資料的一圖片。 The computer-readable storage medium according to claim 23, wherein the instructions cause the one or more processors to: determine a prediction block based on the modified prediction block; add the prediction block to a residual block to determine a reconstructed block; apply one or more filtering operations to the reconstructed block; and output a picture of decoded video data that includes the filtered reconstructed block. 一種用於對視訊資料進行解碼的裝置，該裝置包括： 用於決定以一仿射預測模式對該視訊資料的一當前圖片中的一當前塊進行譯碼的單元； 用於決定用於該當前塊的一或多個控制點運動向量(CPMV)的單元； 用於使用該一或多個CPMV來辨識用於一參考圖片中的該當前塊的一初始預測塊的單元； 用於決定用於該當前圖片中的該當前塊的一當前範本的單元； 用於決定用於該參考圖片中的該初始預測塊的一初始參考範本的單元； 及 用於基於該當前範本與該初始參考範本的一比較來執行一運動向量細化程序，以決定一經修改的預測塊的單元。 An apparatus for decoding video data, the apparatus comprising: means for determining that a current block in a current picture of the video data is coded in an affine prediction mode; means for determining one or more control point motion vectors (CPMVs) for the current block; means for identifying, using the one or more CPMVs, an initial prediction block for the current block in a reference picture; means for determining a current template for the current block in the current picture; means for determining an initial reference template for the initial prediction block in the reference picture; and means for performing a motion vector refinement procedure, based on a comparison of the current template and the initial reference template, to determine a modified prediction block.
根據請求項29之裝置，其中該當前範本與該初始參考範本的該比較包括一範本匹配成本，該裝置亦包括： 用於基於該當前範本中的取樣與該初始參考範本中的一取樣的加權每取樣比較來決定該範本匹配成本的一單元。 The apparatus according to claim 29, wherein the comparison of the current template and the initial reference template includes a template matching cost, the apparatus further comprising: means for determining the template matching cost based on a weighted per-sample comparison of samples in the current template with samples in the initial reference template.
TW111113752A 2021-04-12 2022-04-12 Template matching based affine prediction for video coding TW202243480A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US202163173861P 2021-04-12 2021-04-12
US202163173949P 2021-04-12 2021-04-12
US63/173,861 2021-04-12
US63/173,949 2021-04-12
US17/715,571 2022-04-07
US17/715,571 US11936877B2 (en) 2021-04-12 2022-04-07 Template matching based affine prediction for video coding

Publications (1)

Publication Number Publication Date
TW202243480A true TW202243480A (en) 2022-11-01

Family

ID=81448440

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111113752A TW202243480A (en) 2021-04-12 2022-04-12 Template matching based affine prediction for video coding

Country Status (6)

Country Link
EP (1) EP4324206A1 (en)
JP (1) JP2024514113A (en)
KR (1) KR20230169960A (en)
BR (1) BR112023020254A2 (en)
TW (1) TW202243480A (en)
WO (1) WO2022221140A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10701390B2 (en) * 2017-03-14 2020-06-30 Qualcomm Incorporated Affine motion information derivation
CN118042151A (en) * 2018-01-25 2024-05-14 三星电子株式会社 Method and apparatus for video signal processing using sub-block based motion compensation

Also Published As

Publication number Publication date
WO2022221140A1 (en) 2022-10-20
JP2024514113A (en) 2024-03-28
BR112023020254A2 (en) 2023-11-21
KR20230169960A (en) 2023-12-18
EP4324206A1 (en) 2024-02-21

Similar Documents

Publication Publication Date Title
TWI819100B (en) History-based motion vector prediction for affine mode
CN110771164A (en) Combination of inter-prediction and intra-prediction in video coding
CN113748679A (en) Intra block copy merge data syntax for video coding
CN114128259A (en) Merge-mode coding for video coding
CN114128261A (en) Combined inter and intra prediction modes for video coding
CN114223202A (en) Low frequency inseparable transform (LFNST) signaling
US11936877B2 (en) Template matching based affine prediction for video coding
CN113924776A (en) Video coding with unfiltered reference samples using different chroma formats
TW202114426A (en) Harmonized early termination in bdof and dmvr in video coding
WO2023055583A1 (en) Decoder side motion derivation using spatial correlation
TW202245477A (en) Template matching refinement in inter-prediction modes
TW202228437A (en) Decoder side intra mode derivation for most probable mode list construction in video coding
WO2023137414A2 (en) Coding video data using out-of-boundary motion vectors
TW202308391A (en) Hybrid inter bi-prediction in video coding
TW202306386A (en) Merge candidate reordering in video coding
TWI809200B (en) Restrictions for the worst-case bandwidth reduction in video coding
TW202243480A (en) Template matching based affine prediction for video coding
US20240121399A1 (en) Decoder-side control point motion vector refinement for affine inter-prediction in video coding
CN117203966A (en) Template matching-based affine prediction for video coding
TW202243475A (en) Bi-directional optical flow in video coding
TW202243478A (en) Adaptively coding motion information for multiple hypothesis prediction for video coding
TW202232951A (en) Multi-pass decoder-side motion vector refinement
TW202332272A (en) Block-level reference pictures adaptation for video coding
TW202345598A (en) Methods for adaptive signaling of maximum number of merge candidates in multiple hypothesis prediction
TW202345599A (en) Interaction between reference picture resampling and template-based inter prediction techniques in video coding