TW202032991A - Extension of look-up table based motion vector prediction with temporal information - Google Patents

Extension of look-up table based motion vector prediction with temporal information Download PDF

Info

Publication number
TW202032991A
TW202032991A
Authority
TW
Taiwan
Prior art keywords
motion
candidates
video
candidate
block
Prior art date
Application number
TW108124973A
Other languages
Chinese (zh)
Other versions
TWI820169B (en)
Inventor
張莉
張凱
劉鴻彬
王悅
Original Assignee
大陸商北京字節跳動網絡技術有限公司
美商字節跳動有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商北京字節跳動網絡技術有限公司 and 美商字節跳動有限公司
Publication of TW202032991A
Application granted granted Critical
Publication of TWI820169B


Classifications

    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • H04N19/96 Tree coding, e.g. quad-tree coding

Abstract

The application provides a method for video processing. This method includes: determining a new candidate for video processing by averaging motion vectors of two or more selected motion candidates; adding the new candidate to a candidate list; and performing a conversion between a first video block of a video and a bitstream representation of the video by using the determined new candidate in the candidate list.

Description

Extension of look-up table based motion vector prediction with temporal information

This patent document relates to video coding technologies, devices, and systems.

Cross-reference to related applications: Under the applicable patent law and/or the rules of the Paris Convention, the present application timely claims the priority of and benefits from International Patent Application No. PCT/CN2018/095716, filed on July 14, 2018, and International Patent Application No. PCT/CN2018/095719, filed on July 15, 2018. The entire disclosures of International Patent Applications No. PCT/CN2018/095716 and PCT/CN2018/095719 are incorporated herein by reference as part of the disclosure of the present application.

In spite of the advances in video compression, digital video still accounts for the largest share of bandwidth use on the Internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video grows, it is expected that the bandwidth demand for digital video usage will continue to increase.

This document discloses methods, systems, and devices for encoding and decoding digital video using merge lists of motion vectors.

In one example aspect, a video processing method is disclosed. The method includes: determining a new candidate for video processing by averaging the motion vectors of two or more selected motion candidates; adding the new candidate to a candidate list; and performing a conversion between a first video block of a video and a bitstream representation of the video by using the determined new candidate in the candidate list.
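The averaging step above can be sketched in a few lines. This is a hypothetical illustration, not the claimed method's exact arithmetic: the function names, the `(mvx, mvy)` tuple representation, and the rounding choice are all assumptions.

```python
def average_candidates(candidates):
    """Average the motion vectors of two or more selected motion candidates.

    Each candidate is an (mvx, mvy) pair; the result is a new candidate
    whose components are the averages of the inputs, rounded to integer
    precision (a typical fixed-point choice, assumed here for illustration).
    """
    n = len(candidates)
    assert n >= 2, "averaging is defined over two or more candidates"
    sum_x = sum(mv[0] for mv in candidates)
    sum_y = sum(mv[1] for mv in candidates)
    return (round(sum_x / n), round(sum_y / n))


def add_to_candidate_list(candidate_list, new_candidate, max_size):
    """Append the new candidate if the list is not full and it is not a duplicate."""
    if new_candidate not in candidate_list and len(candidate_list) < max_size:
        candidate_list.append(new_candidate)
    return candidate_list
```

For example, averaging the two candidates (4, 8) and (2, 2) yields the new candidate (3, 5), which is then appended to the list if it is not already present.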

In one example aspect, a video processing method is disclosed. The method includes: determining a new motion candidate for video processing by using one or more motion candidates from one or more tables, where a table includes one or more motion candidates and each motion candidate is associated with motion information; and performing a conversion between a video block and a coded representation of the video block based on the new candidate.

In one example aspect, a video processing method is disclosed. The method includes: determining a new candidate for video processing by always using motion information of more than one spatial neighboring block of a first video block in a current picture, without using motion information from a temporal block in a picture different from the current picture; and performing a conversion between the first video block in the current picture of the video and a bitstream representation of the video by using the determined new candidate.

In one example aspect, a video processing method is disclosed. The method includes: determining a new candidate for video processing by using motion information of at least one spatially non-adjacent block of a first video block in a current picture, together with other candidates that are derived either from spatially non-adjacent blocks of the first video block or not from spatially non-adjacent blocks of the first video block; and performing a conversion between the first video block of the video and a bitstream representation of the video by using the determined new candidate.

In one example aspect, a video processing method is disclosed. The method includes: determining a new candidate for video processing by using motion information from one or more tables for a first video block in a current picture and motion information from a temporal block in a picture different from the current picture; and performing a conversion between the first video block in the current picture of the video and a bitstream representation of the video by using the determined new candidate.

In one example aspect, a video processing method is disclosed. The method includes: determining a new candidate for video processing by using motion information from one or more tables for a first video block and motion information from one or more spatial neighboring blocks of the first video block; and performing a conversion between the first video block in the current picture of the video and a bitstream representation of the video by using the determined new candidate.

In one example aspect, a video processing method is disclosed. The method includes: maintaining a set of tables, where each table includes motion candidates and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video including the first video block; and updating one or more of the tables by selectively pruning existing motion candidates in the one or more tables based on the encoding/decoding mode of the first video block.

In one example aspect, a video processing method is disclosed. The method includes: maintaining a set of tables, where each table includes motion candidates and each motion candidate is associated with corresponding motion information; performing a conversion between a first video block and a bitstream representation of a video including the first video block; and updating one or more tables to include motion information of one or more temporal neighboring blocks of the first video block as a new motion candidate.

In one example aspect, a method of updating a motion candidate table is disclosed. The method includes: selectively pruning existing motion candidates in the table based on the encoding/decoding mode of a video block being processed, each motion candidate being associated with corresponding motion information; and updating the table to include the motion information of the video block as a new motion candidate.

In one example aspect, a method of updating a motion candidate table is disclosed. The method includes: maintaining a motion candidate table, each motion candidate being associated with corresponding motion information; and updating the table to include motion information from one or more temporal neighboring blocks of the video block being processed as a new motion candidate.
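As a rough illustration of the table-maintenance aspects above, the sketch below keeps a first-in-first-out table of motion candidates and optionally prunes an identical existing entry before inserting the new one. The table size and the "prune exact duplicates" rule are assumptions made for illustration; as described above, whether pruning is applied may depend on the encoding/decoding mode.

```python
TABLE_SIZE = 6  # assumed maximum number of stored motion candidates


def update_table(table, new_candidate, prune=True):
    """Insert new motion information into the candidate table.

    If pruning is enabled, an identical existing candidate is removed
    first, so the new candidate becomes the most recent entry; otherwise
    the oldest entry is evicted when the table is full (FIFO order).
    """
    if prune and new_candidate in table:
        table.remove(new_candidate)
    if len(table) >= TABLE_SIZE:
        table.pop(0)  # evict the oldest candidate
    table.append(new_candidate)
    return table
```

With pruning on, re-inserting a candidate that is already in the table moves it to the most-recent position rather than storing it twice, which keeps the table free of redundant entries.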

In one example aspect, a video processing method is disclosed. The method includes: determining a new motion candidate for video processing by using one or more motion candidates from one or more tables, where a table includes one or more motion candidates and each motion candidate is associated with motion information; and performing a conversion between a video block and a coded representation of the video block based on the new candidate.

In one example aspect, an apparatus in a video system is disclosed. The apparatus includes a processor and a non-transitory memory with instructions thereon, where the instructions, when executed by the processor, cause the processor to implement the various methods described herein.

The various techniques described herein may be embodied as a computer program product stored on a non-transitory computer-readable medium. The computer program product includes program code for carrying out the methods described herein.

The details of one or more implementations are set forth in the attachments, the drawings, and the description below. Other features will be apparent from the description and drawings, and from the claims.

To improve the compression ratio of video, researchers are continually looking for new techniques by which to encode video.

1. Introduction

This document is related to video coding technologies. Specifically, it is related to motion information coding (e.g., merge mode and AMVP mode) in video coding. It may be applied to the existing video coding standard HEVC, or to the standard Versatile Video Coding (VVC) to be finalized. It may also be applicable to future video coding standards or video codecs.

2. Brief discussion

Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. An example of a typical HEVC encoder framework is shown in Figure 1.

2.1 Partition structure

2.1.1 Partition tree structure in H.264/AVC

The core of the coding layer in prior standards was the macroblock, containing a 16×16 block of luma samples and, in the usual case of 4:2:0 color sampling, two corresponding 8×8 blocks of chroma samples.

Intra-coded blocks use spatial prediction to exploit spatial correlation among pixels. Two partitions are defined: 16×16 and 4×4.

Inter-coded blocks use temporal prediction, instead of spatial prediction, by estimating motion between pictures. Motion can be estimated independently for a 16×16 macroblock or any of its sub-macroblock partitions: 16×8, 8×16, 8×8, 8×4, 4×8, 4×4 (see Figure 2). Only one motion vector (MV) is allowed per sub-macroblock partition.

2.1.2 Partition tree structure in HEVC

In HEVC, a CTU is split into CUs by using a quadtree structure, denoted as a coding tree, to adapt to various local characteristics. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two, or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree for the CU. One important feature of the HEVC structure is that it has multiple partition concepts, including CU, PU, and TU.

In the following, the various features involved in hybrid video coding using HEVC are highlighted as follows.

1) Coding tree units and coding tree block (CTB) structure: The analogous structure in HEVC is the coding tree unit (CTU), which has a size selected by the encoder and can be larger than a traditional macroblock. The CTU consists of a luma CTB and the corresponding chroma CTBs, together with syntax elements. The size L×L of a luma CTB can be chosen as L = 16, 32, or 64 samples, with the larger sizes typically enabling better compression. HEVC then supports the partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling.

2) Coding units (CUs) and coding blocks (CBs): The quadtree syntax of the CTU specifies the size and positions of its luma and chroma CBs. The root of the quadtree is associated with the CTU. Hence, the size of the luma CTB is the largest supported size for a luma CB. The splitting of a CTU into luma and chroma CBs is signaled jointly. One luma CB and ordinarily two chroma CBs, together with associated syntax, form a coding unit (CU). A CTB may contain only one CU or may be split to form multiple CUs, and each CU has an associated partitioning into prediction units (PUs) and a tree of transform units (TUs).

3) Prediction units (PUs) and prediction blocks (PBs): The decision whether to code a picture area using inter-picture or intra-picture prediction is made at the CU level. A PU partitioning structure has its root at the CU level. Depending on the basic prediction-type decision, the luma and chroma CBs can then be further split in size and predicted from luma and chroma prediction blocks (PBs). HEVC supports variable PB sizes from 64×64 down to 4×4 samples. Figure 3 shows examples of allowed PBs for an M×M CU.

4) Transform units (TUs) and transform blocks (TBs): The prediction residual is coded using block transforms. A TU tree structure has its root at the CU level. The luma CB residual may be identical to the luma transform block (TB) or may be further split into smaller luma TBs. The same applies to the chroma TBs. Integer basis functions similar to those of a discrete cosine transform (DCT) are defined for the square TB sizes 4×4, 8×8, 16×16, and 32×32. For the 4×4 transform of luma intra-picture prediction residuals, an integer transform derived from a form of the discrete sine transform (DST) can also be specified.

Figure 4 shows an example of the subdivision of a CTB into CBs (and transform blocks (TBs)). Solid lines indicate CB boundaries and dotted lines indicate TB boundaries. (a) CTB with its partitioning. (b) The corresponding quadtree.

2.1.2.1 Tree-structured partitioning into transform blocks and units

For residual coding, a CB can be recursively partitioned into transform blocks (TBs). The partitioning is signaled by a residual quadtree. Only square CB and TB partitioning is specified, where a block can be recursively split into quadrants, as illustrated in Figure 4. For a given luma CB of size M×M, a flag signals whether it is split into four blocks of size M/2×M/2. If further splitting is possible, as signaled by the maximum depth of the residual quadtree indicated in the sequence parameter set (SPS), each quadrant is assigned a flag that indicates whether it is split into four quadrants. The leaf node blocks resulting from the residual quadtree are the transform blocks that are further processed by transform coding. The encoder indicates the maximum and minimum luma TB sizes that it will use. Splitting is implicit when the CB size is larger than the maximum TB size. Not splitting is implicit when splitting would result in a luma TB size smaller than the indicated minimum. The chroma TB size is half the luma TB size in each dimension, except when the luma TB size is 4×4, in which case a single 4×4 chroma TB is used for the region covered by four 4×4 luma TBs. In the case of intra-picture-predicted CUs, the decoded samples of the nearest-neighboring TBs (within or outside the CB) are used as reference data for intra-picture prediction.
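The implicit-split rules above can be condensed into a small decision sketch. The default size limits here are assumptions for illustration; in practice the maximum and minimum luma TB sizes are indicated by the encoder.

```python
def tb_split_decision(cb_size, max_tb=32, min_tb=4):
    """Classify the residual-quadtree split for a square luma CB/TB.

    Returns 'must_split' when the block exceeds the maximum TB size
    (splitting is implicit), 'cannot_split' when splitting would fall
    below the minimum TB size (not splitting is implicit), and
    'optional' when a split flag must be signaled.
    """
    if cb_size > max_tb:
        return "must_split"
    if cb_size // 2 < min_tb:
        return "cannot_split"
    return "optional"
```

For instance, with a maximum TB size of 32, a 64×64 CB is implicitly split, a 4×4 block cannot be split further, and only intermediate sizes carry an explicit split flag in the bitstream.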

In contrast to previous standards, the HEVC design allows a TB to span multiple PBs for inter-picture-predicted CUs, to maximize the potential coding-efficiency benefits of the quadtree-structured TB partitioning.

2.1.2.2 Parent and child nodes

A CTB is divided according to a quadtree structure, the nodes of which are coding units. The plurality of nodes in the quadtree structure includes leaf nodes and non-leaf nodes. The leaf nodes have no child nodes in the tree structure (i.e., the leaf nodes are not further split). The non-leaf nodes include a root node of the tree structure. The root node corresponds to an initial video block of the video data (e.g., a CTB). For each respective non-root node of the plurality of nodes, the respective non-root node corresponds to a video block that is a sub-block of the video block corresponding to the parent node, in the tree structure, of the respective non-root node. Each respective non-leaf node of the plurality of non-leaf nodes has one or more child nodes in the tree structure.
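The parent/child relationships just described can be modeled with a minimal data-structure sketch. The `Node` class and its method names are illustrative assumptions, not part of any codec's API.

```python
class Node:
    """One node of a quadtree over square blocks."""

    def __init__(self, size, parent=None):
        self.size = size        # square block size (e.g., 64 for a 64x64 block)
        self.parent = parent    # None for the root node (the CTB)
        self.children = []      # empty for leaf nodes (not further split)

    def split(self):
        """Quadtree split: create four child nodes of half the size."""
        half = self.size // 2
        self.children = [Node(half, parent=self) for _ in range(4)]
        return self.children

    def is_leaf(self):
        return not self.children


root = Node(64)            # the root corresponds to the initial video block (CTB)
quadrants = root.split()   # the root becomes a non-leaf node with four children
```

Each child block's node keeps a reference to its parent, mirroring the statement above that every non-root node corresponds to a sub-block of its parent node's block.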

2.1.3 Quadtree plus binary tree block structure with larger CTUs in the Joint Exploration Model (JEM)

To explore future video coding technologies beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, many new methods have been adopted by JVET and put into reference software named the Joint Exploration Model (JEM).

2.1.3.1 QTBT block partitioning structure

Different from HEVC, the QTBT structure removes the concepts of multiple partition types; i.e., it removes the separation of the CU, PU, and TU concepts and supports more flexibility for CU partition shapes. In the QTBT block structure, a CU can have either a square or a rectangular shape. As shown in Figure 5, a coding tree unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes are further partitioned by a binary tree structure. There are two splitting types in the binary tree splitting: symmetric horizontal splitting and symmetric vertical splitting. The binary tree leaf nodes are called coding units (CUs), and that partitioning is used for prediction and transform processing without any further splitting. This means that the CU, PU, and TU have the same block size in the QTBT coding block structure. In the JEM, a CU sometimes consists of coding blocks (CBs) of different color components; e.g., one CU contains one luma CB and two chroma CBs in the case of P and B slices of the 4:2:0 chroma format. A CU sometimes consists of a CB of a single component; e.g., one CU contains only one luma CB or just two chroma CBs in the case of I slices.

The following parameters are defined for the QTBT partitioning scheme.

– CTU size: the root node size of a quadtree, the same concept as in HEVC

– MinQTSize: the minimum allowed quadtree leaf node size

– MaxBTSize: the maximum allowed binary tree root node size

– MaxBTDepth: the maximum allowed binary tree depth

– MinBTSize: the minimum allowed binary tree leaf node size

In one example of the QTBT partitioning structure, the CTU size is set to 128×128 luma samples with two corresponding 64×64 blocks of chroma samples, MinQTSize is set to 16×16, MaxBTSize is set to 64×64, MinBTSize (for both width and height) is set to 4, and MaxBTDepth is set to 4. The quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16×16 (i.e., MinQTSize) to 128×128 (i.e., the CTU size). If a leaf quadtree node is 128×128, it will not be further split by the binary tree, since its size exceeds MaxBTSize (i.e., 64×64). Otherwise, the leaf quadtree node can be further partitioned by the binary tree. Therefore, a quadtree leaf node is also the root node of a binary tree, and its binary tree depth is 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When a binary tree node has a width equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Similarly, when a binary tree node has a height equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by prediction and transform processing without any further partitioning. In the JEM, the maximum CTU size is 256×256 luma samples.
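Using the example parameter values above, the binary-tree split conditions can be sketched as a single check. This is an illustrative paraphrase of the rules in this paragraph, not the normative decision process of any encoder.

```python
MAX_BT_SIZE = 64   # maximum binary tree root node size (example value)
MAX_BT_DEPTH = 4   # maximum binary tree depth (example value)
MIN_BT_SIZE = 4    # minimum binary tree leaf node size (example value)


def can_bt_split(width, height, bt_depth, horizontal):
    """Whether a node may be split further by the binary tree.

    `horizontal` selects symmetric horizontal splitting; otherwise
    symmetric vertical splitting is tested.
    """
    if max(width, height) > MAX_BT_SIZE:
        return False  # e.g., a 128x128 quadtree leaf exceeds MaxBTSize
    if bt_depth >= MAX_BT_DEPTH:
        return False  # binary tree depth limit reached
    if horizontal and width <= MIN_BT_SIZE:
        return False  # width at MinBTSize: no further horizontal splitting
    if not horizontal and height <= MIN_BT_SIZE:
        return False  # height at MinBTSize: no further vertical splitting
    return True
```

So a 128×128 quadtree leaf is never binary-split, a 64×64 leaf at depth 0 may be, and a node whose width or height has shrunk to MinBTSize is blocked from splitting further in the corresponding direction.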

Figure 5 (left) illustrates an example of block partitioning by using QTBT, and Figure 5 (right) illustrates the corresponding tree representation. The solid lines indicate quadtree splitting and the dotted lines indicate binary tree splitting. In each splitting (i.e., non-leaf) node of the binary tree, one flag is signaled to indicate which splitting type (i.e., horizontal or vertical) is used, where 0 indicates horizontal splitting and 1 indicates vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type, since quadtree splitting always splits a block both horizontally and vertically to produce four sub-blocks of equal size.

In addition, the QTBT scheme supports the ability for the luma and chroma to have separate QTBT structures. Currently, for P and B slices, the luma and chroma CTBs in one CTU share the same QTBT structure. However, for I slices, the luma CTB is partitioned into CUs by a QTBT structure, and the chroma CTBs are partitioned into chroma CUs by another QTBT structure. This means that a CU in an I slice consists of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice consists of coding blocks of all three color components.

In HEVC, inter prediction for small blocks is restricted to reduce the memory access of motion compensation: 4×8 and 8×4 blocks do not support bi-prediction, and 4×4 blocks do not support inter prediction. In the QTBT of the JEM, these restrictions are removed.

2.1.4 Ternary trees in Versatile Video Coding (VVC)

As proposed in JVET-D0117, tree types other than quadtree and binary tree are also supported. In this implementation, two additional ternary-tree (TT) partitions are introduced: horizontal and vertical centre-side ternary trees, as shown in Figures 6(d) and 6(e).

Figure 6 shows: (a) quadtree partitioning, (b) vertical binary-tree partitioning, (c) horizontal binary-tree partitioning, (d) vertical centre-side ternary-tree partitioning, and (e) horizontal centre-side ternary-tree partitioning.

In some implementations, there are two levels of trees: the region tree (quadtree) and the prediction tree (binary tree or ternary tree). A CTU is first partitioned by a region tree (RT). An RT leaf may be further split with a prediction tree (PT), and a PT leaf may also be further split with the PT until the maximum PT depth is reached. A PT leaf is the basic coding unit; for convenience it is still called a CU, and it cannot be split any further. Prediction and transform are both applied to a CU in the same way as in the JEM. The whole partition structure is called the "multi-type tree".

2.1.5 Partitioning structure in JVET-J0021

The tree structure called the multi-type tree (MTT) is a generalization of the QTBT. In the QTBT, as shown in Figure 5, a coding tree unit (CTU) is first partitioned with a quadtree structure, and the quadtree leaf nodes are then further partitioned with a binary-tree structure.

The basic structure of the MTT consists of two types of tree nodes, region tree (RT) and prediction tree (PT), supporting nine types of partitions, as shown in Figure 7.

Figure 7 illustrates: (a) quadtree partitioning, (b) vertical binary-tree partitioning, (c) horizontal binary-tree partitioning, (d) vertical ternary-tree partitioning, (e) horizontal ternary-tree partitioning, (f) horizontal-up asymmetric binary-tree partitioning, (g) horizontal-down asymmetric binary-tree partitioning, (h) vertical-left asymmetric binary-tree partitioning, and (i) vertical-right asymmetric binary-tree partitioning.

A region tree can recursively split a CTU into square blocks down to a 4×4-size region-tree leaf node. At each node of the region tree, a prediction tree can be formed from one of three tree types: binary tree (BT), ternary tree (TT), and asymmetric binary tree (ABT). In a PT split, quadtree partitioning is prohibited within the branches of the prediction tree. As in the JEM, the luma tree and the chroma tree are separated in I slices. The signalling methods for RT and PT are illustrated in Figure 8.

2.2 Inter prediction in HEVC/H.265

Each inter-predicted PU has motion parameters for one or two reference picture lists. Motion parameters include a motion vector and a reference picture index. Usage of one of the two reference picture lists may also be signalled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors; this coding mode is called advanced motion vector prediction (AMVP) mode.

When a CU is coded with skip mode, one PU is associated with the CU, and there are no significant residual coefficients, no coded motion vector delta, and no reference picture index. A Merge mode is specified whereby the motion parameters for the current PU are obtained from neighbouring PUs, including spatial and temporal candidates. Merge mode can be applied to any inter-predicted PU, not only to skip mode. The alternative to Merge mode is the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, and the reference picture list usage are signalled explicitly per PU.

When signalling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as "uni-prediction". Uni-prediction is available for both P slices and B slices.

When signalling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as "bi-prediction". Bi-prediction is available for B slices only.

The details of the inter prediction modes specified in HEVC are provided below. The description starts with Merge mode.

2.2.1 Merge mode

2.2.1.1 Derivation of candidates for Merge mode

When a PU is predicted using Merge mode, an index pointing to an entry in the Merge candidate list is parsed from the bitstream and used to retrieve the motion information. The construction of this list is specified in the HEVC standard and can be summarized in the following sequence of steps:

Step 1: Initial candidate derivation

Step 1.1: Spatial candidate derivation

Step 1.2: Redundancy check for spatial candidates

Step 1.3: Temporal candidate derivation

Step 2: Additional candidate insertion

Step 2.1: Creation of bi-predictive candidates

Step 2.2: Insertion of zero-motion candidates

These steps are also schematically depicted in Figure 9. For spatial Merge candidate derivation, a maximum of four Merge candidates are selected among candidates located at five different positions. For temporal Merge candidate derivation, a maximum of one Merge candidate is selected between two candidates. Since a constant number of candidates per PU is assumed at the decoder, additional candidates are generated when the number of candidates does not reach the maximum number of Merge candidates (MaxNumMergeCand) signalled in the slice header. Since the number of candidates is constant, the index of the best Merge candidate is encoded using truncated unary binarization (TU). If the size of a CU is equal to 8, all PUs of the current CU share a single Merge candidate list, which is identical to the Merge candidate list of the 2N×2N prediction unit.
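The ordering of the steps above can be sketched as follows. This is an illustrative sketch only: candidates are modelled as (mv, ref_idx) tuples, and the combined bi-predictive step (Step 2.1) is omitted for brevity.

```python
def build_merge_list(spatial, temporal, max_num_merge_cand):
    """Illustrative sketch of the Merge-list steps above: spatial candidates
    with redundancy pruning, then at most one temporal candidate, then
    zero-motion filling up to MaxNumMergeCand (combined bi-predictive
    candidates omitted). Candidates are (mv, ref_idx) tuples."""
    cands = []
    # Steps 1.1/1.2: up to four spatial candidates, duplicates excluded.
    for c in spatial:
        if c not in cands and len(cands) < 4:
            cands.append(c)
    # Step 1.3: at most one temporal candidate.
    for c in temporal[:1]:
        if len(cands) < max_num_merge_cand:
            cands.append(c)
    # Step 2.2: zero-motion candidates with increasing reference index.
    ref_idx = 0
    while len(cands) < max_num_merge_cand:
        cands.append(((0, 0), ref_idx))
        ref_idx += 1
    return cands
```

Since the list always reaches MaxNumMergeCand entries, the candidate index can be coded with a fixed maximum, matching the truncated unary binarization described above.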

The operations associated with the aforementioned steps are detailed below.

2.2.1.2 Spatial candidate derivation

In the derivation of spatial Merge candidates, a maximum of four Merge candidates are selected among candidates located at the positions depicted in Figure 10. The order of derivation is A1, B1, B0, A0 and B2. Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list, improving coding efficiency. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check; instead, only the pairs linked with an arrow in Figure 11 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions other than 2N×2N. As an example, Figure 12 depicts the second PU for the N×2N and 2N×N cases, respectively. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction; in some embodiments, adding this candidate could lead to two prediction units having the same motion information, which is redundant to having just one PU in the coding unit. Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.

2.2.1.3 Temporal candidate derivation

In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal Merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture that has the smallest picture order count (POC) difference with the current picture within the given reference picture list. The reference picture list to be used for derivation of the co-located PU is explicitly signalled in the slice header. The dashed line in Figure 13 illustrates the derivation of the scaled motion vector for the temporal Merge candidate, which is scaled from the motion vector of the co-located PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of the temporal Merge candidate is set equal to zero. The actual realization of the scaling process is described in the HEVC specification. For a B slice, two motion vectors, one for reference picture list 0 and the other for reference picture list 1, are obtained and combined to make the bi-predictive Merge candidate. Figure 13 illustrates the motion vector scaling for the temporal Merge candidate.
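The scaling by the POC distances tb and td can be sketched with HEVC-style fixed-point arithmetic. This is a simplified sketch: normative corner cases (e.g., long-term reference pictures) are abbreviated, and td is assumed positive.

```python
def scale_temporal_mv(mv, tb, td):
    """Scale a co-located PU's motion vector by the POC-distance ratio tb/td,
    following HEVC-style fixed-point scaling. Simplified sketch: td is
    assumed positive and normative special cases are omitted."""
    tx = (16384 + (abs(td) >> 1)) // td
    dist_scale = max(-4096, min(4095, (tb * tx + 32) >> 6))

    def scale_component(c):
        v = dist_scale * c
        sign = -1 if v < 0 else 1
        # Round away from zero on the magnitude, then clip to 16-bit range.
        return max(-32768, min(32767, sign * ((abs(v) + 127) >> 8)))

    return (scale_component(mv[0]), scale_component(mv[1]))
```

When tb equals td, the scaling is an identity; when the current picture is twice as close to its reference as the co-located picture is to its own, the vector is halved.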

In the co-located PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as depicted in Figure 14. If the PU at position C0 is not available, is intra coded, or is outside of the current CTU, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal Merge candidate.

2.2.1.4 Additional candidate insertion

Besides spatio-temporal Merge candidates, there are two additional types of Merge candidates: the combined bi-predictive Merge candidate and the zero Merge candidate. Combined bi-predictive Merge candidates are generated by utilizing spatio-temporal Merge candidates, and are used for B slices only. A combined bi-predictive candidate is generated by combining the first-reference-picture-list motion parameters of an initial candidate with the second-reference-picture-list motion parameters of another candidate. If these two tuples provide different motion hypotheses, they form a new bi-predictive candidate. As an example, Figure 15 depicts the case where two candidates in the original list (on the left), which have mvL0 and refIdxL0 or mvL1 and refIdxL1, are used to create a combined bi-predictive Merge candidate added to the final list (on the right). There are numerous rules regarding the combinations that are considered to generate these additional Merge candidates.

Zero-motion candidates are inserted to fill the remaining entries in the Merge candidate list and thereby reach the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index that starts from zero and is incremented every time a new zero-motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni- and bi-prediction, respectively. Finally, no redundancy check is performed on these candidates.

2.2.1.5 Motion estimation regions for parallel processing

To speed up the encoding process, motion estimation can be performed in parallel, whereby the motion vectors for all prediction units inside a given region are derived simultaneously. The derivation of Merge candidates from the spatial neighbourhood may interfere with parallel processing, because one prediction unit cannot derive motion parameters from an adjacent PU until its associated motion estimation is completed. To mitigate the trade-off between coding efficiency and processing latency, HEVC defines a motion estimation region (MER), whose size is signalled in the picture parameter set using the syntax element log2_parallel_merge_level_minus2 described below. When an MER is defined, Merge candidates falling into the same region are marked as unavailable and are therefore not considered in the list construction.

Picture parameter set raw byte sequence payload (RBSP) syntax

General picture parameter set RBSP syntax

pic_parameter_set_rbsp( ) {                                      Descriptor
    pps_pic_parameter_set_id                                     ue(v)
    pps_seq_parameter_set_id                                     ue(v)
    dependent_slice_segments_enabled_flag                        u(1)
    …
    pps_scaling_list_data_present_flag                           u(1)
    if( pps_scaling_list_data_present_flag )
        scaling_list_data( )
    lists_modification_present_flag                              u(1)
    log2_parallel_merge_level_minus2                             ue(v)
    slice_segment_header_extension_present_flag                  u(1)
    pps_extension_present_flag                                   u(1)
    …
    rbsp_trailing_bits( )
}

log2_parallel_merge_level_minus2 plus 2 specifies the value of the variable Log2ParMrgLevel, which is used in the derivation process for luma motion vectors for Merge mode as specified in clause 8.5.3.2.2, and in the derivation process for spatial merging candidates as specified in clause 8.5.3.2.3. The value of log2_parallel_merge_level_minus2 shall be in the range of 0 to CtbLog2SizeY − 2, inclusive.

The variable Log2ParMrgLevel is derived as follows:

Log2ParMrgLevel = log2_parallel_merge_level_minus2 + 2    (7-37)

NOTE 3 – The value of Log2ParMrgLevel indicates the built-in capability of parallel derivation of the merging candidate lists. For example, when Log2ParMrgLevel is equal to 6, the merging candidate lists for all the prediction units (PUs) and coding units (CUs) contained in a 64×64 block can be derived in parallel.
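The region test implied by Log2ParMrgLevel can be sketched as below. The helper is hypothetical; coordinates are the top-left luma-sample positions of the two blocks, and with Log2ParMrgLevel equal to 6 the MER is a 64×64 region.

```python
def same_mer(x_a, y_a, x_b, y_b, log2_par_mrg_level):
    """Two blocks lie in the same motion estimation region (MER) when their
    top-left luma coordinates agree after dropping Log2ParMrgLevel low bits.
    Illustrative helper, not the normative availability derivation."""
    shift = log2_par_mrg_level
    return (x_a >> shift, y_a >> shift) == (x_b >> shift, y_b >> shift)
```

A neighbouring candidate whose block satisfies this test against the current PU would be marked unavailable for Merge list construction, as described above.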

2.2.2 Motion vector prediction in AMVP mode

Motion vector prediction exploits the spatio-temporal correlation of motion vectors with neighbouring PUs, and is used for the explicit transmission of motion parameters. A motion vector candidate list is constructed by first checking the availability of left, above, and temporally neighbouring PU positions, removing redundant candidates, and adding a zero vector to make the candidate list a constant length. The encoder can then select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to Merge index signalling, the index of the best motion vector candidate is encoded using a truncated unary. The maximum value to be encoded in this case is 2 (see, e.g., Figures 2 to 8). The following sections provide details on the derivation process of motion vector prediction candidates.

2.2.2.1 Derivation of motion vector prediction candidates

Figure 16 summarizes the derivation process for motion vector prediction candidates.

In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates. For spatial motion vector candidate derivation, two motion vector candidates are eventually derived based on the motion vectors of PUs located at the five different positions depicted in Figure 11.

For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
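A minimal sketch of these list rules follows, with candidates modelled as plain MV tuples. This is illustrative only: the removal of candidates with reference index larger than 1 is omitted, and the helper names are hypothetical.

```python
def build_amvp_list(spatial, temporal):
    """Sketch of the AMVP list rules above: spatial candidates with
    duplicates removed, one temporal candidate if room remains, then
    zero-MV padding so the list always holds exactly two entries."""
    cands = []
    for mv in spatial:
        if mv not in cands:
            cands.append(mv)
    if len(cands) < 2 and temporal:
        cands.append(temporal[0])
    while len(cands) < 2:
        cands.append((0, 0))
    return cands[:2]
```

Because the list length is constant (two entries), the predictor index can always be signalled with the truncated unary code whose maximum value is 2, as noted above.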

2.2.2.2 Spatial motion vector candidates

In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located at the positions depicted in Figure 11; those positions are the same as those of motion merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as motion vector candidates: two cases that do not require spatial scaling, and two cases where spatial scaling is used. The four different cases are summarized as follows:

-- No spatial scaling

(1) Same reference picture list, and same reference picture index (same POC)

(2) Different reference picture list, but same reference picture (same POC)

-- Spatial scaling

(3) Same reference picture list, but different reference picture (different POC)

(4) Different reference picture list, and different reference picture (different POC)

The no-spatial-scaling cases are checked first, followed by the spatial-scaling cases. Spatial scaling is considered whenever the POC differs between the reference picture of the neighbouring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to aid the parallel derivation of the left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.

In a spatial scaling process, the motion vector of the neighbouring PU is scaled in a similar manner as for temporal scaling, as depicted in Figure 17. The main difference is that the reference picture list and the index of the current PU are given as input; the actual scaling process is the same as that of temporal scaling.

2.2.2.3 Temporal motion vector candidates

Apart from the reference picture index derivation, the whole derivation process for temporal Merge candidates is the same as that for spatial motion vector candidates (see, e.g., Figure 6). The reference picture index is signalled to the decoder.

2.2.2.4 Signalling of AMVP information

For AMVP mode, four parts can be signalled in the bitstream: the prediction direction, the reference index, the MVD, and the MV predictor candidate index.

Syntax table:

prediction_unit( x0, y0, nPbW, nPbH ) {                          Descriptor
    if( cu_skip_flag[ x0 ][ y0 ] ) {
        if( MaxNumMergeCand > 1 )
            merge_idx[ x0 ][ y0 ]                                ae(v)
    } else { /* MODE_INTER */
        merge_flag[ x0 ][ y0 ]                                   ae(v)
        if( merge_flag[ x0 ][ y0 ] ) {
            if( MaxNumMergeCand > 1 )
                merge_idx[ x0 ][ y0 ]                            ae(v)
        } else {
            if( slice_type  = =  B )
                inter_pred_idc[ x0 ][ y0 ]                       ae(v)
            if( inter_pred_idc[ x0 ][ y0 ]  !=  PRED_L1 ) {
                if( num_ref_idx_l0_active_minus1 > 0 )
                    ref_idx_l0[ x0 ][ y0 ]                       ae(v)
                mvd_coding( x0, y0, 0 )
                mvp_l0_flag[ x0 ][ y0 ]                          ae(v)
            }
            if( inter_pred_idc[ x0 ][ y0 ]  !=  PRED_L0 ) {
                if( num_ref_idx_l1_active_minus1 > 0 )
                    ref_idx_l1[ x0 ][ y0 ]                       ae(v)
                if( mvd_l1_zero_flag  &&  inter_pred_idc[ x0 ][ y0 ]  = =  PRED_BI ) {
                    MvdL1[ x0 ][ y0 ][ 0 ] = 0
                    MvdL1[ x0 ][ y0 ][ 1 ] = 0
                } else
                    mvd_coding( x0, y0, 1 )
                mvp_l1_flag[ x0 ][ y0 ]                          ae(v)
            }
        }
    }
}

Motion vector difference syntax:

mvd_coding( x0, y0, refList ) {                                  Descriptor
    abs_mvd_greater0_flag[ 0 ]                                   ae(v)
    abs_mvd_greater0_flag[ 1 ]                                   ae(v)
    if( abs_mvd_greater0_flag[ 0 ] )
        abs_mvd_greater1_flag[ 0 ]                               ae(v)
    if( abs_mvd_greater0_flag[ 1 ] )
        abs_mvd_greater1_flag[ 1 ]                               ae(v)
    if( abs_mvd_greater0_flag[ 0 ] ) {
        if( abs_mvd_greater1_flag[ 0 ] )
            abs_mvd_minus2[ 0 ]                                  ae(v)
        mvd_sign_flag[ 0 ]                                       ae(v)
    }
    if( abs_mvd_greater0_flag[ 1 ] ) {
        if( abs_mvd_greater1_flag[ 1 ] )
            abs_mvd_minus2[ 1 ]                                  ae(v)
        mvd_sign_flag[ 1 ]                                       ae(v)
    }
}

2.3 New inter prediction methods in the Joint Exploration Model (JEM)

2.3.1 Sub-CU based motion vector prediction

In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the co-located reference picture. In the spatial-temporal motion vector prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighbouring motion vectors.

To preserve a more accurate motion field for sub-CU motion prediction, motion compression for the reference frames is currently disabled.

2.3.1.1 Alternative temporal motion vector prediction

In the alternative temporal motion vector prediction (ATMVP) method, temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in Figure 18, the sub-CUs are square N×N blocks (N is set to 4 by default).

ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps. The first step is to identify the corresponding block in a reference picture with a so-called temporal vector; the reference picture is called the motion source picture. The second step is to split the current CU into sub-CUs and obtain the motion vector as well as the reference index of each sub-CU from the block corresponding to that sub-CU, as shown in Figure 18.

In the first step, the reference picture and the corresponding block are determined by the motion information of the spatial neighbouring blocks of the current CU. To avoid a repetitive scanning process over neighbouring blocks, the first Merge candidate in the Merge candidate list of the current CU is used. The first available motion vector and its associated reference index are set to be the temporal vector and the index of the motion source picture. This way, in ATMVP, the corresponding block may be identified more accurately than with TMVP, where the corresponding block (sometimes called the collocated block) is always in a bottom-right or centre position relative to the current CU. In one example, if the first Merge candidate is from the left neighbouring block (i.e., A1 in Figure 19), the associated MV and reference picture are utilized to identify the source block and source picture.

Figure 19 shows an example of the identification of the source block and source picture.

In the second step, the corresponding block of a sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the centre sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU in the same way as the TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition is fulfilled (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy for each sub-CU (with X being equal to 0 or 1 and Y being equal to 1 − X).
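The low-delay condition mentioned above reduces to a comparison of POC values, as in this illustrative helper:

```python
def low_delay(cur_poc, ref_pocs):
    """The low-delay condition above: every reference picture of the current
    picture precedes it in output order (all reference POCs < current POC).
    Illustrative helper, not the normative derivation."""
    return all(p < cur_poc for p in ref_pocs)
```

Only when this condition holds may the list-X motion vector be reused to predict the list-Y motion vector of a sub-CU, as described above.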

2.3.1.2 Spatial-temporal motion vector prediction

In this method, the motion vectors of the sub-CUs are derived recursively, following raster scan order. Figure 20 illustrates this concept. Consider an 8×8 CU that contains four 4×4 sub-CUs A, B, C, and D. The neighbouring 4×4 blocks in the current frame are labelled a, b, c, and d.

子CU A的運動推導由識別其兩個空間鄰居開始。第一個鄰居是子CU A上方的N×N塊(塊c)。如果該塊c不可用或是幀內編碼的,則檢查子CU A上方的其它N×N塊(從左到右,從塊c處開始)。第二個鄰居是子CU A左側的一個塊(塊b)。如果塊b不可用或是幀內編碼的,則檢查子CU A左側的其它塊(從上到下,從塊b處開始)。每個清單從相鄰塊獲得的運動資訊被縮放到給定清單的第一個參考幀。接下來,按照HEVC中規定的與TMVP相同的程序,推導子塊A的時域運動向量預測(TMVP)。提取位置D處的並置塊的運動資訊並進行相應的縮放。最後,在檢索和縮放運動資訊後,對每個參考列表分別平均所有可用的運動向量(最多3個)。將平均運動向量指定為當前子CU的運動向量。The motion derivation of sub-CU A starts by identifying its two spatial neighbours. The first neighbour is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbour is a block to the left of sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighbouring blocks for each list is scaled to the first reference frame of the given list. Next, the temporal motion vector prediction (TMVP) of sub-block A is derived by following the same procedure as the TMVP specified in HEVC. The motion information of the collocated block at position D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
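作為示意,上述每參考列表的平均步驟可以用如下Python草圖表示(假設運動向量已縮放到給定列表的第一個參考幀;函數名與整數除法的取整方式均為示例性假設,並非規範定義)。As an illustrative sketch, the per-list averaging step above can be expressed as follows (assuming the motion vectors are already scaled to the first reference frame of the given list; the function name and the integer-division rounding are illustrative assumptions, not normative).

```python
def stmvp_average(mvs):
    """Average up to three available motion vectors for one reference list.

    mvs: list of (mvx, mvy) tuples in fixed-point MV units.
    Returns the averaged MV, or None when no candidate is available.
    """
    if not mvs:
        return None  # no motion information available for this list
    sx = sum(mv[0] for mv in mvs)
    sy = sum(mv[1] for mv in mvs)
    n = len(mvs)
    # simple truncating average; a real codec defines rounding exactly
    return (sx // n, sy // n)
```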

圖20示出了具有四個子塊(A-D)及其相鄰塊(a-d)的一個CU的示例。Fig. 20 shows an example of one CU with four sub-blocks (A-D) and its neighboring blocks (a-d).

2.3.1.32.3.1.3 子CU運動預測模式信令通知Sub-CU motion prediction mode signaling

子CU模式作為附加的Merge候選模式啟用,並且不需要附加的語法元素來對該模式發信令。將另外兩個Merge候選添加到每個CU的Merge候選列表中,以表示ATMVP模式和STMVP模式。如果序列參數集指示啟用了ATMVP和STMVP,則最多使用七個Merge候選。附加Merge候選的編碼邏輯與HM中的Merge候選的編碼邏輯相同,這意味著對於P條帶或B條帶中的每個CU,需要對兩個附加Merge候選進行兩次額外的RD檢查。The sub-CU mode is enabled as an additional Merge candidate mode, and no additional syntax elements are required to signal the mode. The other two Merge candidates are added to the Merge candidate list of each CU to indicate the ATMVP mode and the STMVP mode. If the sequence parameter set indicates that ATMVP and STMVP are enabled, a maximum of seven Merge candidates are used. The coding logic of the additional Merge candidate is the same as the coding logic of the Merge candidate in the HM, which means that for each CU in the P slice or the B slice, two additional RD checks need to be performed on the two additional Merge candidates.

在JEM中,Merge索引的所有bin都由CABAC進行上下文編碼。然而在HEVC中,只有第一個bin是上下文編碼的,並且其餘的bin是上下文旁路編碼的。In the JEM, all bins of the Merge index are context coded by CABAC. However, in HEVC, only the first bin is context coded and the remaining bins are context bypass coded.

2.3.22.3.2 自我調整運動向量差解析度Self-adjusting motion vector difference resolution

在HEVC中,當在條帶標頭中use_integer_mv_flag等於0時,運動向量差(MVD)(在PU的運動向量和預測運動向量之間)以四分之一亮度樣本為單位發信令。在JEM中,引入了局部自我調整運動向量解析度(LAMVR)。在JEM中,MVD可以用四分之一亮度樣本、整數亮度樣本或四亮度樣本的單位進行編碼。MVD解析度控制在編碼單元(CU)級別,並且MVD解析度標誌有條件地為每個至少有一個非零MVD分量的CU發信令。In HEVC, when use_integer_mv_flag is equal to 0 in the slice header, the motion vector difference (MVD) (between the motion vector of the PU and the predicted motion vector) is signaled in units of a quarter luminance sample. In JEM, local self-adjusting motion vector resolution (LAMVR) is introduced. In JEM, MVD can be coded in units of one-quarter luminance samples, integer luminance samples, or four luminance samples. The MVD resolution is controlled at the coding unit (CU) level, and the MVD resolution flag conditionally signals each CU with at least one non-zero MVD component.

對於具有至少一個非零MVD分量的CU,第一個標誌將發信令以指示CU中是否使用四分之一亮度樣本MV精度。當第一個標誌(等於1)指示不使用四分之一亮度樣本MV精度時,另一個標誌發信令以指示是使用整數亮度樣本MV精度還是使用四亮度樣本MV精度。For a CU with at least one non-zero MVD component, the first flag will be signaled to indicate whether a quarter-luminance sample MV accuracy is used in the CU. When the first flag (equal to 1) indicates that one-quarter luminance sample MV accuracy is not used, the other flag signals whether to use integer luminance sample MV accuracy or four luminance sample MV accuracy.

當CU的第一個MVD解析度標誌為零或沒有為CU編碼(意味著CU中的所有MVD都為零)時,CU使用四分之一亮度樣本MV解析度。當一個CU使用整數亮度樣本MV精度或四亮度樣本MV精度時,該CU的AMVP候選列表中的MVP將取整到對應的精度。When the first MVD resolution flag of the CU is zero or is not coded for the CU (meaning that all MVDs in the CU are zero), the CU uses a quarter luminance sample MV resolution. When a CU uses integer luma sample MV precision or four luma sample MV precision, the MVP in the AMVP candidate list of the CU will be rounded to the corresponding precision.
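下面的草圖示意了將AMVP候選清單中的MVP取整到所選MVD精度的過程(MV以四分之一亮度樣本為單位存儲;「向遠離零的一半取整」僅為示例性假設)。The sketch below illustrates rounding an MVP in the AMVP candidate list to the selected MVD precision (MVs are stored in quarter-luma-sample units; the round-half-away-from-zero convention is an illustrative assumption).

```python
def round_mvp(mv, precision):
    """Round an MV in quarter-luma-sample units to the target precision.

    precision: 'quarter' (1 unit), 'integer' (4 units) or 'four' (16 units).
    """
    step = {'quarter': 1, 'integer': 4, 'four': 16}[precision]

    def r(v):
        if step == 1:
            return v  # quarter-sample precision: no rounding needed
        sign = 1 if v >= 0 else -1
        # round half away from zero onto the chosen grid
        return sign * ((abs(v) + step // 2) // step) * step

    return (r(mv[0]), r(mv[1]))
```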

在編碼器中,CU級別的RD檢查用於確定哪個MVD解析度將用於CU。也就是說,對每個MVD解析度各執行一次CU級別的RD檢查,共執行三次。為了加快編碼器速度,在JEM中應用以下編碼方案。In the encoder, CU-level RD checks are used to determine which MVD resolution will be used for a CU. That is, the CU-level RD check is performed three times, once for each MVD resolution. In order to speed up the encoder, the following coding scheme is applied in the JEM.

在對具有正常四分之一亮度採樣MVD解析度的CU進行RD檢查期間,存儲當前CU(整數亮度採樣精度)的運動資訊。在對具有整數亮度樣本和4 亮度樣本MVD解析度的同一個CU進行RD檢查時,將存儲的運動資訊(取整後)用作進一步小範圍運動向量細化的起始點,從而使耗時的運動估計處理不會重複三次。During the RD inspection of a CU with a normal quarter luminance sampling MVD resolution, the motion information of the current CU (integer luminance sampling accuracy) is stored. When performing RD inspection on the same CU with integer luminance samples and 4 luminance sample MVD resolutions, the stored motion information (rounded) is used as the starting point for further refinement of small-range motion vectors, which makes it time-consuming The motion estimation process will not be repeated three times.

有條件地調用具有4 亮度樣本MVD解析度的CU的RD檢查。對於CU,當整數亮度樣本MVD解析度的RD檢查成本遠大於四分之一亮度樣本MVD解析度的RD檢查成本時,將跳過對CU的4 亮度樣本MVD解析度的RD檢查。Conditionally invoke the RD check of the CU with the MVD resolution of 4 luminance samples. For the CU, when the RD check cost of the MVD resolution of the integer luminance sample is much greater than the RD check cost of the quarter luminance sample MVD resolution, the RD check of the 4 luminance sample MVD resolution of the CU will be skipped.

2.3.32.3.3 模式匹配運動向量推導Pattern matching motion vector derivation

模式匹配運動向量推導(PMMVD)模式是基於畫面播放速率上轉換(FRUC)技術的特殊Merge模式。在這種模式下,塊的運動資訊不會被發信令,而是在解碼器側推導。Pattern matching motion vector derivation (PMMVD) mode is a special Merge mode based on frame rate up-conversion (FRUC) technology. In this mode, the motion information of the block will not be signaled, but is derived at the decoder side.

對於CU,當其Merge標誌為真時,對FRUC標誌發信令。當FRUC標誌為假時,對Merge索引發信令並且使用常規Merge模式。當FRUC標誌為真時,對另一個FRUC模式標誌發信令來指示將使用哪種模式(雙邊匹配或範本匹配)來推導該塊的運動資訊。For CU, when its Merge flag is true, the FRUC flag is signaled. When the FRUC flag is false, the Merge index is signaled and the regular Merge mode is used. When the FRUC flag is true, another FRUC mode flag is signaled to indicate which mode (bilateral matching or template matching) will be used to derive the motion information of the block.

在編碼器側,基於對正常Merge候選所做的RD成本選擇決定是否對CU使用FRUC Merge模式。即通過使用RD成本選擇來檢查CU的兩個匹配模式(雙邊匹配和範本匹配)。導致最低成本的模式進一步與其它CU模式相比較。如果FRUC匹配模式是最有效的模式,那麼對於CU,FRUC標誌設置為真,並且使用相關的匹配模式。On the encoder side, it is determined whether to use the FRUC Merge mode for the CU based on the RD cost selection made for the normal Merge candidates. That is, the two matching modes of CU (bilateral matching and template matching) are checked by using RD cost selection. The mode that results in the lowest cost is further compared with other CU modes. If the FRUC matching mode is the most effective mode, then for the CU, the FRUC flag is set to true and the relevant matching mode is used.

FRUC Merge模式中的運動推導過程有兩個步驟:首先執行CU級運動搜索,然後執行子CU級運動優化。在CU級,基於雙邊匹配或範本匹配,推導整個CU的初始運動向量。首先,生成一個MV候選列表,並且選擇導致最低匹配成本的候選作為進一步優化CU級的起點。然後在起始點附近執行基於雙邊匹配或範本匹配的局部搜索,並且將最小匹配成本的MV結果作為整個CU的MV值。接著,以推導的CU運動向量為起點,進一步在子CU級細化運動資訊。The motion derivation process in FRUC Merge mode has two steps: first, perform CU-level motion search, and then perform sub-CU-level motion optimization. At the CU level, based on bilateral matching or template matching, the initial motion vector of the entire CU is derived. First, a list of MV candidates is generated, and the candidate that leads to the lowest matching cost is selected as the starting point for further optimization of the CU level. Then a local search based on bilateral matching or template matching is performed near the starting point, and the MV result with the smallest matching cost is taken as the MV value of the entire CU. Then, using the derived CU motion vector as a starting point, the motion information is further refined at the sub-CU level.

例如,對於W×H CU運動資訊推導執行以下推導過程。在第一階段,推導整個W×H CU的MV。在第二階段,該CU進一步被分成M×M子CU。M的值按照公式(3)計算,D是預先定義的劃分深度,在JEM中默認設置為3。然後推導每個子CU的MV值。For example, the following derivation process is performed for W×H CU motion information derivation. In the first stage, the MV of the whole W×H CU is derived. In the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in (3), where D is a predefined splitting depth, set to 3 by default in the JEM. Then the MV of each sub-CU is derived.

M = max{4, min{W/2^D, H/2^D}} (3)
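根據JEM演算法描述,公式(3)可寫為M = max{4, min{W/2^D, H/2^D}};下面的草圖(僅為示意)據此計算子CU尺寸。Following the JEM algorithm description, equation (3) can be written as M = max{4, min{W/2^D, H/2^D}}; the sketch below (illustrative only) computes the sub-CU size accordingly.

```python
def fruc_subcu_size(w, h, d=3):
    """Sub-CU size M for FRUC sub-CU level refinement.

    M = max(4, min(W >> D, H >> D)), with splitting depth D = 3 by default.
    """
    return max(4, min(w >> d, h >> d))
```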

如圖21所示,通過沿當前CU的運動軌跡在兩個不同的參考圖片中找到兩個塊之間最接近的匹配,使用雙邊匹配來推導當前CU的運動資訊。在連續運動軌跡假設下,指向兩個參考塊的運動向量MV0和MV1與當前圖片和兩個參考圖片之間的時間距離(即,TD0和TD1)成正比。作為特殊情況,當當前圖片在時間上位於兩個參考圖片之間並且當前圖片到兩個參考圖片的時間距離相同時,雙邊匹配成為基於鏡像的雙向MV。As shown in FIG. 21, bilateral matching is used to derive the motion information of the current CU by finding the closest match between two blocks in two different reference pictures along the motion trajectory of the current CU. Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks are proportional to the temporal distances between the current picture and the two reference pictures (i.e., TD0 and TD1). As a special case, when the current picture is temporally between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, bilateral matching becomes a mirror-based bidirectional MV.

如圖22所示,通過在當前圖片中的範本(當前CU的頂部和/或左側相鄰塊)和參考圖片中的塊(與範本尺寸相同)之間找到最接近的匹配,使用範本匹配來推導當前CU的運動資訊。除了上述的FRUC Merge模式外,範本匹配也應用於AMVP模式。在JEM中,正如在HEVC中一樣,AMVP有兩個候選。利用範本匹配方法,推導新的候選。如果由範本匹配新推導的候選與第一個現有AMVP候選不同,則將其插入AMVP候選列表的最開始處,並且然後將列表尺寸設置為2(即移除第二個現有AMVP候選)。當應用於AMVP模式時,僅應用CU級搜索。As shown in FIG. 22, template matching is used to derive the motion information of the current CU by finding the closest match between a template in the current picture (the top and/or left neighbouring blocks of the current CU) and a block in a reference picture (the same size as the template). Apart from the aforementioned FRUC Merge mode, template matching is also applied to AMVP mode. In the JEM, as in HEVC, AMVP has two candidates. With the template matching method, a new candidate is derived. If the newly derived candidate from template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list and the list size is then set to two (i.e., the second existing AMVP candidate is removed). When applied to AMVP mode, only the CU-level search is applied.

2.3.3.12.3.3.1 CU級MV候選集CU-level MV candidate set

CU級的MV候選集包括:The CU-level MV candidate set includes:

(i)原始AMVP候選,如果當前CU處於AMVP模式,(I) Original AMVP candidate, if the current CU is in AMVP mode,

(ii)所有Merge候選,(Ii) All Merge candidates,

(iii)插值MV場中的幾個MV。(Iii) Interpolate several MVs in the MV field.

(iv)頂部和左側相鄰運動向量(Iv) Top and left adjacent motion vectors

當使用雙邊匹配時,Merge候選的每個有效MV用作輸入,以生成假設為雙邊匹配的MV對。例如,Merge候選在參考列表A處的一個有效MV為(MVa,refa )。然後在另一個參考列表B中找到其配對的雙邊MV的參考圖片refb ,以便refa 和refb 在時間上位於當前圖片的不同側。如果參考列表B中的參考refb 不可用,則將參考refb 確定為與參考refa 不同的參考,並且其到當前圖片的時間距離是清單B中的最小距離。確定參考refb 後,通過基於當前圖片和參考refa 、參考refb 之間的時間距離縮放MVa推導MVb。When using bilateral matching, each valid MV of the Merge candidate is used as input to generate MV pairs that are assumed to be bilateral matching. For example, a valid MV of the Merge candidate at the reference list A is (MVa, ref a ). Then find the reference picture ref b of its paired bilateral MV in another reference list B, so that ref a and ref b are located on different sides of the current picture in time. If the reference ref b in the reference list B is not available, the reference ref b is determined to be a different reference from the reference ref a , and its time distance to the current picture is the smallest distance in the list B. After the reference ref b is determined, the MVb is derived by scaling MVa based on the time distance between the current picture and the reference ref a and the reference ref b .
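基於時間距離的MV縮放可示意如下(時間距離為帶符號的POC差;實際標準使用定點縮放,這裡以整數除法近似,僅為示例)。The temporal-distance-based MV scaling can be sketched as follows (temporal distances are signed POC differences; the real standard uses fixed-point scaling, approximated here with integer division for illustration only).

```python
def scale_mv(mva, td_a, td_b):
    """Scale MVa, valid over temporal distance td_a, to distance td_b.

    Under the linear-motion assumption, MVb = MVa * td_b / td_a.
    """
    return (mva[0] * td_b // td_a, mva[1] * td_b // td_a)
```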

還將來自插值MV場中的四個MV添加到CU級候選列表中。更具體地,添加當前CU的位置(0,0),(W/2,0),(0,H/2)和(W/2,H/2)處插值的MV。Four MVs from the interpolated MV field are also added to the CU-level candidate list. More specifically, the interpolated MVs at positions (0, 0), (W/2, 0), (0, H/2), and (W/2, H/2) of the current CU are added.

當在AMVP模式下應用FRUC時,原始的AMVP候選也添加到CU級的MV候選集。When FRUC is applied in AMVP mode, the original AMVP candidates are also added to the MV candidate set at the CU level.

在CU級,可以將AMVP CU的最多15個 MV和Merge CU的最多13個 MV添加到候選列表中。At the CU level, up to 15 MVs of AMVP CU and up to 13 MVs of Merge CU can be added to the candidate list.

2.3.3.22.3.3.2 子CU級MV候選集Sub-CU level MV candidate set

在子CU級設置的MV候選包括:The MV candidates set at the sub-CU level include:

(i)從CU級搜索確定的MV,(I) MV determined from CU-level search,

(ii)頂部、左側、左上方和右上方相鄰的MV,(Ii) MVs adjacent to the top, left, top left, and top right,

(iii)來自參考圖片的並置MV的縮放版本,(Iii) A scaled version of the collocated MV from the reference picture,

(iv)最多4個ATMVP候選,(Iv) Up to 4 ATMVP candidates,

(v)最多4個STMVP候選。(V) Up to 4 STMVP candidates.

來自參考圖片的縮放MV推導如下。兩個清單中的所有參考圖片都被遍歷。參考圖片中子CU的並置位置處的MV被縮放為起始CU級MV的參考。The zoomed MV from the reference picture is derived as follows. All reference pictures in the two lists are traversed. The MV at the collocated position of the sub-CU in the reference picture is scaled to the reference of the starting CU-level MV.

ATMVP和STMVP候選被限制為前四個。在子CU級,最多17個MV被添加到候選列表中。The ATMVP and STMVP candidates are limited to the first four. At the sub-CU level, up to 17 MVs are added to the candidate list.

2.3.3.32.3.3.3 插值MV場的生成Generation of the interpolated MV field

在對幀進行編碼之前,基於單向ME生成整個圖片的內插運動場。然後,該運動場可以隨後用作CU級或子CU級的MV候選。Before encoding the frame, an interpolated motion field of the entire picture is generated based on the one-way ME. Then, the sports field can be subsequently used as a CU-level or sub-CU-level MV candidate.

首先,兩個參考清單中每個參考圖片的運動場在4×4的塊級別上被遍歷。對於每個4×4塊,如果與塊相關聯的運動通過當前圖片中的4×4塊(如圖23所示),並且該塊沒有被分配任何內插運動,則根據時間距離TD0和TD1將參考塊的運動縮放到當前圖片(與HEVC中TMVP的MV縮放相同),並且在當前幀中將該縮放運動指定給該塊。如果沒有縮放的MV指定給4×4塊,則在插值運動場中將塊的運動標記為不可用。First, the motion field of each reference picture in both reference lists is traversed at the 4×4 block level. For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture (as shown in Figure 23) and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block's motion is marked as unavailable in the interpolated motion field.

2.3.3.42.3.3.4 插值和匹配成本Interpolation and matching cost

當運動向量指向分數採樣位置時,需要運動補償插值。為了降低複雜度,對雙邊匹配和範本匹配都使用雙線性插值而不是常規的8抽頭HEVC插值。When the motion vector points to the fractional sampling position, motion compensation interpolation is required. In order to reduce the complexity, bilinear interpolation is used for both bilateral matching and template matching instead of conventional 8-tap HEVC interpolation.

匹配成本的計算在不同的步驟處有點不同。當從CU級的候選集中選擇候選時,匹配成本是雙邊匹配或範本匹配的絕對和差(SAD)。在確定起始MV後,雙邊匹配在子CU級搜索的匹配成本C計算如下:The calculation of the matching cost is a bit different at different steps. When selecting candidates from the CU-level candidate set, the matching cost is the absolute sum difference (SAD) of bilateral matching or template matching. After the initial MV is determined, the matching cost C for bilateral matching at the sub-CU level is calculated as follows:

C = SAD + w × (|MVx − MVx^s| + |MVy − MVy^s|) (4)

這裡,w是權重係數,被經驗地設置為4;MV和MV^s分別指示當前MV和起始MV。仍然將SAD用作範本匹配在子CU級搜索的匹配成本。Here, w is a weighting factor that is empirically set to 4, and MV and MV^s indicate the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching at the sub-CU level search.
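公式(4)的匹配成本可以直接計算如下(SAD作為輸入給出;僅為示意)。The matching cost of equation (4) can be computed directly as follows (the SAD is given as an input; illustrative only).

```python
def bilateral_cost(sad, mv, mv_start, w=4):
    """Sub-CU level bilateral matching cost of equation (4):

    C = SAD + w * (|MVx - MVx_s| + |MVy - MVy_s|), with w set to 4.
    """
    return sad + w * (abs(mv[0] - mv_start[0]) + abs(mv[1] - mv_start[1]))
```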

在FRUC模式下,MV通過僅使用亮度樣本推導。推導的運動將用於亮度和彩度的MC幀間預測。確定MV後,對亮度使用8抽頭(8-taps)插值濾波器並且對彩度使用4抽頭(4-taps)插值濾波器執行最終MC。In FRUC mode, MV is derived by using only luminance samples. The derived motion will be used for MC inter prediction of luma and chroma. After determining the MV, an 8-tap (8-taps) interpolation filter is used for luminance and a 4-tap (4-taps) interpolation filter is used for chroma to perform the final MC.

2.3.3.52.3.3.5 MV細化MV refinement

MV細化是基於模式的MV搜索,以雙邊成本或範本匹配成本為標準。在JEM中,支援兩種搜索模式—無限制中心偏置菱形搜索(UCBDS)和自我調整交叉搜索,分別在CU級別和子CU級別進行MV細化。對於CU級和子CU級的MV細化,都在四分之一亮度樣本精度下直接搜索MV,接著是八分之一亮度樣本MV細化。將CU和子CU步驟的MV細化的搜索範圍設置為8 個亮度樣本。MV refinement is a pattern-based MV search, using bilateral cost or template matching cost as the standard. In JEM, two search modes are supported—unlimited center-biased diamond search (UCBDS) and self-adjusting cross search. MV refinement is performed at the CU level and the sub-CU level respectively. For the MV refinement at the CU level and the sub-CU level, the MV is directly searched at the accuracy of a quarter of the luminance sample, followed by the MV refinement of the one-eighth luminance sample. Set the search range of MV refinement in the CU and sub-CU steps to 8 luminance samples.

2.3.3.62.3.3.6 範本匹配FRUC Merge模式下預測方向的選擇Selection of prediction direction in template matching FRUC Merge mode

在雙邊Merge模式下,總是應用雙向預測,因為CU的運動資訊是在兩個不同的參考圖片中基於當前CU運動軌跡上兩個塊之間的最近匹配得出的。範本匹配Merge模式沒有這種限定。在範本匹配Merge模式下,編碼器可以從清單0的單向預測、列表1的單向預測或者雙向預測中為CU做出選擇。該選擇基於如下的範本匹配成本:In the bilateral Merge mode, two-way prediction is always applied, because the motion information of the CU is obtained based on the closest match between two blocks on the current CU motion trajectory in two different reference pictures. There is no such limitation for the template matching Merge mode. In the template matching Merge mode, the encoder can make a choice for the CU from the one-way prediction of List 0, the one-way prediction of List 1, or the two-way prediction. This selection is based on the following template matching costs:

如果 costBi<=factor*min(cost0,cost1)If costBi<=factor*min(cost0,cost1)

則使用雙向預測;Bidirectional prediction is used;

否則,如果 cost0<=cost1Otherwise, if cost0<=cost1

則使用列表0中的單向預測;Then use the one-way prediction in list 0;

否則,otherwise,

使用列表1中的單向預測;Use the one-way forecast in Listing 1;

其中cost0是列表0範本匹配的SAD,cost1是列表1範本匹配的SAD,並且costBi是雙向預測範本匹配的SAD。factor的值等於1.25,意味著選擇處理朝雙向預測偏移。幀間預測方向選擇可以僅應用於CU級範本匹配處理。Here, cost0 is the SAD of list 0 template matching, cost1 is the SAD of list 1 template matching, and costBi is the SAD of bi-prediction template matching. The value of factor is equal to 1.25, which means that the selection process is biased towards bi-prediction. The inter prediction direction selection is only applied to the CU-level template matching process.
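按照JEM中的規則(當costBi <= factor × min(cost0, cost1)時選用雙向預測),上述選擇可概括為如下草圖(返回值名稱為示例)。Following the rule in the JEM (bi-prediction is chosen when costBi <= factor × min(cost0, cost1)), the selection above can be sketched as follows (the return-value names are illustrative).

```python
def select_direction(cost0, cost1, cost_bi, factor=1.25):
    """Choose the prediction direction in template matching FRUC Merge mode.

    factor = 1.25 biases the decision towards bi-prediction.
    """
    if cost_bi <= factor * min(cost0, cost1):
        return 'bi'      # bi-prediction
    if cost0 <= cost1:
        return 'list0'   # uni-prediction from list 0
    return 'list1'       # uni-prediction from list 1
```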

2.3.42.3.4 解碼器側運動向量細化Decoder side motion vector refinement

在雙向預測操作中,對於一個塊區域的預測,將兩個分別由列表0的運動向量(MV)和列表1的MV形成的預測塊組合形成單個預測信號。在解碼器側運動向量細化(DMVR)方法中,通過雙邊範本匹配處理進一步細化雙向預測的兩個運動向量。解碼器中應用的雙邊範本匹配用於在雙邊範本和參考圖片中的重建樣本之間執行基於失真的搜索,以便在不傳輸附加運動資訊的情況下獲得細化的MV。In the bidirectional prediction operation, for the prediction of a block region, two prediction blocks respectively formed by the motion vector (MV) of list 0 and the MV of list 1 are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of bidirectional prediction are further refined through bilateral template matching processing. The bilateral template matching applied in the decoder is used to perform a distortion-based search between the bilateral template and the reconstructed samples in the reference picture in order to obtain a refined MV without transmitting additional motion information.

在DMVR中,雙邊範本被生成為兩個預測塊的加權組合(即平均),其中兩個預測塊分別來自列表0的初始MV0和列表1的MV1。範本匹配操作包括計算生成的範本與參考圖片中的樣本區域(在初始預測塊周圍)之間的成本度量。對於兩個參考圖片中的每一個,產生最小範本成本的MV被視為該列表的更新MV,以替換原始MV。在JEM中,為每個列表搜索九個MV候選。九個MV候選包括原始MV和8個周邊MV,這八個周邊MV在水準或垂直方向上或兩者與原始MV具有一個亮度樣本的偏移。最後,使用圖24所示的兩個新的MV(即MV0′和MV1′)生成最終的雙向預測結果。絕對差異之和(SAD)被用作成本度量。In DMVR, the bilateral template is generated as a weighted combination (ie, average) of two prediction blocks, where the two prediction blocks are from the initial MV0 in list 0 and MV1 in list 1. The template matching operation includes calculating the cost metric between the generated template and the sample area (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that produces the smallest template cost is regarded as the updated MV of the list to replace the original MV. In JEM, nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 peripheral MVs, and the eight peripheral MVs have a luminance sample offset from the original MV in the horizontal or vertical direction or both. Finally, use the two new MVs shown in Figure 24 (namely MV0' and MV1') to generate the final bidirectional prediction result. The sum of absolute differences (SAD) is used as a cost metric.
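每個列表搜索的九個MV候選(原始MV及其八個偏移一個亮度樣本的鄰居)可以如下枚舉(僅為示意)。The nine MV candidates searched per list (the original MV and its eight neighbours at one-luma-sample offsets) can be enumerated as follows (illustrative only).

```python
def dmvr_candidates(mv):
    """Enumerate the nine DMVR search candidates around an initial MV.

    Offsets of -1, 0, +1 luma samples in each direction, original included.
    """
    x, y = mv
    return [(x + dx, y + dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```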

在不傳輸附加語法元素的情況下,將DMVR應用於雙向預測的Merge模式,其中一個MV來自過去的參考圖片,並且另一個MV來自未來的參考圖片。在JEM中,當為CU啟用LIC、仿射運動、FRUC或子CU Merge候選時,不應用DMVR。Without transmitting additional syntax elements, DMVR is applied to the Merge mode of bidirectional prediction, where one MV is from a reference picture in the past and the other MV is from a reference picture in the future. In JEM, when LIC, affine motion, FRUC, or sub-CU Merge candidates are enabled for CU, DMVR is not applied.

2.3.52.3.5 局部光照補償Local illumination compensation

局部光照補償(IC)基於用於光照改變的線性模型,使用縮放因數a和偏移b。並且針對每個幀間模式編碼的編碼單元(CU)自我調整地啟用或禁用局部光照補償。Local illumination compensation (IC) is based on a linear model for illumination changes, using a scaling factor a and an offset b. And for each inter-mode coding unit (CU), the local illumination compensation is enabled or disabled by self-adjustment.

當IC應用於CU時,採用最小平方誤差方法通過使用當前CU的相鄰樣點及其對應的參考樣點來推導參數a和b。更具體地,如圖25所示,使用CU的子採樣的(2:1子採樣)相鄰樣點和參考圖片中的(由當前CU或子CU的運動資訊識別的)對應樣點。IC參數被推導並分別應用於每個預測方向。When the IC is applied to the CU, the least square error method is adopted to derive the parameters a and b by using the neighboring samples of the current CU and the corresponding reference samples. More specifically, as shown in FIG. 25, the neighboring samples of the sub-sampling (2:1 sub-sampling) of the CU and the corresponding samples (identified by the motion information of the current CU or sub-CU) in the reference picture are used. IC parameters are derived and applied to each prediction direction separately.
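基於最小平方誤差從成對相鄰樣點推導參數a和b可示意如下(純Python浮點草圖;實際編解碼器使用定點運算和2:1子採樣,函數名為示例)。The least-squares derivation of parameters a and b from paired neighbouring samples can be sketched as follows (a pure-Python floating-point sketch; a real codec uses fixed-point arithmetic and 2:1 sub-sampling, and the function name is illustrative).

```python
def lic_params(cur, ref):
    """Fit the LIC linear model cur ≈ a * ref + b by least squares.

    cur: neighbouring samples of the current CU.
    ref: the corresponding reference samples.
    """
    n = len(cur)
    sx = sum(ref)
    sy = sum(cur)
    sxx = sum(r * r for r in ref)
    sxy = sum(r * c for r, c in zip(ref, cur))
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, 0.0  # degenerate neighbourhood: fall back to identity
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```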

當以Merge模式對CU進行編碼時,以與Merge模式中的運動資訊複製類似的方式從相鄰塊複製IC標誌;否則,對CU信令通知IC標誌以指示LIC是否適用。When the CU is encoded in the Merge mode, the IC flag is copied from adjacent blocks in a similar manner to the motion information copy in the Merge mode; otherwise, the IC flag is signaled to the CU to indicate whether the LIC is applicable.

當針對圖片啟用IC時,需要附加CU級別RD檢查以確定是否將LIC應用於CU。當針對CU啟用IC時,對於整數像素運動搜索和分數像素運動搜索,分別使用均值移除絕對和差(Mean-Removed Sum of Absolute Difference,MR-SAD)以及均值移除絕對Hadamard變換和差(Mean-Removed Sum of Absolute Hadamard-Transformed Difference,MR-SATD),而不是SAD和SATD。When IC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied to a CU. When IC is enabled for a CU, the mean-removed sum of absolute difference (MR-SAD) and the mean-removed sum of absolute Hadamard-transformed difference (MR-SATD) are used, instead of SAD and SATD, for integer-pel motion search and fractional-pel motion search, respectively.

為了降低編碼複雜度,在JEM中應用以下編碼方案。當當前圖片與其參考圖片之間沒有明顯的光照改變時,針對全部圖片禁用IC。為了識別這種情況,在編碼器處計算當前圖片的長條圖和當前圖片的每個參考圖片。如果當前圖片與當前圖片的每個參考圖片之間的長條圖差異小於給定閾值,則針對當前圖片禁用IC;否則,針對當前圖片啟用IC。In order to reduce coding complexity, the following coding scheme is applied in JEM. When there is no obvious lighting change between the current picture and its reference picture, IC is disabled for all pictures. In order to recognize this situation, the bar graph of the current picture and each reference picture of the current picture are calculated at the encoder. If the bar graph difference between the current picture and each reference picture of the current picture is less than a given threshold, IC is disabled for the current picture; otherwise, IC is enabled for the current picture.

2.3.62.3.6 具有雙邊匹配細化的Merge/跳過模式Merge/Skip mode with bilateral matching refinement

首先通過利用冗餘檢查將空間相鄰和時間相鄰塊的運動向量和參考索引插入候選清單中來構造Merge候選列表,直到可用候選的數量達到最大候選尺寸19。通過根據預定義的插入順序,在插入空間候選(圖26)、時間候選、仿射候選、高級時間MVP(Advanced Temporal MVP,ATMVP)候選、時空MVP(Spatial Temporal,STMVP)候選和HEVC中使用的附加候選(組合候選和零候選)來構造Merge/跳過模式的Merge候選清單:First, the Merge candidate list is constructed by inserting the motion vectors and reference indexes of the spatially adjacent and temporally adjacent blocks into the candidate list by using redundancy check, until the number of available candidates reaches the maximum candidate size of 19. By inserting spatial candidates (Figure 26), temporal candidates, affine candidates, advanced temporal MVP (Advanced Temporal MVP, ATMVP) candidates, spatiotemporal MVP (Spatial Temporal, STMVP) candidates and HEVC according to the predefined insertion order, Additional candidates (combined candidates and zero candidates) are added to construct the Merge candidate list of Merge/Skip mode:

(1)塊1-4的空間候選(1) Spatial candidates for blocks 1-4

(2)塊1-4的外推(extrapolated)仿射候選(2) Extrapolated affine candidates for blocks 1-4

(3)ATMVP(3) ATMVP

(4)STMVP(4) STMVP

(5)虛擬仿射候選(5) Virtual affine candidate

(6)空間候選(塊5)(僅當可用候選的數量小於6時使用)(6) Spatial candidates (block 5) (only used when the number of available candidates is less than 6)

(7)外推仿射候選(塊5)(7) Extrapolate Affine Candidates (Block 5)

(8)時間候選(如在HEVC中推導的)(8) Time candidate (as derived in HEVC)

(9)非鄰近空間候選,其後是外推仿射候選(塊6至49)(9) Non-adjacent spatial candidates, followed by extrapolated affine candidates (blocks 6 to 49)

(10)組合候選(10) Combination candidates

(11)零候選(11) Zero candidates

注意到,除了STMVP和仿射之外,IC標誌也從Merge候選繼承。而且,對於前四個空間候選,在具有單向預測的候選之前插入雙向預測候選。Note that in addition to STMVP and affine, the IC flag is also inherited from the Merge candidate. Also, for the first four spatial candidates, bi-directional prediction candidates are inserted before the candidates with uni-directional prediction.

2.3.7 JVET-K01612.3.7 JVET-K0161

在本提議中,提出了非子塊STMVP作為空時Merge模式。所提出的方法使用並置塊,其與HEVC/JEM(僅1個圖片,此處沒有時間向量)相同。所提出的方法還檢查上方和左側的空間位置,這些位置在本提議中被調整。具體地,為了檢查相鄰的幀間預測資訊,對於上方和左側各檢查最多兩個位置。確切的位置如圖27所示。In this proposal, a non-sub-block STMVP is proposed as a spatial-temporal Merge mode. The proposed method uses a collocated block, the same as in HEVC/JEM (only 1 picture, no temporal vector here). The proposed method also checks above and left spatial positions, which are adjusted in this proposal. Specifically, to check neighbouring inter prediction information, at most two positions are checked for each of above and left. The exact positions are shown in Figure 27.

Afar: (nPbW * 5 / 2, -1), Amid (nPbW / 2, -1)  (注意:當前塊上方的空間塊的偏移量)Afar: (nPbW * 5 / 2, -1), Amid (nPbW / 2, -1) (Note: offsets of the spatial blocks above the current block)

Lfar: (-1, nPbH * 5 / 2), Lmid (-1, nPbH/2)   (注意:當前塊左側的空間塊的偏移量)Lfar: (-1, nPbH * 5 / 2), Lmid (-1, nPbH/2) (Note: offsets of the spatial blocks to the left of the current block)

上方塊、左側塊和時間塊的運動向量的平均值的計算方式與BMS軟體實現方式相同。如果3個參考幀間預測塊可用,則:The average of the motion vectors of the above block, the left block and the temporal block is calculated in the same way as in the BMS software implementation. If 3 reference inter-prediction blocks are available, then:

mvLX[0] = ((mvLX_A[0] + mvLX_L[0] + mvLX_C[0]) * 43) / 128mvLX[0] = ((mvLX_A[0] + mvLX_L[0] + mvLX_C[0]) * 43) / 128

mvLX[1] = ((mvLX_A[1] + mvLX_L[1] + mvLX_C[1]) * 43) / 128mvLX[1] = ((mvLX_A[1] + mvLX_L[1] + mvLX_C[1]) * 43) / 128

如果僅有兩個或一個幀間預測塊可用,則使用兩個的平均值或者僅使用一個mv。If only two or one inter prediction blocks are available, the average of the two is used or only one mv is used.
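上述平均(包括三個MV可用時的43/128定點近似)可示意如下(取整方式為示例性假設)。The averaging above (including the 43/128 fixed-point approximation when three MVs are available) can be sketched as follows (the rounding convention is an illustrative assumption).

```python
def k0161_average(mvs):
    """Average 1 to 3 motion vectors as in the JVET-K0161 description.

    With three MVs: (sum * 43) / 128, a fixed-point approximation of /3.
    With two MVs: the plain average. With one MV: that MV itself.
    """
    sx = sum(mv[0] for mv in mvs)
    sy = sum(mv[1] for mv in mvs)
    if len(mvs) == 3:
        return ((sx * 43) // 128, (sy * 43) // 128)
    if len(mvs) == 2:
        return (sx // 2, sy // 2)
    return (sx, sy)  # single available MV
```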

2.3.8 JVET-K01352.3.8 JVET-K0135

為了生成平滑的精細運動場,圖28給出了平面運動向量預測過程的簡要描述。In order to generate a smooth, fine-granularity motion field, Figure 28 gives a brief description of the planar motion vector prediction process.

通過如下在4×4塊的基礎上對水準和垂直線性插值求平均,來實現平面運動向量預測。The plane motion vector prediction is realized by averaging the horizontal and vertical linear interpolation on the basis of 4×4 blocks as follows.

P(x,y) = (H × Ph(x,y) + W × Pv(x,y) + H × W) / (2 × H × W)

W和H表示塊的寬度和高度。(x,y)是當前子塊相對於左上角子塊的座標。所有距離由像素距離除以4表示。P(x,y)是當前子塊的運動向量。W and H represent the width and the height of the block. (x,y) is the coordinate of the current sub-block relative to the top-left sub-block. All distances are represented by the pixel distance divided by 4. P(x,y) is the motion vector of the current sub-block.

位置(x,y)的水準預測Ph(x,y)和垂直預測Pv(x,y)計算如下:The horizontal prediction Ph(x,y) and the vertical prediction Pv(x,y) for position (x,y) are calculated as follows:

Ph(x,y) = (W − 1 − x) × L(−1,y) + (x + 1) × R(W,y)

Pv(x,y) = (H − 1 − y) × A(x,−1) + (y + 1) × B(x,H)

其中L(−1,y)和R(W,y)是當前塊左側和右側的4×4塊的運動向量,A(x,−1)和B(x,H)是當前塊上方和底部的4×4塊的運動向量。Here, L(−1,y) and R(W,y) are the motion vectors of the 4×4 blocks to the left and right of the current block, and A(x,−1) and B(x,H) are the motion vectors of the 4×4 blocks above and below the current block.

從當前塊的空間相鄰塊推導出左側列和上方行相鄰塊的參考運動資訊。The reference motion information of the adjacent blocks in the left column and the upper row is derived from the spatial adjacent blocks of the current block.

右側列和底部行相鄰塊的參考運動資訊如下推導出。The reference motion information of adjacent blocks in the right column and bottom row is derived as follows.

推導右下方時間相鄰4×4塊的運動資訊Derive motion information of adjacent 4×4 blocks at the bottom right

使用推導出的右下方相鄰4×4塊的運動資訊以及右上方相鄰4×4塊的運動資訊,來計算右側列相鄰4×4塊的運動向量,如公式K1中所描述。Use the derived motion information of the lower right adjacent 4×4 block and the motion information of the upper right adjacent 4×4 block to calculate the motion vector of the adjacent 4×4 block in the right column, as described in formula K1.

使用推導出的右下方相鄰4×4塊的運動資訊以及左下方相鄰4×4塊的運動資訊,來計算底部行相鄰4×4塊的運動向量,如公式K2中所描述。Use the derived motion information of the lower right adjacent 4×4 block and the lower left adjacent 4×4 block to calculate the motion vector of the bottom row adjacent 4×4 block, as described in formula K2.

R(W,y) = ((H − y − 1) × AR + (y + 1) × BR) / H 公式K1 (Formula K1)

B(x,H) = ((W − x − 1) × BL + (x + 1) × BR) / W 公式K2 (Formula K2)

其中AR 是右上方空間相鄰4×4塊的運動向量,BR 是右下方時間相鄰4×4塊的運動向量,並且BL 是左下方空間相鄰4×4塊的運動向量。 AR is the motion vector of the adjacent 4×4 block in the upper right space, BR is the motion vector of the adjacent 4×4 block in the lower right space, and BL is the motion vector of the adjacent 4×4 block in the lower left space.

對於每個清單從相鄰塊獲得的運動資訊被縮放到給定清單的第一參考圖片。For each list, the motion information obtained from adjacent blocks is scaled to the first reference picture of a given list.
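綜合上述公式,位置(x,y)處子塊的平面運動向量預測可示意如下(L、R、A、B為返回相鄰4×4塊運動向量的示例性回呼函數;整數除法取整僅為近似,並非規範定義)。Putting the equations above together, the planar MV prediction for the sub-block at (x, y) can be sketched as follows (L, R, A, B are illustrative callbacks returning the MVs of the neighbouring 4×4 blocks; the integer-division rounding is an approximation, not normative).

```python
def planar_mv(W, H, x, y, L, R, A, B):
    """Planar motion vector prediction for a 4x4 sub-block at (x, y).

    Ph(x,y) = (W-1-x)*L(-1,y) + (x+1)*R(W,y)
    Pv(x,y) = (H-1-y)*A(x,-1) + (y+1)*B(x,H)
    P(x,y)  = (H*Ph + W*Pv + H*W) / (2*H*W)
    """
    def mul(s, v):
        return (s * v[0], s * v[1])

    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    ph = add(mul(W - 1 - x, L(-1, y)), mul(x + 1, R(W, y)))
    pv = add(mul(H - 1 - y, A(x, -1)), mul(y + 1, B(x, H)))
    # the H*W term implements the rounding offset of the averaging
    num = add(add(mul(H, ph), mul(W, pv)), (H * W, H * W))
    return (num[0] // (2 * H * W), num[1] // (2 * H * W))
```

對於均勻運動場,預測應返回相同的運動向量,下面的用例即驗證這一點。For a uniform motion field the prediction should return the same MV, which the usage below checks.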

3.3. 通過本文公開的實施例解決的問題的示例Examples of problems solved by the embodiments disclosed herein

發明人先前已提出基於查閱資料表的運動向量預測技術,其使用存儲有至少一個運動候選的一個或多個查閱資料表以預測塊的運動資訊,其可以在各種實施例中實現以提供具有更高編碼效率的視頻編碼。每個LUT可以包括一個或多個運動候選,每個運動候選與對應的運動資訊相關聯。運動候選的運動資訊可包括預測方向、參考索引/圖片、運動向量、LIC標誌、仿射標誌、運動向量差(MVD)精度和/或MVD值。運動資訊還可以包括塊位置資訊,以指示運動資訊來自哪裡。The inventor has previously proposed a motion vector prediction technology based on a look-up table, which uses one or more look-up tables storing at least one motion candidate to predict the motion information of a block. It can be implemented in various embodiments to provide more Video coding with high coding efficiency. Each LUT may include one or more motion candidates, and each motion candidate is associated with corresponding motion information. The motion information of the motion candidate may include prediction direction, reference index/picture, motion vector, LIC flag, affine flag, motion vector difference (MVD) accuracy and/or MVD value. The motion information may also include block position information to indicate where the motion information comes from.

基於所公開的技術的基於LUT的運動向量預測可以增強現有和未來的視頻編碼標準,其在以下針對各種實現方式所描述的示例中闡明。因為LUT允許基於歷史資料(例如,已經被處理的塊)執行編碼/解碼過程,所以基於LUT的運動向量預測也可以被稱為基於歷史的運動向量預測(HMVP)方法。在基於LUT的運動向量預測方法中,在編碼/解碼過程期間保持具有來自先前被編碼的塊的運動資訊的一個或多個表。存儲在LUT中的這些運動候選被命名為HMVP候選。在一個塊的編碼/解碼期間,可以將LUT中的相關聯的運動資訊添加到運動候選清單(例如,Merge/ AMVP候選列表),並且在對一個塊進行編碼/解碼之後,可以更新LUT。然後使用更新後的LUT來編碼後續塊。也就是說,LUT中的運動候選的更新基於塊的編碼/解碼順序。以下示例應被視為解釋一般概念的示例。不應以狹窄的方式解釋這些示例。此外,這些實例可以以任何方式組合。The LUT-based motion vector prediction based on the disclosed technology can enhance existing and future video coding standards, which are illustrated in the examples described below for various implementations. Because the LUT allows the encoding/decoding process to be performed based on historical data (for example, blocks that have been processed), the LUT-based motion vector prediction may also be referred to as a history-based motion vector prediction (HMVP) method. In the LUT-based motion vector prediction method, one or more tables with motion information from previously encoded blocks are maintained during the encoding/decoding process. These motion candidates stored in the LUT are named HMVP candidates. During encoding/decoding of a block, the associated motion information in the LUT may be added to the motion candidate list (for example, Merge/AMVP candidate list), and after encoding/decoding a block, the LUT may be updated. The updated LUT is then used to encode subsequent blocks. That is, the update of the motion candidate in the LUT is based on the encoding/decoding order of the block. The following examples should be regarded as examples explaining general concepts. These examples should not be interpreted in a narrow way. In addition, these examples can be combined in any manner.
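The HMVP table maintenance described above can be sketched as follows. The table capacity and the exact pruning rule (remove an identical older entry, then append the newest candidate, with first-in-first-out eviction) are assumptions made for illustration:

```python
def update_hmvp_table(table, motion_info, max_size=6):
    """Append the motion information of a just-coded block to the HMVP table.
    An identical existing entry is removed first (pruning), and the oldest
    entry is discarded when the assumed capacity max_size is exceeded."""
    if motion_info in table:
        table.remove(motion_info)   # keep candidates unique
    table.append(motion_info)       # most recent candidate goes to the end
    if len(table) > max_size:
        table.pop(0)                # first-in-first-out eviction
    return table
```

Because updates follow the block coding order, the tail of the table always holds the motion information of the most recently coded blocks.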

一些實施例可以使用存儲有至少一個運動候選的一個或多個查閱資料表,以預測塊的運動資訊。實施例可以使用運動候選來指示存儲在查閱資料表中的一組運動資訊。對於傳統的AMVP或Merge模式,實施例可以使用AMVP或Merge候選來存儲運動資訊。Some embodiments may use one or more lookup tables storing at least one motion candidate to predict the motion information of the block. The embodiment may use motion candidates to indicate a set of motion information stored in the lookup table. For the traditional AMVP or Merge mode, embodiments may use AMVP or Merge candidates to store motion information.

儘管當前的基於LUT的運動向量預測技術通過使用歷史資料克服了HEVC的缺點,但是,僅考慮來自空間相鄰塊的資訊。Although the current LUT-based motion vector prediction technology overcomes the shortcomings of HEVC by using historical data, it only considers information from spatially adjacent blocks.

當將來自LUT的運動候選用於AMVP或Merge列表構建過程時,直接繼承它而不做任何改變。When the motion candidate from the LUT is used in the AMVP or Merge list building process, it is directly inherited without any change.

JVET-K0161的設計有益於編碼性能。然而,它需要額外推導TMVP,這增加了計算複雜性和記憶體頻寬。The design of JVET-K0161 is beneficial to coding performance. However, it requires additional derivation of TMVP, which increases computational complexity and memory bandwidth.

4. 一些示例 Some examples

以下示例應被視為解釋一般概念的示例。不應以狹窄的方式解釋這些示例。此外,這些實例可以以任何方式組合。The following examples should be regarded as examples explaining general concepts. These examples should not be interpreted in a narrow way. In addition, these examples can be combined in any manner.

使用當前公開的技術的一些實施例可以聯合使用來自LUT的運動候選和來自時間相鄰塊的運動資訊。此外，還提出了JVET-K0161的複雜性降低。Some embodiments using the presently disclosed technology may jointly use motion candidates from the LUT and motion information from temporally neighboring blocks. In addition, a complexity reduction of JVET-K0161 is also proposed.

利用來自LUT的運動候選 Utilizing motion candidates from the LUT

1. 提出通過利用來自LUT的運動候選來構造新的AMVP /Merge候選。1. Propose to construct new AMVP/Merge candidates by using motion candidates from LUT.

a. 在一個示例中,可以通過對來自LUT的運動候選的運動向量添加/減去偏移(或多個偏移),推導出新的候選。a. In an example, a new candidate can be derived by adding/subtracting an offset (or multiple offsets) to the motion vector of the motion candidate from the LUT.

b. 在一個示例中,可以通過對來自LUT的所選運動候選的運動向量求平均,推導出新的候選。b. In an example, a new candidate can be derived by averaging the motion vectors of the selected motion candidates from the LUT.

i. 在一個實施例中，可以在沒有除法運算的情況下近似地實現平均。例如，MVa、MVb和MVc可以被平均為 (MVa+MVb+MVc)×⌊2^N/3⌋/2^N 或 (MVa+MVb+MVc)×⌈2^N/3⌉/2^N。例如，當N = 7時，平均值是 (MVa+MVb+MVc)×42/128 或 (MVa+MVb+MVc)×43/128。請注意，⌊2^N/3⌋ 或 ⌈2^N/3⌉ 可以被預先計算並存儲在查閱資料表中。i. In one embodiment, the averaging can be approximated without a division operation. For example, MVa, MVb and MVc can be averaged as (MVa+MVb+MVc)×⌊2^N/3⌋/2^N or (MVa+MVb+MVc)×⌈2^N/3⌉/2^N. For example, when N = 7, the average is (MVa+MVb+MVc)×42/128 or (MVa+MVb+MVc)×43/128. Note that ⌊2^N/3⌋ or ⌈2^N/3⌉ can be precomputed and stored in a lookup table.

ii. 在一個示例中,僅選擇具有相同參考圖片(在兩個預測方向上)的運動向量。ii. In one example, only motion vectors with the same reference picture (in two prediction directions) are selected.

iii. 在一個示例中,預先確定每個預測方向上的參考圖片,並且如果必要,將運動向量縮放到預先確定的參考圖片。iii. In one example, the reference picture in each prediction direction is predetermined, and if necessary, the motion vector is scaled to the predetermined reference picture.

1. 在一個示例中,參考圖片清單X中的第一條目(X = 0或1)被選擇作為參考圖片。1. In an example, the first entry (X=0 or 1) in the reference picture list X is selected as the reference picture.

2. 可替代地,對於每個預測方向,選擇LUT中最頻繁使用的參考圖片作為參考圖片。2. Alternatively, for each prediction direction, the most frequently used reference picture in the LUT is selected as the reference picture.

c. 在一個示例中,對於每個預測方向,首先選擇具有與預先確定的參考圖片相同的參考圖片的運動向量,然後選擇其他運動向量。c. In one example, for each prediction direction, first select a motion vector having the same reference picture as a predetermined reference picture, and then select other motion vectors.
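The division-free averaging of example 1.b.i above can be sketched as follows. The ceiling variant of the precomputed factor and non-negative component sums are assumptions of this sketch; an implementation that must handle negative sums would add a sign-aware rounding step:

```python
def average3_no_division(mva, mvb, mvc, n_bits=7):
    """Approximate (mva + mvb + mvc) / 3 as sum * ceil(2^N / 3) >> N.
    With n_bits = 7 the factor is 43, i.e. sum * 43 / 128; non-negative
    sums are assumed so that the right shift matches integer division."""
    factor = ((1 << n_bits) + 2) // 3   # ceil(2^N / 3): 43 for N = 7
    return ((mva + mvb + mvc) * factor) >> n_bits
```

In hardware or fixed-point software, the factor would come from a small precomputed lookup table rather than being derived at run time, as the text above notes.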

2. 提出通過來自LUT的一個或多個運動候選和來自時間相鄰塊的運動資訊的函數來構造新的AMVP /Merge候選。2. Propose to construct a new AMVP/Merge candidate through a function of one or more motion candidates from the LUT and motion information from temporal neighboring blocks.

a. 在一個示例中,類似於STMVP或JVET-K0161,可以通過對來自LUT和TMVP的運動候選求平均,推導出新的候選。a. In an example, similar to STMVP or JVET-K0161, new candidates can be derived by averaging the motion candidates from LUT and TMVP.

b. 在一個示例中,上述塊(例如,圖27中的Amid和Afar)可以被來自LUT的一個或多個候選替換。可替代地,此外,其他過程可以保持不變,就像在JVET-K0161中已實現的那樣。b. In an example, the above-mentioned blocks (for example, Amid and Afar in Figure 27) can be replaced by one or more candidates from the LUT. Alternatively, in addition, other processes can remain unchanged, as already implemented in JVET-K0161.

3. 提出通過來自LUT的一個或多個運動候選、來自空間相鄰塊和/或空間非緊鄰的相鄰塊的AMVP/Merge候選、以及來自時間塊的運動資訊的函數,來構造新的AMVP /Merge候選。3. Propose a function of one or more motion candidates from the LUT, AMVP/Merge candidates from spatial neighboring blocks and/or spatially non-neighboring neighboring blocks, and motion information from temporal blocks to construct a new AMVP /Merge candidate.

a. 在一個示例中,上述塊中的一個或多個(例如,圖27中的Amid和Afar)可以被來自LUT的候選替換。可替代地,此外,其他過程可以保持不變,就像在JVET-K0161中已實現的那樣。a. In one example, one or more of the above blocks (for example, Amid and Afar in Figure 27) can be replaced by candidates from the LUT. Alternatively, in addition, other processes can remain unchanged, as already implemented in JVET-K0161.

b. 在一個示例中，左側塊中的一個或多個（例如，圖27中的Lmid和Lfar）可以被來自LUT的候選替換。可替代地，此外，其他過程可以保持不變，就像在JVET-K0161中已實現的那樣。b. In one example, one or more of the left blocks (for example, Lmid and Lfar in Figure 27) can be replaced by candidates from the LUT. Alternatively, in addition, other processes can remain unchanged, as already implemented in JVET-K0161.

4. 提出當將塊的運動資訊插入LUT時,是否對LUT中的現有條目進行修剪可以取決於塊的編碼模式。4. It is proposed that when the motion information of the block is inserted into the LUT, whether to trim the existing entries in the LUT may depend on the coding mode of the block.

a. 在一個示例中,如果以Merge模式對塊編碼,則不執行修剪。a. In one example, if the block is encoded in Merge mode, no trimming is performed.

b. 在一個示例中,如果以AMVP模式對塊編碼,則不執行修剪。b. In one example, if the block is coded in AMVP mode, no trimming is performed.

c. 在一個示例中,如果以AMVP /Merge模式對塊編碼,則僅對LUT的最新M個條目進行修剪。c. In an example, if the block is coded in AMVP/Merge mode, only the latest M entries of the LUT are trimmed.

d. 在一個示例中,當以子塊模式(例如,仿射或ATMVP)對塊編碼時,始終禁用修剪。d. In one example, when a block is encoded in a sub-block mode (for example, affine or ATMVP), trimming is always disabled.

5. 提出將來自時間塊的運動資訊添加到LUT。5. Propose adding motion information from the time block to the LUT.

a. 在一個示例中，運動資訊可以來自共位的塊。a. In one example, the motion information can come from a co-located block.

b. 在一個示例中，運動資訊可以來自不同參考圖片中的一個或多個塊。b. In one example, the motion information can come from one or more blocks in different reference pictures.

與STMVP相關 Related to STMVP

1. 提出始終使用空間Merge候選推導出新的Merge候選,而不考慮TMVP候選。1. It is proposed to always use spatial Merge candidates to derive new Merge candidates, regardless of TMVP candidates.

a. 在一個示例中,可以利用兩個運動Merge候選的平均值。a. In one example, the average of two motion Merge candidates can be used.

b. 在一個示例中，可以聯合使用空間Merge候選和來自LUT的運動候選推導出新的候選。b. In an example, a new candidate can be derived by jointly using spatial Merge candidates and motion candidates from the LUT.

2. 提出可以利用非緊鄰塊(其不是右或左相鄰塊)推導出STMVP候選。2. Propose that STMVP candidates can be derived using non-neighboring blocks (which are not right or left neighboring blocks).

a. 在一個示例中,用於STMVP候選推導的上方塊保持不變,而使用的左側塊從相鄰塊改變為非緊鄰塊。a. In one example, the upper block used for STMVP candidate derivation remains unchanged, while the left block used is changed from a neighboring block to a non-neighboring block.

b. 在一個示例中,用於STMVP候選推導的左側塊保持不變,而所使用的上方塊從相鄰塊改變為非緊鄰塊。b. In one example, the left block used for STMVP candidate derivation remains unchanged, while the upper block used is changed from a neighboring block to a non-neighboring block.

c. 在一個示例中,可以聯合使用非緊鄰塊的候選和來自LUT的運動候選推導出新的候選。c. In one example, a new candidate can be derived using the candidate of the non-neighboring block and the motion candidate from the LUT jointly.

3. 提出始終使用空間Merge候選推導出新的Merge候選,而不考慮TMVP候選。3. It is proposed to always use spatial Merge candidates to derive new Merge candidates without considering TMVP candidates.

a. 在一個示例中,可以利用兩個運動Merge候選的平均值。a. In one example, the average of two motion Merge candidates can be used.

b. 可替代地,可以利用來自與當前塊相鄰或不相鄰的不同位置的兩個、三個或更多MV的平均值。b. Alternatively, the average value of two, three or more MVs from different positions adjacent or not adjacent to the current block can be used.

i. 在一個實施例中,MV僅可以從當前LCU(也稱為CTU)中的位置獲取。i. In one embodiment, the MV can only be obtained from the location in the current LCU (also called CTU).

ii. 在一個實施例中,MV僅可以從當前LCU行中的位置獲取。ii. In one embodiment, the MV can only be obtained from the position in the current LCU row.

iii. 在一個實施例中，MV僅可以從當前LCU行中或挨著當前LCU行的位置獲取。圖29中示出了示例。塊A、B、C、D、E和F挨著當前LCU行。iii. In one embodiment, the MV can only be obtained from positions in the current LCU row or next to the current LCU row. An example is shown in Figure 29. Blocks A, B, C, D, E, and F are next to the current LCU row.

iv. 在一個實施例中，MV僅可以從當前LCU行中或挨著當前LCU行但不在左上角相鄰塊的左側的位置獲取。圖29中示出了示例。塊T是左上角相鄰塊。塊B、C、D、E和F挨著當前LCU行，但不在左上角相鄰塊的左側。iv. In one embodiment, the MV can only be obtained from positions in the current LCU row, or next to the current LCU row but not to the left of the top-left neighboring block. An example is shown in Figure 29. Block T is the top-left neighboring block. Blocks B, C, D, E, and F are next to the current LCU row but not to the left of the top-left neighboring block.

c. 在一個實施例中，可以聯合使用空間Merge候選和來自LUT的運動候選推導出新的候選。c. In one embodiment, a new candidate can be derived by jointly using spatial Merge candidates and motion candidates from the LUT.

4. 提出圖28中用於平面運動預測的BR塊的MV不是從時間MV預測獲取的,而是從LUT的一個條目獲取的。4. Propose that the MV of the BR block used for plane motion prediction in Fig. 28 is not obtained from temporal MV prediction, but from an entry in the LUT.

5. 提出來自LUT的運動候選可以與其他類型的Merge/ AMVP候選(例如,空間Merge/ AMVP候選、時間Merge/ AMVP候選、默認運動候選)聯合使用以推導出新的候選。5. Proposed motion candidates from the LUT can be used in conjunction with other types of Merge/AMVP candidates (for example, spatial Merge/AMVP candidates, temporal Merge/AMVP candidates, default motion candidates) to derive new candidates.

在本示例和本專利文件中公開的其他示例的各種實施方式中，修剪可以包括：a) 將運動資訊與現有條目進行唯一性比較，b) 如果唯一，則將運動資訊添加到清單，或者 c) 如果不唯一，則要麼 c1) 不添加運動資訊，要麼 c2) 添加運動資訊並刪除匹配的現有條目。在一些實現方式中，當將運動候選從表添加到候選列表時，不調用修剪操作。In various implementations of this example and other examples disclosed in this patent document, pruning may include: a) comparing the motion information with existing entries for uniqueness, b) if unique, adding the motion information to the list, or c) if not unique, either c1) not adding the motion information, or c2) adding the motion information and removing the matching existing entry. In some implementations, the pruning operation is not invoked when a motion candidate is added from a table to the candidate list.
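The pruning alternatives above can be sketched as follows. Exposing the choice between c1) and c2) as a flag, and representing motion information as a tuple, are assumptions of this sketch:

```python
def prune_and_insert(table, motion_info, replace_on_match=True):
    """Uniqueness-based pruning: add if unique (b); on a duplicate, either
    skip it (c1, replace_on_match=False) or remove the matching entry and
    re-add the motion information as the newest one (c2)."""
    if motion_info not in table:      # a) + b): unique -> append
        table.append(motion_info)
    elif replace_on_match:            # c2): drop the match, re-add as newest
        table.remove(motion_info)
        table.append(motion_info)
    # else c1): the duplicate is silently dropped
    return table
```

Variant c2) has the side effect of refreshing the recency of a repeated candidate, which matters when the table is evicted in coding order.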

圖30是圖示可以用於實現本公開技術的各個部分的電腦系統或其它控制設備3000的結構的示例的示意圖。在圖30中，電腦系統3000包括通過內部連接3025連接的一個或多個處理器3005和記憶體3010。內部連接3025可以表示由適當的橋、適配器或控制器連接的任何一條或多條單獨的物理匯流排、點對點連接或兩者。因此，內部連接3025可以包括例如系統匯流排、周邊元件連接（PCI）匯流排、超傳輸或工業標準架構（ISA）匯流排、小型電腦系統介面（SCSI）匯流排、通用序列匯流排（USB）、IIC（I2C）匯流排或電氣與電子工程師協會（IEEE）標準1394匯流排（有時被稱為“火線”）。FIG. 30 is a schematic diagram illustrating an example of the structure of a computer system or other control device 3000 that can be used to implement various parts of the disclosed technology. In FIG. 30, the computer system 3000 includes one or more processors 3005 and a memory 3010 connected through an internal connection 3025. The internal connection 3025 may represent any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. Therefore, the internal connection 3025 may include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or Industry Standard Architecture (ISA) bus, a Small Computer System Interface (SCSI) bus, a Universal Serial Bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (sometimes called "FireWire").

處理器3005可以包括中央處理器(CPU),來控制例如主機的整體操作。在一些實施例中,處理器3005通過執行存儲在記憶體3010中的軟體或固件來實現這一點。處理器3005可以是或可以包括一個或多個可程式設計通用或專用微處理器、數位訊號處理器(DSP)、可程式設計控制器、專用積體電路(ASIC)、可程式設計邏輯器件(PLD)等,或這些器件的組合。The processor 3005 may include a central processing unit (CPU) to control, for example, the overall operation of the host. In some embodiments, the processor 3005 implements this by executing software or firmware stored in the memory 3010. The processor 3005 may be or may include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSP), programmable controllers, special integrated circuits (ASICs), programmable logic devices ( PLD), etc., or a combination of these devices.

記憶體3010可以是或包括電腦系統的主記憶體。記憶體3010表示任何適當形式的隨機存取記憶體(RAM)、唯讀記憶體(ROM)、快閃記憶體等,或這些設備的組合。在使用中,記憶體3010除其它外可包含一組機器指令,當處理器3005執行該指令時,使處理器3005執行操作以實現本公開技術的實施例。The memory 3010 may be or include the main memory of the computer system. The memory 3010 represents any suitable form of random access memory (RAM), read-only memory (ROM), flash memory, etc., or a combination of these devices. In use, the memory 3010 may contain, among other things, a set of machine instructions, and when the processor 3005 executes the instructions, the processor 3005 executes operations to implement the embodiments of the disclosed technology.

通過內部連接3025連接到處理器3005的還有(可選的)網路介面卡3015。網路介面卡3015為電腦系統3000提供與遠端設備(諸如存儲客戶機和/或其它存儲伺服器)通信的能力,並且可以是例如乙太網適配器或光纖通道適配器。Also connected to the processor 3005 via the internal connection 3025 is an (optional) network interface card 3015. The network interface card 3015 provides the computer system 3000 with the ability to communicate with remote devices (such as storage clients and/or other storage servers), and may be, for example, an Ethernet adapter or a fiber channel adapter.

圖31示出了可以用於實施本公開技術的各個部分的移動設備3100的示例實施例的框圖。移動設備3100可以是筆記型電腦、智慧手機、平板電腦、攝像機或其它能夠處理視頻的設備。移動設備3100包括處理器或控制器3101來處理資料,以及與處理器3101通信的記憶體3102來存儲和/或緩衝資料。例如,處理器3101可以包括中央處理器(CPU)或微控制器單元(MCU)。在一些實現中,處理器3101可以包括現場可程式設計閘陣列(FPGA)。在一些實現中,移動設備3100包括或與圖形處理單元(GPU)、視頻處理單元(VPU)和/或無線通訊單元通信,以實現智慧手機設備的各種視覺和/或通信資料處理功能。例如,記憶體3102可以包括並存儲處理器可執行代碼,當處理器3101執行該代碼時,將移動設備3100配置為執行各種操作,例如接收資訊、命令和/或資料、處理資訊和資料,以及將處理過的資訊/資料發送或提供給另一個資料設備,諸如執行器或外部顯示器。為了支援移動設備3100的各種功能,記憶體3102可以存儲資訊和資料,諸如指令、軟體、值、圖像以及處理器3101處理或引用的其它資料。例如,可以使用各種類型的隨機存取記憶體(RAM)設備、唯讀記憶體(ROM)設備、快閃記憶體設備和其它合適的存儲介質來實現記憶體3102的存儲功能。在一些實現中,移動設備3100包括輸入/輸出(I/O)單元3103,來將處理器3101和/或記憶體3102與其它模組、單元或設備進行介面。例如,I/O單元3103可以與處理器3101和記憶體3102進行介面,以利用與典型資料通信標準相容的各種無線介面,例如,在雲中的一台或多台電腦和使用者設備之間。在一些實現中,移動設備3100可以通過I/O單元3103使用有線連接與其它設備進行介面。移動設備3100還可以與其它外部介面(例如資料記憶體)和/或可視或音訊顯示裝置3104連接,以檢索和傳輸可由處理器處理、由記憶體存儲或由顯示裝置3104或外部設備的輸出單元上顯示的資料和資訊。例如,顯示裝置3104可以根據所公開的技術顯示包括基於該塊是否是使用運動補償演算法編碼的而應用幀內塊複製的塊(CU、PU或TU)的視頻幀。FIG. 31 shows a block diagram of an example embodiment of a mobile device 3100 that can be used to implement various parts of the disclosed technology. The mobile device 3100 may be a notebook computer, a smart phone, a tablet computer, a video camera, or other devices capable of processing video. The mobile device 3100 includes a processor or controller 3101 to process data, and a memory 3102 in communication with the processor 3101 to store and/or buffer data. For example, the processor 3101 may include a central processing unit (CPU) or a microcontroller unit (MCU). In some implementations, the processor 3101 may include a field programmable gate array (FPGA). In some implementations, the mobile device 3100 includes or communicates with a graphics processing unit (GPU), a video processing unit (VPU), and/or a wireless communication unit to implement various visual and/or communication data processing functions of the smartphone device. For example, the memory 3102 may include and store processor executable code. 
When the processor 3101 executes the code, the mobile device 3100 is configured to perform various operations, such as receiving information, commands and/or data, processing information and data, and Send or provide the processed information/data to another data device, such as an actuator or an external display. In order to support various functions of the mobile device 3100, the memory 3102 can store information and data, such as instructions, software, values, images, and other data processed or referenced by the processor 3101. For example, various types of random access memory (RAM) devices, read-only memory (ROM) devices, flash memory devices, and other suitable storage media may be used to implement the storage function of the memory 3102. In some implementations, the mobile device 3100 includes an input/output (I/O) unit 3103 to interface the processor 3101 and/or the memory 3102 with other modules, units, or devices. For example, the I/O unit 3103 can interface with the processor 3101 and the memory 3102 to utilize various wireless interfaces compatible with typical data communication standards, for example, among one or more computers and user equipment in the cloud. between. In some implementations, the mobile device 3100 can interface with other devices through the I/O unit 3103 using a wired connection. The mobile device 3100 can also be connected with other external interfaces (such as data memory) and/or a visual or audio display device 3104 for retrieval and transmission, which can be processed by the processor, stored by the memory, or output by the display device 3104 or external device. Data and information displayed on For example, the display device 3104 may display a video frame including a block (CU, PU, or TU) to which intra block copy is applied based on whether the block is encoded using a motion compensation algorithm according to the disclosed technology.

在一些實施例中,可以實現如本文所述的基於子塊的預測的方法的視頻解碼器裝置可用於視頻解碼。In some embodiments, a video decoder device that can implement the sub-block-based prediction method as described herein can be used for video decoding.

在一些實施例中,可以使用實現在如圖30和圖31所述的硬體平臺上的解碼裝置來實現視頻解碼方法。In some embodiments, a decoding device implemented on a hardware platform as described in FIG. 30 and FIG. 31 may be used to implement the video decoding method.

在本文件中公開的各種實施例和技術可以在以下示例的列表中描述。The various embodiments and techniques disclosed in this document can be described in the list of examples below.

圖32是根據當前公開的技術的用於視頻處理的示例方法3200的流程圖。方法3200包括,在操作3202,通過平均兩個或更多選擇的運動候選,確定用於視頻處理的新候選。方法3200包括,在操作3204,將所述新候選增加到候選列表。方法3200包括,在操作3206,通過使用所述候選列表中的確定的新候選,執行視頻的第一視頻塊和視頻的位元流表示之間的轉換。Figure 32 is a flowchart of an example method 3200 for video processing in accordance with the currently disclosed technology. The method 3200 includes, in operation 3202, determining a new candidate for video processing by averaging two or more selected motion candidates. The method 3200 includes, at operation 3204, adding the new candidate to a candidate list. The method 3200 includes, in operation 3206, performing a conversion between the first video block of the video and the bitstream representation of the video by using the determined new candidate in the candidate list.
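Operations 3202 and 3204 of method 3200 can be sketched as follows. Averaging exactly two candidates with round-half-up integer rounding is an illustrative assumption; the method also covers more than two candidates and other rounding choices:

```python
def derive_pairwise_average_candidate(candidate_list, mv_a, mv_b):
    """Operation 3202: average two selected motion candidates component-wise.
    Operation 3204: append the new candidate to the candidate list.
    MVs are (mvx, mvy) tuples; (a + b + 1) >> 1 is round-half-up averaging."""
    new_candidate = tuple((a + b + 1) >> 1 for a, b in zip(mv_a, mv_b))
    candidate_list.append(new_candidate)
    return new_candidate
```

The conversion of operation 3206 would then pick candidates out of the augmented list in the usual Merge/AMVP fashion.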

在一些實施例中,所述候選列表是Merge候選列表,以及確定的新候選是Merge候選。In some embodiments, the candidate list is a Merge candidate list, and the determined new candidate is a Merge candidate.

在一些實施例中,所述Merge候選列表是幀間預測Merge候選列表或幀內塊複製預測Merge候選列表。In some embodiments, the Merge candidate list is an inter prediction Merge candidate list or an intra block copy prediction Merge candidate list.

在一些實施例中,所述一個或多個表包括從視頻資料中所述第一視頻塊之前處理的在先處理視頻塊推導的運動候選。In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks that were processed before the first video block in the video material.

在一些實施例中,在所述候選列表中不存在可用的空間候選和時間候選。In some embodiments, there are no spatial and temporal candidates available in the candidate list.

在一些實施例中,所述選擇的運動候選來自一個或多個表。In some embodiments, the selected motion candidates are from one or more tables.

在一些實施例中,在沒有除法運算的情況下實現所述平均。In some embodiments, the averaging is achieved without a division operation.

在一些實施例中,通過所述選擇的運動候選的運動向量的和與縮放因數的乘法,實現所述平均。In some embodiments, the averaging is achieved through multiplication of the sum of the motion vectors of the selected motion candidates and the scaling factor.

在一些實施例中,將所述選擇的運動候選的運動向量的水準分量進行平均以推導新候選的水準分量。In some embodiments, the level components of the motion vectors of the selected motion candidates are averaged to derive the level components of the new candidate.

在一些實施例中,將所述選擇的運動候選的運動向量的垂直分量進行平均以推導新候選的垂直分量。In some embodiments, the vertical components of the motion vectors of the selected motion candidates are averaged to derive the vertical components of the new candidate.

在一些實施例中,所述縮放因數被預先計算並存儲在查閱資料表中。In some embodiments, the zoom factor is pre-calculated and stored in the lookup table.

在一些實施例中,僅選擇具有相同參考圖片的運動向量。In some embodiments, only motion vectors with the same reference picture are selected.

在一些實施例中，僅選擇在兩個預測方向上具有相同參考圖片的運動向量。In some embodiments, only motion vectors having the same reference pictures in both prediction directions are selected.

在一些實施例中,預先確定每個預測方向上的目標參考圖片,以及將所述運動向量縮放到預先確定的參考圖片。In some embodiments, the target reference picture in each prediction direction is predetermined, and the motion vector is scaled to the predetermined reference picture.

在一些實施例中,選擇參考圖片清單X中的第一條目作為用於參考圖片清單的目標參考圖片,X為0或1。In some embodiments, the first entry in the reference picture list X is selected as the target reference picture for the reference picture list, and X is 0 or 1.

在一些實施例中,對於每個預測方向,選擇表中最常使用的參考圖片作為目標參考圖片。In some embodiments, for each prediction direction, the most frequently used reference picture in the table is selected as the target reference picture.

在一些實施例中,對於每個預測方向,首先選擇具有與預先確定的目標參考圖片相同的參考圖片的運動向量,然後選擇其他運動向量。In some embodiments, for each prediction direction, a motion vector having the same reference picture as the predetermined target reference picture is selected first, and then other motion vectors are selected.
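The scaling of a motion vector to a predetermined target reference picture, as in the embodiments above, can be sketched using picture-order-count (POC) distances. This is the simplified floating-point form; standards such as HEVC use a clipped fixed-point multiplication instead:

```python
def scale_mv_to_target(mv, cur_poc, ref_poc, target_ref_poc):
    """Scale mv (pointing at ref_poc) so that it points at target_ref_poc,
    using the ratio of POC distances tb/td. mv is an (mvx, mvy) tuple."""
    td = cur_poc - ref_poc          # distance to the MV's own reference
    tb = cur_poc - target_ref_poc   # distance to the target reference
    if td == 0 or td == tb:
        return mv                   # nothing to scale
    return tuple(round(c * tb / td) for c in mv)
```

After scaling, all selected motion vectors refer to the same target picture (e.g. the first entry of reference picture list X), so they can be averaged meaningfully.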

在一些實施例中,來自表的運動候選與運動資訊關聯,所述運動資訊包括以下的至少一種:預測方向、參考圖片索引、運動向量值、強度補償標誌、仿射標誌、運動向量差精度或運動向量差值。In some embodiments, the motion candidates from the table are associated with motion information, and the motion information includes at least one of the following: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference accuracy, or Motion vector difference.

在一些實施例中,方法3200還包括基於所述轉換更新一個或多個表。In some embodiments, the method 3200 further includes updating one or more tables based on the conversion.

在一些實施例中,一個或多個表的更新包括在執行所述轉換後基於視頻的第一視頻塊的運動資訊更新一個或多個表。In some embodiments, the updating of the one or more tables includes updating the one or more tables based on the motion information of the first video block of the video after performing the conversion.

在一些實施例中,方法3200還包括基於更新的表,執行視頻的隨後視頻塊和視頻的位元流表示之間的轉換。In some embodiments, the method 3200 further includes performing a conversion between subsequent video blocks of the video and the bitstream representation of the video based on the updated table.

在一些實施例中,所述轉換包括編碼處理和/或解碼處理。In some embodiments, the conversion includes encoding processing and/or decoding processing.

在一些實施例中,視頻編碼裝置可以在為後續視頻重建視頻期間執行本文所述的方法2900和其他方法。In some embodiments, the video encoding device may perform the method 2900 described herein and other methods during reconstruction of the video for subsequent videos.

在一些實施例中,視頻系統中的裝置可以包括被配置為執行本文描述的方法的處理器。In some embodiments, the device in the video system may include a processor configured to perform the methods described herein.

在一些實施例中,所描述的方法可以體現為存儲在電腦可讀程式介質上的電腦可執行代碼。In some embodiments, the described method may be embodied as computer executable code stored on a computer readable program medium.

圖33是根據當前公開的技術的用於視頻處理的示例方法3300的流程圖。方法3300包括在操作3302，通過使用來自一個或多個表的一個或多個運動候選來確定用於視頻處理的新運動候選，其中每個表包括一個或多個運動候選，並且每個運動候選與關聯的運動資訊相關。方法3300包括在操作3304，基於新候選在視頻塊和視頻塊的編碼表示之間執行轉換。FIG. 33 is a flowchart of an example method 3300 for video processing according to the presently disclosed technology. The method 3300 includes, at operation 3302, determining a new motion candidate for video processing by using one or more motion candidates from one or more tables, where each table includes one or more motion candidates and each motion candidate is associated with motion information. The method 3300 includes, at operation 3304, performing a conversion between a video block and an encoded representation of the video block based on the new candidate.

在一些實施例中,通過對與來自所述一個或多個表的運動候選相關聯的運動向量加上或減去偏移,來推導出所述新的運動候選。In some embodiments, the new motion candidate is derived by adding or subtracting an offset to the motion vector associated with the motion candidate from the one or more tables.

在一些實施例中,確定新的運動候選包括:作為來自一個或多個表的一個或多個運動候選和來自時間相鄰塊的運動資訊的函數,來確定新的運動候選。In some embodiments, determining the new motion candidate includes determining the new motion candidate as a function of one or more motion candidates from one or more tables and motion information from temporal neighboring blocks.

在一些實施例中,確定新運動候選包括:對來自一個或多個表的運動候選和時間運動向量預測器進行平均。In some embodiments, determining the new motion candidate includes averaging the motion candidates from one or more tables and the temporal motion vector predictor.

在一些實施例中,對所選運動候選進行平均包括與所選運動候選相關聯的運動向量的加權平均或平均。In some embodiments, averaging the selected motion candidates includes a weighted average or average of the motion vectors associated with the selected motion candidates.

在一些實施例中,在沒有除法運算的情況下實現所述平均。In some embodiments, the averaging is achieved without a division operation.

在一些實施例中,通過來自所述一個或多個表的運動候選的運動向量之和與所述時間運動向量預測器與縮放因數的乘法運算,來實現所述平均。In some embodiments, the averaging is achieved by the sum of the motion vectors of the motion candidates from the one or more tables and the multiplication of the temporal motion vector predictor and the scaling factor.

在一些實施例中,對來自所述一個或多個表的運動候選的運動向量的水準分量與時間運動向量預測器進行平均,以推導出新的運動候選的水準分量。In some embodiments, the level component of the motion vector of the motion candidate from the one or more tables is averaged with the temporal motion vector predictor to derive the level component of the new motion candidate.

在一些實施例中,對所選擇的水準分量進行平均包括與所選運動候選相關聯的水準分量的加權平均或平均。In some embodiments, averaging the selected level components includes a weighted average or average of the level components associated with the selected motion candidate.

在一些實施例中,對所選擇的垂直分量進行平均包括與所選運動候選相關聯的垂直分量的加權平均或平均。In some embodiments, averaging the selected vertical components includes a weighted average or average of the vertical components associated with the selected motion candidate.

在一些實施例中,作為來自一個或多個表的一個或多個運動候選、來自空間相鄰塊和/或空間非緊鄰的相鄰塊的Merge候選、以及來自時間相鄰塊的運動資訊的函數,來確定新的運動候選。In some embodiments, as one or more motion candidates from one or more tables, Merge candidates from spatial neighboring blocks and/or spatially non-neighboring neighboring blocks, and motion information from temporal neighboring blocks Function to determine new motion candidates.

在一些實施例中,確定新候選者包括:作為來自一個或多個表的一個或多個運動候選、來自空間相鄰塊和/或空間非緊鄰的相鄰塊的高級運動向量預測(AMVP)候選的函數、以及來自時間相鄰塊的運動資訊,來確定新的運動候選。In some embodiments, determining new candidates includes: Advanced Motion Vector Prediction (AMVP) as one or more motion candidates from one or more tables, from spatial neighboring blocks and/or spatially non-neighboring neighboring blocks Candidate functions and motion information from temporal neighboring blocks are used to determine new motion candidates.

在一些實施例中,確定新候選者包括:作為來自一個或多個表的一個或多個運動候選、以及高級運動向量預測(AMVP)候選列表中的AMVP候選或Merge候選清單中的Merge候選的函數,來確定新的運動候選。In some embodiments, determining new candidates includes: one or more motion candidates from one or more tables, and AMVP candidates in the advanced motion vector prediction (AMVP) candidate list or Merge candidates in the Merge candidate list Function to determine new motion candidates.

在一些實施例中,將所述新的運動候選添加到Merge候選列表。In some embodiments, the new motion candidate is added to the Merge candidate list.

在一些實施例中,將所述新的運動候選添加到AMVP候選列表。In some embodiments, the new motion candidate is added to the AMVP candidate list.

在一些實施例中,一個或多個表中的每一個包括一組運動候選,其中每個運動候選與對應的運動資訊相關聯。In some embodiments, each of the one or more tables includes a set of motion candidates, where each motion candidate is associated with corresponding motion information.

在一些實施例中,運動候選與運動資訊相關聯,所述運動資訊包括以下的至少一種:預測方向、參考圖片索引、運動向量值、強度補償標誌、仿射標誌、運動向量差精度或運動向量差值。In some embodiments, the motion candidate is associated with motion information, and the motion information includes at least one of the following: prediction direction, reference picture index, motion vector value, intensity compensation flag, affine flag, motion vector difference accuracy, or motion vector Difference.

在一些實施例中,該方法還包括基於所述轉換更新一個或多個表。In some embodiments, the method further includes updating one or more tables based on the conversion.

在一些實施例中,更新一個或多個表包括在執行所述轉換之後基於第一視頻塊的運動資訊來更新一個或多個表。In some embodiments, updating the one or more tables includes updating the one or more tables based on the motion information of the first video block after performing the conversion.

在一些實施例中,該方法還包括基於更新後的表,執行視頻的後續視頻塊與視頻的位元流表示之間的轉換。In some embodiments, the method further includes performing a conversion between subsequent video blocks of the video and the bitstream representation of the video based on the updated table.

圖34是根據當前公開的技術的用於視頻處理的示例方法3400的流程圖。方法3400包括在操作3402,通過始終使用來自當前圖片中的第一視頻塊的多於一個空間相鄰塊的運動資訊,並且不使用來自與當前圖片不同的圖片中的時間塊的運動資訊,來確定用於視頻處理的新候選。方法3400包括在操作3404,通過使用所確定的新候選來執行視頻的當前圖片中的第一視頻塊與該視頻的位元流表示之間的轉換。Figure 34 is a flowchart of an example method 3400 for video processing in accordance with the currently disclosed technology. Method 3400 includes in operation 3402, by always using motion information from more than one spatial neighboring block of the first video block in the current picture, and not using motion information from temporal blocks in a picture different from the current picture. Identify new candidates for video processing. The method 3400 includes in operation 3404, performing a conversion between the first video block in the current picture of the video and the bitstream representation of the video by using the determined new candidate.

In some embodiments, the determined new candidate is added to a candidate list, the candidate list including a Merge candidate list or an advanced motion vector prediction (AMVP) candidate list.

In some embodiments, the motion information from the more than one spatial neighboring block is a candidate derived from predefined spatial neighboring blocks relative to the first video block in the current picture, or a motion candidate from one or more tables.

In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks that were processed before the first video block in the video data.

In some embodiments, the candidate derived from predefined spatial neighboring blocks relative to the first video block in the current picture is a spatial Merge candidate.

In some embodiments, the new candidate is derived by averaging at least two spatial Merge candidates.

In some embodiments, the new candidate is derived by jointly using a spatial Merge candidate and a motion candidate from the one or more tables.

In some embodiments, the new candidate is derived by averaging at least two motion vectors associated with candidates derived from different positions.
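A minimal sketch of the averaging described above, assuming both motion vectors are integer (mvx, mvy) pairs in the same sub-pel units and refer to the same reference picture (otherwise scaling would be needed first); the helper name is illustrative:

```python
def average_motion_vectors(mv_a, mv_b):
    """Derive a new candidate MV as the component-wise average of two MVs.

    mv_a, mv_b: (mvx, mvy) integer pairs, e.g. in quarter-pel units.
    The arithmetic right shift rounds toward negative infinity; a real
    codec would specify its own rounding rule, so this is an assumption.
    """
    return ((mv_a[0] + mv_b[0]) >> 1, (mv_a[1] + mv_b[1]) >> 1)
```

For example, averaging (4, -2) and (2, 6) yields the new candidate MV (3, 2).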

In some embodiments, the different positions are adjacent to the first video block.

In some embodiments, the motion vectors are acquired only from positions within the current largest coding unit to which the first video block belongs.

In some embodiments, the motion vectors are acquired only from positions in the current largest coding unit row.

In some embodiments, the motion vectors are acquired only from positions in, or next to, the current largest coding unit row.

In some embodiments, the motion vectors are acquired only from positions in, or next to, the current largest coding unit row, but not from positions to the left of the top-left neighboring block.

In some embodiments, the motion vector of the bottom-right block used for planar motion prediction is acquired not from a temporal motion vector prediction candidate but from an entry of the table.

In some embodiments, the new candidate is derived by jointly using motion candidates from the one or more tables and other kinds of Merge/AMVP candidates.

In some embodiments, a motion candidate in the one or more tables is associated with motion information, the motion information including at least one of the following: a prediction direction, a reference picture index, a motion vector value, an intensity compensation flag, an affine flag, a motion vector difference precision, or a motion vector difference value.

In some embodiments, the method further includes updating one or more tables based on the conversion.

In some embodiments, updating the one or more tables includes updating the one or more tables based on motion information of the first video block after performing the conversion.

In some embodiments, the method further includes performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.

In some embodiments, the conversion includes an encoding process and/or a decoding process.

FIG. 35 is a flowchart of an example method 3500 for video processing in accordance with the presently disclosed technology. The method 3500 includes, at operation 3502, determining a new candidate for video processing by using motion information from at least one spatial non-adjacent block of a first video block in a current picture, together with other candidates derived either from spatial non-adjacent blocks of the first video block or not from spatial non-adjacent blocks of the first video block. The method 3500 includes, at operation 3504, performing a conversion between the first video block of the video and the bitstream representation of the video by using the determined new candidate.

In some embodiments, the determined new candidate is added to a candidate list, the candidate list including a Merge or advanced motion vector prediction (AMVP) candidate list.

In some embodiments, the motion information from more than one spatial non-adjacent block is a candidate derived from predefined spatial non-adjacent blocks relative to the first video block in the current picture.

In some embodiments, the candidate derived from predefined spatial non-adjacent blocks relative to the first video block in the current picture is a spatial-temporal motion vector prediction (STMVP) candidate.

In some embodiments, a non-adjacent block of the video block is neither a right neighboring block nor a left neighboring block of the first video block.

In some embodiments, the above block used for STMVP candidate derivation of the first video block remains unchanged, while the left block that is used is changed from an adjacent block to a non-adjacent block.

In some embodiments, the left block used for STMVP candidate derivation of the first video block remains unchanged, while the above block that is used is changed from an adjacent block to a non-adjacent block.

FIG. 36 is a flowchart of an example method 3600 for video processing in accordance with the presently disclosed technology. The method 3600 includes, at operation 3602, determining a new candidate for video processing by using motion information from one or more tables for a first video block in a current picture and motion information from a temporal block in a picture different from the current picture. The method 3600 includes, at operation 3604, performing a conversion between the first video block in the current picture of the video and the bitstream representation of the video by using the determined new candidate.

In some embodiments, the determined new candidate is added to a candidate list, the candidate list including a Merge or AMVP candidate list.

In some embodiments, the motion information from the one or more tables in the current picture is associated with one or more history-based motion vector prediction (HMVP) candidates selected from the one or more tables, and the motion information from the temporal block in the picture different from the current picture is a temporal motion candidate.

In some embodiments, the new candidate is derived by averaging one or more HMVP candidates with one or more temporal motion candidates.

In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks that were processed before the first video block in the video data.

FIG. 37 is a flowchart of an example method 3700 for video processing in accordance with the presently disclosed technology. The method 3700 includes, at operation 3702, determining a new candidate for video processing by using motion information from one or more tables of a first video block and motion information from one or more spatial neighboring blocks of the first video block. The method 3700 includes, at operation 3704, performing a conversion between the first video block in the current picture of the video and the bitstream representation of the video by using the determined new candidate.

In some embodiments, the determined new candidate is added to a candidate list, the candidate list including a Merge or AMVP candidate list.

In some embodiments, the motion information from the one or more tables of the first video block is associated with one or more history-based motion vector prediction (HMVP) candidates selected from the one or more tables, and the motion information from the one or more spatial neighboring blocks of the first video block is a candidate derived from predefined spatial blocks relative to the first video block.

In some embodiments, the candidate derived from predefined spatial blocks relative to the first video block is a spatial Merge candidate.

In some embodiments, the new candidate is derived by averaging one or more HMVP candidates with one or more spatial Merge candidates.

In some embodiments, the one or more tables include motion candidates derived from previously processed video blocks that were processed before the first video block in the video data.

In some embodiments, a motion candidate from a table is associated with motion information, the motion information including at least one of the following: a prediction direction, a reference picture index, a motion vector value, an intensity compensation flag, an affine flag, a motion vector difference precision, or a motion vector difference value.

In some embodiments, the method further includes updating one or more tables based on the conversion.

In some embodiments, updating the one or more tables includes updating the one or more tables based on motion information of the current video block after performing the conversion.

In some embodiments, the method further includes performing a conversion between a subsequent video block of the video data and the bitstream representation of the video data based on the updated tables.

FIG. 38 is a flowchart of an example method 3800 for video processing in accordance with the presently disclosed technology. The method 3800 includes, at operation 3802, maintaining a set of tables, wherein each table includes motion candidates and each motion candidate is associated with corresponding motion information; at operation 3804, performing a conversion between a first video block and a bitstream representation of a video including the first video block; and, at operation 3806, updating one or more tables by selectively pruning existing motion candidates in the one or more tables based on an encoding/decoding mode of the first video block.

In some embodiments, the conversion between the first video block and the bitstream representation of the video including the first video block is performed based on one or more tables of the set of tables.

In some embodiments, the pruning is omitted in a case where the first video block is encoded/decoded in Merge mode.

In some embodiments, the pruning is omitted in a case where the first video block is encoded/decoded in advanced motion vector prediction mode.

In some embodiments, in a case where the first video block is encoded/decoded in Merge mode or advanced motion vector prediction mode, the latest M entries of the table are pruned, where M is a pre-specified integer.

In some embodiments, the pruning is disabled in a case where the first video block is encoded/decoded in a sub-block mode.

In some embodiments, the sub-block mode includes an affine mode and an alternative temporal motion vector prediction mode.

In some embodiments, the pruning includes checking whether a redundant existing motion candidate is present in the table.

In some embodiments, the pruning further includes: if a redundant existing motion candidate is present in the table, inserting the motion information associated with the first video block into the table and removing the redundant existing motion candidate from the table.

In some embodiments, if a redundant existing motion candidate is present in the table, the motion information associated with the first video block is not used to update the table.
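The mode-dependent table update and pruning described in the embodiments above can be sketched as follows; the table size, the FIFO eviction, and the use of plain equality as the redundancy test are assumptions of this sketch, not normative details:

```python
def update_motion_table(table, new_info, coding_mode, max_size=6, prune_latest_m=None):
    """Update a motion-candidate table after a block has been encoded/decoded.

    table: list of motion-information entries, oldest first.
    coding_mode: 'merge', 'amvp', or 'subblock' (e.g. affine or alternative TMVP).
    prune_latest_m: if set, only the latest M entries are checked for redundancy.
    """
    if coding_mode == 'subblock':
        # Pruning (and, in this sketch, the update itself) is disabled for
        # sub-block modes such as affine and alternative TMVP.
        return table

    checked = table if prune_latest_m is None else table[-prune_latest_m:]
    if new_info in checked:
        # One described option: remove the redundant entry and re-insert the
        # new motion information so that it becomes the most recent candidate.
        table = [entry for entry in table if entry != new_info]

    table.append(new_info)
    if len(table) > max_size:
        table.pop(0)  # FIFO eviction of the oldest candidate
    return table
```

The alternative embodiment, in which a redundant new candidate is simply not used to update the table, would return the table unchanged instead of re-inserting the entry.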

In some embodiments, the method further includes performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.

FIG. 39 is a flowchart of an example method 3900 for video processing in accordance with the presently disclosed technology. The method 3900 includes, at operation 3902, maintaining a set of tables, wherein each table includes motion candidates and each motion candidate is associated with corresponding motion information; at operation 3904, performing a conversion between a first video block and a bitstream representation of a video including the first video block; and, at operation 3906, updating one or more tables to include motion information from one or more temporal neighboring blocks of the first video block as a new motion candidate.

In some embodiments, the conversion between the first video block and the bitstream representation of the video including the first video block is performed based on one or more tables of the set of tables.

In some embodiments, the one or more temporal neighboring blocks are co-located blocks.

In some embodiments, the one or more temporal neighboring blocks include one or more blocks from different reference pictures.

In some embodiments, the method further includes performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.

FIG. 40 is a flowchart of an example method 4000 for updating a table of motion candidates in accordance with the presently disclosed technology. The method 4000 includes, at operation 4002, selectively pruning existing motion candidates in a table based on an encoding/decoding mode of a video block being processed, each motion candidate being associated with corresponding motion information; and, at operation 4004, updating the table to include the motion information of the video block as a new motion candidate.

In some embodiments, in a case where the video block is encoded/decoded in Merge mode or advanced motion vector prediction mode, the latest M entries of the table are pruned, where M is a pre-specified integer.

In some embodiments, the pruning is disabled in a case where the video block is encoded/decoded in a sub-block mode.

In some embodiments, the sub-block mode includes an affine mode and an alternative temporal motion vector prediction mode.

In some embodiments, the pruning includes checking whether a redundant existing motion candidate is present in the table.

In some embodiments, the pruning further includes: if a redundant motion candidate is present in the table, inserting the motion information associated with the video block being processed into the table and removing the redundant motion candidate from the table.

In some embodiments, if a redundant existing motion candidate is present in the table, the motion information associated with the video block being processed is not used to update the table.

FIG. 41 is a flowchart of an example method 4100 for updating a table of motion candidates in accordance with the presently disclosed technology. The method 4100 includes, at operation 4102, maintaining a table of motion candidates, each motion candidate being associated with corresponding motion information; and, at operation 4104, updating the table to include motion information from one or more temporal neighboring blocks of the video block being processed as a new motion candidate.

In some embodiments, the one or more temporal neighboring blocks are co-located blocks.

In some embodiments, the one or more temporal neighboring blocks include one or more blocks from different reference pictures.

In some embodiments, a motion candidate is associated with motion information, the motion information including at least one of the following: a prediction direction, a reference picture index, a motion vector value, an intensity compensation flag, an affine flag, a motion vector difference precision, or a motion vector difference value.

In some embodiments, the motion candidates correspond to motion candidates of intra prediction modes for intra-mode coding.

In some embodiments, the motion candidates correspond to motion candidates that include illumination compensation (IC) parameters for IC parameter coding.

FIG. 42 is a flowchart of an example method 4200 for video processing in accordance with the presently disclosed technology. The method 4200 includes, at operation 4202, determining a new motion candidate for video processing by using one or more motion candidates from one or more tables, wherein a table includes one or more motion candidates and each motion candidate is associated with motion information; and, at operation 4204, performing a conversion between a video block and a coded representation of the video block based on the new candidate.

In some embodiments, the determined new candidate is added to a candidate list, the candidate list including a Merge or advanced motion vector prediction (AMVP) candidate list.

In some embodiments, determining the new candidate includes determining the new motion candidate as a function of one or more motion candidates from the one or more tables and of an AMVP candidate in an advanced motion vector prediction (AMVP) candidate list or a Merge candidate in a Merge candidate list.
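One way to realize "a function of" a table candidate and a list candidate is a weighted component-wise combination of their motion vectors; the weighting scheme and the assumption that both candidates refer to the same reference picture are illustrative choices for this sketch:

```python
def derive_new_candidate(table_mv, list_mv, weight=0.5):
    """Combine the MV of a table (e.g. HMVP) candidate with the MV of an
    AMVP/Merge-list candidate into a new candidate MV.

    table_mv, list_mv: (mvx, mvy) integer pairs assumed to refer to the same
    reference picture. weight selects the contribution of the table candidate;
    weight=0.5 reduces to plain averaging.
    """
    mvx = int(round(weight * table_mv[0] + (1 - weight) * list_mv[0]))
    mvy = int(round(weight * table_mv[1] + (1 - weight) * list_mv[1]))
    return (mvx, mvy)
```

The disclosure leaves the combining function open, so a real implementation might instead select per-list components from different candidates or apply reference-picture scaling before combining.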

From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.

The disclosed and other described embodiments, modules, and functional operations can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed herein and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, a data processing apparatus. The computer-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.

A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of "or" is intended to include "and/or", unless the context clearly indicates otherwise.

While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

Only a few implementations and examples are described, and other implementations, enhancements, and variations can be made based on what is described and illustrated in this patent document.

100: video encoder
3000: computer system
3005: processor
3010: memory
3015: network interface card
3025: internal connection
3100: mobile device
3101: processor/controller
3102: memory
3103: input/output (I/O) unit
3104: display
3200, 3300, 3400, 3500, 3600, 3700, 3800, 3900, 4000, 4100, 4200: methods
3202, 3204, 3206, 3302, 3304, 3402, 3404, 3502, 3504, 3602, 3604, 3702, 3704, 3802, 3804, 3806, 3902, 3904, 3906, 4002, 4004, 4102, 4104, 4202, 4204: operations
A0, A1, B1, B0, B2, C0, C1: positions
a, b, c, d, A, B, C, D, E, F, T: blocks
AL, AR, BL, BR, MV0, MV0′, MV1, MV1′: motion vectors
tb, td: POC distances
TD0, TD1: temporal distances

FIG. 1 is a block diagram showing an example video encoder implementation.
FIG. 2 illustrates macroblock partitioning in the H.264/AVC video coding standard.
FIG. 3 illustrates an example of splitting a coding block (CB) into prediction blocks (PB).
FIG. 4 illustrates an example subdivision of a coding tree block (CTB) into CBs and transform blocks (TBs); solid lines indicate CB boundaries and dotted lines indicate TB boundaries, including an example CTB with its partitioning and the corresponding quadtree.
FIG. 5 shows an example of a quadtree plus binary tree (QTBT) structure for partitioning video data.
FIG. 6 shows an example of video block partitioning.
FIG. 7 shows an example of quadtree partitioning.
FIG. 8 shows an example of tree-type signaling.
FIG. 9 shows an example derivation process for Merge candidate list construction.
FIG. 10 shows example positions of spatial Merge candidates.
FIG. 11 shows an example of candidate pairs considered for the redundancy check of spatial Merge candidates.
FIG. 12 shows examples of positions of the second PU for Nx2N and 2NxN partitions.
FIG. 13 illustrates an example of motion vector scaling for the temporal Merge candidate.
FIG. 14 shows candidate positions for the temporal Merge candidate and their collocated picture.
FIG. 15 shows an example of a combined bi-predictive Merge candidate.
FIG. 16 shows an example derivation process for motion vector prediction candidates.
FIG. 17 shows an example of motion vector scaling for spatial motion vector candidates.
FIG. 18 shows an example of alternative temporal motion vector prediction (ATMVP) for the motion prediction of a coding unit (CU).
FIG. 19 graphically depicts an example of the identification of a source block and a source picture.
FIG. 20 shows an example of one CU with four sub-blocks and its neighboring blocks.
FIG. 21 illustrates an example of bilateral matching.
FIG. 22 illustrates an example of template matching.
FIG. 23 depicts an example of unilateral motion estimation (ME) in frame rate up-conversion (FRUC).
FIG. 24 shows an example of decoder-side motion vector refinement (DMVR) based on bilateral template matching.
FIG. 25 shows an example of spatial neighboring blocks used to derive illumination compensation (IC) parameters.
FIG. 26 shows an example of spatial neighboring blocks used to derive spatial Merge candidates.
FIG. 27 shows an example of the use of neighboring inter-predicted blocks.
FIG. 28 shows an example of the planar motion vector prediction process.
FIG. 29 shows an example of positions next to the current coding unit (CU) row.
FIG. 30 is a block diagram illustrating an example structure of a computer system or other control device that can be used to implement portions of the disclosed technology.
FIG. 31 shows a block diagram of an example embodiment of a mobile device that can be used to implement portions of the disclosed technology.
FIGS. 32 to 39 show flowcharts of example methods for video processing in accordance with the presently disclosed technology.
FIGS. 40 and 41 show flowcharts of example methods for updating a motion candidate table in accordance with the presently disclosed technology.
FIG. 42 shows a flowchart of an example method for video processing in accordance with the presently disclosed technology.

3200: method

3202, 3204, 3206: steps

Claims (24)

1. A method for video processing, comprising:
determining a new candidate for video processing by averaging two or more selected motion candidates;
adding the new candidate to a candidate list; and
performing a conversion between a first video block of a video and a bitstream representation of the video by using the determined new candidate in the candidate list.

2. The method of claim 1, wherein the candidate list is a Merge candidate list, and the determined new candidate is a Merge candidate.

3. The method of claim 2, wherein the Merge candidate list is an inter prediction Merge candidate list or an intra block copy prediction Merge candidate list.

4. The method of claim 1, wherein the selected motion candidates come from one or more tables.

5. The method of claim 4, wherein the one or more tables include motion candidates derived from previously processed video blocks that were processed before the first video block in the video data.

6. The method of claim 4 or 5, wherein no spatial candidate or temporal candidate is available in the candidate list.

7. The method of claim 1, wherein the averaging is implemented without a division operation.
8. The method of claim 1, wherein the averaging is implemented by multiplying the sum of the motion vectors of the selected motion candidates by a scaling factor.

9. The method of claim 1, wherein the horizontal components of the motion vectors of the selected motion candidates are averaged to derive the horizontal component of the new candidate.

10. The method of claim 1, wherein the vertical components of the motion vectors of the selected motion candidates are averaged to derive the vertical component of the new candidate.

11. The method of claim 7, wherein the scaling factor is pre-computed and stored in a look-up table.

12. The method of any one of claims 1 to 11, wherein only motion vectors with the same reference picture are selected.

13. The method of claim 12, wherein only motion vectors having the same reference picture in both prediction directions are selected for the two prediction directions.

14. The method of any one of claims 1 to 10, wherein a target reference picture in each prediction direction is predetermined, and the motion vectors are scaled to the predetermined reference picture.
15. The method of claim 14, wherein the first entry in reference picture list X is selected as the target reference picture for that reference picture list, X being 0 or 1.

16. The method of claim 14, wherein, for each prediction direction, the most frequently used reference picture in the table is selected as the target reference picture.

17. The method of claim 14, wherein, for each prediction direction, motion vectors having the same reference picture as the predetermined target reference picture are selected first, and other motion vectors are selected afterwards.

18. The method of any one of claims 1 to 17, wherein a motion candidate from a table is associated with motion information including at least one of: a prediction direction, a reference picture index, motion vector values, an intensity compensation flag, an affine flag, a motion vector difference precision, or motion vector difference values.

19. The method of any one of claims 1 to 17, further comprising: updating one or more tables based on the conversion.

20. The method of claim 19, wherein updating the one or more tables comprises updating the one or more tables based on motion information of the first video block of the video after performing the conversion.
21. The method of claim 20, further comprising:
performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.

22. The method of any one of claims 1 to 21, wherein the conversion comprises an encoding process and/or a decoding process.

23. An apparatus in a video system, comprising a processor configured to implement the method of any one or more of claims 1 to 22.

24. A non-transitory computer-readable program medium having code stored thereon, the code comprising instructions that, when executed by a processor, cause the processor to implement the method of any one or more of claims 1 to 22.
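As a rough illustration of the operations recited in the claims above, division-free averaging of motion vectors can be realized with a pre-computed scaling table, and scaling to a target reference picture can use a fixed-point approximation of the POC-distance ratio tb/td. The sketch below is illustrative only: the function and constant names, the 10-bit averaging precision, and the HEVC-style scaling formula are assumptions for this sketch, not the normative design of this disclosure.

```python
# Illustrative sketch of division-free motion vector averaging and
# POC-distance-based motion vector scaling. All names, SHIFT, the table
# contents, and the scaling formula are assumptions, not the patent's
# normative definitions.

from typing import List, Tuple

MotionVector = Tuple[int, int]  # (horizontal, vertical) components

SHIFT = 10  # assumed fixed-point precision of the pre-computed factors
# AVG_SCALE[n] approximates (1 << SHIFT) / n, so averaging n vectors
# becomes a multiply and a shift instead of a division.
AVG_SCALE = [0, 1 << SHIFT, (1 << SHIFT) // 2, (1 << SHIFT) // 3, (1 << SHIFT) // 4]


def average_candidates(mvs: List[MotionVector]) -> MotionVector:
    """Average horizontal and vertical components separately."""
    n = len(mvs)
    sum_x = sum(mv[0] for mv in mvs)
    sum_y = sum(mv[1] for mv in mvs)
    rounding = 1 << (SHIFT - 1)
    # Multiplication by a pre-computed table entry replaces the division.
    return ((sum_x * AVG_SCALE[n] + rounding) >> SHIFT,
            (sum_y * AVG_SCALE[n] + rounding) >> SHIFT)


def clip3(lo: int, hi: int, x: int) -> int:
    return max(lo, min(hi, x))


def scale_mv(mv: MotionVector, tb: int, td: int) -> MotionVector:
    """Scale a motion vector from its own POC distance td to the target
    POC distance tb, using an HEVC-style fixed-point approximation of
    tb/td so no per-vector division is needed."""
    tx = (16384 + (abs(td) >> 1)) // td
    dist_scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)

    def scale(c: int) -> int:
        s = dist_scale * c
        return clip3(-32768, 32767, (-1 if s < 0 else 1) * ((abs(s) + 127) >> 8))

    return (scale(mv[0]), scale(mv[1]))
```

For example, averaging (4, 8) and (8, 16) yields (6, 12), and scaling (10, 0) from a POC distance of 4 down to a target distance of 2 yields (5, 0).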
TW108124973A 2018-07-14 2019-07-15 Extension of look-up table based motion vector prediction with temporal information TWI820169B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2018095716 2018-07-14
WOPCT/CN2018/095716 2018-07-14
CN2018095719 2018-07-15
WOPCT/CN2018/095719 2018-07-15

Publications (2)

Publication Number Publication Date
TW202032991A true TW202032991A (en) 2020-09-01
TWI820169B TWI820169B (en) 2023-11-01

Family

ID=67989034

Family Applications (2)

Application Number Title Priority Date Filing Date
TW108124975A TWI826486B (en) 2018-07-14 2019-07-15 Extension of look-up table based motion vector prediction with temporal information
TW108124973A TWI820169B (en) 2018-07-14 2019-07-15 Extension of look-up table based motion vector prediction with temporal information

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW108124975A TWI826486B (en) 2018-07-14 2019-07-15 Extension of look-up table based motion vector prediction with temporal information

Country Status (3)

Country Link
CN (2) CN110719463B (en)
TW (2) TWI826486B (en)
WO (2) WO2020016744A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220131250A (en) 2020-02-05 2022-09-27 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Deblocking parameters for chroma components
JP2023513518A (en) 2020-02-05 2023-03-31 北京字節跳動網絡技術有限公司 Palette mode for local dual trees
EP4088453A4 (en) 2020-02-14 2023-05-10 Beijing Bytedance Network Technology Co., Ltd. Collocated picture indication in video bitstreams
WO2022214077A1 (en) * 2021-04-10 2022-10-13 Beijing Bytedance Network Technology Co., Ltd. Gpm motion refinement

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685479A (en) * 2011-03-11 2012-09-19 华为技术有限公司 Video encoding and decoding processing method and device
US20130329007A1 (en) * 2012-06-06 2013-12-12 Qualcomm Incorporated Redundancy removal for advanced motion vector prediction (amvp) in three-dimensional (3d) video coding
WO2014010537A1 (en) * 2012-07-09 2014-01-16 Mitsubishi Electric Corporation Method and system for processing multiview videos for view synthesis using motion vector predictor list
US10027981B2 (en) * 2014-09-01 2018-07-17 Hfi Innovation Inc. Method of intra picture block copy for screen content and video coding
US9918105B2 (en) * 2014-10-07 2018-03-13 Qualcomm Incorporated Intra BC and inter unification
KR20170078672A (en) * 2014-10-31 2017-07-07 삼성전자주식회사 Video coding apparatus and video decoding apparatus using high-precision skip coding and method thereof
CN108353184B (en) * 2015-11-05 2022-02-01 联发科技股份有限公司 Video coding and decoding method and device
US10812791B2 (en) * 2016-09-16 2020-10-20 Qualcomm Incorporated Offset vector identification of temporal motion vector predictor

Also Published As

Publication number Publication date
TW202021360A (en) 2020-06-01
CN110719463B (en) 2022-11-22
WO2020016745A2 (en) 2020-01-23
CN110719463A (en) 2020-01-21
CN110719476B (en) 2023-01-20
TWI826486B (en) 2023-12-21
CN110719476A (en) 2020-01-21
WO2020016745A3 (en) 2020-04-16
TWI820169B (en) 2023-11-01
WO2020016744A1 (en) 2020-01-23
