WO2023205185A1 - Methods and devices for deriving candidates for an affine merge mode in video coding


Info

Publication number
WO2023205185A1
Authority
WO
WIPO (PCT)
Prior art keywords
area, block, affine, restricted, current
Application number
PCT/US2023/019002
Other languages
English (en)
Inventor
Wei Chen
Xiaoyu XIU
Yi-Wen Chen
Hong-Jheng Jhu
Che-Wei Kuo
Ning Yan
Xianglin Wang
Bing Yu
Original Assignee
Beijing Dajia Internet Information Technology Co., Ltd.
Application filed by Beijing Dajia Internet Information Technology Co., Ltd.
Publication of WO2023205185A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/55: Motion estimation with spatial constraints, e.g. at image or region borders
    • H04N19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/176: Adaptive coding where the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N19/52: Processing of motion vectors by predictive encoding
    • H04N19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/129: Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]

Definitions

  • the present disclosure relates to video coding and compression, and in particular but not limited to, methods and apparatus for improving affine merge candidate derivation for the affine motion prediction mode in a video encoding or decoding process.
  • Video coding is performed according to one or more video coding standards.
  • Video coding standards include Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC, also known as H.265 or MPEG-H Part 2) and Advanced Video Coding (AVC, also known as H.264 or MPEG-4 Part 10), which are jointly developed by ISO/IEC MPEG and ITU-T VCEG.
  • AOMedia Video 1 (AV1) was developed by the Alliance for Open Media (AOM) as a successor to its preceding standard, VP9.
  • Audio Video Coding (AVS) refers to a digital audio and digital video compression standard.
  • Most of the existing video coding standards are built upon the well-known hybrid video coding framework, i.e., using block-based prediction methods (e.g., inter-prediction, intra-prediction) to reduce redundancy present in video images or sequences and using transform coding to compact the energy of the prediction errors.
  • An important goal of video coding techniques is to compress video data into a form that uses a lower bit rate while avoiding or minimizing degradations to video quality.
  • the first generation AVS standard includes the Chinese national standard “Information Technology, Advanced Audio Video Coding, Part 2: Video” (known as AVS1) and “Information Technology, Advanced Audio Video Coding Part 16: Radio Television Video” (known as AVS+). It can offer around a 50% bit-rate saving at the same perceptual quality compared to the MPEG-2 standard.
  • the AVS1 standard video part was promulgated as the Chinese national standard in February 2006.
  • the second generation AVS standard includes the series of Chinese national standards “Information Technology, Efficient Multimedia Coding” (known as AVS2), which is mainly targeted at the transmission of extra HD TV programs.
  • the coding efficiency of AVS2 is double that of AVS+. In May 2016, AVS2 was issued as the Chinese national standard.
  • the AVS2 standard video part was submitted by the Institute of Electrical and Electronics Engineers (IEEE) as one international standard for applications.
  • the AVS3 standard is a new generation video coding standard for UHD video applications, aiming at surpassing the coding efficiency of the latest international standard HEVC.
  • In March 2019, at the 68th AVS meeting, the AVS3-P2 baseline was finished, which provides approximately 30% bit-rate savings over the HEVC standard.
  • a decoder may obtain a restricted area that is not adjacent to a current coding unit (CU) according to a value associated with the restricted area. Additionally, the decoder may obtain one or more motion vector (MV) candidates from a plurality of CUs that are non-adjacent to the current CU based on the restricted area. Furthermore, the decoder may derive one or more control point motion vectors (CPMVs) for the current CU based on the one or more MV candidates.
  • an encoder may obtain a restricted area that is not adjacent to a current CU according to a value associated with the restricted area. Additionally, the encoder may obtain one or more MV candidates from a plurality of CUs that are non-adjacent to the current CU based on the restricted area. Furthermore, the encoder may derive one or more CPMVs for the current CU based on the one or more MV candidates.
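As a rough illustration of the steps above, the following Python sketch filters non-adjacent neighbor positions through a restricted area before their motion information is collected as candidates. All identifiers (RestrictedArea, obtain_restricted_area, scan_non_adjacent_cus) are hypothetical, and the actual area definition, scan order, and CPMV derivation are those given in the detailed description of this disclosure.

```python
# Hypothetical sketch of the restricted-area candidate scan described above;
# the names and the exact area rule are illustrative, not the disclosure's own.
from dataclasses import dataclass
from typing import Dict, List, Tuple

MV = Tuple[int, int]  # motion vector (x, y), e.g., in 1/16-pel units


@dataclass
class RestrictedArea:
    """Allowable non-adjacent spatial area around the current CU."""
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def obtain_restricted_area(cu_x: int, cu_y: int, ctu_size: int,
                           area_value: int) -> RestrictedArea:
    # The area is obtained according to a value associated with it; here it
    # is assumed to be a number of CTUs of reach above and left of the CU.
    reach = area_value * ctu_size
    return RestrictedArea(cu_x - reach, cu_y - reach, cu_x - 1, cu_y - 1)


def scan_non_adjacent_cus(positions: List[Tuple[int, int]],
                          mv_field: Dict[Tuple[int, int], MV],
                          area: RestrictedArea) -> List[MV]:
    """Collect MV candidates only from positions inside the restricted area."""
    return [mv_field[p] for p in positions
            if area.contains(*p) and p in mv_field]
```

The collected MV candidates would then be used to derive the CPMVs of the current CU, for example by inheriting or constructing an affine model as described later in this disclosure.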
  • an apparatus for video decoding includes one or more processors and a memory coupled to the one or more processors and configured to store instructions executable by the one or more processors. Furthermore, the one or more processors, upon execution of the instructions, are configured to perform the method according to the first aspect above.
  • an apparatus for video encoding includes one or more processors and a memory coupled to the one or more processors and configured to store instructions executable by the one or more processors. Furthermore, the one or more processors, upon execution of the instructions, are configured to perform the method according to the second aspect above.
  • a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more computer processors, cause the one or more computer processors to receive a bitstream, and perform the method according to the first aspect above.
  • a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more computer processors, cause the one or more computer processors to perform the method according to the second aspect above to encode a current CU into a bitstream, and transmit the bitstream.
  • FIG. 1A is a block diagram illustrating a system for encoding and decoding video blocks in accordance with some examples of the present disclosure.
  • FIG. 1B is a block diagram of an encoder in accordance with some examples of the present disclosure.
  • FIGS. 1C-1F are block diagrams illustrating how a frame is recursively partitioned into multiple video blocks of different sizes and shapes in accordance with some examples of the present disclosure.
  • FIG. 1G is a block diagram illustrating an exemplary video encoder in accordance with some examples of the present disclosure.
  • FIG. 2A is a block diagram of a decoder in accordance with some examples of the present disclosure.
  • FIG. 2B is a block diagram illustrating an exemplary video decoder in accordance with some examples of the present disclosure.
  • FIG. 3A is a diagram illustrating block partitions in a multi-type tree structure in accordance with some examples of the present disclosure.
  • FIG. 3B is a diagram illustrating block partitions in a multi-type tree structure in accordance with some examples of the present disclosure.
  • FIG. 3C is a diagram illustrating block partitions in a multi-type tree structure in accordance with some examples of the present disclosure.
  • FIG. 3D is a diagram illustrating block partitions in a multi-type tree structure in accordance with some examples of the present disclosure.
  • FIG. 3E is a diagram illustrating block partitions in a multi-type tree structure in accordance with some examples of the present disclosure.
  • FIG. 4A illustrates a 4-parameter affine model in accordance with some examples of the present disclosure.
  • FIG. 4B illustrates a 4-parameter affine model in accordance with some examples of the present disclosure.
  • FIG. 4C illustrates positions of spatial merge candidates in accordance with some examples of the present disclosure.
  • FIG. 4D illustrates candidate pairs that are considered for redundancy check of spatial merge candidates in accordance with some examples of the present disclosure.
  • FIG. 4E illustrates motion vector scaling for temporal merge candidates in accordance with some examples of the present disclosure.
  • FIG. 4F illustrates candidate positions for temporal merge candidates C0 and C1 in accordance with some examples of the present disclosure.
  • FIG. 5 illustrates a 6-parameter affine model in accordance with some examples of the present disclosure (the 4- and 6-parameter affine model equations are sketched in code after this list of figures).
  • FIG. 6 illustrates adjacent neighboring blocks for inherited affine merge candidates in accordance with some examples of the present disclosure.
  • FIG. 7 illustrates adjacent neighboring blocks for constructed affine merge candidates in accordance with some examples of the present disclosure.
  • FIG. 8 illustrates non-adjacent neighboring blocks for inherited affine merge candidates in accordance with some examples of the present disclosure.
  • FIG. 9 illustrates derivation of constructed affine merge candidates using non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 10 illustrates perpendicular scanning of non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 11 illustrates parallel scanning of non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 12 illustrates combined perpendicular and parallel scanning of non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 13A illustrates neighbor blocks with the same size as the current block in accordance with some examples of the present disclosure.
  • FIG. 13B illustrates neighbor blocks with a different size than the current block in accordance with some examples of the present disclosure.
  • FIG. 14A illustrates an example in which the bottom-left or top-right block of the bottommost or rightmost block in a previous distance is used as the bottommost or rightmost block of a current distance in accordance with some examples of the present disclosure.
  • FIG. 14B illustrates an example in which the left or top block of the bottommost or rightmost block in the previous distance is used as the bottommost or rightmost block of the current distance in accordance with some examples of the present disclosure.
  • FIG. 15A illustrates scanning positions at bottom-left and top-right positions used for above and left non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 15B illustrates scanning positions at bottom-right positions used for both above and left non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 15C illustrates scanning positions at bottom-left positions used for both above and left non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 15D illustrates scanning positions at top-right positions used for both above and left non-adjacent neighboring blocks in accordance with some examples of the present disclosure.
  • FIG. 16 illustrates a simplified scanning process for deriving constructed merge candidates in accordance with some examples of the present disclosure.
  • FIG. 17A illustrates spatial neighbors for deriving inherited affine merge candidates in accordance with some examples of the present disclosure.
  • FIG. 17B illustrates spatial neighbors for deriving constructed affine merge candidates in accordance with some examples of the present disclosure.
  • FIG. 18 illustrates an example of an inheritance-based derivation method for deriving affine constructed candidates in accordance with some examples of the present disclosure.
  • FIG. 19 illustrates template and reference samples of a template in reference list 0 and reference list 1 in accordance with some examples of the present disclosure.
  • FIG. 20 illustrates template and reference samples of a template for block with sub-block motion using the motion information of the subblocks of a current block in accordance with some examples of the present disclosure.
  • FIG. 21 illustrates an example where the non-adjacent spatial area is restricted to be within half a CTU size in the area above and to the left of the current CTU in accordance with some examples of the present disclosure.
  • FIG. 22A illustrates one storage method of directly saving affine motion information of an affine-coded block in a CTU in accordance with some examples of the present disclosure.
  • FIG. 22B illustrates one storage method of projecting and saving affine motion information at each sub-block in accordance with some examples of the present disclosure.
  • FIG. 23 illustrates an example of using the center point to derive regular/translational motion at each 4x4 regular block in accordance with some examples of the present disclosure.
  • FIG. 24 is a diagram illustrating a computing environment coupled with a user interface in accordance with some examples of the present disclosure.
  • FIG. 25 illustrates an example of storing motion information of an affine-coded block at a granularity greater than the minimum affine block size in accordance with some examples of the present disclosure.
  • FIG. 26 illustrates merge mode with MVD (MMVD) search points in the L0 reference and the L1 reference, respectively, in accordance with some examples of the present disclosure.
  • FIG. 27A illustrates an example of motion storage for non-adjacent spatial neighbors including affine neighbor CUs and non-affine neighbor CUs when the allowable non-adjacent spatial area is beyond a current coding tree unit (CTU) in accordance with some examples of the present disclosure.
  • FIG. 27B illustrates an example of motion storage in a line buffer for non-adjacent spatial neighbors including affine neighbor CUs and non-affine neighbor CUs in accordance with some examples of the present disclosure.
  • FIG. 28 illustrates an example of projected or clipped non-adjacent neighbor positions when a scanned non-adjacent neighbor position is beyond the allowable spatial area in accordance with some examples of the present disclosure.
  • FIG. 29 is a flow chart illustrating a method for video decoding in accordance with some examples of the present disclosure.
  • FIG. 30 is a flow chart illustrating a method for video encoding corresponding to the method for video decoding as shown in FIG. 29 in accordance with some examples of the present disclosure.
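For reference alongside the figure list above, the following sketch states the standard 4-parameter and 6-parameter affine motion models that FIGS. 4A, 4B, and 5 refer to, in the form used in VVC. Here v0, v1, and v2 denote the CPMVs at the top-left, top-right, and bottom-left corners of a w x h block; the function names are illustrative only.

```python
# Standard affine motion models (as in VVC): the MV at position (x, y)
# inside a w x h block is interpolated from the control point MVs.

def affine_mv_4param(v0, v1, w, x, y):
    """4-parameter model: top-left CPMV v0 and top-right CPMV v1."""
    mvx = (v1[0] - v0[0]) / w * x - (v1[1] - v0[1]) / w * y + v0[0]
    mvy = (v1[1] - v0[1]) / w * x + (v1[0] - v0[0]) / w * y + v0[1]
    return mvx, mvy


def affine_mv_6param(v0, v1, v2, w, h, x, y):
    """6-parameter model: adds the bottom-left CPMV v2."""
    mvx = (v1[0] - v0[0]) / w * x + (v2[0] - v0[0]) / h * y + v0[0]
    mvy = (v1[1] - v0[1]) / w * x + (v2[1] - v0[1]) / h * y + v0[1]
    return mvx, mvy


# In practice the model is evaluated once per 4x4 sub-block at its center,
# e.g., the sub-block whose top-left corner is (sx, sy) uses (sx + 2, sy + 2).
```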
  • “first,” “second,” “third,” etc. are all used as nomenclature only for references to relevant elements, e.g., devices, components, compositions, steps, etc., without implying any spatial or chronological orders, unless expressly specified otherwise.
  • a “first device” and a “second device” may refer to two separately formed devices, or two parts, components, or operational states of a same device, and may be named arbitrarily.
  • module may include memory (shared, dedicated, or group) that stores code or instructions that can be executed by one or more processors.
  • a module may include one or more circuits with or without stored code or instructions.
  • the module or circuit may include one or more components that are directly or indirectly connected. These components may or may not be physically attached to, or located adjacent to, one another.
  • a method may comprise steps of: i) when or if condition X is present, function or action X’ is performed, and ii) when or if condition Y is present, function or action Y’ is performed.
  • the method may be implemented with both the capability of performing function or action X’, and the capability of performing function or action Y’.
  • the functions X’ and Y’ may both be performed, at different times, on multiple executions of the method.
  • a unit or module may be implemented purely by software, purely by hardware, or by a combination of hardware and software.
  • the unit or module may include functionally related code blocks or software components, that are directly or indirectly linked together, so as to perform a particular function.
  • FIG. 1A is a block diagram illustrating an exemplary system 10 for encoding and decoding video blocks in parallel in accordance with some implementations of the present disclosure.
  • the system 10 includes a source device 12 that generates and encodes video data to be decoded at a later time by a destination device 14.
  • the source device 12 and the destination device 14 may include any of a wide variety of electronic devices, including desktop or laptop computers, tablet computers, smart phones, set-top boxes, digital televisions, cameras, display devices, digital media players, video gaming consoles, video streaming device, or the like.
  • the source device 12 and the destination device 14 are equipped with wireless communication capabilities.
  • the destination device 14 may receive the encoded video data to be decoded via a link 16.
  • the link 16 may include any type of communication medium or device capable of moving the encoded video data from the source device 12 to the destination device 14.
  • the link 16 may include a communication medium to enable the source device 12 to transmit the encoded video data directly to the destination device 14 in real time.
  • the encoded video data may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to the destination device 14.
  • the communication medium may include any wireless or wired communication medium, such as a Radio Frequency (RF) spectrum or one or more physical transmission lines.
  • RF Radio Frequency
  • the communication medium may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet.
  • the communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14.
  • the encoded video data may be transmitted from an output interface 22 to a storage device 32. Subsequently, the encoded video data in the storage device 32 may be accessed by the destination device 14 via an input interface 28.
  • the storage device 32 may include any of a variety of distributed or locally accessed data storage media such as a hard drive, Blu-ray discs, Digital Versatile Disks (DVDs), Compact Disc Read-Only Memories (CD-ROMs), flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing the encoded video data.
  • the storage device 32 may correspond to a file server or another intermediate storage device that may hold the encoded video data generated by the source device 12.
  • the destination device 14 may access the stored video data from the storage device 32 via streaming or downloading.
  • the file server may be any type of computer capable of storing the encoded video data and transmitting the encoded video data to the destination device 14.
  • Exemplary file servers include a web server (e.g., for a website), a File Transfer Protocol (FTP) server, Network Attached Storage (NAS) devices, or a local disk drive.
  • the destination device 14 may access the encoded video data through any standard data connection, including a wireless channel (e.g., a Wireless Fidelity (Wi-Fi) connection), a wired connection (e.g., Digital Subscriber Line (DSL), cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on a file server.
  • the transmission of the encoded video data from the storage device 32 may be a streaming transmission, a download transmission, or a combination of both.
  • the source device 12 includes a video source 18, a video encoder 20 and the output interface 22.
  • the video source 18 may include a source such as a video capturing device, e.g., a video camera, a video archive containing previously captured video, a video feeding interface to receive video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video, or a combination of such sources.
  • the source device 12 and the destination device 14 may form camera phones or video phones.
  • the implementations described in the present application may be applicable to video coding in general, and may be applied to wireless and/or wired applications.
  • the captured, pre-captured, or computer-generated video may be encoded by the video encoder 20.
  • the encoded video data may be transmitted directly to the destination device 14 via the output interface 22 of the source device 12.
  • the encoded video data may also (or alternatively) be stored onto the storage device 32 for later access by the destination device 14 or other devices, for decoding and/or playback.
  • the output interface 22 may further include a modem and/or a transmitter.
  • the destination device 14 includes the input interface 28, a video decoder 30, and a display device 34.
  • the input interface 28 may include a receiver and/or a modem and receive the encoded video data over the link 16.
  • the encoded video data communicated over the link 16, or provided on the storage device 32, may include a variety of syntax elements generated by the video encoder 20 for use by the video decoder 30 in decoding the video data. Such syntax elements may be included within the encoded video data transmitted on a communication medium, stored on a storage medium, or stored on a file server.
  • the destination device 14 may include the display device 34, which can be an integrated display device or an external display device that is configured to communicate with the destination device 14.
  • the display device 34 displays the decoded video data to a user, and may include any of a variety of display devices such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
  • LCD Liquid Crystal Display
  • OLED Organic Light Emitting Diode
  • the video encoder 20 and the video decoder 30 may operate according to proprietary or industry standards, such as VVC, HEVC, or MPEG-4 Part 10 (AVC), or extensions of such standards. It should be understood that the present application is not limited to a specific video encoding/decoding standard and may be applicable to other video encoding/decoding standards. It is generally contemplated that the video encoder 20 of the source device 12 may be configured to encode video data according to any of these current or future standards. Similarly, it is also generally contemplated that the video decoder 30 of the destination device 14 may be configured to decode video data according to any of these current or future standards.
  • the video encoder 20 and the video decoder 30 each may be implemented as any of a variety of suitable encoder and/or decoder circuitry, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware or any combinations thereof.
  • an electronic device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the video encoding/decoding operations disclosed in the present disclosure.
  • Each of the video encoder 20 and the video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.
  • FIG. 1B is a block diagram illustrating a block-based video encoder in accordance with some implementations of the present disclosure.
  • the input video signal is processed block by block, where each block is called a coding unit (CU).
  • the encoder 100 may be the video encoder 20 as shown in FIG. 1A.
  • a CU can be up to 128x128 pixels.
  • one coding tree unit (CTU) is split into CUs to adapt to varying local characteristics based on quad/binary/ternary-tree.
  • each CU is always used as the basic unit for both prediction and transform without further partitions.
  • In the multi-type tree structure, one CTU is first partitioned by a quad-tree structure. Then, each quad-tree leaf node can be further partitioned by binary and ternary tree structures.
  • FIGS. 3A-3E are schematic diagrams illustrating multi-type tree splitting modes in accordance with some implementations of the present disclosure.
  • FIGS. 3A-3E respectively show five splitting types including quaternary partitioning (FIG. 3A), vertical binary partitioning (FIG. 3B), horizontal binary partitioning (FIG. 3C), vertical extended ternary partitioning (FIG. 3D), and horizontal extended ternary partitioning (FIG. 3E).
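As a hedged sketch of the five splitting types shown in FIGS. 3A-3E, the following Python function enumerates the child sub-blocks each split mode produces; the mode names are illustrative, and the split decisions themselves come from the encoder's rate-distortion search.

```python
def split(x, y, w, h, mode):
    """Return child sub-blocks of an (x, y, w, h) block for one split mode."""
    if mode == "QT":          # quaternary: four equal quadrants
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if mode == "BT_VER":      # vertical binary: two halves side by side
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if mode == "BT_HOR":      # horizontal binary: two stacked halves
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if mode == "TT_VER":      # vertical ternary: 1/4, 1/2, 1/4 columns
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    if mode == "TT_HOR":      # horizontal ternary: 1/4, 1/2, 1/4 rows
        q = h // 4
        return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]
    return [(x, y, w, h)]     # no split: this block becomes a CU
```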
  • Spatial prediction uses pixels from the samples of already coded neighboring blocks (which are called reference samples) in the same video picture/slice to predict the current video block. Spatial prediction reduces spatial redundancy inherent in the video signal.
  • Temporal prediction also referred to as “inter prediction” or “motion compensated prediction” uses reconstructed pixels from the already coded video pictures to predict the current video block. Temporal prediction reduces temporal redundancy inherent in the video signal.
  • Temporal prediction signal for a given CU is usually signaled by one or more motion vectors (MVs) which indicate the amount and the direction of motion between the current CU and its temporal reference. Also, if multiple reference pictures are supported, one reference picture index is additionally sent, which is used to identify from which reference picture in the reference picture store the temporal prediction signal comes.
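A minimal sketch of how a motion vector and a reference picture index together select the temporal prediction signal, assuming integer-pel motion and NumPy pictures; real codecs add fractional-pel interpolation and boundary padding.

```python
import numpy as np

def motion_compensate(ref_pictures, ref_idx, mv, x, y, w, h):
    """Fetch the w x h temporal predictor for the block at (x, y).

    ref_pictures: list of 2-D numpy arrays (the reference picture store).
    mv: integer-pel motion vector (dx, dy); assumed to stay in-picture.
    """
    ref = ref_pictures[ref_idx]           # which reference picture to use
    px, py = x + mv[0], y + mv[1]         # displaced position in that picture
    return ref[py:py + h, px:px + w]      # the prediction signal for the CU
```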
  • an intra/inter mode decision circuitry 121 in the encoder 100 chooses the best prediction mode, for example based on the rate-distortion optimization method.
  • the block predictor 120 is then subtracted from the current video block; and the resulting prediction residual is de-correlated using the transform circuitry 102 and the quantization circuitry 104.
  • the resulting quantized residual coefficients are inverse quantized by the inverse quantization circuitry 116 and inverse transformed by the inverse transform circuitry 118 to form the reconstructed residual, which is then added back to the prediction block to form the reconstructed signal of the CU.
  • in-loop filtering 115 such as a deblocking filter, a sample adaptive offset (SAO) filter, and/or an adaptive in-loop filter (ALF) may be applied on the reconstructed CU before it is put in the reference picture store of the picture buffer 117 and used to code future video blocks.
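The residual path just described can be sketched as follows, with SciPy's floating-point DCT standing in for the codec's integer transforms and a single quantization step standing in for the full quantizer; the point is that the encoder reconstructs the CU exactly as the decoder will.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_and_reconstruct(block, predictor, qstep):
    residual = block.astype(float) - predictor             # de-correlate
    coeffs = dctn(residual, norm="ortho")                  # transform
    levels = np.round(coeffs / qstep)                      # quantize (lossy)
    recon_residual = idctn(levels * qstep, norm="ortho")   # inverse Q and T
    return levels, predictor + recon_residual              # CU before in-loop filtering
```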
  • the coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit 106 to be further compressed and packed to form the bit-stream.
  • a deblocking filter is available in AVC, HEVC as well as the now-current version of VVC.
  • SAO is defined to further improve coding efficiency.
  • ALF is being actively investigated, and it has a good chance of being included in the final standard.
  • intra prediction is usually based on unfiltered reconstructed pixels, while inter prediction is based on filtered reconstructed pixels if these filter options are turned on by the encoder 100.
  • FIG. 2A is a block diagram illustrating a block-based video decoder 200 which may be used in conjunction with many video coding standards.
  • This decoder 200 is similar to the reconstruction-related section residing in the encoder 100 of FIG. 1B.
  • the block-based video decoder 200 may be the video decoder 30 as shown in FIG. 1A.
  • an incoming video bitstream 201 is first decoded through an Entropy Decoding 202 to derive quantized coefficient levels and prediction-related information.
  • the quantized coefficient levels are then processed through an Inverse Quantization 204 and an Inverse Transform 206 to obtain a reconstructed prediction residual.
  • a block predictor mechanism implemented in an Intra/inter Mode Selector 212, is configured to perform either an Intra Prediction 208, or a Motion Compensation 210, based on decoded prediction information.
  • a set of unfiltered reconstructed pixels is obtained by summing up the reconstructed prediction residual from the Inverse Transform 206 and a predictive output generated by the block predictor mechanism, using a summer 214.
  • the reconstructed block may further go through an In-Loop Filter 209 before it is stored in a Picture Buffer 213 which functions as a reference picture store.
  • the reconstructed video in the Picture Buffer 213 may be sent to drive a display device, as well as used to predict future video blocks.
  • a filtering operation is performed on these reconstructed pixels to derive a final reconstructed Video Output 222.
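The decoder side is the reciprocal of the encoder sketch above: entropy-decoded coefficient levels are inverse quantized and inverse transformed, then summed with the predictor chosen by the Intra/inter Mode Selector 212 before in-loop filtering. A matching sketch, under the same simplifying assumptions:

```python
from scipy.fft import idctn

def decode_block(levels, predictor, qstep):
    recon_residual = idctn(levels * qstep, norm="ortho")  # inverse Q and T
    return predictor + recon_residual  # unfiltered reconstruction at the summer
```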
  • FIG. 1G is a block diagram illustrating another exemplary video encoder 20 in accordance with some implementations described in the present application.
  • the video encoder 20 may perform intra and inter predictive coding of video blocks within video frames.
  • Intra predictive coding relies on spatial prediction to reduce or remove spatial redundancy in video data within a given video frame or picture.
  • Inter predictive coding relies on temporal prediction to reduce or remove temporal redundancy in video data within adjacent video frames or pictures of a video sequence.
  • the term “frame” may be used as a synonym for the term “image” or “picture” in the field of video coding.
  • the video encoder 20 includes a video data memory 40, a prediction processing unit 41, a Decoded Picture Buffer (DPB) 64, a summer 50, a transform processing unit 52, a quantization unit 54, and an entropy encoding unit 56.
  • the prediction processing unit 41 further includes a motion estimation unit 42, a motion compensation unit 44, a partition unit 45, an intra prediction processing unit 46, and an intra Block Copy (BC) unit 48.
  • the video encoder 20 also includes an inverse quantization unit 58, an inverse transform processing unit 60, and a summer 62 for video block reconstruction.
  • An in-loop filter 63 such as a deblocking filter, may be positioned between the summer 62 and the DPB 64 to filter block boundaries to remove blockiness artifacts from reconstructed video.
  • Another in-loop filter such as Sample Adaptive Offset (SAO) filter and/or Adaptive in-Loop Filter (ALF), may also be used in addition to the deblocking filter to filter an output of the summer 62.
  • the in-loop filters may be omitted, and the decoded video block may be directly provided by the summer 62 to the DPB 64.
  • the video encoder 20 may take the form of a fixed or programmable hardware unit or may be divided among one or more of the illustrated fixed or programmable hardware units.
  • the video data memory 40 may store video data to be encoded by the components of the video encoder 20.
  • the video data in the video data memory 40 may be obtained, for example, from the video source 18 as shown in FIG. 1A.
  • the DPB 64 is a buffer that stores reference video data (for example, reference frames or pictures) for use in encoding video data by the video encoder 20 (e.g., in intra or inter predictive coding modes).
  • the video data memory 40 and the DPB 64 may be formed by any of a variety of memory devices.
  • the video data memory 40 may be on-chip with other components of the video encoder 20, or off-chip relative to those components.
  • the partition unit 45 within the prediction processing unit 41 partitions the video data into video blocks.
  • This partitioning may also include partitioning a video frame into slices, tiles (for example, sets of video blocks), or other larger Coding Units (CUs) according to predefined splitting structures such as a Quad-Tree (QT) structure associated with the video data.
  • the video frame is or may be regarded as a two-dimensional array or matrix of samples with sample values.
  • a sample in the array may also be referred to as a pixel or a pel.
  • a number of samples in horizontal and vertical directions (or axes) of the array or picture define a size and/or a resolution of the video frame.
  • the video frame may be divided into multiple video blocks by, for example, using QT partitioning.
  • the video block again is or may be regarded as a two-dimensional array or matrix of samples with sample values, although of smaller dimension than the video frame.
  • a number of samples in horizontal and vertical directions (or axes) of the video block define a size of the video block.
  • the video block may further be partitioned into one or more block partitions or sub-blocks (which may again form blocks) by, for example, iteratively using QT partitioning, Binary-Tree (BT) partitioning or Triple-Tree (TT) partitioning or any combination thereof.
  • a block or video block may be a portion, in particular a rectangular (square or non-square) portion, of a frame or a picture.
  • the block or video block may be or correspond to a Coding Tree Unit (CTU), a CU, a Prediction Unit (PU) or a Transform Unit (TU) and/or may be or correspond to a corresponding block, e.g., a Coding Tree Block (CTB), a Coding Block (CB), a Prediction Block (PB) or a Transform Block (TB) and/or to a sub-block.
  • the prediction processing unit 41 may select one of a plurality of possible predictive coding modes, such as one of a plurality of intra predictive coding modes or one of a plurality of inter predictive coding modes, for the current video block based on error results (e.g., coding rate and the level of distortion).
  • the prediction processing unit 41 may provide the resulting intra or inter prediction coded block to the summer 50 to generate a residual block and to the summer 62 to reconstruct the encoded block for use as part of a reference frame subsequently.
  • the prediction processing unit 41 also provides syntax elements, such as motion vectors, intra-mode indicators, partition information, and other such syntax information, to the entropy encoding unit 56.
  • the intra prediction processing unit 46 within the prediction processing unit 41 may perform intra predictive coding of the current video block relative to one or more neighbor blocks in the same frame as the current block to be coded to provide spatial prediction.
  • the motion estimation unit 42 and the motion compensation unit 44 within the prediction processing unit 41 perform inter predictive coding of the current video block relative to one or more predictive blocks in one or more reference frames to provide temporal prediction.
  • the video encoder 20 may perform multiple coding passes, e.g., to select an appropriate coding mode for each block of video data.
  • the motion estimation unit 42 determines the inter prediction mode for a current video frame by generating a motion vector, which indicates the displacement of a video block within the current video frame relative to a predictive block within a reference video frame, according to a predetermined pattern within a sequence of video frames.
  • Motion estimation performed by the motion estimation unit 42, is the process of generating motion vectors, which estimate motion for video blocks.
  • a motion vector for example, may indicate the displacement of a video block within a current video frame or picture relative to a predictive block within a reference frame relative to the current block being coded within the current frame.
  • the predetermined pattern may designate video frames in the sequence as P frames or B frames.
  • the intra BC unit 48 may determine vectors, e.g., block vectors, for intra BC coding in a manner similar to the determination of motion vectors by the motion estimation unit 42 for inter prediction, or may utilize the motion estimation unit 42 to determine the block vector.
  • a predictive block for the video block may be or may correspond to a block or a reference block of a reference frame that is deemed as closely matching the video block to be coded in terms of pixel difference, which may be determined by Sum of Absolute Difference (SAD), Sum of Square Difference (SSD), or other difference metrics.
  • the video encoder 20 may calculate values for sub-integer pixel positions of reference frames stored in the DPB 64. For example, the video encoder 20 may interpolate values of one-quarter pixel positions, one-eighth pixel positions, or other fractional pixel positions of the reference frame. Therefore, the motion estimation unit 42 may perform a motion search relative to the full pixel positions and fractional pixel positions and output a motion vector with fractional pixel precision.
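The matching criteria and fractional-pel interpolation mentioned in the surrounding bullets can be sketched as follows; the bilinear half-pel filter is only illustrative, since actual codecs use longer separable interpolation filters.

```python
import numpy as np

def sad(block, candidate):
    """Sum of Absolute Differences between the block and a candidate predictor."""
    return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

def ssd(block, candidate):
    """Sum of Square Differences."""
    diff = block.astype(int) - candidate.astype(int)
    return int((diff * diff).sum())

def bilinear_half_pel(ref):
    """Toy half-pel interpolation of a reference region (rounded average)."""
    horiz = (ref[:, :-1].astype(int) + ref[:, 1:] + 1) // 2
    vert = (ref[:-1, :].astype(int) + ref[1:, :] + 1) // 2
    return horiz, vert
```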
  • the motion estimation unit 42 calculates a motion vector for a video block in an inter prediction coded frame by comparing the position of the video block to the position of a predictive block of a reference frame selected from a first reference frame list (List 0) or a second reference frame list (List 1), each of which identifies one or more reference frames stored in the DPB 64.
  • the motion estimation unit 42 sends the calculated motion vector to the motion compensation unit 44 and then to the entropy encoding unit 56.
  • Motion compensation performed by the motion compensation unit 44, may involve fetching or generating the predictive block based on the motion vector determined by the motion estimation unit 42.
  • the motion compensation unit 44 may locate a predictive block to which the motion vector points in one of the reference frame lists, retrieve the predictive block from the DPB 64, and forward the predictive block to the summer 50.
  • the summer 50 then forms a residual video block of pixel difference values by subtracting pixel values of the predictive block provided by the motion compensation unit 44 from the pixel values of the current video block being coded.
  • the pixel difference values forming the residual video block may include luma or chroma component differences or both.
  • the motion compensation unit 44 may also generate syntax elements associated with the video blocks of a video frame for use by the video decoder 30 in decoding the video blocks of the video frame.
  • the syntax elements may include, for example, syntax elements defining the motion vector used to identify the predictive block, any flags indicating the prediction mode, or any other syntax information described herein. Note that the motion estimation unit 42 and the motion compensation unit 44 may be highly integrated, but are illustrated separately for conceptual purposes.
  • the intra BC unit 48 may generate vectors and fetch predictive blocks in a manner similar to that described above in connection with the motion estimation unit 42 and the motion compensation unit 44, but with the predictive blocks being in the same frame as the current block being coded and with the vectors being referred to as block vectors as opposed to motion vectors.
  • the intra BC unit 48 may determine an intra-prediction mode to use to encode a current block.
  • the intra BC unit 48 may encode a current block using various intra-prediction modes, e.g., during separate encoding passes, and test their performance through rate-distortion analysis.
  • the intra BC unit 48 may select, among the various tested intra-prediction modes, an appropriate intra-prediction mode to use and generate an intra-mode indicator accordingly. For example, the intra BC unit 48 may calculate rate-distortion values using a rate-distortion analysis for the various tested intra-prediction modes, and select the intra-prediction mode having the best rate-distortion characteristics among the tested modes as the appropriate intra-prediction mode to use. Rate-distortion analysis generally determines an amount of distortion (or error) between an encoded block and an original, unencoded block that was encoded to produce the encoded block, as well as a bitrate (i.e., a number of bits) used to produce the encoded block. Intra BC unit 48 may calculate ratios from the distortions and rates for the various encoded blocks to determine which intra-prediction mode exhibits the best rate-distortion value for the block.
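The bullet above speaks of ratios of distortions to rates; a common concrete form of such mode selection, given here only as an illustrative sketch, is the Lagrangian cost J = D + lambda * R, minimized over the tested modes with an encoder-chosen lambda.

```python
def rd_cost(distortion, bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def best_mode(tested, lam):
    """tested: list of (mode, distortion, bits); returns the lowest-cost mode."""
    return min(tested, key=lambda t: rd_cost(t[1], t[2], lam))[0]
```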
  • the intra BC unit 48 may use the motion estimation unit 42 and the motion compensation unit 44, in whole or in part, to perform such functions for Intra BC prediction according to the implementations described herein.
  • a predictive block may be a block that is deemed as closely matching the block to be coded, in terms of pixel difference, which may be determined by SAD, SSD, or other difference metrics, and identification of the predictive block may include calculation of values for sub-integer pixel positions.
  • the video encoder 20 may form a residual video block by subtracting pixel values of the predictive block from the pixel values of the current video block being coded, forming pixel difference values.
  • the pixel difference values forming the residual video block may include both luma and chroma component differences.
  • the intra prediction processing unit 46 may intra-predict a current video block, as an alternative to the inter-prediction performed by the motion estimation unit 42 and the motion compensation unit 44, or the intra block copy prediction performed by the intra BC unit 48, as described above.
  • the intra prediction processing unit 46 may determine an intra prediction mode to use to encode a current block. To do so, the intra prediction processing unit 46 may encode a current block using various intra prediction modes, e.g., during separate encoding passes, and the intra prediction processing unit 46 (or a mode selection unit, in some examples) may select an appropriate intra prediction mode to use from the tested intra prediction modes.
  • the intra prediction processing unit 46 may provide information indicative of the selected intra-prediction mode for the block to the entropy encoding unit 56.
  • the entropy encoding unit 56 may encode the information indicating the selected intra-prediction mode in the bitstream.
  • the summer 50 forms a residual video block by subtracting the predictive block from the current video block.
  • the residual video data in the residual block may be included in one or more TUs and is provided to the transform processing unit 52.
  • the transform processing unit 52 transforms the residual video data into residual transform coefficients using a transform, such as a Discrete Cosine Transform (DCT) or a conceptually similar transform.
  • DCT Discrete Cosine Transform
  • the transform processing unit 52 may send the resulting transform coefficients to the quantization unit 54.
  • the quantization unit 54 quantizes the transform coefficients to further reduce the bit rate.
  • the quantization process may also reduce the bit depth associated with some or all of the coefficients.
  • the degree of quantization may be modified by adjusting a quantization parameter.
  • the quantization unit 54 may then perform a scan of a matrix including the quantized transform coefficients.
  • the entropy encoding unit 56 may perform the scan.
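A sketch of scalar quantization and of scanning the quantized matrix into a one-dimensional order for entropy coding; the rounding offset and the diagonal scan here are illustrative rather than the normative ones.

```python
import numpy as np

def quantize(coeffs, qstep, offset=0.5):
    """Scalar quantization; a larger qstep (higher QP) means coarser levels."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / qstep + offset)

def diagonal_scan(matrix):
    """Visit anti-diagonals, roughly from low to high frequency."""
    n, m = matrix.shape
    return [matrix[i, s - i]
            for s in range(n + m - 1)
            for i in range(max(0, s - m + 1), min(n, s + 1))]
```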
  • the entropy encoding unit 56 entropy encodes the quantized transform coefficients into a video bitstream using, e.g., Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), Syntax-based context-adaptive Binary Arithmetic Coding (SBAC), Probability Interval Partitioning Entropy (PIPE) coding or another entropy encoding methodology or technique.
  • the encoded bitstream may then be transmitted to the video decoder 30 as shown in FIG. 1A, or archived in the storage device 32 as shown in FIG. 1A for later transmission to or retrieval by the video decoder 30.
  • the inverse quantization unit 58 and the inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual video block in the pixel domain for generating a reference block for prediction of other video blocks.
  • the motion compensation unit 44 may generate a motion compensated predictive block from one or more reference blocks of the frames stored in the DPB 64.
  • the motion compensation unit 44 may also apply one or more interpolation filters to the predictive block to calculate sub-integer pixel values for use in motion estimation.
  • the summer 62 adds the reconstructed residual block to the motion compensated predictive block produced by the motion compensation unit 44 to produce a reference block for storage in the DPB 64.
  • the reference block may then be used by the intra BC unit 48, the motion estimation unit 42 and the motion compensation unit 44 as a predictive block to inter predict another video block in a subsequent video frame.
  • FIG. 2B is a block diagram illustrating another exemplary video decoder 30 in accordance with some implementations of the present application.
  • the video decoder 30 includes a video data memory 79, an entropy decoding unit 80, a prediction processing unit 81, an inverse quantization unit 86, an inverse transform processing unit 88, a summer 90, and a DPB 92.
  • the prediction processing unit 81 further includes a motion compensation unit 82, an intra prediction unit 84, and an intra BC unit 85.
  • the video decoder 30 may perform a decoding process generally reciprocal to the encoding process described above with respect to the video encoder 20 in connection with FIG. 1G.
  • the motion compensation unit 82 may generate prediction data based on motion vectors received from the entropy decoding unit 80, while the intra prediction unit 84 may generate prediction data based on intra-prediction mode indicators received from the entropy decoding unit 80.
  • a unit of the video decoder 30 may be tasked to perform the implementations of the present application. Also, in some examples, the implementations of the present disclosure may be divided among one or more of the units of the video decoder 30.
  • the intra BC unit 85 may perform the implementations of the present application, alone, or in combination with other units of the video decoder 30, such as the motion compensation unit 82, the intra prediction unit 84, and the entropy decoding unit 80.
  • the video decoder 30 may not include the intra BC unit 85 and the functionality of intra BC unit 85 may be performed by other components of the prediction processing unit 81, such as the motion compensation unit 82.
  • the video data memory 79 may store video data, such as an encoded video bitstream, to be decoded by the other components of the video decoder 30.
  • the video data stored in the video data memory 79 may be obtained, for example, from the storage device 32, from a local video source, such as a camera, via wired or wireless network communication of video data, or by accessing physical data storage media (e.g., a flash drive or hard disk).
  • the video data memory 79 may include a Coded Picture Buffer (CPB) that stores encoded video data from an encoded video bitstream.
  • the DPB 92 of the video decoder 30 stores reference video data for use in decoding video data by the video decoder 30 (e.g., in intra or inter predictive coding modes).
  • the video data memory 79 and the DPB 92 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including Synchronous DRAM (SDRAM), Magneto-resistive RAM (MRAM), Resistive RAM (RRAM), or other types of memory devices.
  • the video data memory 79 and the DPB 92 are depicted as two distinct components of the video decoder 30 in FIG. 2B. But it will be apparent to one skilled in the art that the video data memory 79 and the DPB 92 may be provided by the same memory device or separate memory devices.
  • the video data memory 79 may be on-chip with other components of the video decoder 30, or off-chip relative to those components.
  • the video decoder 30 receives an encoded video bitstream that represents video blocks of an encoded video frame and associated syntax elements.
  • the video decoder 30 may receive the syntax elements at the video frame level and/or the video block level.
  • the entropy decoding unit 80 of the video decoder 30 entropy decodes the bitstream to generate quantized coefficients, motion vectors or intra-prediction mode indicators, and other syntax elements.
  • the entropy decoding unit 80 then forwards the motion vectors or intra-prediction mode indicators and other syntax elements to the prediction processing unit 81.
  • the intra prediction unit 84 of the prediction processing unit 81 may generate prediction data for a video block of the current video frame based on a signaled intra prediction mode and reference data from previously decoded blocks of the current frame.
  • the motion compensation unit 82 of the prediction processing unit 81 produces one or more predictive blocks for a video block of the current video frame based on the motion vectors and other syntax elements received from the entropy decoding unit 80.
  • Each of the predictive blocks may be produced from a reference frame within one of the reference frame lists.
  • the video decoder 30 may construct the reference frame lists, List 0 and List 1, using default construction techniques based on reference frames stored in the DPB 92.
  • the intra BC unit 85 of the prediction processing unit 81 produces predictive blocks for the current video block based on block vectors and other syntax elements received from the entropy decoding unit 80.
  • the predictive blocks may be within a reconstructed region of the same picture as the current video block defined by the video encoder 20.
  • the motion compensation unit 82 and/or the intra BC unit 85 determines prediction information for a video block of the current video frame by parsing the motion vectors and other syntax elements, and then uses the prediction information to produce the predictive blocks for the current video block being decoded.
  • the motion compensation unit 82 uses some of the received syntax elements to determine a prediction mode (e.g., intra or inter prediction) used to code video blocks of the video frame, an inter prediction frame type (e.g., B or P), construction information for one or more of the reference frame lists for the frame, motion vectors for each inter predictive encoded video block of the frame, inter prediction status for each inter predictive coded video block of the frame, and other information to decode the video blocks in the current video frame.
  • the intra BC unit 85 may use some of the received syntax elements, e.g., a flag, to determine that the current video block was predicted using the intra BC mode, construction information of which video blocks of the frame are within the reconstructed region and should be stored in the DPB 92, block vectors for each intra BC predicted video block of the frame, intra BC prediction status for each intra BC predicted video block of the frame, and other information to decode the video blocks in the current video frame.
•	the motion compensation unit 82 may also perform interpolation using the interpolation filters as used by the video encoder 20 during encoding of the video blocks to calculate interpolated values for sub-integer pixels of reference blocks. In this case, the motion compensation unit 82 may determine the interpolation filters used by the video encoder 20 from the received syntax elements and use the interpolation filters to produce predictive blocks.
  • the inverse quantization unit 86 inverse quantizes the quantized transform coefficients provided in the bitstream and entropy decoded by the entropy decoding unit 80 using the same quantization parameter calculated by the video encoder 20 for each video block in the video frame to determine a degree of quantization.
  • the inverse transform processing unit 88 applies an inverse transform, e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to reconstruct the residual blocks in the pixel domain.
•	the summer 90 reconstructs a decoded video block for the current video block by summing the residual block from the inverse transform processing unit 88 and a corresponding predictive block generated by the motion compensation unit 82 and the intra BC unit 85.
•	An in-loop filter 91, such as a deblocking filter, an SAO filter and/or an ALF, may be positioned between the summer 90 and the DPB 92 to further process the decoded video block.
  • the in-loop filter 91 may be omitted, and the decoded video block may be directly provided by the summer 90 to the DPB 92.
  • the decoded video blocks in a given frame are then stored in the DPB 92, which stores reference frames used for subsequent motion compensation of next video blocks.
  • the DPB 92, or a memory device separate from the DPB 92, may also store decoded video for later presentation on a display device, such as the display device 34 of FIG. 1 A.
  • motion information of the current coding block is either copied from spatial or temporal neighboring blocks specified by a merge candidate index or obtained by explicit signaling of motion estimation.
  • the focus of the present disclosure is to improve the accuracy of the motion vectors for affine merge mode by improving the derivation methods of affine merge candidates.
  • the existing affine merge mode design in the VVC standard is used as an example to illustrate the proposed ideas.
  • a video sequence typically includes an ordered set of frames or pictures.
  • Each frame may include three sample arrays, denoted SL, SCb, and SCr.
  • SL is a two-dimensional array of luma samples.
  • SCb is a two-dimensional array of Cb chroma samples.
  • SCr is a two-dimensional array of Cr chroma samples.
  • a frame may be monochrome and therefore includes only one two-dimensional array of luma samples.
  • the video encoder 20 (or more specifically a partition unit in a prediction processing unit of the video encoder 20) generates an encoded representation of a frame by first partitioning the frame into a set of CTUs.
  • a video frame may include an integer number of CTUs ordered consecutively in a raster scan order from left to right and from top to bottom.
•	Each CTU is the largest logical coding unit and the width and height of the CTU are signaled by the video encoder 20 in a sequence parameter set, such that all the CTUs in a video sequence have the same size, being one of 128x128, 64x64, 32x32, and 16x16. But it should be noted that the present application is not necessarily limited to a particular size.
  • each CTU may include one CTB of luma samples, two corresponding coding tree blocks of chroma samples, and syntax elements used to code the samples of the coding tree blocks.
  • the syntax elements describe properties of different types of units of a coded block of pixels and how the video sequence can be reconstructed at the video decoder 30, including inter or intra prediction, intra prediction mode, motion vectors, and other parameters.
  • a CTU may include a single coding tree block and syntax elements used to code the samples of the coding tree block.
  • a coding tree block may be an NxN block of samples.
  • the video encoder 20 may recursively perform tree partitioning such as binary-tree partitioning, ternary-tree partitioning, quad-tree partitioning or a combination thereof on the coding tree blocks of the CTU and divide the CTU into smaller CUs.
  • the 64x64 CTU 400 is first divided into four smaller CUs, each having a block size of 32x32.
  • CU 410 and CU 420 are each divided into four CUs of 16x16 by block size.
  • the two 16x16 CUs 430 and 440 are each further divided into four CUs of 8x8 by block size.
•	each leaf node of the quadtree corresponds to one CU of a respective size ranging from 32x32 to 8x8.
  • each CU may include a CB of luma samples and two corresponding coding blocks of chroma samples of a frame of the same size, and syntax elements used to code the samples of the coding blocks.
  • a CU may include a single coding block and syntax structures used to code the samples of the coding block.
•	the partitioning depicted in FIGS. 1E-1F is only for illustrative purposes, and one CTU can be split into CUs to adapt to varying local characteristics based on quad/ternary/binary-tree partitions.
  • one CTU is partitioned by a quad-tree structure and each quad-tree leaf CU can be further partitioned by a binary and ternary tree structure.
•	As shown in FIGS. 3A-3E, there are five possible partitioning types of a coding block having a width W and a height H, i.e., quaternary partitioning, horizontal binary partitioning, vertical binary partitioning, horizontal ternary partitioning, and vertical ternary partitioning.
  • the video encoder 20 may further partition a coding block of a CU into one or more MxN PBs.
  • a PB is a rectangular (square or non-square) block of samples on which the same prediction, inter or intra, is applied.
  • a PU of a CU may include a PB of luma samples, two corresponding PBs of chroma samples, and syntax elements used to predict the PBs.
  • a PU may include a single PB and syntax structures used to predict the PB.
  • the video encoder 20 may generate predictive luma, Cb, and Cr blocks for luma, Cb, and Cr PBs of each PU of the CU.
  • the video encoder 20 may use intra prediction or inter prediction to generate the predictive blocks for a PU. If the video encoder 20 uses intra prediction to generate the predictive blocks of a PU, the video encoder 20 may generate the predictive blocks of the PU based on decoded samples of the frame associated with the PU. If the video encoder 20 uses inter prediction to generate the predictive blocks of a PU, the video encoder 20 may generate the predictive blocks of the PU based on decoded samples of one or more frames other than the frame associated with the PU.
  • the video encoder 20 may generate a luma residual block for the CU by subtracting the CU’s predictive luma blocks from its original luma coding block such that each sample in the CU’s luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block.
  • the video encoder 20 may generate a Cb residual block and a Cr residual block for the CU, respectively, such that each sample in the CU's Cb residual block indicates a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block and each sample in the CU's Cr residual block may indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block.
  • the video encoder 20 may use quad-tree partitioning to decompose the luma, Cb, and Cr residual blocks of a CU into one or more luma, Cb, and Cr transform blocks respectively.
  • a transform block is a rectangular (square or non-square) block of samples on which the same transform is applied.
  • a TU of a CU may include a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax elements used to transform the transform block samples.
  • each TU of a CU may be associated with a luma transform block, a Cb transform block, and a Cr transform block.
  • the luma transform block associated with the TU may be a sub-block of the CU's luma residual block.
  • the Cb transform block may be a sub-block of the CU's Cb residual block.
  • the Cr transform block may be a sub-block of the CU's Cr residual block.
  • a TU may include a single transform block and syntax structures used to transform the samples of the transform block.
  • the video encoder 20 may apply one or more transforms to a luma transform block of a TU to generate a luma coefficient block for the TU.
  • a coefficient block may be a two- dimensional array of transform coefficients.
  • a transform coefficient may be a scalar quantity.
  • the video encoder 20 may apply one or more transforms to a Cb transform block of a TU to generate a Cb coefficient block for the TU.
  • the video encoder 20 may apply one or more transforms to a Cr transform block of a TU to generate a Cr coefficient block for the TU.
  • the video encoder 20 may quantize the coefficient block. Quantization generally refers to a process in which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression.
•	the video encoder 20 may entropy encode syntax elements indicating the quantized transform coefficients. For example, the video encoder 20 may perform CABAC on the syntax elements indicating the quantized transform coefficients.
  • the video encoder 20 may output a bitstream that includes a sequence of bits that forms a representation of coded frames and associated data, which is either saved in the storage device 32 or transmitted to the destination device 14.
  • the video decoder 30 may parse the bitstream to obtain syntax elements from the bitstream.
  • the video decoder 30 may reconstruct the frames of the video data based at least in part on the syntax elements obtained from the bitstream.
  • the process of reconstructing the video data is generally reciprocal to the encoding process performed by the video encoder 20.
  • the video decoder 30 may perform inverse transforms on the coefficient blocks associated with TUs of a current CU to reconstruct residual blocks associated with the TUs of the current CU.
•	the video decoder 30 also reconstructs the coding blocks of the current CU by adding the samples of the predictive blocks for PUs of the current CU to corresponding samples of the transform blocks of the TUs of the current CU. After reconstructing the coding blocks for each CU of a frame, the video decoder 30 may reconstruct the frame.
•	video coding achieves video compression using primarily two modes, i.e., intra-frame prediction (or intra-prediction) and inter-frame prediction (or inter-prediction). It is noted that IBC could be regarded as either intra-frame prediction or a third mode. Between the two modes, inter-frame prediction contributes more to the coding efficiency than intra-frame prediction because of the use of motion vectors for predicting a current video block from a reference video block.
•	the motion information of spatially neighboring CUs and/or temporally co-located CUs may be used as an approximation of the motion information (e.g., motion vector) of a current CU by exploring their spatial and temporal correlation, which is also referred to as the “Motion Vector Predictor (MVP)” of the current CU.
  • the motion vector predictor of the current CU is subtracted from the actual motion vector of the current CU to produce a Motion Vector Difference (MVD) for the current CU.
•	a set of rules needs to be adopted by both the video encoder 20 and the video decoder 30 for constructing a motion vector candidate list (also known as a “merge list”) for a current CU using those potential candidate motion vectors associated with spatially neighboring CUs and/or temporally co-located CUs of the current CU and then selecting one member from the motion vector candidate list as a motion vector predictor for the current CU.
  • affine motion compensated prediction is applied by signaling one flag for each inter coding block to indicate whether the translation motion model or the affine motion model is applied for inter prediction.
•	two affine modes, including a 4-parameter affine mode and a 6-parameter affine mode, are supported for one affine coding block.
  • the 4-parameter affine model has the following parameters: two parameters for translation movement in horizontal and vertical directions respectively, one parameter for zoom motion and one parameter for rotational motion for both directions.
  • horizontal zoom parameter is equal to vertical zoom parameter
  • horizontal rotation parameter is equal to vertical rotation parameter.
•	those affine parameters are to be derived from two MVs (which are also called control point motion vectors (CPMVs)) located at the top-left corner and top-right corner of a current block.
•	As shown in FIGS. 4A-4B, the affine motion field of the block is described by two CPMVs (V0, V1). Based on the control point motion, the motion field (v_x, v_y) of one affine coded block is described as equation (1):

$$v_x = \frac{v_{1x} - v_{0x}}{w}x - \frac{v_{1y} - v_{0y}}{w}y + v_{0x}, \qquad v_y = \frac{v_{1y} - v_{0y}}{w}x + \frac{v_{1x} - v_{0x}}{w}y + v_{0y} \quad (1)$$

where w is the width of the block.
  • the 6-parameter affine mode has the following parameters: two parameters for translation movement in horizontal and vertical directions respectively, two parameters for zoom motion and rotation motion respectively in horizontal direction, another two parameters for zoom motion and rotation motion respectively in vertical direction.
•	the 6-parameter affine motion model is coded with three CPMVs. As shown in FIG. 5, the three control points of one 6-parameter affine block are located at the top-left, top-right and bottom-left corners of the block.
•	the motion at the top-left control point is related to translation motion
•	the motion at the top-right control point is related to rotation and zoom motion in the horizontal direction
•	the motion at the bottom-left control point is related to rotation and zoom motion in the vertical direction
•	the rotation and zoom motion in the horizontal direction of the 6-parameter model may not be the same as those in the vertical direction.
•	the motion vector of each sub-block (v_x, v_y) is derived using the three MVs at control points as equation (2):

$$v_x = \frac{v_{1x} - v_{0x}}{w}x + \frac{v_{2x} - v_{0x}}{h}y + v_{0x}, \qquad v_y = \frac{v_{1y} - v_{0y}}{w}x + \frac{v_{2y} - v_{0y}}{h}y + v_{0y} \quad (2)$$

where w and h are the width and height of the block.
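To make equations (1) and (2) concrete, the following is a minimal floating-point sketch that derives the translational MV of one sub-block from two or three CPMVs; function and variable names are illustrative only and are not taken from any standard or reference software.

```python
def subblock_mv(cpmvs, w, h, x, y):
    """Derive the translational MV (vx, vy) at a sub-block pivot (x, y)
    inside a w x h block from 2 CPMVs (4-parameter model, equation (1))
    or 3 CPMVs (6-parameter model, equation (2))."""
    (v0x, v0y), (v1x, v1y) = cpmvs[0], cpmvs[1]
    if len(cpmvs) == 2:                      # 4-parameter affine model
        vx = (v1x - v0x) / w * x - (v1y - v0y) / w * y + v0x
        vy = (v1y - v0y) / w * x + (v1x - v0x) / w * y + v0y
    else:                                    # 6-parameter affine model
        v2x, v2y = cpmvs[2]
        vx = (v1x - v0x) / w * x + (v2x - v0x) / h * y + v0x
        vy = (v1y - v0y) / w * x + (v2y - v0y) / h * y + v0y
    return vx, vy
```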
•	In affine merge mode, the CPMVs for the current block are not explicitly signaled but derived from neighboring blocks. Specifically, in this mode, motion information of spatial neighbor blocks is used to generate CPMVs for the current block.
  • the affine merge mode candidate list has a limited size. For example, in the current VVC design, there may be up to five candidates.
  • the encoder may evaluate and choose the best candidate index based on rate-distortion optimization algorithms. The chosen candidate index is then signaled to the decoder side.
•	the affine merge candidates can be decided in three ways. In the first way, the affine merge candidates may be inherited from neighboring affine coded blocks. In the second way, the affine merge candidates may be constructed from translational MVs from neighboring blocks. In the third way, zero MVs are used as the affine merge candidates.
•	the candidates are obtained from the neighboring blocks located at the bottom-left of the current block (e.g., scanning order is from A0 to A1 as shown in FIG. 6) and from the neighboring blocks located at the top-right of the current block (e.g., scanning order is from B0 to B2 as shown in FIG. 6), if available.
  • the candidates are the combinations of neighbor’s translational MVs, which may be generated by two steps.
•	Step 1: obtain four translational MVs, including MV1, MV2, MV3 and MV4, from available neighbors.
•	MV1: MV from one of the three neighboring blocks close to the top-left corner of the current block. As shown in FIG. 7, the scanning order is B2, B3 and A2.
•	MV2: MV from one of the two neighboring blocks close to the top-right corner of the current block. As shown in FIG. 7, the scanning order is B1 and B0.
•	MV3: MV from one of the two neighboring blocks close to the bottom-left corner of the current block. As shown in FIG. 7, the scanning order is A1 and A0.
•	MV4: MV from the temporally collocated block of the neighboring block close to the bottom-right corner of the current block. As shown in FIG. 7, the neighboring block is T.
•	Step 2: derive combinations based on the four translational MVs from Step 1 (a sketch follows the list below).
•	Combination 1: {MV1, MV2, MV3};
•	Combination 2: {MV1, MV2, MV4};
•	Combination 3: {MV1, MV3, MV4};
•	Combination 4: {MV2, MV3, MV4};
•	Combination 5: {MV1, MV2};
•	Combination 6: {MV1, MV3}.
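A minimal sketch of this two-step construction is given below, assuming MV1 to MV4 have already been gathered in Step 1 and are None when the corresponding neighbor is unavailable; all names are illustrative.

```python
def build_constructed_combinations(mv1, mv2, mv3, mv4):
    """Step 2 sketch: enumerate the candidate combinations and keep
    only those whose member MVs are all available."""
    combos = [(mv1, mv2, mv3), (mv1, mv2, mv4), (mv1, mv3, mv4),
              (mv2, mv3, mv4), (mv1, mv2), (mv1, mv3)]
    return [c for c in combos if all(mv is not None for mv in c)]
```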
  • Affine advanced motion vector prediction (AMVP) mode may be applied for CUs with both width and height larger than or equal to 16.
•	An affine flag at the CU level is signaled in the bitstream to indicate whether affine AMVP mode is used, and then another flag is signaled to indicate whether the 4-parameter or the 6-parameter affine model is used.
  • the difference of the CPMVs of current CU and their CPMV predictors (CPMVPs) is signaled in the bitstream.
•	the affine AMVP candidate list size is 2 and the affine AMVP candidate list is generated by using the following four types of CPMV candidate in the order below: inherited affine AMVP candidates extrapolated from the CPMVs of neighboring CUs, constructed affine AMVP candidates derived using the translational MVs of neighboring CUs, translational MVs from neighboring CUs, and zero MVs.
•	the checking order of inherited affine AMVP candidates is the same as the checking order of inherited affine merge candidates. The only difference is that, for an AMVP candidate, only the affine CU that has the same reference picture as the current block is considered. No pruning process is applied when inserting an inherited affine motion predictor into the candidate list.
  • Constructed AMVP candidate is derived from the same spatial neighbors as affine merge mode.
  • the same checking order is used as done in affine merge candidate construction.
  • reference picture index of the neighboring block is also checked.
•	the first block in the checking order that is inter coded and has the same reference picture as the current CU is used.
•	if the current CU is coded with the 4-parameter affine mode, and mv_0 and mv_1 are both available, mv_0 and mv_1 are added as one candidate in the affine AMVP candidate list.
•	if the current CU is coded with the 6-parameter affine mode, and all three CPMVs are available, they are added as one candidate in the affine AMVP candidate list. Otherwise, the constructed AMVP candidate is set as unavailable.
•	mv_0, mv_1 and mv_2 will be added, in order, as the translational MVs to predict all control point MVs of the current CU, when available. Finally, zero MVs are used to fill the affine AMVP list if it is still not full.
•	the regular inter merge candidate list is constructed by including the following five types of candidates in order: spatial MVP from spatial neighboring CUs, temporal MVP from collocated CUs, history-based MVP from a FIFO table, pairwise average MVP, and zero MVs.
  • the size of merge list is signaled in sequence parameter set header and the maximum allowed size of merge list is 6.
  • an index of best merge candidate is encoded using truncated unary binarization (TU).
  • the first bin of the merge index is coded with context and bypass coding is used for other bins.
•	the derivation of spatial merge candidates in VVC is the same as that in HEVC, except that the positions of the first two merge candidates are swapped. A maximum of four merge candidates is selected among candidates located in the positions depicted in FIG. 4C.
•	the order of derivation is B0, A0, B1, A1 and B2.
•	Position B2 is considered only when one or more CUs at positions B0, A0, B1, A1 are not available (e.g., because they belong to another slice or tile) or are intra coded.
•	after the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check, which ensures that candidates with the same motion information are excluded from the list so that coding efficiency is improved.
  • FIG. 4D illustrates candidate pairs that are considered for redundancy check of spatial merge candidates.
  • a scaled motion vector is derived based on co-located CU belonging to the collocated reference picture.
  • the reference picture list and the reference index to be used for derivation of the co-located CU is explicitly signaled in the slice header.
•	the scaled motion vector for the temporal merge candidate is obtained as illustrated by the dotted line in FIG. 4E, which is scaled from the motion vector of the co-located CU using the POC distances tb and td, where tb is defined to be the POC difference between the reference picture of the current picture and the current picture, and td is defined to be the POC difference between the reference picture of the co-located picture and the co-located picture.
  • the reference picture index of temporal merge candidate is set equal to zero.
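The tb/td scaling can be sketched in floating point as below; the actual standards use clipped fixed-point arithmetic, and all function and argument names here are illustrative assumptions.

```python
def scale_temporal_mv(mv_col, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale the co-located CU's MV by the ratio of POC distances:
    tb relates the current picture to its reference picture, and
    td relates the co-located picture to its reference picture."""
    tb = poc_cur_ref - poc_cur
    td = poc_col_ref - poc_col
    scale = tb / td
    return (mv_col[0] * scale, mv_col[1] * scale)
```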
•	the position for the temporal candidate is selected between candidates C0 and C1, as depicted in FIG. 4F. If the CU at position C0 is not available, is intra coded, or is outside of the current row of CTUs, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.
•	the HMVP table size S may be set to be 6, which indicates that up to 6 History-based MVP (HMVP) candidates may be added to the table.
•	the HMVP table is managed following a constrained first-in-first-out (FIFO) rule, wherein a redundancy check is first applied to find whether an identical HMVP exists in the table.
  • HMVP candidates could be used in the merge candidate list construction process.
•	the latest several HMVP candidates in the table are checked in order and inserted into the candidate list after the TMVP candidate. A redundancy check is applied on the HMVP candidates against the spatial or temporal merge candidates.
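A high-level sketch of the constrained FIFO update of the HMVP table follows; it assumes candidates compare equal when their motion information is identical, and the names are illustrative rather than taken from any reference software.

```python
def update_hmvp_table(table, new_cand, max_size=6):
    """Constrained FIFO update (sketch): an identical entry is removed
    first (redundancy check); otherwise the oldest entry is dropped when
    the table is full. The new candidate becomes the most recent entry."""
    if new_cand in table:
        table.remove(new_cand)       # remove the identical HMVP
    elif len(table) == max_size:
        table.pop(0)                 # drop the oldest entry
    table.append(new_cand)
    return table
```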
  • Pairwise average candidates are generated by averaging predefined pairs of candidates in the existing merge candidate list, using the first two merge candidates.
•	the first merge candidate is defined as p0Cand and the second merge candidate is defined as p1Cand, respectively.
•	the averaged motion vectors are calculated according to the availability of the motion vectors of p0Cand and p1Cand separately for each reference list. If both motion vectors are available in one list, these two motion vectors are averaged even when they point to different reference pictures, and the reference picture is set to the one of p0Cand; if only one motion vector is available, use the one directly; if no motion vector is available, keep this list invalid. Also, if the half-pel interpolation filter indices of p0Cand and p1Cand are different, it is set to 0.
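The per-list averaging rule can be sketched as follows, with each candidate modeled as a dict mapping a reference list index to an MV (absent when that list is invalid); all names are illustrative.

```python
def pairwise_average(p0cand, p1cand):
    """Average the list-0/list-1 MVs of the first two merge candidates.
    Both available: average them (the reference picture of p0Cand is
    kept); one available: use it directly; none: leave the list invalid."""
    avg = {}
    for lst in (0, 1):
        mv0, mv1 = p0cand.get(lst), p1cand.get(lst)
        if mv0 is not None and mv1 is not None:
            avg[lst] = ((mv0[0] + mv1[0]) / 2, (mv0[1] + mv1[1]) / 2)
        elif mv0 is not None or mv1 is not None:
            avg[lst] = mv0 if mv0 is not None else mv1
    return avg
```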
  • the reordering method is applied to regular merge mode, template matching (TM) merge mode, and affine merge mode (excluding the SbTMVP candidate), where the SbTMVP represents the Subblock-based Temporal Motion Vector Prediction candidate.
•	For the TM merge mode, merge candidates are reordered before the refinement process.
  • merge candidates are divided into several subgroups.
  • the subgroup size is set to 5.
•	Merge candidates in each subgroup are reordered in ascending order of cost values based on template matching. For simplification, merge candidates in the last subgroup, when it is not the first subgroup, are not reordered.
  • the template matching cost is measured by the sum of absolute differences (SAD) between samples of a template of the current block and their corresponding reference samples.
  • the template includes a set of reconstructed samples neighboring to the current block. Reference samples of the template are located by the same motion information of the current block.
•	when a merge candidate utilizes bi-directional prediction, the reference samples of the template of the merge candidate are also generated by bi-prediction, as shown in FIG. 19.
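The subgroup-wise reordering can be sketched as below, where template_cost is an assumed callable returning the SAD-based template matching cost of a candidate; all names are illustrative.

```python
def reorder_merge_candidates(cands, template_cost, subgroup_size=5):
    """Reorder candidates within each subgroup in ascending template
    matching cost; as a simplification, the last subgroup is left
    unordered unless it is also the first one."""
    out = []
    for i in range(0, len(cands), subgroup_size):
        sub = cands[i:i + subgroup_size]
        is_last = i + subgroup_size >= len(cands)
        if is_last and i > 0:
            out.extend(sub)                       # not reordered
        else:
            out.extend(sorted(sub, key=template_cost))
    return out
```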
•	Merge mode with motion vector differences (MMVD) is introduced in VVC.
•	An MMVD flag is signaled right after sending a regular merge flag to specify whether MMVD mode is used for a CU.
•	In MMVD, after a merge candidate is selected, it is further refined by the signaled MVD information.
  • the further information includes a merge candidate flag, an index to specify motion magnitude, and an index for indication of motion direction.
•	In MMVD mode, one of the first two candidates in the merge list is selected to be used as the MV basis.
  • the MMVD candidate flag is signaled to specify which one is used between the first and second merge candidates.
•	Distance index specifies motion magnitude information and indicates the pre-defined offset from the starting point. As shown in FIG. 26, an offset is added to either the horizontal component or the vertical component of the starting MV. The relation of the distance index and the pre-defined offset is specified in Table 6 below.
  • Direction index represents the direction of the MVD relative to the starting point.
•	the direction index can represent one of the four directions, as shown in the above table. It is noted that the meaning of the MVD sign could vary according to the information of the starting MVs.
•	When the starting MV is a uni-prediction MV, or the starting MVs are bi-prediction MVs with both lists pointing to the same side of the current picture (i.e., the POCs of the two references are both larger than the POC of the current picture, or are both smaller than the POC of the current picture), the sign in Table 7 below specifies the sign of the MV offset added to the starting MV.
•	When the starting MVs are bi-prediction MVs with the two MVs pointing to different sides of the current picture (i.e., the POC of one reference is larger than the POC of the current picture and the POC of the other reference is smaller than the POC of the current picture), and the difference of POC in list 0 is greater than that in list 1, the sign in the below table specifies the sign of the MV offset added to the list0 MV component of the starting MV, and the sign for the list1 MV has the opposite value. Otherwise, if the difference of POC in list 1 is greater than that in list 0, the sign in the below table specifies the sign of the MV offset added to the list1 MV component of the starting MV, and the sign for the list0 MV has the opposite value.
•	the MVD is scaled according to the difference of POCs in each direction. If the differences of POCs in both lists are the same, no scaling is needed. Otherwise, if the difference of POC in list 0 is larger than that of list 1, the MVD for list 1 is scaled, by defining the POC difference of L0 as td and the POC difference of L1 as tb, as described in FIG. 23. If the POC difference of L1 is greater than that of L0, the MVD for list 0 is scaled in the same way. If the starting MV is uni-predicted, the MVD is added to the available MV.
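For the uni-prediction case, the refinement reduces to adding a signed offset to the base MV, as sketched below using the distance offsets and four directions of VVC's MMVD design (referenced as Table 6 and the direction table above); the list-wise sign flipping and POC-based scaling for bi-prediction are omitted, and all names are illustrative.

```python
# Pre-defined offsets (in luma samples) and the four MVD directions,
# following the VVC MMVD design described in the text.
MMVD_OFFSETS = [1/4, 1/2, 1, 2, 4, 8, 16, 32]
MMVD_DIRECTIONS = [(+1, 0), (-1, 0), (0, +1), (0, -1)]

def mmvd_refined_mv(base_mv, distance_idx, direction_idx):
    """Add the signaled MMVD offset to the selected base merge MV."""
    off = MMVD_OFFSETS[distance_idx]
    sx, sy = MMVD_DIRECTIONS[direction_idx]
    return (base_mv[0] + sx * off, base_mv[1] + sy * off)
```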
  • the above template includes several sub-templates with the size of Wsub x 1, and the left template includes several sub-templates with the size of 1 x Hsub.
  • Wsub is the width of the subblock and Hsub is the height of the subblock.
•	the motion information of the subblocks in the first row and the first column of the current block is used to derive the reference samples of each sub-template.
  • each affine inherited candidate is derived from one neighboring block with affine motion information.
  • each affine constructed candidate is derived from two or three neighboring blocks with translational motion information.
  • the candidate derivation methods proposed for affine merge mode may be extended to other coding modes, such as affine AMVP mode and regular merge mode.
  • the candidate derivation process for affine merge mode is extended by using not only adjacent neighboring blocks but also non-adjacent neighboring blocks.
  • Detailed methods may be summarized in following aspects including affine merge candidate pruning, non-adjacent neighbor based derivation process for affine inherited merge candidates, non-adjacent neighbor based derivation process for affine constructed merge candidates, inheritance based derivation method for affine constructed merge candidates, HMVP based derivation method for affine constructed merge candidates, candidate derivation method for affine AMVP mode and regular merge mode, and motion information storage.
•	the affine merge candidate list in typical video coding standards usually has a limited size, so candidate pruning is an essential process to remove redundant candidates. For both affine merge inherited candidates and constructed candidates, this pruning process is needed.
  • CPMVs of a current block are not directly used for affine motion compensation. Instead, CPMVs need to be converted into translational MVs at the location of each sub-block within the current block.
•	the conversion process is performed by following a general affine model as shown in equation (3):

$$v_x = a + c \cdot x + d \cdot y, \qquad v_y = b + e \cdot x + f \cdot y \quad (3)$$

where (a, b) are delta translation parameters, (c, d) are delta zoom and rotation parameters for the horizontal direction, (e, f) are delta zoom and rotation parameters for the vertical direction, (x, y) are the horizontal and vertical distances of the pivot location (e.g., the center or top-left corner) of a sub-block relative to the top-left corner of the current block (e.g., the coordinate (x, y) shown in FIG. 5), and (v_x, v_y) are the target translational MVs of the sub-block.
•	In the case of a 4-parameter affine model where the two CPMVs are the top-left corner CPMV and the top-right corner CPMV, termed V0 and V1, the six parameters a, b, c, d, e and f can be calculated as

$$a = v_{0x}, \quad b = v_{0y}, \quad c = f = \frac{v_{1x} - v_{0x}}{w}, \quad e = -d = \frac{v_{1y} - v_{0y}}{w}$$

where w is the width of the current block.
•	In the case of a 4-parameter affine model where the two CPMVs are the top-left corner CPMV and the bottom-left corner CPMV, termed V0 and V2, the six parameters a, b, c, d, e and f can be calculated as

$$a = v_{0x}, \quad b = v_{0y}, \quad c = f = \frac{v_{2y} - v_{0y}}{h}, \quad d = -e = \frac{v_{2x} - v_{0x}}{h}$$

where h is the height of the current block.
•	In Step 1, given two candidate sets of CPMVs, the corresponding affine model parameters for each candidate set are derived. More specifically, the two candidate sets of CPMVs may be represented by two sets of affine model parameters, e.g., (a_1, b_1, c_1, d_1, e_1, f_1) and (a_2, b_2, c_2, d_2, e_2, f_2).
•	In Step 2, based on one or more pre-defined threshold values, a similarity check is performed between the two sets of affine model parameters. If the difference between each pair of corresponding parameters is smaller than a positive threshold value (such as the value of 1), the two candidates are considered to be similar, and one of them can be pruned/removed and not put in the merge candidate list.
  • the divisions or right shift operations in Step 1 may be removed to simplify the calculations in the CPMV pruning process.
  • the model parameters of c, d, e and f may be calculated without being divided by the width w and height h of the current block.
•	the approximated model parameters c', d', e' and f' may be calculated as shown in equation (7) below.
  • the model parameters may be converted to take the impact of the width and height into account.
•	the approximated model parameters c', d' and e' may be calculated based on equation (8) below.
•	the approximated model parameters c', d', e' and f' may be calculated based on equation (9) below.
  • threshold values are needed to evaluate the similarity between two candidate sets of CPMV.
  • the threshold values may be defined per comparable parameter.
  • Table 1 is one example in this embodiment showing threshold values defined per comparable model parameter.
  • the threshold values may be defined by considering the size of the current coding block.
  • Table 2 is one example in this embodiment showing threshold values defined by the size of the current coding block.
•	the threshold values may be defined by considering the width or the height of the current block.
  • Table 3 and Table 4 are examples in this embodiment. Table 3 shows threshold values defined by the width of the current coding block and Table 4 shows threshold values defined by the height of the current coding block.
•	the threshold values may be defined as a group of fixed values. In another embodiment, the threshold values may be defined by any combination of the above embodiments. In one example, the threshold values may be defined by considering different parameters and the width and the height of the current block. Table 5 is one example in this embodiment showing threshold values defined by the height of the current coding block. Note that in any above proposed embodiments, the comparable parameters, if needed, may represent any parameters defined in any equations from equation (4) to equation (9).
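Putting Step 1 and Step 2 together, the parameter-based pruning can be sketched as below, with per-parameter thresholds in the spirit of Tables 1-5; the parameter tuples are assumed to come from the conversions above, and all names are illustrative.

```python
def similar(params1, params2, thresholds):
    """Two parameter sets (a, b, c, d, e, f) are similar when every
    pairwise difference is below its per-parameter threshold."""
    return all(abs(p1 - p2) < t
               for p1, p2, t in zip(params1, params2, thresholds))

def is_redundant(new_params, list_params, thresholds):
    """A new candidate is pruned when it is similar to any candidate
    already in the merge candidate list."""
    return any(similar(new_params, existing, thresholds)
               for existing in list_params)
```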
•	the benefits of using the converted affine model parameters for candidate redundancy check include that: it creates a unified similarity check process for candidates with different affine model types, e.g., one merge candidate may use a 6-parameter affine model with three CPMVs while another candidate may use a 4-parameter affine model with two CPMVs; it considers the different impacts of each CPMV in a merge candidate when deriving the target MV at each sub-block; and it provides the similarity significance of two affine merge candidates related to the width and height of the current block.
  • Non-Adjacent Neighbor Based Derivation Process for Affine Inherited Merge Candidates may be performed in three steps. Step 1 is for candidate scanning. Step 2 is for CPMV projection. Step 3 is for candidate pruning.
•	In Step 1, non-adjacent neighboring blocks are scanned and selected by the following methods.
  • non-adjacent neighboring blocks may be scanned from left area and above area of the current coding block.
  • the scanning distance may be defined as the number of coding blocks from the scanning position to the left side or top side of the current coding blocks.
  • FIG. 8 on either the left or above of the current coding block, multiple lines of non-adjacent neighboring blocks may be scanned.
  • the distance shown in FIG. 8 represents the number of coding blocks from each candidate position to the left side or top side of the current block. For example, the area with “distance 2 (D2)” on the left side of the current block indicates that the candidate neighboring blocks located in this area are 2 blocks away from the current block. Similar indications may be applied to other scanning areas with different distances.
•	the non-adjacent neighboring blocks at each distance may have the same block size as the current coding block, as shown in FIG. 13A. As shown in FIG. 13A, the non-adjacent neighbor blocks 1301 on the left side and the non-adjacent neighbor blocks 1302 on the above side have the same size as the current block 1303. In some embodiments, the non-adjacent neighboring blocks at each distance may have a different block size from the current coding block, as shown in FIG. 13B.
•	the neighbor block 1304 is an adjacent neighbor block to the current block 1303. As shown in FIG. 13B, the non-adjacent neighbor blocks 1305 on the left side and the non-adjacent neighbor blocks 1306 on the above side have a different size from the current block 1307.
  • the neighbor block 1308 is an adjacent neighbor block to the current block 1307.
  • the value of the block size is adaptively changed according to the partition granularity at each different area in an image.
  • the value of the block size may be predefined as a constant value, such as 4x4, 8x8 or 16x16.
  • the 4x4 non-adjacent motion fields shown in FIG. 10 and FIG. 12 are examples in this case, where the motion fields may be considered as, but not limited to, special cases of sub-blocks.
  • the non-adjacent coding blocks shown in FIG. 11 may have different sizes as well.
•	the non-adjacent coding blocks may have the same size as the current coding block, which is adaptively changed.
  • the non-adjacent coding blocks may have a predefined size with a fixed value, such as 4x4, 8x8 or 16x16.
•	the total size of the scanning area on either the left or above of the current coding block may be determined by a configurable distance value.
  • the maximum scanning distance on the left side and above side may use a same value or different values.
•	FIG. 13 shows an example where the maximum distances on both the left side and the above side share the same value of 2.
•	the maximum scanning distance value(s) may be determined by the encoder side and signaled in a bitstream. Alternatively, the maximum scanning distance value(s) may be predefined as fixed value(s), such as the value of 2 or 4. When the maximum scanning distance is predefined as the value of 4, it indicates that the scanning process is terminated when the candidate list is full or all the non-adjacent neighboring blocks with at most distance 4 have been scanned, whichever comes first.
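The overall scanning loop implied above might look like the following sketch, where positions_in, is_qualified and list_full are assumed helper callables standing in for the area/position rules, the qualification rule, and the list-size limit; none of these names comes from any standard.

```python
def scan_non_adjacent(positions_in, max_distance, is_qualified, list_full):
    """Visit the left and above areas at increasing distance (in units
    of coding blocks) and collect qualified neighbors until the
    candidate list is full or the maximum distance is exhausted."""
    found = []
    for distance in range(1, max_distance + 1):
        for side in ("left", "above"):
            for block in positions_in(side, distance):
                if is_qualified(block):
                    found.append(block)
                    if list_full(found):
                        return found
    return found
```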
  • the starting and ending neighboring blocks may be position dependent.
•	the starting neighboring block may be the adjacent bottom-left block of the starting neighboring block of the adjacent scanning area with a smaller distance.
•	the starting neighboring block of the “distance 2” scanning area on the left side of the current block is the adjacent bottom-left neighboring block of the starting neighboring block of the “distance 1 (D1)” scanning area.
•	D1, D2, and D3 indicate distance 1, distance 2, and distance 3, respectively.
  • the ending neighboring blocks may be the adjacent left block of the ending neighboring block of the above scanning area with smaller distance.
  • the ending neighboring block of the “distance 2” scanning area on the left side of the current block is the adjacent left neighboring block of the ending neighboring block of the “distance 1” scanning area above the current block.
  • the starting neighboring blocks may be the adjacent top-right block of the starting neighboring block of the adjacent scanning area with smaller distance.
  • the ending neighboring blocks may be the adjacent top-left block of the ending neighboring block of the adjacent scanning area with smaller distance.
  • the left area may be scanned first, and then followed by scanning the above areas.
•	For example, three lines of non-adjacent areas (e.g., from distance 1 (D1) to distance 3 (D3)) may be scanned on each side.
•	the left areas and above areas may be scanned alternately. For example, as shown in FIG. 8, the left scanning area with “distance 1” is scanned first, followed by scanning the above area with “distance 1.”
  • the scanning order is from the areas with small distance to the areas with large distance.
  • This order may be flexibly combined with other embodiments of scanning order.
•	the left and above areas may be scanned alternately, and the order for same-side areas is scheduled to be from small distance to large distance.
  • a scanning order may be defined.
  • the scanning may be started from the bottom neighboring block to the top neighboring block.
  • the scanning may be started from the right block to the left block.
  • the neighboring blocks coded with affine mode are defined as qualified candidates.
•	the scanning process may be performed iteratively. For example, the scanning performed in a specific area at a specific distance may be stopped at the instance when the first X qualified candidates are identified, where X is a predefined positive value. For example, as shown in FIG. 8, the scanning in the left scanning area with distance 1 may be stopped when the first one or more qualified candidates are identified. Then the next iteration of the scanning process is started by targeting another scanning area, which is regulated by a pre-defined scanning order/rule.
  • the X may be defined for each distance.
  • X is set to be 1, which means the scanning is terminated for each distance if the first qualified candidate is found and the scanning process is restarted from a different distance of the same area or the same or different distance of a different area.
  • the value of X may be set as the same value or different values for different distances. If the maximum number of qualified candidates are found from all allowable distances (e.g., regulated by a maximum distance) of an area, the scanning process for one area is completely terminated.
  • the X may be defined for an area.
  • X is set to be 3, which means the scanning is terminated for the whole area (e.g., left or above area of the current block) if the first 3 qualified candidates are found and the scanning process is restarted from the same or different distance of another area.
  • the value of X may be set as the same value or different values for different areas. If the maximum number of qualified candidates are found from all areas, the whole scanning process is completely terminated.
  • the values of X may be defined for both distance and areas. For example, for each area (e.g., left or above area of the current block), X is set to 3, and for each distance, X is set to 1. The values of X may be set as the same value or different values for different areas and distances.
  • the scanning process may be performed continuously. For example, the scanning performed in a specific area at a specific distance may be stopped at the instance when all covered neighboring blocks are scanned and no more qualified candidates are identified or the maximum allowable number of candidates is reached.
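An iterative first-X termination rule for a single area could be sketched as follows, with both a per-distance limit and a per-area limit; positions_at and is_qualified are assumed helpers, and the names are illustrative.

```python
def scan_area(positions_at, distances, is_qualified,
              x_per_distance, x_per_area):
    """Stop scanning a given distance once x_per_distance qualified
    candidates are found there, and stop the whole area once
    x_per_area qualified candidates are accumulated."""
    hits = []
    for d in distances:
        found_at_d = 0
        for pos in positions_at(d):
            if is_qualified(pos):
                hits.append(pos)
                found_at_d += 1
                if found_at_d >= x_per_distance:
                    break                  # restart at the next distance
        if len(hits) >= x_per_area:
            break                          # terminate this area
    return hits
```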
  • each candidate non-adjacent neighboring block is determined and scanned by following the above proposed scanning methods.
  • each candidate non-adjacent neighboring block may be indicated or located by a specific scanning position. Once a specific scanning area and distance are decided by following above proposed methods, the scanning positions may be determined accordingly based on following methods.
•	bottom-left and top-right positions are used for above and left non-adjacent neighboring blocks respectively, as shown in FIG. 15A.
•	bottom-right positions are used for both above and left non-adjacent neighboring blocks, as shown in FIG. 15B.
•	bottom-left positions are used for both above and left non-adjacent neighboring blocks, as shown in FIG. 15C.
•	top-right positions are used for both above and left non-adjacent neighboring blocks, as shown in FIG. 15D.
  • each non-adjacent neighboring block is assumed to have the same block size as the current block. Without loss of generality, this illustration may be easily extended to non-adjacent neighboring blocks with different block sizes.
•	In Step 2, the same process of CPMV projection as used in the current AVS and VVC standards may be utilized.
•	the current block is assumed to share the same affine model with the selected neighboring block; then two or three corner pixels’ coordinates (e.g., if the current block uses the 4-parameter model, two coordinates (top-left pixel/sample location and top-right pixel/sample location) are used; if the current block uses the 6-parameter model, three coordinates (top-left pixel/sample location, top-right pixel/sample location and bottom-left pixel/sample location) are used) are plugged into equation (1) or (2), depending on whether the neighboring block is coded with a 4-parameter or 6-parameter affine model, to generate two or three CPMVs.
•	In Step 3, any qualified candidate that is identified in Step 1 and converted in Step 2 may go through a similarity check against all existing candidates that are already in the merge candidate list. The details of the similarity check are already described in the section of “Affine Merge Candidate Pruning” above. If the newly qualified candidate is found to be similar to any existing candidate in the candidate list, this newly qualified candidate is removed/pruned.
  • one neighboring block is identified at one time, where this single neighboring block needs to be coded in affine mode and may contain two or three CPMVs.
  • two or three neighboring blocks may be identified at one time, where each identified neighboring block does not need to be coded in affine mode and only one translational MV is retrieved from this block.
•	FIG. 9 presents an example where constructed affine merge candidates may be derived by using non-adjacent neighboring blocks.
  • A, B and C are the geographical positions of three non-adjacent neighboring blocks.
•	a virtual coding block is formed by using the position of A as the top-left corner, the position of B as the top-right corner, and the position of C as the bottom-left corner.
•	the MVs at the positions of A', B' and C' may be derived by following equation (3), where the model parameters (a, b, c, d, e, f) may be calculated from the translational MVs at the positions of A, B and C.
  • the MVs at positions of A’, B’ and C’ may be used as the three CPMVs for the current block, and the existing process (the one used in the AVS and VVC standards) of generating constructed affine merge candidates may be used.
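The virtual-block projection can be sketched end to end as below, using the general model of equation (3) with parameters derived from the translational MVs at A, B and C; positions are (x, y) pairs and all names are illustrative assumptions.

```python
def virtual_block_cpmvs(pos_a, pos_b, pos_c, mv_a, mv_b, mv_c,
                        cur_x, cur_y, cur_w, cur_h):
    """Derive model parameters from the virtual block's corners A
    (top-left), B (top-right) and C (bottom-left), then evaluate the
    model at the current block's corners A', B' and C'."""
    w = pos_b[0] - pos_a[0]            # virtual block width
    h = pos_c[1] - pos_a[1]            # virtual block height
    a, b = mv_a                        # translation given by corner A
    c = (mv_b[0] - mv_a[0]) / w        # gradients along the top edge
    e = (mv_b[1] - mv_a[1]) / w
    d = (mv_c[0] - mv_a[0]) / h        # gradients along the left edge
    f = (mv_c[1] - mv_a[1]) / h

    def mv_at(px, py):                 # general affine model, equation (3)
        x, y = px - pos_a[0], py - pos_a[1]
        return (a + c * x + d * y, b + e * x + f * y)

    return (mv_at(cur_x, cur_y),               # CPMV at A' (top-left)
            mv_at(cur_x + cur_w, cur_y),       # CPMV at B' (top-right)
            mv_at(cur_x, cur_y + cur_h))       # CPMV at C' (bottom-left)
```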
  • non-adjacent neighbor based derivation process may be performed in five steps.
  • the non-adjacent neighbor based derivation process may be performed in the five steps in an apparatus such as an encoder or a decoder.
  • Step 1 is for candidate scanning.
  • Step 2 is for affine model determination.
  • Step 3 is for CPMV projection.
  • Step 4 is for candidate generation.
  • Step 5 is for candidate pruning.
  • non-adjacent neighboring blocks may be scanned and selected by following methods.
  • the scanning process is only performed for two non-adjacent neighboring blocks.
  • the third non-adjacent neighboring block may be dependent on the horizontal and vertical positions of the first and second non- adjacent neighboring blocks.
  • the scanning process is only performed for the positions of B and C.
  • the position of A may be uniquely determined by the horizontal position of C and the vertical position of B.
  • the position of A may need to be at least valid.
  • the validity of position A may be defined as whether the motion information at the position A is available or not.
  • the coding block located at the position A may need to be coded in inter-modes such that the motion information is available to form a virtual coding block.
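This corner-A rule can be sketched in a few lines; motion_available is an assumed helper that reports whether inter motion information exists at a given position.

```python
def derive_corner_a(pos_b, pos_c, motion_available):
    """Corner A is fixed by the horizontal coordinate of C and the
    vertical coordinate of B; it is valid only if motion information
    is available there."""
    pos_a = (pos_c[0], pos_b[1])
    return pos_a if motion_available(pos_a) else None
```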
  • the scanning area and distance may be defined according to a specific scanning direction.
  • the scanning direction may be perpendicular to the side of the current block.
  • the scanning area is defined as one line of continuous motion fields on the left or above the current block.
  • the scanning distance is defined as the number of motion fields from the scanning position to the side of the current block.
•	the size of the motion field may be dependent on the maximum granularity of the applicable video coding standards.
  • the size of the motion field is assumed to be aligned with the current VVC standards and set to be 4x4.
  • the scanning direction may be parallel to the side of the current block.
  • the scanning area is defined as the one line of continuous coding blocks on the left or above the current block.
  • the scanning direction may be a combination of perpendicular and parallel scanning to the side of the current block.
  • the scanning direction may be also a combination of parallel and diagonal. Scanning at position B starts from left to right, and then in a diagonal direction to the left and upper block. The scanning at position B will repeat as shown in FIG. 12. Similarly, scanning at position C starts from top to bottom, and then in a diagonal direction to the left and upper block. The scanning at position C will repeat as shown in FIG. 12.
  • the scanning order may be defined as from the positions with smaller distance to the positions with larger distance to the current coding block. This order may be applied to the case of perpendicular scanning.
•	the scanning order may be defined as a fixed pattern. This fixed-pattern scanning order may be used for the candidate positions with similar distances.
  • One example is the case of parallel scanning.
  • the scanning order may be defined as top-down direction for the left scanning area, and may be defined as from left to right directions for the above scanning areas, like the example shown in FIG. 11.
•	the scanning order may be a combination of fixed-pattern and distance-dependent orders, like the example shown in FIG. 12.
  • the qualified candidate does not need to be affine coded since only translational MV is needed.
  • the scanning process may be terminated when the first X qualified candidates are identified, where X is a positive value.
•	the scanning process in Step 1 may be only performed for identifying the non-adjacent neighboring blocks located at corners B and C, while the coordinate of A may be precisely determined by taking the horizontal coordinate of C and the vertical coordinate of B. In this way, the formed virtual coding block is restricted to be a rectangle.
  • the horizontal coordinate or vertical coordinate of C may be defined as the horizontal coordinate or vertical coordinate of the top-left point of the current block respectively.
•	when the corner B and/or corner C is firstly determined from the scanning process in Step 1, the non-adjacent neighboring blocks located at corner B and/or C may be identified accordingly. Secondly, the position(s) of the corner B and/or C may be reset to a pivot point within the corresponding non-adjacent neighboring block, such as the mass center of each non-adjacent neighboring block. For example, the mass center may be defined as the geometric center of each neighboring block.
  • the process may be performed jointly or independently.
•	In independent scanning, the previously proposed scanning methods may be applied separately on the corners B and C.
•	In joint scanning, there may be different methods as follows.
  • pairwise scanning may be performed.
  • the candidate positions for corners B and C are simultaneously advanced.
•	Take FIG. 17B as an example.
•	the scanning of corner B is started from the first non-adjacent neighboring block located on the above side of the current block, in a bottom-to-up direction.
•	the scanning of corner C is started from the first non-adjacent neighboring block located on the left side of the current block, in a right-to-left direction, as in the example shown in FIG. 17B.
•	pairwise scanning may be defined as that the candidate positions of B and C are both advanced with one unit of step size, where one unit of step size is defined as the height of the current coding block for corner B and as the width of the current coding block for corner C.
•	alternative scanning may be performed. In one example of alternative scanning, the candidate positions for corners B and C are alternately advanced. At one step, only the position of B or C may be advanced, while the position of C or B is not changed. In one example, the position of corner B may be progressively increased from the first non-adjacent neighboring block to the distance at the maximum number of non-adjacent neighboring blocks, while the position of corner C remains at the first non-adjacent neighboring block. In the next round, the position of the corner C moves to the second non-adjacent neighboring block, and the position of the corner B is traversed from the first to the maximum value again. The rounds are continued until all combinations are traversed.
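The two joint scanning schedules can be sketched as index generators over the candidate positions of corners B and C; all names are illustrative.

```python
from itertools import product

def pairwise_schedule(num_b, num_c):
    """Pairwise scanning: B and C advance together, one unit of step
    size per step."""
    return [(i, i) for i in range(min(num_b, num_c))]

def alternative_schedule(num_b, num_c):
    """Alternative scanning: B traverses all its positions for each
    position of C, until all combinations are visited."""
    return [(b, c) for c, b in product(range(num_c), range(num_b))]
```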
•	the methods of defining scanning area and distance, scanning order, and scanning termination proposed for deriving inherited merge candidates may be completely or partially reused for deriving constructed merge candidates.
•	the same methods defined for inherited merge candidate scanning, including but not limited to scanning area and distance, scanning order and scanning termination, may be completely reused for constructed merge candidate scanning.
  • the same methods defined for inherited merge candidate scanning may be partly reused for constructed merge candidate scanning.
  • FIG. 16 shows an example in this case.
•	the block size of each non-adjacent neighboring block is the same as the current block, which is similarly defined as in inherited candidate scanning, but the whole process is a simplified version since the scanning at each distance is limited to only one block.
  • FIGS. 17A-17B represent another example in this case. In FIGS. 17A-17B, both non-adjacent inherited merge candidates and non-adjacent constructed merge candidates are defined with the same block size as the current coding block, while the scanning order, scanning area, and scanning termination conditions may be defined differently.
•	In FIG. 17A, the maximum distance for left-side non-adjacent neighbors is 4 coding blocks, while the maximum distance for above-side non-adjacent neighbors is 5 coding blocks. Also, at each distance, the scanning direction is bottom-up for the left side and right-to-left for the above side. In FIG. 17B, the maximum distance of non-adjacent neighbors is 4 for both the left side and the above side. In addition, termination of the scanning within a specific distance is not applicable because there is only one block at each distance. In FIG. 17A, the scanning operations within each distance may be terminated if M qualified candidates are identified.
  • the value of M may be a predefined fixed value such as the value of 1 or any other positive integer, or a signaled value decided by the encoder, or a configurable value at the encoder or the decoder. In one example, the value of M may be the same as the merge candidate list size.
  • the scanning operations at different distances may be terminated if N qualified candidates are identified.
  • the value of N may be a predefined fixed value such as the value of 1 or any other positive integer, or a signaled value decided by the encoder, or a configurable value at the encoder or the decoder.
  • the value of N may be the same as the merge candidate list size.
  • the value of N may be the same as the value ofM.
  • the non-adjacent spatial neighbors with closer distance to the current block may be prioritized, which indicates that non-adjacent spatial neighbors with distance i is scanned or checked before the neighbors with distance i+1, where i may be a nonnegative integer representing a specific distance.
•	the positions of one left and one above non-adjacent spatial neighbor are firstly determined independently. After that, the location of the top-left neighbor can be determined accordingly, which can enclose a rectangular virtual block together with the left and above non-adjacent neighbors. Then, as shown in FIG. 9, the motion information of the three non-adjacent neighbors is used to form the CPMVs at the top-left (A), top-right (B) and bottom-left (C) of the virtual block, which is finally projected to the current CU to generate the corresponding constructed candidates.
•	In Step 2, the translational MVs at the positions of the selected candidates after Step 1 are evaluated and an appropriate affine model may be determined.
  • FIG. 9 is used as an example again.
  • the scanning process may be terminated before a sufficient number of candidates is identified. For example, the motion information of the motion field at one or more of the selected candidates after Step 1 may be unavailable.
  • if the motion information of all three candidates is available, the corresponding virtual coding block represents a 6-parameter affine model. If the motion information of one of the three candidates is unavailable, the corresponding virtual coding block represents a 4-parameter affine model. If the motion information of more than one of the three candidates is unavailable, the corresponding virtual coding block may be unable to represent a valid affine model.
  • the virtual block may be set to be invalid and unable to represent a valid model, then Step 3 and Step 4 may be skipped for the current iteration.
  • the virtual block may represent a valid 4-parameter affine model.
  • Step 3: if the virtual coding block is able to represent a valid affine model, the same projection process used for inherited merge candidates may be used.
  • the same projection process used for inherited merge candidate may be used.
  • a 4-parameter model represented by the virtual coding block from Step 2 is projected to a 4-parameter model for the current block
  • a 6-parameter model represented by the virtual coding block from Step 2 is projected to a 6-parameter model for the current block.
  • the affine model represented by the virtual coding block from Step 2 is always projected to a 4-parameter model or a 6-parameter model for the current block.
  • the type of the projected 4-parameter affine model is the same as the type of the 4-parameter affine model represented by the virtual coding block.
  • the affine model represented by the virtual coding block from Step 2 is type A or B 4-parameter affine model
  • the projected affine model for the current block is also type A or B respectively.
  • the 4-parameter affine model represented by the virtual coding block from Step 2 is always projected to the same type of 4-parameter model for the current block.
  • the type A or B of 4-parameter affine model represented by the virtual coding block is always projected to the type A 4-parameter affine model.
  • Step 4: based on the projected CPMVs after Step 3, in one example, the same candidate generation process used in the current VVC or AVS standards may be used.
  • the temporal motion vectors used in the candidate generation process for the current VVC or AVS standards may not be used for the non-adjacent neighboring blocks based derivation method. When the temporal motion vectors are not used, it indicates that the generated combinations do not contain any temporal motion vectors.
  • Step 5: any newly generated candidate after Step 4 may go through a similarity check against all existing candidates that are already in the merge candidate list. The details of the similarity check are described in the section of “Affine merge candidate pruning.” If the newly generated candidate is found to be similar to any existing candidate in the candidate list, this newly generated candidate is removed or pruned.
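The following sketch ties Steps 2-5 together for one virtual coding block; `project_inherited` and `is_similar` stand in for the inherited-candidate projection process and the similarity check described above, and are assumptions rather than normative definitions.

```python
# Sketch of Steps 2-5 for one virtual coding block: decide the affine model
# type from the available corner MVs (A: top-left, B: top-right,
# C: bottom-left), project it to the current block, then prune against the
# existing candidate list.

def try_constructed_candidate(mv_a, mv_b, mv_c, virtual_block, current_block,
                              candidate_list, project_inherited, is_similar):
    corner_mvs = [mv_a, mv_b, mv_c]
    missing = sum(mv is None for mv in corner_mvs)
    if missing > 1:
        return None                        # Step 2: no valid affine model
    num_params = 6 if missing == 0 else 4  # Step 2: 6- or 4-parameter model
    cpmvs = project_inherited(corner_mvs, num_params,
                              virtual_block, current_block)  # Step 3
    candidate = (cpmvs, num_params)        # Step 4: generated candidate
    if any(is_similar(candidate, c) for c in candidate_list):
        return None                        # Step 5: pruned as similar
    candidate_list.append(candidate)
    return candidate
```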
  • a virtual coding block is formed by determining three corner points A, B and C, and then the translational MVs of the 4x4 blocks located at the three corners are used to represent an affine model for the virtual coding block.
  • the affine model of the virtual coding block is projected to the current coding block. This whole process may be used to derive the first type of affine candidates constructed from non-adjacent spatial neighbors (e.g., the sub-blocks located by the three corner points A, B and C are non-adjacent spatial neighbors).
  • this method may be applied to an affine mode, such as affine merge mode and affine AMVP mode, and this method may be also applied to regular mode, such as regular merge mode and regular AMVP mode, because the projected affine model can be used to derive a translational MV based on a specific position (e.g., the center position) inside of a prediction block or a coding block.
  • an affine mode such as affine merge mode and affine AMVP mode
  • regular mode such as regular merge mode and regular AMVP mode
  • the combination of inheritance and construction may be realized by separating the affine model parameters into different groups, where one group of affine parameters is inherited from one neighboring block, while other groups of affine parameters are inherited from other neighboring blocks.
  • the parameters of one affine model may be constructed from two groups.
  • an affine model may contain 6 parameters, including a, b, c, d, e and f.
  • the translational parameters {a, b} may represent one group, while the non-translational parameters {c, d, e, f} may represent another group.
  • the two groups of parameters may be independently inherited from two different neighboring blocks in the first step and then concatenated/constructed to be a complete affine model in the second step.
  • the group with non-translational parameters has to be inherited from one affine coded neighboring block, while the group with translational parameters may be from any inter-coded neighboring block, which may or may not be coded in affine mode.
  • the affine coded neighboring block may be selected from adjacent affine neighboring blocks or non-adjacent affine neighboring blocks based on previously proposed scanning methods for affine inherited candidates, such as the methods shown in FIG.
  • the affine coded neighboring block may not physically exist, but may be virtually constructed from regular inter-coded neighboring blocks, such as by the methods shown in FIG. 17B, that is, the scanning method/rule including the scanning area and distance, scanning order, and scanning termination used in the Section of “Non-Adjacent Neighbor Based Derivation Process for Affine Constructed Merge Candidates.”
  • the neighboring blocks associated with each group may be determined in different ways.
  • the neighboring blocks for different groups of parameters may be all from non-adjacent neighboring/neighbor areas, while the scanning methods may be similarly designed as the previously proposed methods for non-adjacent neighbor based derivation process.
  • the neighboring blocks for different groups of parameters may be all from adjacent neighboring/neighbor areas, while the scanning methods may be the same as the current VVC or AVS video standards.
  • the neighboring blocks for different groups of parameters may be partly from adjacent areas and partly from non-adjacent neighboring/neighbor areas.
  • the scanning process may be differently performed from the non-adjacent neighbor-based derivation process for affine inherited candidates.
  • the scanning area, distance and order may be similarly defined, but the scanning termination rule may be differently specified.
  • the non-adjacent neighboring blocks may be exhaustively scanned within a defined maximum distance at each area. In this case, all non-adjacent neighboring blocks within a distance may be scanned by following a scanning order. In some embodiments, the scanning area may be different.
  • the right bottom adjacent and non-adjacent area of the current coding block may be scanned to determine neighbors for generating translational or/and non-translational parameters.
  • the neighbors scanned at the right bottom area may be used to find collocated temporal neighbors, instead of spatial neighbors.
  • One scanning criterion may be conditionally based on whether the right-bottom collocated temporal neighbor(s) is/are already used for generating affine constructed neighbors. If already used, the scanning is not performed; otherwise, the scanning is performed. Alternatively, if already used, which means the right-bottom collocated temporal neighbor(s) is/are available, the scanning is performed; otherwise, the scanning is not performed.
  • the associated neighboring block or blocks for each group may be checked as to whether they use the same reference picture for at least one direction or both directions.
  • the associated neighboring block or blocks for each group may be checked as to whether they use the same precision/resolution for motion vectors.
  • the first X associated neighboring block(s) for each group may be used.
  • the value of X may be defined as the same or different values for different groups of parameters.
  • the first 1 or 2 neighboring blocks containing non-translational affine parameters may be used, while the first 3 or 4 neighboring blocks containing translational affine parameters may be used.
  • the second is the construction formula.
  • the CPMVs of the new candidates may be derived as in the equation below:

    MV_x(x, y) = c·x + d·y + a
    MV_y(x, y) = e·x + f·y + b

    where (x, y) is a corner position within the current coding block (e.g., (0, 0) for the top-left corner CPMV, (width, 0) for the top-right corner CPMV), {c, d, e, f} is one group of parameters from one neighboring block, and {a, b} is another group of parameters from another neighboring block.
  • the CPMVs of the new candidates may be derived as in the equation below:

    MV_x(x, y) = c·(x + Δw) + d·(y + Δh) + a
    MV_y(x, y) = e·(x + Δw) + f·(y + Δh) + b

    where (Δw, Δh) is the distance between the top-left corner of the current coding block and the top-left corner of one of the associated neighboring block(s) for one group of parameters, such as the associated neighboring block of the group of {a, b}.
  • the definitions of the other parameters in this equation are the same as in the example above.
  • the parameters may be grouped in another way: {a, b, c, d, e, f} are formed as one group, while (Δw, Δh) are formed as another group. And the two groups of parameters are from two different neighboring blocks.
  • the value of (Δw, Δh) may be predefined as fixed values such as (0, 0) or any other constant values, which is not dependent on the distance between a neighboring block and the current block.
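A minimal sketch of the construction formula above, assuming the parameterization in which {a, b} are the translational parameters from one neighbor and {c, d, e, f} the non-translational parameters from another; (dw, dh) is the optional top-left-corner distance term, set to (0, 0) for the first variant.

```python
# Sketch: concatenate two independently inherited parameter groups into one
# affine model and evaluate it at a corner position (x, y) of the current
# block to obtain a CPMV.

def construct_cpmv(x, y, a, b, c, d, e, f, dw=0, dh=0):
    mv_x = c * (x + dw) + d * (y + dh) + a
    mv_y = e * (x + dw) + f * (y + dh) + b
    return mv_x, mv_y

# e.g., for a block of width W and height H:
#   top-left CPMV:  construct_cpmv(0, 0, ...)
#   top-right CPMV: construct_cpmv(W, 0, ...)
#   bottom-left CPMV (6-parameter model): construct_cpmv(0, H, ...)
```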
  • FIG. 18 shows an example of inheritance based derivation method for deriving affine constructed candidates.
  • the encoder or the decoder may perform scanning of adjacent and non-adjacent neighboring blocks for each group.
  • two groups are defined, where neighbor 1 is coded in affine mode and provides non-translational affine parameters, while neighbor 2 provides translational affine parameters.
  • Neighbor 1 may be obtained according to the process in the Section of “Non- Adjacent Neighbor Based Derivation Process for Affine Inherited Merge Candidates” as shown in FIGS.
  • neighbor 1 may be an adjacent or non-adjacent neighbor block of the current block.
  • neighbor 2 may be obtained according to the process as shown in FIGS. 16 and 17B.
  • the neighbor 1, which is coded in the affine mode, may be scanned from adjacent or/and non-adjacent areas, by following the above proposed scanning methods.
  • the neighbor 2, which is coded in the affine or a non-affine mode, may also be scanned from adjacent or non-adjacent areas.
  • the neighbor 2 may be from one of the scanned adjacent or non-adjacent areas if the motion information is not already used for deriving some affine merge or AMVP candidates, or from right-bottom positions of the current block if a collocated TMVP candidate at this position is available or/and already used for deriving some affine merge or AMVP candidates.
  • a small coordinate offset (e.g., +1, +2, -1 or -2 for vertical or/and horizontal coordinates)
  • Step 2: with the parameters and positions decided in Step 1, a specific affine model may be defined, which can derive different CPMVs according to the coordinate (x, y) of a CPMV.
  • the non-translational parameters {c, d, e, f} may be obtained based on neighbor 1 obtained in Step 1
  • the translational parameters {a, b} may be obtained based on neighbor 2 obtained in Step 1.
  • the distance parameters Δw, Δh may thus be obtained based on the position of the current block (x1, y1) and the position of neighbor 2 (x2, y2)
  • the distance parameters Δw, Δh may respectively indicate a horizontal distance and a vertical distance between the current block and neighbor 1 or neighbor 2.
  • the distance parameters Δw, Δh may respectively indicate the horizontal distance (x1 − x2) between the current block and neighbor 2 and the vertical distance (y1 − y2) between the current block and neighbor 2.
  • Step 3: two or three CPMVs are derived for the current coding block, which can be constructed to form a new affine candidate
  • other prediction information may be further constructed.
  • the prediction direction (e.g., bi- or uni-predicted) and indexes of reference pictures may be the same as the associated neighboring blocks if neighboring blocks are checked to have the same directions and/or reference pictures.
  • the prediction information is determined by reusing the minimum overlapped information among the associated neighboring blocks from different groups. For example, if only the reference index of one direction from one neighboring block is the same as the reference index of the same direction of the other neighboring block, the prediction direction of the new candidate is determined as uni-prediction, and the same reference index and direction are reused.
  • an affine model may be constructed by combining model parameters from different inheritances.
  • the translational model parameters may be inherited from translational blocks (e.g., from adjacent or/and non-adjacent spatial neighboring 4x4 blocks), while the non-translational model parameters may be inherited from affine coded blocks (e.g., from adjacent or/and non-adjacent spatial neighboring affine coded blocks).
  • the non-translational model parameters may be inherited from historically coded affine blocks instead of explicitly scanned non-adjacent spatial neighboring affine coded blocks, while the historically coded affine blocks may be adjacent or non-adjacent spatial neighbors.
  • This whole process may be used to derive the second type of affine candidates constructed from non-adjacent spatial neighbors (e.g., the non-translational model parameters may be inherited from non-adjacent spatial neighbors).
  • this method may be applied to an affine mode, such as affine merge mode and affine AMVP mode, and this method may be also applied to regular mode, such as regular merge mode and regular AMVP mode, because the generated affine model can be used to derive a translational MV based on a specific position (e.g., the center position) inside of a prediction block or coding block.
  • the HMVP merge mode is already adopted in the current VVC and AVS, where the translational motion information from neighboring blocks is already stored in a history table, as described in the introduction section.
  • the scanning process may be replaced by searching the HMVP table.
  • the translational motion information may be obtained from the HMVP table, instead of by the scanning method as shown in FIG. 17B and FIG. 18.
  • the position information, width, height and reference information are also needed, which may be accessible if the current HMVP table can be modified. Therefore, it is proposed to extend the HMVP table to store additional information in addition to the motion information of each history neighbor.
  • the additional information may include positions of affine or non-affine neighboring blocks, or affine motion information such as CPMVs or equivalent regular motion derived from CPMVs (e.g., this regular motion may be from the internal sub-blocks of an affine coded neighboring block), reference index, etc.
  • affine motion information such as CPMVs or equivalent regular motion derived from CPMVs (e.g., this regular motion may be from the internal sub-blocks of an affine coded neighboring block), reference index, etc.
  • the above provided non-adjacent neighbor based derivation process and inheritance based derivation process for affine mode may be replaced by modifying the existing HMVP table.
  • the translational motion of the derived affine model may be directly obtained from the existing HMVP table, where the translational motion is previously saved when neighboring blocks are previously coded in regular inter mode.
  • the translational motion of the derived affine model may still be obtained from the existing HMVP table, where the translational motion is previously saved when neighboring blocks are previously coded in affine mode.
  • the non-translational motion of the derived affine model is obtained from the existing HMVP table, where the non-translational motion is previously saved when neighboring blocks are previously coded in affine mode.
  • the reused non-translational motion may not be directly saved, while the original CPMVs of the previously coded affine neighboring blocks are saved. In this case, the existing HMVP table is updated with not only the original CPMVs but also the position and size information of the previously coded affine neighboring blocks.
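A sketch of what an extended HMVP table entry could look like is given below; the field names and types are illustrative assumptions for the additional position, size, CPMV and reference information the text calls for.

```python
# Sketch of an HMVP entry extended beyond translational motion, so that the
# table can replace the explicit non-adjacent scanning described above.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ExtendedHmvpEntry:
    mv: Tuple[int, int]                            # translational MV
    ref_idx: Tuple[int, int]                       # reference index per direction
    pos: Tuple[int, int]                           # top-left of the coded neighbor
    size: Tuple[int, int]                          # (width, height) of the neighbor
    cpmvs: Optional[List[Tuple[int, int]]] = None  # original CPMVs if affine-coded
    is_affine: bool = False
```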
  • the above proposed non-adjacent neighbor based derivation process and inheritance based derivation process for affine mode may be replaced by creating one or more new HMVP tables.
  • the translation motion and non-translational motion of the derived affine model may be similarly obtained as the method of modifying the existing HMVP table
  • an affine candidate list is also needed for deriving CPMV predictors.
  • all the above proposed derivation methods may be similarly applied to affine AMVP mode.
  • the selected neighboring blocks must have the same reference picture index as the current coding block.
  • a candidate list is also constructed, but with only translational candidate MVs, not CPMVs.
  • all the above proposed derivation methods can still be applied by adding an additional derivation step.
  • this additional derivation step is to derive a translational MV for the current block, which may be realized by selecting a specific pivot position (x, y) within the current block and then following the same equation (3).
  • the three corner positions of the block are used as the pivot position (x, y) in equation (3)
  • the center position of the block may be used as the pivot position (x, y) in equation (3).
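A minimal sketch of this additional derivation step, assuming the same affine parameterization as the construction formula above: the projected affine model is evaluated at a pivot position, here the block center, to yield a single translational MV.

```python
# Sketch: collapse a projected affine model to one translational MV by
# evaluating it at the center pivot position of the current block.

def affine_to_translational(a, b, c, d, e, f, width, height):
    x, y = width // 2, height // 2   # center pivot position (x, y)
    mv_x = c * x + d * y + a
    mv_y = e * x + f * y + b
    return mv_x, mv_y
```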
  • the newly derived candidates may be inserted into the affine AMVP candidate list by following the order as below:
  • the candidates constructed from non-adjacent spatial neighbors may be referred to as the first type or/and the second type of candidates constructed from non-adjacent spatial neighbors.
  • the newly derived candidates may be inserted into the regular merge candidate list by following the order as below:
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. Subblock-based Temporal Motion Vector Prediction (SbTMVP) candidate, if available; 2. Inherited from adjacent neighbors; 3. Inherited from non-adjacent neighbors; 4. Constructed from adjacent neighbors; 5. Constructed from non-adjacent neighbors; 6. Zero MVs.
  • SbTMVP Subblock-based Temporal Motion Vector Prediction
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. SbTMVP candidate, if available; 2. Inherited from adjacent neighbors; 3. Constructed from adjacent neighbors; 4. Inherited from non-adjacent neighbors; 5. Constructed from non-adjacent neighbors; 6. Zero MVs.
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. SbTMVP candidate, if available; 2. Inherited from adjacent neighbors; 3. Constructed from adjacent neighbors; 4. One set of zero MVs; 5. Inherited from non-adjacent neighbors; 6. Constructed from non-adjacent neighbors; 7. Remaining zero MVs, if the list is still not full.
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. SbTMVP candidate, if available; 2. Inherited from adjacent neighbors; 3. Inherited from non-adjacent neighbors with distance smaller than X; 4. Constructed from adjacent neighbors; 5. Constructed from non-adjacent neighbors; 6. Constructed from inherited translational and non-translational neighbors; 7. Zero MVs, if the list is still not full.
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. SbTMVP candidate, if available; 2. Inherited from adjacent neighbors; 3. Inherited from non-adjacent neighbors; 4. The first candidate constructed from adjacent neighbors; 5. The first X candidates constructed from inherited translational and non-translational neighbors; 6. Constructed from non-adjacent neighbors; 7. Other Y candidates constructed from inherited translational and non-translational neighbors; 8. Zero MVs, if the list is still not full.
  • the value of X may be the same as the value of Y.
  • the value of X may be different from the value of Y.
  • the non-adjacent spatial merge candidates may be inserted into the affine merge candidate list by following the order below: 1. SbTMVP candidate, if available; 2. Inherited from adjacent neighbors; 3. Inherited from non-adjacent neighbors with distance smaller than X; 4. Constructed from adjacent neighbors; 5. Constructed from non-adjacent neighbors with distance smaller than Y; 6. Inherited from non-adjacent neighbors with distance bigger than X; 7. Constructed from non-adjacent neighbors with distance bigger than Y; 8. Zero MVs.
  • the value X and Y may be a predefined fixed value such as the value of 2, or a signaled value decided by the encoder, or a configurable value at the encoder or the decoder.
  • the value of X may be the same as the value of Y.
  • the value of X may be different from the value of Y.
  • a new candidate is derived by using the inheritance based derivation method which constructs CPMVs by combining affine motion and translational MV
  • the placement of this new candidate may be dependent on the placement of the other constructed candidates.
  • the reordering of the affine merge candidate list may follow the order as below:
  • the reordering of the affine merge candidate list may follow the order below:
  • the reordering of the affine merge candidates may be partially or completely interleaved among different categories of candidates (e.g., interleaving may indicate that the candidates from the same category may not be adjacently placed in the candidates list).
  • interleaving may indicate that the candidates from the same category may not be adjacently placed in the candidates list.
  • there may be seven categories of affine merge candidates placed in the affine merge candidate list:
  • the specific order discussed above may be applied in any candidate list including an affine AMVP candidate list, a regular merge candidate list, and an affine merge candidate list.
  • the order of the candidates may remain the same as the above insertion order.
  • An adaptive reordering method may be applied to reorder the candidates afterwards; the adaptive reordering may be a template-based method (e.g., ARMC) or a non-template-based method such as a bilateral-matching-based method.
  • the order of the candidates may be reordered in a specific pattern.
  • the specific pattern may be applied in any candidate list including an affine AMVP candidate list, a regular merge candidate list, and an affine merge candidate list.
  • the reordering pattern may depend on the number of available candidates for each category.
  • the reordering pattern may be defined as below:
  • the first X inherited candidates from non-adjacent neighbors (e.g., X may be a prefixed number such as 1 or a signaled number);
  • the first Y constructed candidates from the second type of constructed candidates from non-adjacent neighbors (e.g., Y may be similarly defined as X);
  • the first Z constructed candidates from the first type of constructed candidates from non-adjacent neighbors (e.g., Z may be similarly defined as X);
  • the remaining inherited candidates from non-adjacent neighbors (e.g., X may be a prefixed number such as 1 or a signaled number);
  • the remaining constructed candidates from the second type of constructed candidates from non-adjacent neighbors (e.g., Y may be similarly defined as X);
  • the reordering pattern may be an interleaved method which may merge different candidates from different categories.
  • the interleaved pattern may be defined as below:
  • the value of (Xi, Yi, Zi, Ki) may be a prefixed number such as 1 or a signaled number. If the number of available candidates of one category is smaller than other categories, the position of the candidates for this category is skipped and the remaining available candidates of other categories would take over this position.
  • the reordering pattern may be a combined version which considers both availability and interleaving method.
  • the combined pattern may be defined as below:
  • the reordering methods may be selected based on the types of the video frames/slices. For example, for low-delay pictures or slices, all the candidates of the first type of constructed candidates from non-adjacent neighbors may be placed after all the constructed candidates from adjacent neighbors. For non-low-delay pictures or slices, the first K1 candidates of the first type of constructed candidates from non-adjacent neighbors may be placed after the first K2 constructed candidates from adjacent neighbors, and the remaining candidates of the first type of constructed candidates from non-adjacent neighbors may be placed after the remaining constructed candidates from adjacent neighbors.
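A sketch of one possible interleaved reordering pattern is given below: the first (X, Y, Z, K, ...) candidates of each category are taken in turn, and an exhausted category simply yields its positions to the remaining categories, as described above. The per-category quota values are illustrative assumptions and are assumed positive.

```python
# Sketch: interleave candidates from several categories, taking up to
# counts[i] candidates from category i per round; exhausted categories are
# skipped so later categories take over their positions.

def interleave_candidates(categories, counts):
    ordered = []
    cursors = [0] * len(categories)
    while any(cur < len(cat) for cur, cat in zip(cursors, categories)):
        for i, cat in enumerate(categories):
            take = min(counts[i], len(cat) - cursors[i])
            ordered.extend(cat[cursors[i]:cursors[i] + take])
            cursors[i] += take
    return ordered

# e.g., interleave_candidates([inherited_nonadj, constructed_type2,
#                              constructed_type1], [1, 1, 1])
```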
  • one or more candidates may be derived for an existing affine merge candidate list, or an affine AMVP candidate list, or a regular merge candidate list, where the size of the corresponding list may be statically (e.g., configurable size) or adaptively (e.g., dynamically changed according to availability at the encoder and then signaled to the decoder) adjusted.
  • the new candidates are first derived as affine candidates, and then converted to translational motion vectors by using a pivot position (e.g., center sample or pixel position) within a coding block and the associated affine models before being inserted into the regular merge candidate list.
  • an adaptive reordering method such as ARMC may be applied to one or more of the above candidate lists after the candidate lists are updated or constructed by adding some new candidates which are derived by the above proposed candidate derivation methods.
  • a temporal candidate list may be created first, where the temporal candidate list may have a larger size than the existing candidate list (e.g., affine merge candidate list, affine AMVP candidate list, regular merge candidate list).
  • an adaptive reordering method such as ARMC may be applied to reorder the temporal candidate list.
  • the first N candidates of the temporal candidate list are inserted into the existing candidate lists.
  • the value of N may be a fixed or configurable value. In one example, the value of N may be the same as the size of the existing candidate list into which the selected N candidates from the temporal candidate list are placed.
  • a cost function such as the sum of absolute differences (SAD) between samples of a template of the current block and their corresponding reference samples may be used.
  • the reference samples of the template may be located by the same motion information of the current block.
  • an interpolation filtering process may be used to generate prediction samples of the template. Since the generated prediction samples are just used for comparing the motion accuracy between different candidates, not for final block reconstruction, the prediction accuracy of the template samples may be relaxed by using an interpolation filter with fewer taps.
  • a 2-tap, 4-tap, or any other shorter-length (e.g., 6-tap or 8-tap) interpolation filter may be used to generate prediction samples for the selected template of the current block. Or even the nearest integer samples (completely skipping the interpolation filtering process) may be used as the prediction samples of the template.
  • An interpolation filter with fewer taps may be similarly used when a template matching method is used to adaptively reorder the candidates in other candidate lists such as the regular merge candidate list or the affine AMVP candidate list.
  • a cost function such as the SAD between samples of a template of the current block and their corresponding reference samples may be used.
  • the corresponding reference samples may be located at integer positions or fractional positions. When fractional positions are located, a certain level of prediction accuracy may be achieved by performing an interpolation filtering process. Due to the limited prediction accuracy, the calculated matching costs for different candidates may contain noise-level differences. To reduce the impact of the noise-level cost difference, the calculated matching costs may be adjusted by removing a few of the least significant bits before the candidate sorting process.
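A minimal sketch of this cost computation, assuming flat sample lists for the template and its reference: a SAD is accumulated and a few least-significant bits are dropped before sorting (LSB_BITS is an illustrative value, not a normative one).

```python
# Sketch: SAD-based template matching cost with noise-level LSBs removed
# before the candidate sorting process.

LSB_BITS = 2  # illustrative; number of least-significant bits to drop

def template_matching_cost(template_samples, reference_samples):
    sad = sum(abs(t - r) for t, r in zip(template_samples, reference_samples))
    return sad >> LSB_BITS
```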
  • a candidate list may be padded with zero MVs at the end of each list, if not enough candidates could be derived by using different derivation methods.
  • the candidate cost may be only calculated for the first zero MV, while the remaining zero MVs may be statically assigned with an arbitrarily large cost value, such that these repeated zero MVs are placed at the end of the corresponding candidate list.
  • all zero MVs may be statically assigned with an arbitrarily large cost value, such that all zero MVs are placed at the end of the corresponding candidate list.
  • an early termination method may be applied for a reordering method to reduce complexity at the decoder side.
  • when a candidate list is constructed, different types of candidates may be derived and inserted into the list. If one candidate or one type of candidates does not participate in the reordering process, but is selected and signaled to the decoder, the reordering process, which is applied to other candidates, may be early terminated.
  • in the case of applying ARMC for the affine merge candidate list, the SbTMVP candidate may be excluded from the reordering process. In this case, if the signaled merge index value for an affine coded block indicates an SbTMVP candidate at the decoder side, the ARMC process may be skipped or early terminated for this affine block.
  • both the derivation process and the reorder process for this specific candidate or this specific type of candidates may be skipped.
  • the skipped derivation process and reordering process are only applied to the specific candidate or the specific type of candidates, while the remaining candidates or types of candidates are still processed. Skipping the derivation process indicates that the related operations of deriving the specific candidate or the specific type of candidates are skipped, but the predefined list position (e.g., according to a predefined insertion order) of the specific candidate or the specific type of candidates may still be kept; only the candidate content, such as the motion information, may be invalid due to the skipped derivation process.
  • the cost calculation of this specific candidate or this specific type of candidates may be skipped, and the list position of this specific candidate or this specific type of candidates may not be changed after reordering other candidates.
  • the selected non-adjacent spatial neighbors may be affine coded blocks or non-affine coded blocks (e.g., regular inter AMVP or merge coded blocks).
  • the motion information may include translational MVs and corresponding reference index at each direction.
  • the motion information may include CPMVs and corresponding reference index at each direction, and also the positions and the sizes of the affine coded blocks.
  • the motion information of these blocks may need to be saved in a memory once these blocks have been coded.
  • the non-adjacent spatial neighbors may be restricted to a certain area.
  • the allowed non-adjacent area for scanning non-adjacent spatial neighboring blocks may be restricted to a limited area size.
  • the restricted area may be applied to affine or non-affine spatial neighboring blocks.
  • the size of the allowed non-adjacent area may be defined according to the size of the current CTU, e.g., an integer (e.g., 1 or 2 or another integer) or fractional (e.g., 0.5 or 0.25 or another fractional number) multiple of the current CTU size.
  • the size of the allowed non-adjacent area may be defined according to a fixed number of pixels or samples, e.g., 128 samples above the current CTU or/and to the left of the current CTU.
  • the size (e.g., according to the CTU size or number of samples) may be a prefixed value or a signaled value determined at the encoder and carried in the bit-stream.
  • the size of the restricted area may be separately defined for top and left non-adjacent neighboring blocks.
  • the above non-adjacent neighboring blocks may be restricted to be within the current CTU, or outside of the current CTU but within at most a fixed number of samples/pixels away from the top of the current CTU, such that no additional line buffer is needed for saving the motion information of above non-adjacent neighboring blocks.
  • the fixed number may be defined as 8, if 8 sample rows of neighboring/neighbor area away from the current CTU top is already covered by the existing line buffer.
  • the left non-adjacent neighboring blocks may be restricted to be within the current CTU, or outside of the current CTU but within a predefined or a signaled number of samples/pixels away from the left boundary of the current CTU.
  • the allowed non-adjacent area (for either non-adjacent affine neighbors or non-affine neighbors) above the current CU may have a large memory cost if the allowed non-adjacent area is beyond the current CTU.
  • the actual memory cost is proportionally increased with the picture width and the maximum allowable scanning distance in the vertical direction.
  • the height of the above non-adjacent area outside of the current CTU may be limited to a value of h, as shown in FIG. 27A.
  • this value of h may be configurable or signaled to decoder.
  • if affine motion and non-affine motion are stored in a separate buffer, for the example shown in FIG. 27B, there may be different methods to save the motion in the line buffer, as follows.
  • the line buffer used to store affine motion may indicate that the buffer area where CU B is located is set to be invalid, since CU B is not an affine CU.
  • the line buffer used to store affine motion may indicate that the buffer area where CU B is located is set to be valid, and the affine motion is copied from CU A, since CU A is CU B’s adjacent affine neighbor.
  • the height value h and the width value w may be set to be multiples of 4 for easier implementation.
  • the values h and w may be set to the minimum size (e.g., 4) of a non-affine CU.
  • the scanned non-adjacent neighbor position may be out of the allowed non-adjacent area. In this case, different methods may be used to solve this issue as follows.
  • the scanning process may indicate that this scanned position has no valid neighbor information.
  • the scanning process may project or clip this out-of-range position to another position which is within the allowed non-adjacent area.
  • In FIG. 28, there are two positions 2801 (i.e., the two dotted spots 2801) out of the range of the allowed non-adjacent area. These two positions 2801 are projected/clipped to two other positions 2802 which are at the same vertical/horizontal coordinate but within the allowed non-adjacent area, respectively.
  • the projected/clipped new position 2802 may be located on the boundary of the allowable non-adjacent area which is closest to the original position.
  • the projected/clipped new position 2802 may be interchangeably set to be on one boundary or the other, because the buffer is so small that the motion information from only one CU may be stored; in this case, there is no difference in clipping to one boundary side or the other.
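A minimal sketch of the projection/clipping step, assuming the allowed non-adjacent area is given as axis-aligned bounds: an out-of-range scanned position is clamped to the nearest boundary while an in-range coordinate is kept, as in FIG. 28.

```python
# Sketch: clip an out-of-range scanned position onto the allowed
# non-adjacent area; a coordinate that is already in range is unchanged, so
# the clipped position lies on the closest boundary of the area.

def clip_to_allowed_area(x, y, x_min, x_max, y_min, y_max):
    clipped_x = min(max(x, x_min), x_max)
    clipped_y = min(max(y, y_min), y_max)
    return clipped_x, clipped_y
```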
  • When motion information of an affine-coded block is saved in memory, the motion information, including CPMVs, reference index, block size and positions, may be saved at the granularity of the minimum affine block size (e.g., an 8x8 block). In case the current affine-coded block is a coding unit with a larger size than the minimum affine block, the motion information may be saved in different ways.
  • the motion information saved at each minimum affine block (e.g., 8x8 block) within the current block is just a repeated copy of the motion information of the current block.
  • the position and size of the current block may need to be repeatedly saved at each minimum affine block (termed sub-block in FIGS. 22A-22B) as well.
  • An example for this case is shown in FIG. 22A, where the current block (termed the parent block) has a size of 24x16, and the minimum affine block (termed sub-block) has a fixed size of 8x8.
  • the motion information saved at each minimum affine block is the motion information already projected to this minimum affine block. Since the position of each minimum affine block is known (the top-left corner of each minimum affine block), and the size of each minimum affine block is also known (the minimum size, 8x8), the position and size information of the current block (termed the parent block in FIGS. 22A-22B) do not need to be saved. An example for this case is shown in FIG. 22B.
  • the regular/translational motion at each inside non-affine block may be computed as follows. Taking FIG. 23 as an illustrative example, this minimum affine block has three already projected CPMVs following the method shown in FIG. 22B. Based on the affine model shown in equation (2), the regular/translational motion may be derived as below (e.g., ignoring some precision-related shifts). The examples provided below are based on the center point of each 4x4 block as shown in FIG. 23, but the present disclosure is not limited to using the center point to derive the translational MV for each block.
  • MV1_x = e + (a >> 2) + (c >> 2)
  • MV1_y = f + (b >> 2) + (d >> 2)
  • where a = CPMV2_x − CPMV1_x
  • b = CPMV2_y − CPMV1_y
  • c = CPMV3_x − CPMV1_x
  • d = CPMV3_y − CPMV1_y
  • e = CPMV1_x
  • f = CPMV1_y.
  • MV2_x = MV1_x + (a >> 1)
  • MV2_y = MV1_y + (b >> 1).
  • MV3_x = MV1_x + (c >> 1)
  • MV3_y = MV1_y + (d >> 1).
  • MV4_x = MV1_x + ((a + c) >> 1)
  • MV4_y = MV1_y + ((b + d) >> 1).
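The derivation above can be transcribed directly as follows (a sketch only, ignoring the precision-related shifts mentioned earlier); each CPMV is taken as an (x, y) pair already projected to the minimum 8x8 affine block.

```python
# Sketch: derive the translational MV at the center of each 4x4 sub-block of
# a minimum 8x8 affine block from its three projected CPMVs.

def derive_subblock_mvs(cpmv1, cpmv2, cpmv3):
    a = cpmv2[0] - cpmv1[0]; b = cpmv2[1] - cpmv1[1]
    c = cpmv3[0] - cpmv1[0]; d = cpmv3[1] - cpmv1[1]
    e, f = cpmv1
    mv1 = (e + (a >> 2) + (c >> 2), f + (b >> 2) + (d >> 2))  # top-left 4x4
    mv2 = (mv1[0] + (a >> 1), mv1[1] + (b >> 1))              # top-right 4x4
    mv3 = (mv1[0] + (c >> 1), mv1[1] + (d >> 1))              # bottom-left 4x4
    mv4 = (mv1[0] + ((a + c) >> 1), mv1[1] + ((b + d) >> 1))  # bottom-right 4x4
    return mv1, mv2, mv3, mv4
```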
  • each 16x16 block may only save one set of affine motion information, which includes two or three CPMVs and represents one single affine model, even though the four 8x8 sub-blocks within this 16x16 block may be from more than one affine block, as shown in FIG. 25.
  • the four 8x8 sub-blocks A, B, C and D form a 16x16 block/area, and only one affine model information is saved.
  • the four 8x8 sub-blocks are from four different affine blocks, which represent four affine models and include four sets of affine motion information. In this case, there may be different ways to derive and save one single set of affine information.
  • one of the multiple sets of available affine motion information may be selected and saved
  • the affine motion information at one fixed or configurable position (e.g., the top-left minimum affine block)
  • an averaged affine motion information of multiple models may be calculated for motion storage.
  • the affine motion information at a selected neighboring affine block may be simplified/compressed before storage.
  • the affine model of the selected neighboring affine block is always treated as a 4-parameter model, and only two CPMVs are saved.
  • the selected neighboring affine block is always treated as uni-predicted, and only one direction of affine motion is saved.
  • each saved CPMV may be compressed before storage to further reduce the memory size.
  • One example is to use general techniques for data compression. For example, a compounded value formed from one exponent and one mantissa may be saved to approximately represent each saved CPMV.
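A minimal sketch of such exponent-plus-mantissa compression is shown below; the mantissa width is an illustrative assumption, and the round trip is lossy by design.

```python
# Sketch: approximate a CPMV component by a sign, a MANT_BITS-wide mantissa,
# and an exponent, trading precision for storage.

MANT_BITS = 6  # illustrative mantissa width

def compress_component(v):
    sign = -1 if v < 0 else 1
    mag = abs(v)
    exp = max(0, mag.bit_length() - MANT_BITS)
    return sign, mag >> exp, exp

def decompress_component(sign, mantissa, exp):
    return sign * (mantissa << exp)   # approximate reconstruction
```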
  • methods may be applied for motion information storage in any combinations.
  • the defined restricted area for non-adjacent neighboring blocks may be combined with the usage of compressed affine motion information.
  • in MMVD mode, the best MVD information is selected at the encoder side based on a rate-distortion optimization (RDO) method, and then signaled to the decoder side.
  • RDO rate-distortion optimization
  • MVD information may be similarly signaled to refine the existing candidates in the affine AMVP or/and affine merge candidate list.
  • the new candidates after refinements are then inserted into the existing affine AMVP or/and affine merge candidate list.
  • the available number of combinations of MVD information, such as the motion magnitude (e.g., offset value) and motion direction (e.g., sign value), may be the same as or different from the existing MMVD mode in VVC.
  • a smaller number of offset values, such as {1, 2, 4, 8, 16}, may be used.
  • a different or the same set of direction values as the existing MMVD mode may be used.
  • the base MV is any one of the candidates from the existing affine AMVP and/or merge candidate list, and there may be multiple ways to determine the selection of a potential base MV.
  • the base MV may be selected from a candidate list before or after an adaptive reordering method such as ARMC is applied to this candidate list.
  • a single base MV or multiple base MVs may be selected from a candidate list. For example, when a single base MV is selected, one or multiple combinations (e.g., Y combinations) of MVD information may be selected to refine this single base MV, which indicates that Y new candidates (e.g., each combination of MVD information is applied to the base MV and generate one new candidate) may be generated and inserted into the candidate list.
  • Y new candidates (e.g., each combination of MVD information is applied to the base MV and generates one new candidate)
  • when multiple base MVs (e.g., X base MVs) are selected, one or multiple combinations (e.g., Y combinations) of MVD information may be selected to refine each selected base MV, which indicates that X multiplied by Y new candidates may be generated and inserted into the candidate list.
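A sketch of the resulting candidate expansion is given below; the offset and direction sets are the example values mentioned above rather than normative ones, and each (offset, direction) pair applied to a base MV yields one new candidate.

```python
# Sketch: apply every (offset, direction) MVD combination to each selected
# base MV, producing X * Y refined candidates for insertion into the list.

OFFSETS = [1, 2, 4, 8, 16]                       # example magnitude set
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # example sign/direction set

def expand_affine_mmvd(base_mvs):
    refined = []
    for base_x, base_y in base_mvs:              # X selected base MVs
        for offset in OFFSETS:                   # Y combinations per base MV
            for dir_x, dir_y in DIRECTIONS:
                refined.append((base_x + dir_x * offset,
                                base_y + dir_y * offset))
    return refined
```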
  • the index of each selected base MV may be determined in different ways. In one or more examples, the index of the selected base MV may be determined by avoiding the base MVs which are already selected in MMVD mode, if affine MMVD mode is enabled for the current coding process.
  • the index of the selected base MV may be determined by following a predefined order. For example, N candidates from the beginning of the list are sequentially selected as the base MVs. If any base MV is already selected by the current affine MMVD mode, this base MV may be skipped.
  • the newly generated candidates after different combinations of MVD refinements may be directly inserted into the existing candidate list.
  • another round of reordering process may be applied to all the new candidates and the top Z candidates with smaller matching cost (e.g., template matching cost or bilateral matching cost) may be selected to be inserted into the candidate list.
  • FIG. 24 shows a computing environment (or a computing device) 2410 coupled with a user interface 2460.
  • the computing environment 2410 can be part of a data processing server.
  • the computing device 2410 can perform any of various methods or processes (such as encoding/decoding methods or processes) as described hereinbefore in accordance with various examples of the present disclosure.
  • the computing environment 2410 may include a processor 2420, a memory 2440, and an I/O interface 2450.
  • the processor 2420 typically controls overall operations of the computing environment 2410, such as the operations associated with the display, data acquisition, data communications, and image processing.
  • the processor 2420 may include one or more processors to execute instructions to perform all or some of the steps in the above-described methods.
  • the processor 2420 may include one or more modules that facilitate the interaction between the processor 2420 and other components.
  • the processor may be a Central Processing Unit (CPU), a microprocessor, a single chip machine, a GPU, or the like.
  • the memory 2440 is configured to store various types of data to support the operation of the computing environment 2410.
  • The memory 2440 may include predetermined software 2442. Examples of such data include instructions for any applications or methods operated on the computing environment 2410, video datasets, image data, etc.
  • the memory 2440 may be implemented by using any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic or optical disk.
  • SRAM static random access memory
  • EEPROM electrically erasable programmable read-only memory
  • EPROM erasable programmable read-only memory
  • PROM programmable read-only memory
  • ROM read-only memory
  • magnetic memory a magnetic memory
  • flash memory a magnetic
  • the I/O interface 2450 provides an interface between the processor 2420 and peripheral interface modules, such as a keyboard, a click wheel, buttons, and the like.
  • the buttons may include but are not limited to, a home button, a start scan button, and a stop scan button.
  • the I/O interface 2450 can be coupled with an encoder and decoder.
  • a non-transitory computer-readable storage medium including a plurality of programs, such as those included in the memory 2440, executable by the processor 2420 in the computing environment 2410, for performing the above-described methods.
  • the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device or the like.
  • the non-transitory computer-readable storage medium has stored therein a plurality of programs for execution by a computing device having one or more processors, where the plurality of programs when executed by the one or more processors, cause the computing device to perform the above-described method for motion prediction.
  • the computing environment 2410 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field- programmable gate arrays (FPGAs), graphical processing units (GPUs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • ASICs application-specific integrated circuits
  • DSPs digital signal processors
  • DSPDs digital signal processing devices
  • PLDs programmable logic devices
  • FPGAs field- programmable gate arrays
  • GPUs graphical processing units
  • controllers microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • FIG. 29 is a flowchart illustrating a method for video decoding according to an example of the present disclosure.
  • the processor 2420 may obtain a restricted area that is not adjacent to a current coding unit (CU) according to a value associated with the restricted area.
  • the restricted area is a predefined area associated with the current coding unit. Such association may be a spatial relationship between the restricted area and the CU, or a mapping relationship predefined between the restricted area and the CU.
  • the restricted area may be one of following areas: a first restricted neighboring/neighbor area above the current CU or a second restricted neighboring/neighbor area on the left of the current CU.
  • the first restricted neighboring/neighbor area 2706 is in the non-adjacent area of the current CU 2701 and above the CTU 2704 that the current CU 2701 is located in.
  • the second restricted neighboring/neighbor area 2705 is in the non-adjacent area of the current CU 2701 and to the left of the CTU 2704 that the current CU 2701 is located in.
  • the processor 2420 may determine that the value is a height value associated with the first restricted neighbor area in response to determining that the restricted neighbor area is the first restricted neighbor area and may determine that the value is a width value associated with the second restricted neighbor area in response to determining that the restricted neighbor area is the second restricted neighbor area. For example, as shown in FIG. 27A, the first restricted neighbor area 2706 has a height value h and the second restricted neighbor area 2705 has a width value w.
  • In some examples, the processor 2420 may obtain the value associated with the restricted area signaled in a bitstream sent by an encoder.
  • the processor 2420 may pre-define the value associated with the restricted area.
  • the processor 2420 may determine that a buffer area for storing the CU is invalid in response to determining that a CU obtained by scanning the restricted area is not an affine CU.
  • the processor 2420 may obtain a second CU that is an affine CU and located adjacent to the first CU in response to determining that a first CU obtained by scanning the restricted area is not an affine CU, obtain affine motion information of the second CU, determine that a buffer area for storing the first CU is valid and store the affine motion information obtained from the second CU in a buffer area for storing the first CU.
  • the first CU may be the non-affine CU 2703
  • the second CU may be the affine CU 2702
  • the affine CU 2702 is adjacently above the non-affine CU 2703.
  • the processor 2420 may pre-define the value associated with the restricted area as a multiple of a minimum size of a non-affine CU.
  • the processor 2420 may obtain a CU at a scanning position by scanning a neighbor area of the current CU and determine that no valid neighbor information exists at the scanning position in response to determining that the scanning position is not within the restricted area.
  • the processor 2420 may obtain a CU at a scanning position by scanning a neighbor area of the current CU, obtain a projected position by projecting the CU to the restricted area in response to determining that the scanning position is not within the restricted area, and obtain motion information associated with the CU at the scanning position in a buffer area for storing a projected CU that is located at the projected position.
  • the scanning position may be the position 2801 that is beyond the allowed spatial area, i.e., outside of the restricted area.
  • the projected position may be the position 2802 that is within the allowed spatial area, i.e., within the restricted area.
  • the projected position may be located at a boundary of the restricted area.
  • the restricted area may be one of following areas: a first restricted neighbor area above the current CU or a second restricted neighbor area on the left of the current CU, and the projected position may be located at a boundary of the first restricted neighbor area or the second restricted neighbor area.
  • the processor 2420 may obtain one or more MV candidates from a plurality of non-adjacent CUs to the current CU based on the restricted area.
  • the non-adjacent CUs are the non-adjacent neighbor CUs to the current CU.
  • the plurality of non-adjacent CUs may be located within the restricted area, and the one or more MV candidates may be obtained by scanning the restricted area in which the plurality of non-adjacent CUs are located.
  • Non-adjacent CUs may be located on the boundary of the restricted area in some examples.
  • the processor 2420 may obtain one or more CPMVs for the current CU based on the one or more MV candidates.
  • FIG. 30 is a flowchart illustrating a method for video encoding corresponding to the method for video decoding as shown in FIG. 29.
  • the processor 2420 at the encoder side, may obtain a restricted area that is not adjacent to a current CU according to a value associated with the restricted area.
  • the restricted area may be one of following areas: a first restricted neighbor area above the current CU or a second restricted neighbor area on the left of the current CU.
  • the first restricted neighbor area 2706 is in the non-adjacent area of the current CU 2701 and above the CTU 2704 that the current CU 2701 is located in.
  • the second restricted neighbor area 2705 is in the non-adjacent area of the current CU 2701 and to the left of the CTU 2704 that the current CU 2701 is located in.
  • the processor 2420 may determine that the value is a height value associated with the first restricted neighbor area in response to determining that the restricted neighbor area is the first restricted neighbor area and may determine that the value is a width value associated with the second restricted neighbor area in response to determining that the restricted area is the second restricted neighbor area. For example, as shown in FIG. 27A, the first restricted neighbor area 2706 has a height value h and the second restricted neighbor area 2705 has a width value w.
  • In some examples, the processor 2420 may signal the value associated with the restricted area in a bitstream that is to be sent to a decoder.
  • the processor 2420 may pre-define the value associated with the restricted area.
  • the processor 2420 may determine that a buffer area for storing the CU is invalid in response to determining that a CU obtained by scanning the restricted area is not an affine CU.
  • the processor 2420 may obtain a second CU that is an affine CU and located adjacent to the first CU in response to determining that a first CU obtained by scanning the restricted area is not an affine CU, obtain affine motion information of the second CU, determine that a buffer area for storing the first CU is valid and store the affine motion information obtained from the second CU in a buffer area for storing the first CU.
  • the first CU may be the non-affine CU 2703
  • the second CU may be the affine CU 2702
  • the affine CU 2702 is adjacently above the non-affine CU 2703.
  • the processor 2420 may pre-define the value associated with the restricted area as a multiple of a minimum size of a non-affine CU.
  • the processor 2420 may obtain a CU at a scanning position by scanning a neighbor area of the current CU and determine that no valid neighbor information exists at the scanning position in response to determining that the scanning position is not within the restricted area.
  • the processor 2420 may obtain a CU at a scanning position by scanning a neighbor area of the current CU, obtain a projected position by projecting the CU to the restricted area in response to determining that the scanning position is not within the restricted area, and obtain motion information associated with the CU at the scanning position in a buffer area for storing a projected CU that is located at the projected position.
  • the scanning position may be the position 2801 that is beyond the allowed spatial area, i.e., outside of the restricted area.
  • the projected position may be the position 2802 that is within the allowed spatial area, i.e., within the restricted area.
  • the projected position may be located at a boundary of the restricted area.
  • the restricted area may be one of following areas: a first restricted neighbor area above the current CU or a second restricted neighbor area on the left of the current CU, and the projected position may be located at a boundary of the first restricted neighbor area or the second restricted neighbor area.
  • the processor 2420 may obtain one or more MV candidates from a plurality of non-adjacent neighbor CUs to the current CU based on the restricted area.
  • the plurality of non-adjacent CUs are the non-adjacent neighbor CUs to the current CU.
  • the plurality of non-adjacent CUs may be located within the restricted area, and the one or more MV candidates may be obtained by scanning the restricted area in which the plurality of non-adjacent CUs are located.
  • the non-adjacent CUs may be located on the boundary of the restricted area in some examples.
  • the processor 2420 may obtain one or more CPMVs for the current CU based on the one or more MV candidates (see the CPMV-inheritance sketch after this list).
  • an apparatus for video decoding includes a processor 2420 and a memory 2440 configured to store instructions executable by the processor; where the processor, upon execution of the instructions, is configured to perform any method as illustrated in FIG. 29.
  • an apparatus for video encoding includes a processor 2420 and a memory 2440 configured to store instructions executable by the processor; where the processor, upon execution of the instructions, is configured to perform any method as illustrated in FIG. 30.
  • a non-transitory computer readable storage medium having instructions stored therein; when the instructions are executed by a processor 2420, the instructions cause the processor to perform any method as illustrated in FIGS. 29-30.
  • the plurality of programs may be executed by the processor 2420 in the computing environment 2410 to receive (for example, from the video encoder 20 in FIG. 1G) a bitstream or data stream including encoded video information (for example, video blocks representing encoded video frames, and/or one or more associated syntax elements, etc.), and may also be executed by the processor 2420 in the computing environment 2410 to perform the decoding method described above according to the received bitstream or data stream.
  • the plurality of programs may be executed by the processor 2420 in the computing environment 2410 to perform the encoding method described above to encode video information (for example, video blocks representing video frames, and/or one or more associated syntax elements, etc.) into a bitstream or data stream, and may also be executed by the processor 2420 in the computing environment 2410 to transmit the bitstream or data stream (for example, to the video decoder 30 in FIG. 2B).
  • the non-transitory computer-readable storage medium may have stored therein a bitstream or a data stream including encoded video information (for example, video blocks representing encoded video frames, and/or one or more associated syntax elements, etc.) generated by an encoder (for example, the video encoder 20 in FIG. 1G).
  • the non-transitory computer-readable storage medium may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device or the like.
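
Area-membership sketch. The restricted neighbor areas described in the list above can be pictured with a short Python fragment. This is a minimal, non-normative model: the coordinate convention (x grows rightward, y grows downward), the unbounded horizontal extent of the above band, and the name in_restricted_area are assumptions made for this example only.

    # Check whether a candidate position falls inside one of the two
    # restricted non-adjacent neighbor areas of FIG. 27A (simplified).
    def in_restricted_area(x, y, ctu_x, ctu_y, h, w):
        # First restricted neighbor area: a band of height h immediately
        # above the CTU that contains the current CU.
        in_above_area = (ctu_y - h) <= y < ctu_y
        # Second restricted neighbor area: a band of width w immediately
        # to the left of that CTU, extending down from the CTU's top row.
        in_left_area = (ctu_x - w) <= x < ctu_x and y >= ctu_y
        return in_above_area or in_left_area

Under this model, the value that is signaled in the bitstream or pre-defined is simply h for the first area and w for the second area, for example a multiple of the minimum non-affine CU size.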
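Buffer-propagation sketch. The buffer bullets above describe two outcomes when a scanned CU is not affine-coded: either the buffer entry is marked invalid, or the affine motion information of an adjacent affine CU (as the affine CU 2704 sits directly above the non-affine CU 2703) is stored instead. The class and attribute names below are hypothetical; this is a sketch of the described behavior, not the reference implementation.

    class BufferEntry:
        """One entry of the buffer area associated with a scanned CU."""
        def __init__(self, valid=False, affine_params=None):
            self.valid = valid
            self.affine_params = affine_params

    def fill_buffer_entry(first_cu):
        # Case 1: the scanned CU itself is affine-coded; store its model.
        if first_cu.is_affine:
            return BufferEntry(True, first_cu.affine_params)
        # Case 2: an adjacent affine CU can donate its affine motion
        # information, so the entry for the first CU remains valid.
        second_cu = first_cu.neighbor_above  # hypothetical accessor
        if second_cu is not None and second_cu.is_affine:
            return BufferEntry(True, second_cu.affine_params)
        # Case 3: no affine information is available; mark the entry
        # invalid so later scans skip this position.
        return BufferEntry(False, None)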
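Projection sketch. One simple way to realize the projection of an out-of-area scanning position (such as position 2801) onto the restricted area (such as position 2802) is coordinate clamping. Clamping is an assumed concrete choice here; the description above only requires that the projected position lie within the restricted area, for example on its boundary.

    def project_to_restricted_area(x, y, ctu_x, ctu_y, h, w):
        # The position may move no further up than the top row of the first
        # restricted area and no further left than the leftmost column of
        # the second restricted area, which places it on a boundary of the
        # restricted area whenever the input lies outside it.
        projected_x = max(x, ctu_x - w)
        projected_y = max(y, ctu_y - h)
        return projected_x, projected_y

The motion information for the out-of-area scanning position is then read from the buffer entry of the projected CU covering (projected_x, projected_y).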
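CPMV-inheritance sketch. Once an MV candidate from a non-adjacent affine CU has been obtained, the CPMVs of the current CU can be derived by evaluating that CU's affine motion model at the current CU's corners. The sketch below uses the standard 6-parameter affine model in floating point for readability; production codecs use fixed-point arithmetic, and the attribute names (x, y, w, h, cpmvs) are assumptions of this example.

    def derive_cpmvs(cur, nb):
        # nb.cpmvs holds the neighbor's top-left, top-right and bottom-left
        # control point motion vectors, each an (mvx, mvy) pair.
        v0, v1, v2 = nb.cpmvs

        def mv_at(x, y):
            # Standard 6-parameter affine motion field of the neighbor block.
            dx = (x - nb.x) / nb.w
            dy = (y - nb.y) / nb.h
            return (v0[0] + dx * (v1[0] - v0[0]) + dy * (v2[0] - v0[0]),
                    v0[1] + dx * (v1[1] - v0[1]) + dy * (v2[1] - v0[1]))

        # Evaluating the model at the current CU's own corners yields its
        # inherited top-left, top-right and bottom-left CPMVs.
        return (mv_at(cur.x, cur.y),
                mv_at(cur.x + cur.w, cur.y),
                mv_at(cur.x, cur.y + cur.h))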

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Video decoding and encoding methods, apparatuses, and non-transitory storage media are provided. In a decoding method, the decoder obtains a restricted area that is not adjacent to a current coding unit (CU) according to a value associated with the restricted area. In addition, the decoder obtains one or more motion vector (MV) candidates from a plurality of CUs that are not adjacent to the current CU based on the restricted area. Furthermore, the decoder obtains one or more control point motion vectors (CPMVs) for the current CU based on the one or more MV candidates.
PCT/US2023/019002 2022-04-18 2023-04-18 Methods and devices for candidate derivation for affine merge mode in video coding WO2023205185A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263332244P 2022-04-18 2022-04-18
US63/332,244 2022-04-18

Publications (1)

Publication Number Publication Date
WO2023205185A1 (fr)

Family

ID=88420468

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/019002 WO2023205185A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding

Country Status (1)

Country Link
WO (1) WO2023205185A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150146780A1 (en) * 2013-11-28 2015-05-28 Fujitsu Limited Video encoder and video encoding method
US20190215522A1 (en) * 2018-01-08 2019-07-11 Qualcomm Incorporated Multiple-model local illumination compensation
US11284065B2 (en) * 2018-02-28 2022-03-22 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Composed prediction and restricted merge
US20200092577A1 (en) * 2018-09-17 2020-03-19 Qualcomm Incorporated Affine motion prediction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
W. CHEN (KWAI), X. XIU, H.-J. JHU, C.-W. KUO, X. WANG (KWAI), K. ZHANG (BYTEDANCE), L. ZHANG, Z. DENG, N. ZHANG, Y. WANG (BYTEDANCE): "EE2-2.7, 2.8, 2.9: History-parameter-based affine model inheritance and non-adjacent spatial neighbors for affine merge mode", 26th JVET meeting, 20220420-20220429, teleconference (The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16), 14 April 2022 (2022-04-14), XP030301027 *

Similar Documents

Publication Publication Date Title
RU2705428C2 (ru) Derivation of motion information for sub-blocks in video coding
CN110741639A (zh) Motion information propagation in video coding
US20240129519A1 (en) Motion refinement with bilateral matching for affine motion compensation in video coding
CN115278256B (zh) Method, apparatus, and medium for decoding video data
CN117813816A (zh) Methods and devices for decoder-side intra mode derivation
WO2023205185A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023220444A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023192335A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023158766A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2024010831A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023137234A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023114362A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023133160A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023097019A1 (fr) Methods and devices for candidate derivation for affine merge mode in video coding
WO2023081499A1 (fr) Candidate derivation for affine merge mode in video coding
WO2023141177A1 (fr) Motion compensation considering out-of-boundary conditions in video coding
WO2023076700A1 (fr) Motion compensation considering out-of-boundary conditions in video coding
WO2024044404A1 (fr) Methods and devices using intra block copy for video coding
WO2023177695A1 (fr) Inter prediction in video coding
WO2023205283A1 (fr) Methods and devices for improved local illumination compensation
WO2023101990A1 (fr) Motion compensation considering out-of-boundary conditions in video coding
CN118140480A (zh) Methods and devices for candidate derivation for affine merge mode in video coding
WO2024119197A1 (fr) Methods and devices for intra block copy and intra template matching
WO2024108228A1 (fr) Methods and devices for intra block copy and intra template matching
WO2024081261A1 (fr) Methods and devices with intra block copy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23792452

Country of ref document: EP

Kind code of ref document: A1