WO2020073928A1 - Inter prediction method and apparatus


Info

Publication number
WO2020073928A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion information
sub-blocks
current picture block
Application number
PCT/CN2019/110194
Other languages
French (fr)
Inventor
Xu Chen
Jianhua Zheng
Original Assignee
Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2020073928A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding

Definitions

  • the present invention relates to the field of video encoding and decoding, and in particular, to an inter prediction method and apparatus for a video image, and a corresponding encoder and decoder.
  • Digital video capabilities can be incorporated into a wide variety of apparatuses, including digital televisions, digital live broadcast systems, wireless broadcast systems, personal digital assistants (PDA) , laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording apparatuses, digital media players, video game apparatuses, video game consoles, cellular or satellite radio phones (such as "smartphones" ) , video conferencing apparatuses, video streaming apparatuses, and the like.
  • Digital video apparatuses implement video compression technologies, for example, video compression technologies described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, and ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), the H.265/High Efficiency Video Coding (HEVC) standard, and extensions of such standards.
  • a video apparatus can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression technologies.
  • Video coding (video encoding and decoding) is used in a wide range of digital video applications, for example broadcast digital TV, video transmission over internet and mobile networks, real-time conversational applications such as video chat, video conferencing, DVD and Blu-ray discs, video content acquisition and editing systems, and camcorders of security applications.
  • a video slice (that is, a video frame or a portion of a video frame) may be partitioned into picture blocks, and the picture block may also be referred to as a tree block, a coding unit (CU) , and/or a coding node.
  • a picture block in a to-be-intra-coded (I) slice of an image is coded through spatial prediction of reference samples in neighboring blocks in the same image.
  • For a picture block in a to-be-inter-coded (P or B) slice of an image, spatial prediction of reference samples in neighboring blocks in the same image or temporal prediction of reference samples in other reference pictures may be used.
  • the image may be referred to as a frame, and the reference picture may be referred to as a reference frame.
  • video data is generally compressed before being communicated across modern day telecommunications networks.
  • the size of a video could also be an issue when the video is stored on a storage device because memory resources may be limited.
  • Video compression devices often use software and/or hardware at the source to code the video data prior to transmission or storage, thereby decreasing the quantity of data needed to represent digital video images.
  • the compressed data is then received at the destination by a video decompression device that decodes the video data.
  • Embodiments of this application provide an inter prediction method and apparatus for a video image, and a corresponding encoder and decoder according to the independent claims, to improve prediction accuracy of motion information of a picture block to some extent, thereby improving encoding and decoding performance.
  • an embodiment of this application provides an inter prediction method, including:
  • the current picture block may comprise a current coding block.
  • the initial motion information is obtained from one or two reference picture lists of the current picture block.
  • the initial motion information may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
  • the initial motion information of the at least two sub-blocks may comprise initial motion information of each sub-block, wherein the initial motion information of each sub-block comprises one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
  • the motion information of the at least two sub-blocks may comprise motion information of each sub-block, wherein the motion information of each sub-block may comprise one or two motion vectors determined based on the initial motion information of the at least two sub-blocks.
  • the motion information may further comprise one or two reference picture indices of the one or two reference picture lists related to the one or more motion vectors, or one or two motion vector differences (MVDs) related to the one or more motion vectors.
  • the current picture block includes the at least two sub-blocks.
  • the current picture block consists of the at least two sub-blocks.
  • the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks includes: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks by using a decoder-side motion vector refinement (DMVR) method.
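As an illustration of what such a refinement step could look like for one sub-block, the sketch below uses a bilateral-matching search over a small integer offset range with a SAD cost, which follows common DMVR designs rather than any specific procedure defined in this application; the pred_from callable (returning prediction samples for a given reference list, motion vector, position, and size) is a hypothetical placeholder.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two prediction blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def refine_sub_block_mv(pred_from, mv0, mv1, sub_pos, sub_size, search_range=2):
    """Refine the initial motion vector pair (mv0, mv1) of one sub-block.

    pred_from(list_idx, mv, pos, size) is a placeholder assumed to return the
    prediction samples of the sub-block at 'pos' with size 'size' for the
    given reference picture list and motion vector.
    """
    best_cost, best_offset = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            # Bilateral matching: move the list-0 and list-1 predictions by
            # mirrored offsets and measure how well they match each other.
            p0 = pred_from(0, (mv0[0] + dx, mv0[1] + dy), sub_pos, sub_size)
            p1 = pred_from(1, (mv1[0] - dx, mv1[1] - dy), sub_pos, sub_size)
            cost = sad(p0, p1)
            if best_cost is None or cost < best_cost:
                best_cost, best_offset = cost, (dx, dy)
    dx, dy = best_offset
    return (mv0[0] + dx, mv0[1] + dy), (mv1[0] - dx, mv1[1] - dy)
```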
  • the obtaining initial motion information of at least two sub-blocks of a current picture block includes: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
  • the initial motion information of the current picture block may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
  • the obtaining initial motion information of at least two sub-blocks of a current picture block is performed when a size of the current picture block is greater than a preset size;
  • the method further includes: when the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
  • the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks is performed when a size of the current picture block is greater than a preset size.
  • the obtaining initial motion information of at least two sub-blocks of a current picture block is performed on condition that a size of the current picture block is greater than a preset size;
  • the method further includes: on condition that the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
  • the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks is performed on condition that a size of the current picture block is greater than a preset size.
  • the preset size is 32 × 32.
  • the determining motion information of the current picture block based on the motion information of the at least two sub-blocks includes: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
  • a clip operation or a rounding operation is performed in a process of determining/calculating the mean.
  • the method further comprises: splitting the current picture block into the at least two sub-blocks, where the size of each sub-block is smaller than or equal to the preset size.
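Putting the steps above together, a minimal sketch of the described flow might look as follows; it reuses refine_sub_block_mv from the previous sketch, and the 32 × 32 threshold, the per-sub-block refinement, and the rounded mean over sub-block motion vectors follow the description, while split_into_sub_blocks, motion_compensate, and the simple averaging bi-prediction are illustrative assumptions only.

```python
import numpy as np

def split_into_sub_blocks(block_pos, block_size, preset_size):
    """Split a block into non-overlapping sub-blocks no larger than preset_size
    (assumes the block dimensions are multiples of the sub-block dimensions)."""
    x0, y0 = block_pos
    w, h = block_size
    sw, sh = min(w, preset_size[0]), min(h, preset_size[1])
    return [((x0 + x, y0 + y), (sw, sh))
            for y in range(0, h, sh) for x in range(0, w, sw)]

def motion_compensate(pred_from, mv_pair, pos, size):
    """Simplified bi-prediction: average of the list-0 and list-1 predictions."""
    p0 = pred_from(0, mv_pair[0], pos, size)
    p1 = pred_from(1, mv_pair[1], pos, size)
    return (p0.astype(np.int64) + p1 + 1) // 2

def mean_mv(vectors):
    """Rounded mean of a list of motion vectors; a clip to the valid motion
    vector range could additionally be applied here."""
    n = len(vectors)
    return (int(round(sum(v[0] for v in vectors) / n)),
            int(round(sum(v[1] for v in vectors) / n)))

def predict_block(block_pos, block_size, initial_mv_pair, pred_from,
                  preset_size=(32, 32)):
    """Sketch of the described inter prediction flow for one picture block."""
    w, h = block_size
    if w * h <= preset_size[0] * preset_size[1]:
        # Block not larger than the preset size: refine its motion information directly.
        mv_pair = refine_sub_block_mv(pred_from, initial_mv_pair[0],
                                      initial_mv_pair[1], block_pos, block_size)
        return motion_compensate(pred_from, mv_pair, block_pos, block_size)

    # Larger block: split it into sub-blocks, reuse the block-level initial
    # motion information for every sub-block, and refine each sub-block based
    # on its own position.
    sub_blocks = split_into_sub_blocks(block_pos, block_size, preset_size)
    refined = [refine_sub_block_mv(pred_from, initial_mv_pair[0],
                                   initial_mv_pair[1], pos, size)
               for pos, size in sub_blocks]

    # Motion information of the current block: mean of the sub-block vectors.
    mv_pair = (mean_mv([r[0] for r in refined]), mean_mv([r[1] for r in refined]))
    return motion_compensate(pred_from, mv_pair, block_pos, block_size)
```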
  • an embodiment of this application provides an inter prediction apparatus, including several functional units configured to implement any one of the methods in the first aspect.
  • the inter prediction apparatus may include: a motion information determining unit, configured to: obtain initial motion information of at least two sub-blocks of a current picture block; determine motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks; and determine motion information of the current picture block based on the motion information of the at least two sub-blocks; and a prediction block determining unit, configured to determine a prediction block of the current picture block based on the motion information of the current picture block.
  • the current picture block includes the at least two sub-blocks.
  • the current picture block consists of the at least two sub-blocks.
  • the motion information determining unit is configured to: determine the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks by using a decoder-side motion vector refinement (DMVR) method.
  • the motion information determining unit is configured to: obtain initial motion information of the current picture block, and use the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
  • the initial motion information of the current picture block may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
  • the motion information determining unit is configured to obtain the initial motion information of the at least two sub-blocks of the current picture block when a size of the current picture block is greater than a preset size;
  • the motion information determining unit is further configured to: when the size of the current picture block is not greater than the preset size, obtain the initial motion information of the current picture block, and determine the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and the prediction block determining unit is further configured to determine the prediction block of the current picture block based on the motion information of the current picture block.
  • the motion information determining unit is configured to determine the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks when a size of the current picture block is greater than a preset size.
  • the preset size is 32 × 32.
  • the motion information determining unit is configured to: determine a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
  • a clip operation or a rounding operation is performed in a process of determining/calculating the mean.
  • the apparatus further comprises a splitting unit configured to split the current picture block into the at least two sub-blocks, where the size of each sub-block is smaller than or equal to the preset size.
  • the inter prediction apparatus is, for example, applied to a video encoding apparatus (a video encoder) or a video decoding apparatus (a video decoder).
  • the method according to the first aspect of the application can be performed by the apparatus according to the second aspect of the application. Further features and implementation forms of the method according to the first aspect of the application correspond to the features and implementation forms of the apparatus according to the second aspect of the application.
  • an embodiment of this application provides an image prediction apparatus, where the apparatus includes a processor and a memory coupled to the processor, and the processor is configured to perform the method in any one of the implementations of the first aspect.
  • an embodiment of this application provides a video decoding device, including a non-volatile storage medium and a processor.
  • the non-volatile storage medium stores an executable program
  • the processor and the non-volatile storage medium are coupled to each other, and the processor executes the executable program to implement the method in any one of the first aspect or the implementations of the first aspect.
  • an embodiment of this application provides a non-transitory machine-readable storage medium (or computer-readable storage medium), where the computer-readable storage medium stores instructions, and when the instructions are executed by one or more processors of a computer, the computer is enabled to perform the method in any one of the first aspect or the implementations of the first aspect.
  • an embodiment of this application provides a computer program product including an instruction, and when the computer program product runs on a computer, the computer is enabled to perform the method in any one of the first aspect or the implementations of the first aspect.
  • an embodiment of this application provides a computer program comprising program code for performing the method according to the first or any possible embodiment of the first aspect when executed on a computer.
  • FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 according to an embodiment
  • FIG. 1B is a block diagram of an example of a video coding system 40 according to an embodiment
  • FIG. 2 is a block diagram of an example structure of an encoder 20 according to an embodiment
  • FIG. 3 is a block diagram of an example structure of a decoder 30 according to an embodiment
  • FIG. 4 is a block diagram of an example of a video coding device 400 according to an embodiment
  • FIG. 5 is a block diagram of another example of an encoding apparatus or a decoding apparatus according to an embodiment ;
  • FIG. 6 is a schematic flowchart of an inter prediction method according to an embodiment.
  • FIG. 7 is a block diagram showing an example structure of a content supply system 3100 which realizes a content delivery service.
  • FIG. 8 is a block diagram showing a structure of an example of a terminal device.
  • a corresponding device may include one or more units such as functional units for performing the described one or more method operations (for example, one unit performing the one or more operations; or a plurality of units, each of which performs one or more of the plurality of operations) , even if such one or more units are not explicitly described or illustrated in the accompanying drawings.
  • a corresponding method may include one or more operations for performing a functionality of the one or more units (for example, one operation performing the functionality of the one or more units; or a plurality of operations, each of which performs a functionality of one or more of the plurality of units) , even if such one or more operations are not explicitly described or illustrated in the accompanying drawings.
  • features of the various example embodiments and/or aspects described in this specification may be combined with each other, unless specifically noted otherwise.
  • the technical solutions in the embodiments of the present invention may not only be applied to existing video coding standards (such as the H.264 standard and the HEVC standard), but also be applied to future video coding standards (such as the H.266 standard).
  • Terms used in the implementation part of the present invention are merely intended to explain specific embodiments of the present invention, but are not intended to limit the present invention. In the following, some concepts that may be used in the embodiments of the present invention are first described briefly.
  • Video coding typically refers to processing of a sequence of pictures that form a video or a video sequence.
  • the term "frame” or “image” may be used as a synonym of the term “picture” in the field of video coding.
  • Video coding used in this application indicates either video encoding or video decoding.
  • Video encoding is performed at a source side, and typically includes: processing (for example, through compression) original video pictures to reduce an amount of data required for representing the video pictures (for more efficient storage and/or transmission) .
  • Video decoding is performed at a destination side, and typically includes: inverse processing relative to an encoder, to reconstruct the video pictures.
  • Embodiments referring to "coding” of video pictures shall be understood as relating to either “encoding” or “decoding” for a video sequence.
  • a combination of an encoding part and a decoding part is also referred to as a codec (Coding and Decoding) .
  • a video sequence includes a series of images (picture) , the image is further partitioned into slices (slice) , and the slice is further partitioned into blocks (block) .
  • in video coding, coding processing is performed per block.
  • a concept of block is further extended. For example, in the H.264 standard, there is a macroblock (MB), and the macroblock may be further partitioned into a plurality of prediction blocks (partition) that can be used for predictive coding.
  • in the high efficiency video coding (HEVC) standard, basic concepts such as a coding unit (CU), a prediction unit (PU), and a transform unit (TU) are used.
  • a CU may be partitioned into smaller CUs based on a quadtree, and the smaller CU may continue to be partitioned, thereby forming a quadtree structure.
  • the CU is a basic unit for partitioning and coding a coded image.
  • the PU and the TU also have a similar tree structure, and the PU may correspond to a prediction block and is a basic unit of predictive coding.
  • the CU is further partitioned into a plurality of PUs according to a partitioning mode.
  • the TU may correspond to a transform block, and is a basic unit for transforming a prediction residual. Essentially, all of the CU, the PU, and the TU are concepts of blocks (or picture blocks) .
  • a coding tree unit (CTU) is split into a plurality of CUs by using a quadtree structure denoted as a coding tree.
  • a decision on whether to code a picture area by using inter-picture (temporal) or intra-picture (spatial) prediction is made at a CU level.
  • Each CU may be further split into one, two, or four PUs based on a PU splitting type. Inside one PU, a same prediction process is applied, and related information is transmitted to a decoder on a PU basis.
  • the CU may be partitioned into TUs based on another quadtree structure similar to the coding tree used for the CU.
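For illustration only, a recursive quadtree split of a CTU into CUs can be sketched as follows; the should_split callback stands in for the encoder's rate-distortion-driven split decision, which is not modeled here.

```python
def quadtree_split(x, y, size, should_split, min_cu_size=8):
    """Recursively split a CTU located at (x, y) into leaf CUs.

    should_split(x, y, size) is a placeholder for the encoder's split decision
    (normally driven by rate-distortion cost and signaled in the bitstream).
    Returns a list of (x, y, size) leaf CUs.
    """
    if size <= min_cu_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dy in (0, half):
        for dx in (0, half):
            cus.extend(quadtree_split(x + dx, y + dy, half, should_split, min_cu_size))
    return cus

# Example: split a 64x64 CTU whenever the block is still larger than 32x32.
leaves = quadtree_split(0, 0, 64, lambda x, y, s: s > 32)
```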
  • a quadtree and binary tree (QTBT) partition structure is used to partition a coding block.
  • a CU may have a square or rectangular shape.
  • a picture block to be coded in a current coded image may be referred to as a current block.
  • in encoding, the current block is a block currently being encoded, and in decoding, the current block is a block currently being decoded.
  • a decoded picture block, in a reference picture, used for predicting the current block is referred to as a reference block.
  • the reference block is a block that provides a reference signal for the current block, where the reference signal represents a pixel value within the picture block.
  • a block that is in the reference picture and that provides a prediction signal for the current block may be referred to as a prediction block, where the prediction signal represents a pixel value, a sample value, or a sampling signal within the prediction block.
  • the optimal reference block provides a prediction for the current block, and this block is referred to as a prediction block.
  • the original video pictures can be reconstructed, that is, the reconstructed video pictures have the same quality as the original video pictures (assuming no transmission loss or other data loss occurs during storage or transmission).
  • further compression, for example through quantization, is performed to reduce the amount of data representing the video pictures, and the video pictures cannot be completely reconstructed at a decoder, that is, quality of the reconstructed video pictures is lower or worse than quality of the original video pictures.
  • a video is typically processed, that is, encoded, at a block (video block) level, for example, by using spatial (intra picture) prediction and temporal (inter picture) prediction to generate a prediction block, subtracting the prediction block from a current block (block currently processed/to be processed) to obtain a residual block, transforming the residual block and quantizing the residual block in the transform domain to reduce an amount of data that is to be transmitted (compressed) , whereas at a decoder, inverse processing relative to the encoder is partially applied to the encoded or compressed block to reconstruct the current block for representation.
  • the encoder duplicates a decoder processing loop so that both generate identical predictions (for example, intra and inter predictions) and/or reconstructions for processing, that is, coding, subsequent blocks.
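A compact sketch of this block-level hybrid coding loop, including the reconstruction path that the encoder duplicates from the decoder, is given below; the floating-point DCT (via scipy) and the uniform scalar quantizer are stand-ins for the integer transforms and quantizer designs of an actual codec.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, prediction, qstep):
    """One hybrid-coding step for a block: residual -> transform -> quantize,
    plus the reconstruction path the encoder duplicates from the decoder."""
    residual = block.astype(np.float64) - prediction
    coeffs = dctn(residual, norm="ortho")               # stand-in transform
    levels = np.round(coeffs / qstep).astype(np.int64)  # uniform scalar quantizer
    # Decoder-side processing (duplicated in the encoder): dequantize,
    # inverse transform, and add the prediction back.
    recon_residual = idctn(levels * qstep, norm="ortho")
    reconstruction = np.clip(np.round(prediction + recon_residual), 0, 255)
    return levels, reconstruction
```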
  • FIG. 1A is a schematic block diagram of an example of a video encoding and decoding system 10 according to an embodiment.
  • the video encoding and decoding system 10 may include a source device 12 and a destination device 14.
  • the source device 12 generates encoded video data, and therefore the source device 12 may be referred to as a video encoding apparatus.
  • the destination device 14 may decode the encoded video data generated by the source device 12, and therefore the destination device 14 may be referred to as a video decoding apparatus.
  • Various implementation solutions of the source device 12, the destination device 14, or both the source device 12 and the destination device 14 may include one or more processors and a memory coupled to the one or more processors.
  • the memory may include but is not limited to a RAM, a ROM, an EEPROM, a flash memory, or any other medium that can be used to store desired program code in a form of an instruction or a data structure accessible by a computer, as described in this specification.
  • the source device 12 and the destination device 14 may include various apparatuses, including a desktop computer, a mobile computing apparatus, a notebook (for example, a laptop) computer, a tablet computer, a set-top box, a telephone handset such as a so-called "smart" phone, a television, a camera, a display apparatus, a digital media player, a video game console, an in-vehicle computer, a wireless communications device, or the like.
  • FIG. 1A depicts the source device 12 and the destination device 14 as separate devices
  • a device embodiment may alternatively include both the source device 12 and the destination device 14 or functionalities of both the source device 12 and the destination device 14, that is, the source device 12 or a corresponding functionality and the destination device 14 or a corresponding functionality.
  • the source device 12 or the corresponding functionality and the destination device 14 or the corresponding functionality may be implemented by using same hardware and/or software, separate hardware and/or software, or any combination thereof.
  • a communication connection may be performed between the source device 12 and the destination device 14 through a link 13, and the destination device 14 may receive encoded video data from the source device 12 through the link 13.
  • the link 13 may include one or more media or apparatuses capable of moving the encoded video data from the source device 12 to the destination device 14.
  • the link 13 may include one or more communication media that enable the source device 12 to transmit the encoded video data directly to the destination device 14 in real time.
  • the source device 12 may modulate the encoded video data according to a communications standard (for example, a wireless communications protocol) , and may transmit modulated video data to the destination device 14.
  • a communications standard for example, a wireless communications protocol
  • the one or more communication media may include a wireless communication medium and/or a wired communication medium, for example, a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the one or more communication media may form a part of a packet-based network, and the packet-based network is, for example, a local area network, a wide area network, or a global network (for example, the Internet) .
  • the one or more communication media may include a router, a switch, a base station, or another device that facilitates communication from the source device 12 to the destination device 14.
  • the source device 12 includes an encoder 20, and optionally, the source device 12 may further include a picture source 16, a picture preprocessor 18, and/or a communications interface 22.
  • the encoder 20, the picture source 16, the picture preprocessor 18, and the communications interface 22 may be hardware components in the source device 12, or may be software programs in the source device 12.
  • the picture source 16 may include or be any type of picture capturing device configured to, for example, capture a real-world picture; and/or any type of device for generating a picture or comment (for screen content encoding, some text on a screen is also considered as a part of a to-be-encoded picture or image) , for example, a computer graphics processor configured to generate a computer animation picture; or any type of device configured to obtain and/or provide a real-world picture or a computer animation picture (for example, screen content or a virtual reality (VR) picture) ; and/or any combination thereof (for example, an augmented reality (AR) picture) .
  • the picture source 16 may be a camera configured to capture a picture or a memory configured to store a picture.
  • the picture source 16 may further include any type of (internal or external) interface for storing a previously captured or generated picture and/or for obtaining or receiving a picture.
  • the picture source 16 may be, for example, a local camera or an integrated camera integrated into the source device.
  • the picture source 16 may be a local memory or, for example, an integrated memory integrated into the source device.
  • the interface may be, for example, an external interface for receiving a picture from an external video source.
  • the external video source is, for example, an external picture capturing device such as a camera, an external memory, or an external picture generating device.
  • the external picture generating device is, for example, an external computer graphics processor, a computer, or a server.
  • the interface may be any type of interface, for example, a wired or wireless interface or an optical interface, according to any proprietary or standardized interface protocol.
  • a picture may be regarded as a two-dimensional array or matrix of pixels (picture elements).
  • the pixel in the array may also be referred to as a sample.
  • a quantity of samples in horizontal and vertical directions (or axes) of the array or picture defines a size and/or resolution of the picture.
  • typically three color components are used, that is, the picture may be represented as or include three sample arrays.
  • a picture includes corresponding red, green and blue sample arrays.
  • each pixel is typically represented in a luminance/chrominance format or color space, for example, YCbCr, which includes a luminance component indicated by Y (sometimes L is used instead) and two chrominance components indicated by Cb and Cr.
  • the luminance (or luma) component Y represents brightness or grey level intensity (for example, in a grey-scale picture)
  • the two chrominance (or chroma) components Cb and Cr represent chromaticity or color information components.
  • a picture in the YCbCr format includes a luminance sample array of luminance sample values (Y) , and two chrominance sample arrays of chrominance values (Cb and Cr) .
  • Pictures in the RGB format may be converted or transformed into the YCbCr format and vice versa, and this process is also known as color transformation or conversion. If a picture is monochrome, the picture may include only a luminance sample array.
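As an illustration of such a color conversion, the sketch below converts a full-range RGB picture to YCbCr using the BT.601 luma coefficients; the specific coefficients and the full-range assumption are illustrative choices, not something fixed by this application.

```python
import numpy as np

def rgb_to_ycbcr_bt601(rgb):
    """Convert an HxWx3 uint8 RGB picture to full-range YCbCr (BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 + (b - y) * 0.564   # 0.564 = 0.5 / (1 - 0.114)
    cr = 128.0 + (r - y) * 0.713   # 0.713 = 0.5 / (1 - 0.299)
    return np.clip(np.stack([y, cb, cr], axis=-1).round(), 0, 255).astype(np.uint8)
```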
  • a picture transmitted by the picture source 16 to a picture processor (or pre-processor) may also be referred to as raw picture data 17.
  • the picture pre-processor 18 can be configured to receive the (raw) picture data 17, and perform pre-processing on the picture data 17 to obtain a pre-processed picture 19 or pre-processed picture data 19.
  • Pre-processing performed by the picture pre-processor 18 may include, for example, trimming, color format conversion (for example, from RGB to YCbCr) , color correction, or de-noising.
  • the encoder 20 (also referred to as a video encoder 20) is configured to receive the preprocessed picture data 19, and process the preprocessed picture data 19 by using a related prediction mode (such as a prediction mode in an embodiment of this specification) , to provide encoded picture data 21 (structural details of the encoder 20 are further described below based on FIG. 2, FIG. 4, or FIG. 5) .
  • the encoder 20 may be configured to perform various embodiments described below, to implement encoder-side application of a chroma block prediction method described in the present invention.
  • the communications interface 22 may be configured to receive the encoded picture data 21, and transmit the encoded picture data 21 to the destination device 14 or any other device (for example, a memory) through the link 13 for storage or direct reconstruction.
  • the other device may be any device used for decoding or storage.
  • the communications interface 22 may be, for example, configured to encapsulate the encoded picture data 21 into an appropriate format, for example, a data packet, for transmission over the link 13.
  • the destination device 14 includes a decoder 30, and optionally, the destination device 14 may further include a communications interface 28, a picture post processor 32, and/or a display device 34. Separate descriptions are as follows:
  • the communications interface 28 may be configured to receive the encoded picture data 21 from the source device 12 or any other source.
  • the any other source is, for example, a storage device, and the storage device is, for example, an encoded picture data storage device.
  • the communications interface 28 may be configured to transmit or receive the encoded picture data 21 through the link 13 between the source device 12 and the destination device 14 or through any type of network.
  • the link 13 is, for example, a direct wired or wireless connection, and the any type of network is, for example, a wired or wireless network or any combination thereof, or any type of private or public network, or any combination thereof.
  • the communications interface 28 may be, for example, configured to decapsulate the data packet transmitted through the communications interface 22, to obtain the encoded picture data 21.
  • Both the communications interface 22 and the communications interface 28 may be configured as unidirectional communications interfaces indicated by an arrow for the encoded picture data 13 in FIG. 1A pointing from the source device 12 to the destination device 14, or bidirectional communications interfaces, and may be configured, for example, to send and receive messages, for example, to set up a connection, to acknowledge and exchange any other information related to a communication link and/or data transmission, for example, encoded picture data transmission.
  • the decoder 30 may be configured to receive the encoded picture data 21 and provide decoded picture data 31 or a decoded picture 31 (further details will be described below, for example, based on FIG. 3 or FIG. 5) . In some embodiments, the decoder 30 may be configured to perform various embodiments described below, to implement decoder-side application of a chroma block prediction method described in the present invention.
  • the post-processor 32 of the destination device 14 may be configured to post-process the decoded picture data 31 (also referred to as reconstructed picture data) , for example, the decoded picture 31, to obtain post-processed picture data 33, for example, a post-processed picture 33.
  • the post-processing performed by the post-processing unit 32 may include, for example, color format conversion (for example, from YCbCr to RGB) , color correction, trimming, re-sampling, or any other processing, for example, for preparing the decoded picture data 31 for displaying, for example, by display device 34.
  • the display device 34 of the destination device 14 may be configured to receive the post-processed picture data 33 for displaying the picture, for example, to a user or viewer.
  • the display device 34 may be or include any type of display for presenting the reconstructed picture, for example, an integrated or external display or monitor.
  • the displays may include, for example, a liquid crystal display (LCD) , an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, liquid crystal on silicon (LCoS) , a digital light processor (DLP) , or any type of other displays.
  • FIG. 1A depicts the source device 12 and the destination device 14 as separate devices, embodiments of devices may also include both or functionalities of both, that is, the source device 12 or a corresponding functionality and the destination device 14 or corresponding functionality.
  • the source device 12 or the corresponding functionality and the destination device 14 or the corresponding functionality may be implemented by using same hardware and/or software, separate hardware and/or software, or any combination thereof.
  • the source device 12 and the destination device 14 may include any of a wide range of devices, including any type of handheld or stationary device, for example, a notebook or laptop computer, a mobile phone, a smartphone, a tablet or tablet computer, a camera, a desktop computer, a set-top box, a television, a camera, an in-vehicle device, a display device, a digital media player, a video game console, a video streaming device (such as a content service server or a content delivery server) , a broadcast receiver device, or a broadcast transmitter device, and may not use or may use any type of operating system.
  • the encoder 20 (for example, the video encoder 20) and the decoder 30 (for example, the video decoder 30) each may be implemented as any of a variety of suitable circuits, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof.
  • a device may store instructions for the software in a suitable non-transitory computer-readable storage medium, and may execute the instructions in hardware by using one or more processors, to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, and the like) may be considered as one or more processors.
  • the video coding system 10 illustrated in FIG. 1A is merely an example and the techniques of this application may apply to video coding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices.
  • data is retrieved from a local memory, streamed over a network, or the like.
  • a video encoding device may encode and store data into a memory, and/or a video decoding device may retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other, but simply encode data and store the encoded data into a memory, and/or retrieve the data from the memory and decode the data.
  • FIG. 1B is an illustrative diagram of an example video coding system 40 including the encoder 20 in FIG. 2 and/or the decoder 30 in FIG. 3 according to an example embodiment.
  • the system 40 can implement techniques in accordance with various examples described in this application.
  • a video coding system 40 may include an imaging device (or imaging devices) 41, a video encoder 20, a video decoder 30 (and/or a video coder implemented by using a logic circuit 47 of a processing unit (or processing units) 46) , an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
  • the imaging device (s) 41, antenna 42, processing unit (s) 46, logic circuits 47, video encoder 20, video decoder 30, processor (s) 43, memory (or memories) 44, and/or display device 45 may be capable of communicating with each other.
  • the video coding system 40 may include only the video encoder 20 or only the video decoder 30 in various examples.
  • the antenna 42 may be configured to transmit or receive, for example, an encoded bitstream of video data.
  • the video coding system 40 may include a display device 45.
  • the display device 45 may be configured to present video data.
  • the logic circuit 47 may be implemented by the processing unit (s) 46.
  • the processing unit (s) 46 may include application-specific integrated circuit (ASIC) logic, graphics processor (s) , general purpose processor (s) , or the like.
  • the video coding system 40 also may include optional processor (s) 43, which may similarly include ASIC logic, graphics processor (s) , general purpose processor (s) , or the like.
  • the logic circuit 47 may be implemented by hardware, video coding dedicated hardware, or the like, and the processor (s) 43 may be implemented by general purpose software, operating systems, or the like.
  • the memory (or memories) 44 may be any type of memory such as a volatile memory (for example, a static random access memory (SRAM) or a dynamic random access memory (DRAM) ) , or a nonvolatile memory (for example, a flash memory) .
  • the memory (or memories) 44 may be implemented by a cache memory.
  • the logic circuit 47 may access the memory (or memories) 44 (for implementation of, for example, an image buffer) .
  • the logic circuit 47 and/or the processing unit (s) 46 may include memories (for example, a cache) for implementation of an image buffer or the like.
  • the video encoder 20 implemented by the logic circuit may include an image buffer (for example, implemented by either the processing unit (s) 46 or the memory (or memories) 44) and a graphics processing unit (for example, implemented by the processing unit (s) 46) .
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include the video encoder 20 implemented by the logic circuit 47, to embody the various modules discussed with respect to FIG. 2 and/or any other encoder system or subsystem described herein.
  • the logic circuit may be configured to perform the various operations discussed herein.
  • the video decoder 30 may be implemented in a similar manner by using the logic circuit 47 to embody the various modules discussed with respect to the decoder 30 in FIG. 3 and/or any other decoder system or subsystem described herein.
  • the video decoder 30 implemented by the logic circuit may include an image buffer (for example, implemented by either the processing unit (s) 46 or the memory (or memories) 44) and a graphics processing unit (for example, implemented by the processing unit (s) 46) .
  • the graphics processing unit may be communicatively coupled to the image buffer.
  • the graphics processing unit may include the video decoder 30 implemented by the logic circuit 47, to embody the various modules discussed with respect to FIG. 3 and/or any other decoder system or subsystem described herein.
  • the antenna 42 of the video coding system 40 may be configured to receive an encoded bitstream of video data.
  • the encoded bitstream may include information (e.g., data, indicators, index values, mode selection data, or the like) associated with video frame encoding discussed herein, such as data associated with coding partitioning (for example, transform coefficients or quantized transform coefficients, optional indicators (as discussed) , and/or data defining the coding partitioning) .
  • the video coding system 40 may also include the video decoder 30 coupled to the antenna 42 and configured to decode the encoded bitstream.
  • the display device 45 can be configured to present video frames.
  • the decoder 30 may be configured to perform a reverse process.
  • the decoder 30 may be configured to receive and parse such syntax elements and correspondingly decode related video data.
  • the encoder 20 may entropy encode the syntax elements into an encoded video bitstream.
  • the decoder 30 may parse such syntax elements and correspondingly decode related video data.
  • the encoder 20 and the decoder 30 in this embodiment of the present invention may be an encoder and a decoder corresponding to a video standard protocol such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, and VP9 or a next generation video standard protocol (such as H.266).
  • FIG. 2 shows a schematic/conceptual block diagram of an example video encoder 20 according to an embodiment of this application.
  • the video encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a decoded picture buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270.
  • the prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262.
  • the inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown) .
  • the video encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
  • the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form a forward signal path of the encoder 20, whereas, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, and the prediction processing unit 260 form a backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of a decoder (refer to a decoder 30 in FIG. 3) .
  • the encoder 20 is configured to receive, for example, by using an input 202, a picture 201 or a block 203 of the picture 201, for example, a picture of a sequence of pictures forming a video or a video sequence.
  • the picture block 203 may also be referred to as a current picture block or a picture block to be coded, and the picture 201 as the current picture or the picture to be coded (in particular in video coding, to distinguish the current picture from other pictures, the other pictures are, for example, previously encoded and/or decoded pictures of the same video sequence, that is, the video sequence which also includes the current picture) .
  • the encoder 20 in this embodiment may include a partitioning unit (not depicted in FIG. 2) configured to partition the picture 201 into a plurality of blocks, for example, blocks 203.
  • the plurality of blocks are non-overlapping.
  • the partitioning unit may be configured to use a same block size for all pictures of a video sequence and a corresponding grid defining the block size, or to change a block size between pictures or subsets or groups of pictures, and partition each picture into corresponding blocks.
  • the prediction processing unit 260 of the video encoder 20 may be configured to perform any combination of the partitioning techniques described above.
  • the block 203 may be regarded as a two-dimensional array or matrix of samples with intensity values (sample values) , although of a smaller size than the picture 201.
  • the block 203 may include, for example, one sample array (for example, luma array in a case of a monochrome picture 201) , three sample arrays (for example, one luma array and two chroma arrays in a case of a color picture 201) , or any other quantity and/or type of arrays depending on a color format applied.
  • a quantity of samples in horizontal and vertical directions (or axes) of the block 203 defines a size of the block 203.
  • the encoder 20 shown in FIG. 2 is configured to encode the picture 201 block by block, for example, encoding and prediction is performed per block 203.
  • the residual calculation unit 204 is configured to calculate a residual block 205 based on the picture block 203 and a prediction block 265 (further details about the prediction block 265 are provided below) , for example, by subtracting sample values of the prediction block 265 from sample values of the picture block 203, sample by sample (pixel by pixel) to obtain the residual block 205 in a sample domain.
  • the transform processing unit 206 is configured to apply a transform, for example, a discrete cosine transform (DCT) or discrete sine transform (DST) , to the sample values of the residual block 205 to obtain transform coefficients 207 in a transform domain.
  • the transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
  • the transform processing unit 206 may be configured to apply integer approximations of DCTs/DSTs, such as the transforms specified for HEVC/H.265. Compared with an orthogonal DCT transform, such integer approximations are typically scaled by a specific factor. To preserve a norm of the residual block processed by forward and inverse transforms, additional scaling factors are applied as a part of the transform process. The scaling factors are typically chosen based on specific constraints, such as the scaling factors being a power of two for a shift operation, a bit depth of the transform coefficients, and a tradeoff between accuracy and implementation costs.
  • Specific scaling factors may be specified for the inverse transform, for example, by the inverse transform processing unit 212, at the decoder 30 (and the corresponding inverse transform, for example, by the inverse transform processing unit 212 at the encoder 20) and corresponding scaling factors may be specified for the forward transform, for example, by transform processing unit 206, at the encoder 20.
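The norm-preservation point can be checked directly with an orthonormal floating-point DCT, as in the short sketch below; it only illustrates the property, not the scaled integer transforms actually specified in HEVC/H.265.

```python
import numpy as np
from scipy.fft import dctn

residual = np.random.randint(-64, 64, size=(8, 8)).astype(np.float64)
coeffs = dctn(residual, norm="ortho")
# An orthonormal transform preserves the norm of the residual block exactly;
# scaled integer approximations instead fold compensating scaling factors
# into the transform and quantization stages.
assert np.isclose(np.linalg.norm(residual), np.linalg.norm(coeffs))
```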
  • the quantization unit 208 may be configured to quantize the transform coefficients 207 to obtain quantized transform coefficients 209, for example, by applying scalar quantization or vector quantization.
  • the quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209.
  • the quantization process may reduce a bit depth associated with some or all of the transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m.
  • a quantization degree may be modified by adjusting a quantization parameter (QP) . For example, for scalar quantization, different scaling may be applied to achieve finer or coarser quantization.
  • the applicable quantization step size may be indicated by a quantization parameter (QP).
  • the quantization parameter may be an index to a predefined set of applicable quantization step sizes.
  • small quantization parameters may correspond to fine quantization (small quantization step sizes) and large quantization parameters may correspond to coarse quantization (large quantization step sizes), or vice versa.
  • the quantization may include division by a quantization step size, and the corresponding dequantization, for example, by the inverse quantization unit 210, may include multiplication by the quantization step size.
  • Embodiments according to some standards may be configured to use a quantization parameter to determine the quantization step size.
  • the quantization step size may be calculated based on a quantization parameter by using a fixed point approximation of an equation including division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which may be modified because of the scaling used in the fixed point approximation of the equation for the quantization step size and the quantization parameter.
  • the scaling of the inverse transform and dequantization may be combined.
  • customized quantization tables may be used and signaled from an encoder to a decoder, for example, in a bitstream.
  • the quantization is a lossy operation, where a loss increases with increasing quantization step sizes.
  • the inverse quantization unit 210 can be configured to apply inverse quantization of the quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211, by applying the inverse of the quantization scheme applied by the quantization unit 208 based on, or by using, the same quantization step size as the quantization unit 208.
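As a concrete illustration of the relation between the quantization parameter and the quantization step size, HEVC-style codecs use approximately Qstep = 2^((QP - 4) / 6), so the step size doubles every 6 QP values; the sketch below assumes that mapping and a simple uniform scalar quantizer, neither of which is mandated by this application.

```python
def quantization_step(qp):
    """HEVC-style quantization step size: doubles every 6 QP values."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    """Uniform scalar quantization of a single transform coefficient."""
    return int(round(coeff / quantization_step(qp)))

def dequantize(level, qp):
    """Inverse quantization: multiply the level by the same step size."""
    return level * quantization_step(qp)

# Example: QP 22 vs QP 40 on the same coefficient shows the coarser quantization.
print(quantize(100.0, 22), quantize(100.0, 40))   # e.g. 12 vs 2 -> larger loss at QP 40
```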
  • the dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211, and correspond to the transform coefficients 207, although the dequantized coefficients 211 are typically not identical to the transform coefficients due to a loss caused by quantization.
  • the inverse transform processing unit 212 can be configured to apply the inverse transform of the transform applied by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or inverse discrete sine transform (DST) , to obtain an inverse transform block 213 in the sample domain.
  • the inverse transform block 213 may also be referred to as an inverse transform dequantized block 213 or an inverse transform residual block 213.
  • the reconstruction unit 214 (for example, a summer 214) is configured to add the inverse transform block 213 (that is, the reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, for example, by adding the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
  • the buffer unit 216 ( "buffer" 216 for short) or a line buffer 216, is configured to buffer or store the reconstructed block 215 and the respective sample values for intra prediction.
  • the encoder may be configured to use unfiltered reconstructed blocks and/or the respective sample values stored in buffer unit 216 for any type of estimation and/or prediction, for example, intra prediction.
  • the encoder 20 in this embodiment may be configured so that the buffer unit 216 is not only used for storing the reconstructed blocks 215 for the intra prediction unit 254 but is also used for the loop filter unit 220 (not shown in FIG. 2) , such that, the buffer unit 216 and the decoded picture buffer unit 230 form one buffer.
  • filtered blocks 221 and/or blocks or samples from the decoded picture buffer 230 may be used as an input or a basis for the intra prediction unit 254.
  • the loop filter unit 220 (or “loop filter” 220 for short) is configured to filter the reconstructed block 215 to obtain a filtered block 221, for example, to smooth pixel transitions or improve video quality.
  • the loop filter unit 220 is intended to represent one or more loop filters including a de-blocking filter, a sample-adaptive offset (SAO) filter, a bilateral filter, an adaptive loop filter (ALF) , a sharpening or smoothing filter, or a collaborative filter, etc.
  • the loop filter unit 220 is shown in FIG. 2 as an in loop filter, in other configurations, the loop filter unit 220 may be implemented as a post loop filter.
  • the filtered block 221 may also be referred to as a filtered reconstructed block 221.
  • the decoded picture buffer 230 may store reconstructed coding blocks after the loop filter unit 220 performs filtering operations on the reconstructed coding blocks.
  • the encoder 20 in this embodiment may be configured to output loop filter parameters (for example, sample adaptive offset information of the loop filter unit 220) directly, or through entropy encoding performed by the entropy encoding unit 270 or any other entropy coding unit, so that the decoder 30 can receive and apply the same loop filter parameters for decoding.
  • the decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use in encoding video data by the video encoder 20.
  • the DPB 230 may be formed by any of a variety of memory devices, such as a dynamic random access memory (DRAM) , including a synchronous DRAM (SDRAM) , a magnetoresistive RAM (MRAM) , a resistive RAM (RRAM) , or other types of memory devices.
  • the DPB 230 and the buffer 216 may be provided by a same memory device or separate memory devices.
  • the decoded picture buffer (DPB) 230 is configured to store the filtered block 221.
  • the decoded picture buffer 230 may be further configured to store other previously filtered blocks, for example, previously reconstructed and filtered blocks 221, of the same current picture or of different pictures, for example, previously reconstructed pictures, and may provide complete previously reconstructed, that is, decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples) , for example, for inter prediction.
  • the decoded picture buffer (DPB) 230 is configured to store the reconstructed block 215.
  • the prediction processing unit 260 also referred to as a block prediction processing unit 260, can be configured to receive or obtain the block 203 (the current block 203 of the current picture 201) and reconstructed picture data, for example, reference samples of the same (e.g., current) picture from the buffer 216 and/or reference picture data 231 from one or more previously decoded pictures from the decoded picture buffer 230, and to process such data for prediction, that is, to provide a prediction block 265, which may be an inter-predicted block 245 or an intra-predicted block 255.
  • the mode selection unit 262 may be configured to select a prediction mode (for example, an intra or inter prediction mode) and/or a corresponding prediction block 245 or 255 to be used as the prediction block 265 for calculation of the residual block 205 and for reconstruction of the reconstructed block 215.
  • the mode selection unit 262 in an embodiment may be configured to select the prediction mode (for example, from those supported by the prediction processing unit 260) , which provides an optimal match, in other words, a minimum residual (the minimum residual means better compression for transmission or storage) , or a minimum signaling overhead (the minimum signaling overhead means better compression for transmission or storage) , or considers or balances both.
  • the mode selection unit 262 may be configured to determine the prediction mode based on rate-distortion optimization (RDO) , that is, select a prediction mode that provides a minimum rate-distortion cost, or a prediction mode for which the associated rate distortion at least fulfills a prediction mode selection criterion.
  • the encoder 20 is configured to determine or select the optimal or optimum prediction mode from a set of (predetermined) prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
  • the set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes such as a DC (or mean) mode and a planar mode, or directional modes, for example, as defined in H. 265, or may include 67 different intra prediction modes, for example, non-directional modes such as a DC (or mean) mode and a planar mode, or directional modes, for example, as defined in H. 266.
  • a set of inter prediction modes depends on available reference pictures (that is, for example, at least partially decoded pictures stored in the DPB 230, as described above) and other inter prediction parameters.
  • the set of inter prediction modes depends on whether the entire reference picture or only a part of the reference picture, for example, a search window area around an area of the current block, is used for searching for an optimal matching reference block.
  • the set of inter prediction modes depends on whether pixel interpolation such as half/semi-pel and/or quarter-pel interpolation is applied.
  • the set of inter prediction modes may include, for example, an advanced motion vector prediction (AMVP) mode, decoder side motion vector refinement (DMVR) mode, and a merge mode.
  • the set of inter prediction modes may include an AMVP mode based on a control point and/or a merge mode based on a control point.
  • the intra prediction unit 254 may be configured to perform any combination of intra prediction techniques described below.
  • a skip mode and/or a direct mode may be applied in some embodiments.
  • the prediction processing unit 260 may be further configured to partition the block 203 into smaller block partitions or sub-blocks, for example, by iteratively using quadtree partitioning (QT) , binary partitioning (BT) , triple-tree partitioning (TT) , or any combination thereof, and to perform, for example, prediction for each of the block partitions or sub-blocks, where the mode selection includes selection of a tree structure of the partitioned block 203 and prediction modes applied to each of the block partitions or sub-blocks.
  • the inter prediction unit 244 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2) .
  • the motion estimation unit is configured to receive or obtain the picture block 203 (the current picture block 203 of the current picture 201) and a decoded picture 231, or at least one or more previously reconstructed blocks, for example, reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation.
  • a video sequence may include the current picture and the previously decoded pictures 231.
  • the current picture and the previously decoded pictures 231 may be a part of, or form, a sequence of pictures forming a video sequence.
  • the encoder 20 may be configured to select a reference block from a plurality of reference blocks of a same picture or different pictures of the plurality of other pictures and provide a reference picture (or a reference picture index or the like) and/or an offset (a spatial offset) between the position (coordinates X and Y) of the reference block and the position of the current block as inter prediction parameters, to the motion estimation unit (not shown in FIG. 2) .
  • This offset is also referred to as a motion vector (MV) .
  • the motion compensation unit can be configured to obtain or receive, an inter prediction parameter and to perform inter prediction based on, or by using, the inter prediction parameter, to obtain an inter prediction block 245.
  • Motion compensation performed by the motion compensation unit may include fetching or generating the prediction block based on a motion/block vector determined through motion estimation.
  • motion compensation may include performing interpolation to sub-pixel precision. Interpolation or interpolation filtering may generate additional pixel samples from known pixel samples, thereby potentially increasing the quantity of candidate prediction blocks that may be used to code a picture block.
  • the motion compensation unit 246 may locate a prediction block to which the motion vector points in one of reference picture lists.
  • the motion compensation unit 246 may also generate syntax elements associated with the blocks and the video slice for use by the video decoder 30 in decoding the picture blocks of the video slice.
  • the inter prediction unit 244 may transmit the syntax elements to the entropy encoding unit 270, and the syntax elements include the inter prediction parameter (such as indication information of selection of an inter prediction mode used for prediction of the current block after traversal of a plurality of inter prediction modes) .
  • alternatively, the inter prediction parameter may not be carried in the syntax elements.
  • in this case, the decoder 30 may perform decoding directly in a default prediction mode. It can be understood that the inter prediction unit 244 may be configured to perform any combination of inter prediction techniques.
  • the intra prediction unit 254 can be configured to obtain or receive the picture block 203 (current picture block) and one or more previously reconstructed blocks, such as reconstructed neighboring blocks, of the same picture for intra estimation.
  • the encoder 20 may be configured to select an intra prediction mode from a plurality of (e.g., predetermined) intra prediction modes.
  • the encoder 20 in this embodiment may be configured to select an intra prediction mode based on an optimization criterion, such as a minimum residual (the intra prediction mode to provide the prediction block 255 most similar to the current picture block 203) or minimum rate distortion.
  • an optimization criterion such as a minimum residual (the intra prediction mode to provide the prediction block 255 most similar to the current picture block 203) or minimum rate distortion.
  • the intra prediction unit 254 is further configured to determine the intra prediction block 255 based on an intra prediction parameter, for example, the selected intra prediction mode. In any case, after selecting an intra prediction mode for a block, the intra prediction unit 254 is also configured to provide the intra prediction parameter, that is, information indicative of the selected intra prediction mode for the block, to the entropy encoding unit 270. In one example, the intra prediction unit 254 may be configured to perform any combination of the intra prediction techniques.
  • the intra prediction unit 254 may transmit the syntax elements to the entropy encoding unit 270, and the syntax elements include the intra prediction parameter (such as indication information of selection of an intra prediction mode used for prediction of the current block after traversal of a plurality of intra prediction modes) .
  • alternatively, the intra prediction parameter may not be carried in the syntax elements.
  • in this case, the decoder 30 may perform decoding directly in a default prediction mode.
  • the entropy encoding unit 270 can be configured to apply an entropy encoding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC) , an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC) , syntax-based context-adaptive binary arithmetic coding (SBAC) , probability interval partitioning entropy (PIPE) coding, or another entropy encoding methodology or technique) on the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters, individually or jointly (or not at all) , to obtain encoded picture data 21 that can be output by the output 272 in a form of an encoded bitstream 21.
  • the encoded bitstream 21 may be transmitted to video decoder 30, or archived for later transmission or retrieval by the video decoder 30.
  • the entropy encoding unit 270 can be further configured to entropy encode the other syntax elements for the current video slice being coded.
  • a non-transform-based encoder 20 can quantize the residual signal directly without the transform processing unit 206 for specific blocks or frames.
  • an encoder 20 can have the quantization unit 208 and the inverse quantization unit 210 combined into a single unit.
  • the encoder 20 can be configured to implement an inter prediction method described in the following embodiment.
  • the video encoder 20 can be used to encode a video stream.
  • the video encoder 20 may quantize the residual signal directly without processing by the transform processing unit 206, and/or the inverse-transform processing unit 212.
  • the video encoder 20 does not generate residual data, and correspondingly, there is no need for the transform processing unit 206, the quantization unit 208, the inverse-quantization unit 210, and the inverse-transform processing unit 212 to perform processing.
  • the video encoder 20 may directly store a reconstructed picture block as a reference block, without processing by the filter unit 220.
  • the quantization unit 208 and the inverse-quantization unit 210 in the video encoder 20 may be combined together.
  • the loop filter unit 220 may be optional, and in a case of lossless compression encoding, the transform processing unit 206, the quantization unit 208, the inverse-quantization unit 210, and the inverse-transform processing unit 212 may be optional. It should be understood that in different application scenarios, the inter prediction unit 244 and the intra prediction unit 254 may be used selectively.
  • FIG. 3 is a schematic/conceptual block diagram of an example of a decoder 30 according to an embodiment.
  • the video decoder 30 is configured to receive encoded picture data (for example, an encoded bitstream) 21, for example, encoded by the encoder 20, to obtain a decoded picture 231.
  • the video decoder 30 receives video data, for example, an encoded video bitstream that represents picture blocks of an encoded video slice and associated syntax elements, from the video encoder 20.
  • the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (for example, a summer 314) , a buffer 316, a loop filter 320, a decoded picture buffer 330, and a prediction processing unit 360.
  • the prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362.
  • the video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to the video encoder 20 from FIG. 2.
  • the entropy decoding unit 304 is configured to perform entropy decoding on the encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3) .
  • the decoded coding parameters can include any one or all of (decoded) inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements.
  • the entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters, and/or other syntax elements to the prediction processing unit 360.
  • the video decoder 30 may receive the syntax elements at a video slice level and/or a video block level.
  • the inverse quantization unit 310 may be identical to the inverse quantization unit 210 in function, the inverse transform processing unit 312 may be identical to the inverse transform processing unit 212 in function, the reconstruction unit 314 may be identical to the reconstruction unit 214 in function, the buffer 316 may be identical to the buffer 216 in function, the loop filter 320 may be identical to the loop filter 220 in function, and the decoded picture buffer 330 may be identical to the decoded picture buffer 230 in function.
  • the prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, where the inter prediction unit 344 may resemble the inter prediction unit 244 in function, and the intra prediction unit 354 may resemble the intra prediction unit 254 in function.
  • the prediction processing unit 360 is typically configured to perform block prediction and/or obtain the prediction block 365 from the encoded data 21, and to receive or obtain (explicitly or implicitly) prediction related parameters and/or information about the selected prediction mode from the entropy decoding unit 304.
  • the intra prediction unit 354 of the prediction processing unit 360 is configured to generate the prediction block 365 for a picture block of the current video slice based on a signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
  • the inter prediction unit 344 (for example, the motion compensation unit) of the prediction processing unit 360 is configured to produce prediction blocks 365 for a video block (or current block) of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304.
  • the prediction blocks may be produced from one of reference pictures within one of the reference picture lists.
  • the video decoder 30 may construct the reference frame lists, List 0 and List 1, by using default construction techniques based on reference pictures stored in the DPB 330.
  • the prediction processing unit 360 can be configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and use the prediction information to produce the prediction blocks for the current video block being decoded. For example, the prediction processing unit 360 can use some of the received syntax elements to determine a prediction mode (for example, the intra or inter prediction) used to code the video blocks of the video slice, an inter prediction slice type (for example, B slice, P slice, or GPB slice) , construction information for one or more of the reference picture lists for the slice, motion vectors for each inter encoded video block of the slice, an inter prediction status for each inter coded video block of the slice, and other information, to decode the video blocks in the current video slice.
  • the syntax elements received by the video decoder 30 from a bitstream include syntax elements in one or more of an adaptive parameter set (APS) , a sequence parameter set (SPS) , a picture parameter set (PPS) , or a slice header.
  • the inverse quantization unit 310 can be configured to inversely quantize, that is, de-quantize, the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 304.
  • the inverse quantization process may include use of a quantization parameter calculated by the video encoder 20 for each video block in the video slice, to determine a quantization degree and, likewise, an inverse-quantization degree that should be applied.
  • the inverse transform processing unit 312 can be configured to apply an inverse transform, for example, an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients to produce residual blocks in a pixel domain.
  • the reconstruction unit 314 (for example, the summer 314) can be configured to add the inverse transform block 313 (that is, the reconstructed residual block 313) to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, for example, by adding the sample values of the reconstructed residual block 313 and the sample values of the prediction block 365.
  • the loop filter unit 320 (either in a coding loop or after a coding loop) can be configured to filter the reconstructed block 315 to obtain a filtered block 321, for example, to smooth pixel transitions or improve the video quality.
  • the loop filter unit 320 may be configured to perform any combination of the filtering techniques described below.
  • the loop filter unit 320 is intended to represent one or more loop filters including a de-blocking filter, a sample-adaptive offset (SAO) filter, and other filters, for example, a bilateral filter, an adaptive loop filter (ALF) , a sharpening or smoothing filter, or a collaborative filter.
  • the loop filter unit 320 is shown in FIG. 3 as an in loop filter, in other configurations, the loop filter unit 320 may be implemented as a post loop filter.
  • the decoded video blocks 321 in a given frame or picture are then stored in the decoded picture buffer 330 that stores reference pictures used for subsequent motion compensation.
  • the decoder 30 can be configured to output the decoded picture 331, for example, by using an output 332, for presentation or viewing to a user.
  • the decoder 30 can be used to decode the compressed bitstream.
  • the decoder 30 can produce the output video stream without the loop filtering unit 320.
  • a non-transform-based decoder 30 can inversely quantize the residual signal directly without the inverse-transform processing unit 312 for specific blocks or frames.
  • the video decoder 30 can have the inverse-quantization unit 310 and the inverse-transform processing unit 312 combined into a single unit.
  • the decoder 30 can be configured to implement an inter prediction method described in the following embodiments.
  • the video decoder 30 may generate an output video stream without processing by the filter unit 320.
  • the entropy decoding unit 304 of the video decoder 30 may not obtain a quantized coefficient through decoding, and correspondingly, there is no need for the inverse-quantization unit 310 and the inverse-transform processing unit 312 to perform processing.
  • the loop filter unit 320 can be optional, and in a case of lossless compression, the inverse-quantization unit 310 and the inverse-transform processing unit 312 can be optional. It should be understood that in different application scenarios, the inter prediction unit and the intra prediction unit may be used selectively.
  • a processing result of a procedure such as interpolation filtering, motion vector derivation, or loop filtering may be further processed before it is output to a next procedure.
  • for example, an operation such as clip or shift is further performed on the processing result of the corresponding procedure.
  • for example, a motion vector of a control point in a current picture block, derived based on a motion vector of a neighboring affine coding block, may be further processed.
  • a value range of the motion vector is restricted to be within a specific bit depth. Assuming that an allowed bit depth of a motion vector is bitDepth, a motion vector range is from –2^(bitDepth–1) to 2^(bitDepth–1) – 1, where the symbol "^" represents a power. If bitDepth is 16, the value range is from –32768 to 32767. If bitDepth is 18, the value range is from –131072 to 131071. The restriction may be performed in either of the following two manners.
  • Manner 1: remove the overflow high-order bits:
    ux = (vx + 2^bitDepth) % 2^bitDepth
    vx = (ux >= 2^(bitDepth–1)) ? (ux – 2^bitDepth) : ux
    uy = (vy + 2^bitDepth) % 2^bitDepth
    vy = (uy >= 2^(bitDepth–1)) ? (uy – 2^bitDepth) : uy
  • for example, if a value of vx is –32769, 32767 is obtained by using the foregoing formulas.
  • a value is stored in a computer in two's complement form; the two's complement representation of –32769 is 1, 0111, 1111, 1111, 1111 (17 bits) , and the computer handles the overflow by discarding the high-order bit. Therefore, the value of vx is 0111, 1111, 1111, 1111, that is, 32767, which is consistent with the result obtained through processing using the foregoing formulas.
  • Manner 2: clip the value:
    vx = Clip3 (–2^(bitDepth–1), 2^(bitDepth–1) – 1, vx)
    vy = Clip3 (–2^(bitDepth–1), 2^(bitDepth–1) – 1, vy)
  • Clip3 is defined to indicate clipping a value z to a range [x, y] : Clip3 (x, y, z) = x if z < x; = y if z > y; and = z otherwise.
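  • for illustration only, the two restriction manners can be sketched as the following C++ helpers; the function names are chosen here and are not part of the embodiments. For bitDepth = 16, wrapMvComponent(-32769, 16) returns 32767, matching the example above, while clipMvComponent(-32769, 16) returns -32768:

    #include <cstdint>

    // Manner 1 (sketch): remove the overflow high-order bits so that the motion vector
    // component wraps into [-2^(bitDepth-1), 2^(bitDepth-1) - 1].
    int wrapMvComponent(int v, int bitDepth) {
        int64_t range = 1LL << bitDepth;                               // 2^bitDepth
        int64_t u = ((static_cast<int64_t>(v) % range) + range) % range;  // (v + 2^bitDepth) % 2^bitDepth
        return static_cast<int>(u >= (range >> 1) ? u - range : u);
    }

    // Manner 2 (sketch): clip the component with Clip3 into the same range.
    int clip3(int lo, int hi, int z) {
        return z < lo ? lo : (z > hi ? hi : z);
    }

    int clipMvComponent(int v, int bitDepth) {
        return clip3(-(1 << (bitDepth - 1)), (1 << (bitDepth - 1)) - 1, v);
    }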
  • FIG. 4 is a schematic diagram of a video coding device 400 according to an embodiment of this disclosure.
  • the video coding device 400 is suitable for implementing the disclosed embodiments as described herein.
  • the video coding device 400 may be a decoder such as the video decoder 30 in FIG. 1A or an encoder such as the video encoder 20 in FIG. 1A.
  • the video coding device 400 may be one or more components of the video decoder 30 in FIG. 1A or the video encoder 20 in FIG. 1A as described above.
  • the video coding device 400 can include ingress ports 410 and receiver units (Rx) 420 for receiving data; a processor, a logic unit, or a central processing unit (CPU) 430 for processing the data; transmitter units (Tx) 440 and egress ports 450 for transmitting the data; and a memory 460 for storing the data.
  • the video coding device 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 410, the receiver units 420, the transmitter units 440, and/or the egress ports 450 for egress or ingress of optical or electrical signals.
  • the processor 430 can be implemented by hardware and software.
  • the processor 430 may be implemented as one or more CPU chips, cores (for example, as a multi-core processor) , FPGAs, ASICs, and DSPs.
  • the processor 430 is in communication with the ingress ports 410, receiver units 420, transmitter units 440, egress ports 450, and memory 460.
  • the processor 430 includes a coding module 470.
  • the coding module 470 implements the disclosed embodiments described above. For example, the coding module 470 implements, processes, prepares, or provides the various coding operations. Inclusion of the coding module 470 therefore provides substantial improvement to the functionality of the video coding device 400 and affects a transformation of the video coding device 400 to a different state.
  • the coding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
  • the memory 460 includes one or more disks, tape drives, and solid state drives and may be used as an overflow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution.
  • the memory 460 may be volatile and/or nonvolatile and may be a read-only memory (ROM) , a random access memory (RAM) , a ternary content-addressable memory (TCAM) , and/or a static random access memory (SRAM) .
  • FIG. 5 is a simplified block diagram of an apparatus 500 that can be used as any one or two of the source device 12 and the destination device 14 in FIG. 1A according to an example embodiment.
  • the apparatus 500 can implement the techniques of this application.
  • FIG. 5 is a schematic block diagram of an implementation of an encoding device or a decoding device (coding device 500 for short) according to an embodiment of this application.
  • the coding device 500 may include a processor 510, a memory 530, and a bus system 550.
  • the processor is connected to the memory via the bus system.
  • the memory is configured to store an instruction, and the processor is configured to execute the instruction stored in the memory.
  • the memory of the coding device stores program code.
  • the processor can invoke the program code stored in the memory, to perform the video encoding or decoding methods described in this application, and in particular, various inter and/or intra prediction methods. To avoid repetition, details are not described herein again.
  • the processor 510 may be a central processing unit ( "CPU” for short) , or the processor 510 may be another general purpose processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor 510 may be a microprocessor, or any conventional processor or the like.
  • the memory 530 may include a read-only memory (ROM) device or a random access memory (RAM) device. Any other proper type of storage device may be alternatively used as the memory 530.
  • the memory 530 may include code and data 531 accessed by the processor 510 by using the bus 550.
  • the memory 530 may further include an operating system 533 and an application program 535.
  • the application program 535 includes at least one program that allows the processor 510 to perform the video encoding or decoding method (in particular, the inter and/or intra prediction methods described in this application) described in this application.
  • the application program 535 may include applications 1 to N, and further includes a video encoding or decoding application (video coding application for short) that performs the video encoding or decoding method described in this application.
  • the bus system 550 may not only include a data bus, but also include a power bus, a control bus, a status signal bus, and the like. However, for clear description, various types of buses in the figure are marked as the bus system 550.
  • the coding device 500 may further include one or more output devices, for example, a display 570.
  • the display 570 may be a touch display that combines a display and a touch unit that operably senses touch input.
  • the display 570 may be connected to the processor 510 by using the bus 550.
  • FIG. 6 is a schematic flowchart of an inter prediction method according to an embodiment.
  • the method of FIG. 6 enables a coder to process image blocks whose size is bigger than a preset size associated with the coder (such as a buffer size) .
  • the method can be implemented by hardware, software, or any combination thereof.
  • the method can be implemented by inter prediction unit 244 or 344.
  • the method can be a decoding method or an encoding method. As shown in FIG. 6, the method includes the following operations.
  • Operation S601. A coder (such as the encoder 20 or the decoder 30 of FIG. 1A) or a video coding system obtains initial motion information of at least two sub-blocks of a current picture block.
  • the current picture block can be a coding block, a CU, a PU, or a TU, etc.
  • the current picture block can be of any size and dimensions.
  • the picture block is divided/split into a number of sub-blocks and initial motion information is determined for at least two of the number of sub-blocks (e.g., a subset, or all of the sub-blocks) based on initial motion information for the current picture block.
  • Operation S602. The coder determines motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks.
  • the positions of the at least two sub-blocks may be pixel positions relative to a position of the current picture block.
  • Operation S603. The coder determines motion information of the current picture block based on the motion information of the at least two sub-blocks.
  • Operation S604. The coder (or the system) determines a prediction block of the current picture block based on the motion information of the current picture block.
  • the current picture block consists of the at least two sub-blocks.
  • the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks includes: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks using a decoder-side motion vector refinement (DMVR) method.
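  • as a purely illustrative sketch of operations S601 to S604 (not the claimed implementation), the flow might look as follows; the helper refineSubBlockMv is a hypothetical stand-in for the per-sub-block refinement (for example, DMVR) and simply returns the initial MV here, and the combination is a plain mean:

    #include <vector>

    struct MotionVector { int x; int y; };

    // Hypothetical stand-in for the per-sub-block refinement (S602); a real coder would
    // search around the initial MV based on the sub-block position. This stub returns it.
    MotionVector refineSubBlockMv(const MotionVector& initialMv, int subX, int subY) {
        (void)subX; (void)subY;
        return initialMv;
    }

    // S601: reuse the block's initial MV for every sub-block; S602: refine each sub-block MV;
    // S603: combine the refined MVs (here a simple mean) into one MV for the whole block.
    // The returned MV is then used in S604 to fetch the prediction block of the current block.
    MotionVector deriveBlockMv(const MotionVector& initialMv, int width, int height, int subSize) {
        std::vector<MotionVector> subMvs;
        for (int y = 0; y < height; y += subSize)
            for (int x = 0; x < width; x += subSize)
                subMvs.push_back(refineSubBlockMv(initialMv, x, y));
        long long sumX = 0, sumY = 0;
        for (const MotionVector& mv : subMvs) { sumX += mv.x; sumY += mv.y; }
        long long n = static_cast<long long>(subMvs.size());
        return MotionVector{ static_cast<int>(sumX / n), static_cast<int>(sumY / n) };
    }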
  • DMVR is an inter prediction method.
  • in VTM6, a bilateral-matching (BM) based decoder side motion vector refinement is applied: a refined MV is searched around the initial MVs in the reference picture list L0 and the reference picture list L1.
  • the BM method calculates the distortion between the two candidate blocks in the reference picture list L0 and list L1.
  • the SAD between the blocks based on each MV candidate around the initial MV is calculated.
  • the MV candidate with the lowest SAD becomes the refined MV and is used to generate the bi-predicted signal.
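  • for illustration only, the SAD cost used in the bilateral matching can be sketched as follows; 8-bit luma samples and the function name are assumptions:

    #include <cstdint>
    #include <cstdlib>

    // Sum of absolute differences between the L0 and L1 candidate blocks.
    int computeSad(const uint8_t* blk0, int stride0,
                   const uint8_t* blk1, int stride1,
                   int width, int height) {
        int sad = 0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                sad += std::abs(static_cast<int>(blk0[y * stride0 + x]) -
                                static_cast<int>(blk1[y * stride1 + x]));
        return sad;
    }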
  • the DMVR can be applied to the CUs or sub-blocks which are coded with one or more of the following modes and features:
  • One reference picture is in the past and another reference picture is in the future with respect to the current picture
  • CU or sub-block has more than 64 luma samples
  • Both CU or sub-block height and CU or sub-block width are larger than or equal to 8 luma samples
  • BCW weight index indicates equal weight
  • the refined MV derived by the DMVR process is used to generate the inter prediction samples and is also used in temporal motion vector prediction for future picture coding, while the original (initial) MV is used in the deblocking process and in spatial motion vector prediction for future CU or sub-block coding.
  • the search points surround the initial MV, and the MV offsets obey the MV difference mirroring rule.
  • in other words, for a candidate MV pair (MV0, MV1) , the refined pair (MV0', MV1') obeys the following two equations:
    MV0' = MV0 + MV_offset (3-31)
    MV1' = MV1 – MV_offset (3-32)
  • MV offset represents the refinement offset between the initial MV and the refined MV in one of the reference pictures.
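  • an illustrative sketch of how mirrored candidate pairs obeying equations (3-31) and (3-32) could be generated; the structure and function names are assumptions:

    struct MotionVector { int x; int y; };

    // Given an initial pair (mv0, mv1) and a refinement offset, build the mirrored
    // candidate pair: the list-0 MV moves by +offset and the list-1 MV by -offset.
    void buildMirroredCandidate(const MotionVector& mv0, const MotionVector& mv1,
                                const MotionVector& offset,
                                MotionVector& mv0Refined, MotionVector& mv1Refined) {
        mv0Refined = { mv0.x + offset.x, mv0.y + offset.y };   // MV0' = MV0 + MV_offset
        mv1Refined = { mv1.x - offset.x, mv1.y - offset.y };   // MV1' = MV1 - MV_offset
    }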
  • the refinement search range is two integer luma samples from the initial MV.
  • the searching includes the integer sample offset search stage and fractional sample refinement stage.
  • a 25-point full search is applied for the integer sample offset search.
  • the SAD of the initial MV pair is first calculated. If the SAD of the initial MV pair is smaller than a threshold, the integer sample stage of DMVR is terminated. Otherwise, the SADs of the remaining 24 points are calculated and checked in raster scanning order. The point with the smallest SAD is selected as the output of the integer sample offset searching stage. To reduce the penalty of the uncertainty of DMVR refinement, it is proposed to favor the original MV during the DMVR process: the SAD between the reference blocks referred to by the initial MV candidates is decreased by 1/4 of the SAD value.
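  • a simplified, illustrative sketch of this integer sample offset search; the early-termination threshold, the 1/4 bias on the center point, and the raster order follow the description above, while the cost callback and names are assumptions:

    #include <functional>

    struct Offset { int dx; int dy; };

    // 25-point full search over a +/-2 integer-sample window around the initial MV pair.
    // 'cost' returns the bilateral SAD for a mirrored offset (dx, dy); the SAD of the zero
    // offset is decreased by 1/4 to favor the original MV, and the integer stage terminates
    // early when that SAD is below the threshold.
    Offset integerOffsetSearch(const std::function<int(int, int)>& cost, int earlyThreshold) {
        int centerSad = cost(0, 0);
        centerSad -= centerSad / 4;                   // favor the initial MV
        if (centerSad < earlyThreshold)
            return {0, 0};                            // integer stage terminated early
        Offset best{0, 0};
        int bestSad = centerSad;
        for (int dy = -2; dy <= 2; ++dy) {            // raster scanning order
            for (int dx = -2; dx <= 2; ++dx) {
                if (dx == 0 && dy == 0) continue;     // center already evaluated
                int sad = cost(dx, dy);
                if (sad < bestSad) { bestSad = sad; best = {dx, dy}; }
            }
        }
        return best;
    }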
  • the integer sample search is followed by fractional sample refinement.
  • the fractional sample refinement is derived by using parametric error surface equation, instead of additional search with SAD comparison.
  • the fractional sample refinement is conditionally invoked based on the output of the integer sample search stage. When the integer sample search stage is terminated with center having the smallest SAD in either the first iteration or the second iteration search, the fractional sample refinement is further applied.
  • E (x, y) = A (x – x_min)^2 + B (y – y_min)^2 + C (3-33)
  • x_min and y_min are automatically constrained to be between –8 and 8 since all cost values are positive and the smallest value is E (0, 0) . This corresponds to a half-pel offset with 1/16th-pel MV accuracy (in VTM6) .
  • the computed fractional (x_min, y_min) are added to the integer distance refinement MV to get the sub-pixel accurate refinement delta MV.
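  • for illustration, one common way to solve equation (3-33) for (x_min, y_min) from the center cost and its four neighbouring costs (used, for example, in VTM; it is an assumption relative to this description) is x_min = (E(–1,0) – E(1,0)) / (2 (E(–1,0) + E(1,0) – 2 E(0,0))) and analogously for y_min; a sketch in 1/16-pel units follows, with names chosen here:

    // Derive the fractional offset (in 1/16-pel units) from the integer-search costs of the
    // center point and its left/right/top/bottom neighbours using the parametric error surface.
    // Because the center cost is the smallest, the result stays within [-8, 8] (half-pel).
    void fractionalRefinement(int costCenter, int costLeft, int costRight,
                              int costTop, int costBottom,
                              int& xMinSixteenth, int& yMinSixteenth) {
        int denomX = 2 * (costLeft + costRight - 2 * costCenter);
        int denomY = 2 * (costTop + costBottom - 2 * costCenter);
        // x_min = (E(-1,0) - E(1,0)) / denomX, scaled by 16 to 1/16-pel accuracy.
        xMinSixteenth = (denomX != 0) ? (16 * (costLeft - costRight)) / denomX : 0;
        yMinSixteenth = (denomY != 0) ? (16 * (costTop - costBottom)) / denomY : 0;
    }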
  • the resolution of the MVs is 1/16 luma samples.
  • the samples at the fractional positions are interpolated using an 8-tap interpolation filter.
  • the search points surround the initial fractional-pel MV with integer sample offsets; therefore, the samples at those fractional positions need to be interpolated for the DMVR search process.
  • the bi-linear interpolation filter is used to generate the fractional samples for the searching process in DMVR. Another important effect of using the bi-linear filter is that, with the 2-sample search range, the DMVR does not access more reference samples than the normal motion compensation process.
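  • a minimal sketch, for illustration only, of a bi-linear interpolation of one fractional luma sample such as could be used during the DMVR search; 8-bit samples, 1/16-pel fractional positions, and the function name are assumptions:

    #include <cstdint>

    // Bi-linearly interpolate one luma sample at fractional position (fracX, fracY),
    // where fracX and fracY are in 1/16-pel units (0..15), from the four surrounding
    // integer samples of a reference picture stored row by row with the given stride.
    uint8_t bilinearSample(const uint8_t* ref, int stride, int intX, int intY,
                           int fracX, int fracY) {
        int a = ref[intY * stride + intX];
        int b = ref[intY * stride + intX + 1];
        int c = ref[(intY + 1) * stride + intX];
        int d = ref[(intY + 1) * stride + intX + 1];
        int top    = a * (16 - fracX) + b * fracX;         // horizontal interpolation, row 0
        int bottom = c * (16 - fracX) + d * fracX;         // horizontal interpolation, row 1
        int value  = top * (16 - fracY) + bottom * fracY;
        return static_cast<uint8_t>((value + 128) >> 8);   // divide by 16*16 with rounding
    }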
  • the normal 8-tap interpolation filter is applied to generate the final prediction. In order not to access more reference samples than in the normal MC process, the samples that are not needed for the interpolation process based on the original MV but are needed for the interpolation process based on the refined MV are padded from the available samples.
  • when the width and/or height of a CU or sub-block is larger than 16 luma samples, the CU or sub-block will be further split into sub-blocks with width and/or height equal to 16 luma samples.
  • the maximum unit size for the DMVR searching process is limited to 16x16.
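  • an illustrative sketch of such a sub-block partitioning following the 16x16 maximum unit size mentioned above; the structure and function names are assumptions:

    #include <algorithm>
    #include <vector>

    struct SubBlock { int x; int y; int width; int height; };

    // Split a CU into processing units whose width and height do not exceed 16 luma samples.
    std::vector<SubBlock> splitForDmvr(int cuWidth, int cuHeight) {
        std::vector<SubBlock> units;
        int unitW = std::min(cuWidth, 16);
        int unitH = std::min(cuHeight, 16);
        for (int y = 0; y < cuHeight; y += unitH)
            for (int x = 0; x < cuWidth; x += unitW)
                units.push_back({x, y, unitW, unitH});
        return units;
    }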
  • the obtaining initial motion information of at least two sub-blocks of a current picture block includes: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
  • the obtaining initial motion information of at least two sub-blocks of a current picture block is performed when a size of the current picture block is greater than a preset size; and the method further includes: when the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
  • the preset size is 32x32.
  • the determining motion information of the current picture block based on the motion information of the at least two sub-blocks includes: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block. In one embodiment, a clip operation or a rounding operation is performed in the process of determining the mean. In one embodiment, if the size of the current picture block is greater than the preset size, the method further splits the current picture block into at least two sub-blocks, where the size of each sub-block is smaller than or equal to the preset size.
  • Forward prediction means that a reference picture is selected for a current coding block from a forward reference-picture set to obtain a reference block.
  • Backward prediction means that a reference picture is selected for a current coding block from a backward reference-picture set to obtain a reference block.
  • Bidirectional prediction means that a reference picture is selected from each of a forward reference-picture set and a backward reference-picture set to obtain a reference block.
  • in the bidirectional prediction method, there are two reference blocks for a current coding block. Each reference block needs to be indicated by a motion vector and a reference frame index, and a predicted value of a pixel value of a pixel in the current block is determined based on pixel values of pixels in the two reference blocks.
  • two prediction blocks formed by using an MV in a list 0 and an MV in a list 1 respectively are combined to generate a single prediction signal.
  • two bidirectional prediction motion vectors are further refined in a bilateral-template matching process. Bilateral-template matching is performed at the decoder to carry out a distortion-based search between a bilateral template and reconstructed samples in a reference picture, so as to obtain a refined MV with no need to send additional motion information.
  • a bilateral template is generated as a weighted combination (namely, a mean) of the two prediction blocks obtained from the initial MV 0 in the list 0 and the initial MV 1 in the list 1, respectively.
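  • an illustrative sketch of generating the bilateral template as the per-sample mean of the two prediction blocks; 8-bit samples and the function name are assumptions:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Bilateral template: per-sample average of the list-0 and list-1 prediction blocks
    // obtained with the initial MV 0 and MV 1 (both blocks assumed to have the same size).
    std::vector<uint8_t> buildBilateralTemplate(const std::vector<uint8_t>& pred0,
                                                const std::vector<uint8_t>& pred1) {
        std::vector<uint8_t> tmpl(pred0.size());
        for (std::size_t i = 0; i < pred0.size(); ++i)
            tmpl[i] = static_cast<uint8_t>((pred0[i] + pred1[i] + 1) >> 1);  // rounded mean
        return tmpl;
    }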
  • the template matching operation includes calculating a cost measurement between the generated template and a sample area (surrounding an initial prediction block) in a reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV in that list and replaces the original MV.
  • each list is searched for nine candidate MVs.
  • the nine candidate MVs include an original MV and eight surrounding MVs.
  • each of the surrounding MVs is offset from the original MV by one luminance sample in the horizontal direction, the vertical direction, or both.
  • two new MVs, namely MV 0' and MV 1', are used to generate a final bidirectional prediction result.
  • the sum of absolute differences (SAD) is used for cost measurement.
  • DMVR can be applied to a merge mode of bidirectional prediction.
  • one MV comes from a reference picture in the past, and the other MV comes from a reference picture in the future, with no need to send additional syntax elements.
  • a DMVR process may be: obtaining initial motion information of a current picture block; determining positions of N forward reference blocks and positions of N backward reference blocks based on the initial motion information and a position of the current picture block, where the N forward reference blocks are in a forward reference picture, the N backward reference blocks are in a backward reference picture, and N is an integer greater than 1; based on a matching cost criterion, determining positions of a pair of reference blocks from positions of M pairs of reference blocks as a position of a target forward reference block of the current picture block and a position of a target backward reference block of the current picture block, where positions of each pair of reference blocks include a position of one forward reference block and a position of one backward reference block, and for the positions of each pair of reference blocks, a mirror relationship is formed between a first position offset and a second position offset, where the first position offset indicates a position offset of the position of the forward reference block relative to a position of an initial forward reference block, and the second position offset indicates a position offset of the position of the backward reference block relative to a position of an initial backward reference block; and obtaining a predicted value of a pixel value of the current picture block based on a pixel value of the target forward reference block and a pixel value of the target backward reference block.
  • the positions of the N forward reference blocks in the forward reference picture and the positions of the N backward reference blocks in the backward reference picture form positions of N pairs of reference blocks.
  • the mirror relationship is formed between the first position offset of the position of the forward reference block relative to the position of the initial forward reference block, and the second position offset of the position of the backward reference block relative to the position of the initial backward reference block.
  • positions of a pair of reference blocks are determined from the positions of the N pairs of reference blocks as a position of a target forward reference block (that is, an optimal forward reference block/forward prediction block) of the current picture block and a position of a target backward reference block (that is, an optimal backward reference block/backward prediction block) to obtain the predicted value of the pixel value of the current picture block based on the pixel value of the target forward reference block and the pixel value of the target backward reference block.
  • this method avoids a calculation process of calculating a template matching block in advance and avoids a forward search and matching process and a backward search and matching process that are performed by using a template matching block separately, thereby simplifying an image prediction process.
  • in a hardware video codec implementation, the cache memory is a key resource.
  • a cache size of a current video codec chip is 2 x 32 x 32.
  • a current DMVR may be applied to a coding block with a maximum size of 128 x 128. That is, in a comparison between two blocks, a 2 x 128 x 128 cache size is required. Therefore, hardware implementation costs are excessively high.
  • the coding block is divided into sub-blocks, and DMVR processing is performed on the sub-blocks separately, to obtain MV information of each sub-block.
  • a mean MV of MV information of all the sub-blocks is obtained and is used as motion vector information of the coding block (an operation such as a clip operation or a rounding operation may be performed in a process of obtaining the mean) .
  • mean processing resolves problems such as artifacts that may occur during de-blocking due to different MVs of the sub-blocks, as well as inconsistency with H. 266.
  • an embodiment further provides an inter prediction apparatus.
  • the inter prediction apparatus includes a motion information determining unit and a prediction block determining unit.
  • the motion information determining unit and the prediction block determining unit may be applied to an inter prediction process at an encoder side or a decoder side.
  • the units can be applied to the inter prediction unit 244 in the prediction processing unit 260 of the encoder 20.
  • the units can be applied to the inter prediction unit 344 in the prediction processing unit 360 of the decoder 30.
  • the motion information determining unit and the prediction block determining unit can be implemented by hardware, software, or any combination thereof.
  • the motion information determining unit is configured to: obtain initial motion information of at least two sub-blocks of a current picture block; determine motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks; and determine motion information of the current picture block based on the motion information of the at least two sub-blocks; and the prediction block determining unit is configured to determine a prediction block of the current picture block based on the motion information of the current picture block.
  • FIG. 7 is a block diagram showing a content supply system 3100 for realizing content distribution service.
  • This content supply system 3100 includes capture device 3102, terminal device 3106, and optionally includes display 3126.
  • the capture device 3102 communicates with the terminal device 3106 over communication link 3104.
  • the communication link may include the communication channel 13 described above.
  • the communication link 3104 includes but is not limited to WIFI, Ethernet, Cable, wireless (3G/4G/5G) , USB, or any kind of combination thereof, or the like.
  • the capture device 3102 generates data, and may encode the data by the encoding method as shown in the above embodiments. Alternatively, the capture device 3102 may distribute the data to a streaming server (not shown in the Figures) , and the server encodes the data and transmits the encoded data to the terminal device 3106.
  • the capture device 3102 includes but is not limited to a camera, a smart phone or Pad, a computer or laptop, a video conference system, a PDA, a vehicle mounted device, or a combination of any of them, or the like.
  • the capture device 3102 may include the source device 12 as described above. When the data includes video, the video encoder 20 included in the capture device 3102 may actually perform video encoding processing.
  • an audio encoder included in the capture device 3102 may actually perform audio encoding processing.
  • the capture device 3102 distributes the encoded video and audio data by multiplexing them together.
  • the encoded audio data and the encoded video data are not multiplexed.
  • Capture device 3102 distributes the encoded audio data and the encoded video data to the terminal device 3106 separately.
  • the terminal device 3106 receives and reproduces the encoded data.
  • the terminal device 3106 could be a device with data receiving and recovering capability, such as smart phone or Pad 3108, computer or laptop 3110, network video recorder (NVR) /digital video recorder (DVR) 3112, TV 3114, set top box (STB) 3116, video conference system 3118, video surveillance system 3120, personal digital assistant (PDA) 3122, vehicle mounted device 3124, or a combination of any of them, or the like capable of decoding the above-mentioned encoded data.
  • the terminal device 3106 may include the destination device 14 as described above.
  • when the encoded data includes video, the video decoder 30 included in the terminal device is prioritized to perform video decoding.
  • when the encoded data includes audio, an audio decoder included in the terminal device is prioritized to perform audio decoding processing.
  • for a terminal device equipped with its own display, the terminal device can feed the decoded data to its display.
  • for a terminal device not equipped with a display, for example, the STB 3116, the video conference system 3118, or the video surveillance system 3120, an external display 3126 is connected to receive and show the decoded data.
  • when each device in this system performs encoding or decoding, the picture encoding device or the picture decoding device described in the foregoing embodiments can be used.
  • FIG. 8 is a diagram showing a structure of an example of the terminal device 3106.
  • the protocol processing unit 3202 analyzes the transmission protocol of the stream.
  • the protocol includes but is not limited to Real Time Streaming Protocol (RTSP) , Hyper Text Transfer Protocol (HTTP) , HTTP Live streaming protocol (HLS) , MPEG-DASH, Real-time Transport protocol (RTP) , Real Time Messaging Protocol (RTMP) , or any kind of combination thereof, or the like.
  • after the protocol processing, a stream file is generated and outputted to a demultiplexing unit 3204.
  • the demultiplexing unit 3204 can separate the multiplexed data into the encoded audio data and the encoded video data. As described above, for some practical scenarios, for example in the video conference system, the encoded audio data and the encoded video data are not multiplexed. In this situation, the encoded data is transmitted to the video decoder 3206 and the audio decoder 3208 without passing through the demultiplexing unit 3204.
  • through the demultiplexing processing, a video elementary stream (ES) , an audio ES, and optionally a subtitle are generated.
  • the video decoder 3206, which includes the video decoder 30 as explained in the above-mentioned embodiments, decodes the video ES by the decoding method as shown in the above-mentioned embodiments to generate video frames, and feeds this data to the synchronous unit 3212.
  • the audio decoder 3208 decodes the audio ES to generate audio frames, and feeds this data to the synchronous unit 3212.
  • the video frames may be stored in a buffer (not shown in FIG. 8) before they are fed to the synchronous unit 3212.
  • similarly, the audio frames may be stored in a buffer (not shown in FIG. 8) before they are fed to the synchronous unit 3212.
  • the synchronous unit 3212 synchronizes the video frame and the audio frame, and supplies the video/audio to a video/audio display 3214.
  • the synchronous unit 3212 synchronizes the presentation of the video and audio information.
  • information may be coded in the syntax using time stamps concerning the presentation of coded audio and visual data and time stamps concerning the delivery of the data stream itself.
  • the subtitle decoder 3210 decodes the subtitle, and synchronizes it with the video frame and the audio frame, and supplies the video/audio/subtitle to a video/audio/subtitle display 3216.
  • the present invention is not limited to the above-mentioned system, and either the picture encoding device or the picture decoding device in the above-mentioned embodiments can be incorporated into other system, for example, a car system.
  • the computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another, for example, according to a communications protocol.
  • the computer-readable medium generally may correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium such as a signal or a carrier wave.
  • the data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure.
  • a computer program product may include the computer-readable medium.
  • such computer-readable storage media may include a RAM, a ROM, an EEPROM, a CD-ROM or another compact disc storage, a magnetic disk storage or another magnetic storage device, a flash memory, or any other medium that can be used to store desired program code in a form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly termed a computer-readable medium.
  • the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in a definition of medium.
  • the computer-readable storage medium and data storage medium do not include connections, carrier waves, signals, or other transitory media, but are non-transitory tangible storage media.
  • Disks and discs include a compact disc (CD) , a laser disc, an optical disc, a digital versatile disc (DVD) , a floppy disk, and a Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the foregoing should also be included within the scope of the computer-readable medium.
  • the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application-specific integrated circuits (ASICs) , field-programmable gate arrays (FPGAs) , or other equivalent integrated or discrete logic circuits.
  • the term "processor" may refer to any of the foregoing structures or any other structures suitable for implementation of the techniques described herein.
  • the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. Further, the techniques may be fully implemented in one or more circuits or logic elements.
  • the techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) , or a set of ICs (for example, a chip set) .
  • Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require implementation by different hardware units.
  • various units may be combined, in combination with suitable software and/or firmware, into a codec hardware unit, or be provided by interoperative hardware units (including one or more processors described above) .

Abstract

Inter prediction method and apparatus. The method includes: obtaining initial motion information of at least two sub-blocks of a current picture block (S601); determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks (S602); determining motion information of the current picture block based on the motion information of the at least two sub-blocks (S603); and determining a prediction block of the current picture block based on the motion information of the current picture block (S604).

Description

INTER PREDICTION METHOD AND APPARATUS (AN INTER PREDICTION METHOD AND RELATED APPARATUS)
TECHNICAL FIELD
The present invention relates to the field of video encoding and decoding, and in particular, to an inter prediction method and apparatus for a video image, and a corresponding encoder and decoder.
BACKGROUND
Digital video capabilities can be incorporated into a wide variety of apparatuses, including digital televisions, digital live broadcast systems, wireless broadcast systems, personal digital assistants (PDA) , laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording apparatuses, digital media players, video game apparatuses, video game consoles, cellular or satellite radio phones (such as "smartphones" ) , video conferencing apparatuses, video streaming apparatuses, and the like. Digital video apparatuses implement video compression technologies, for example, video compression technologies described in standards defined by MPEG-2, MPEG-4, ITU-T H. 263, and ITU-T H. 264/MPEG-4 Part 10 advanced video coding (AVC) , the video coding standard H. 265/high efficiency video coding (HEVC) standard, and extensions of such standards. A video apparatus can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression technologies.
Video coding (video encoding and decoding) is used in a wide range of digital video applications, for example broadcast digital TV, video transmission over internet and mobile networks, real-time conversational applications such as video chat, video conferencing, DVD and Blu-ray discs, video content acquisition and editing systems, and camcorders of security applications.
In the video compression technologies, spatial (intra-image) prediction and/or temporal (inter-image) prediction is performed to reduce or remove inherent redundancy in video sequences. For block-based video coding, a video slice (that is, a video frame or a portion of a video frame) may be partitioned into picture blocks, and a picture block may also be referred to as a tree block, a coding unit (CU) , and/or a coding node. A picture block in a to-be-intra-coded (I) slice of an image is coded through spatial prediction of reference samples in neighboring blocks in the same image. For a picture block in a to-be-inter-coded (P or B) slice of an image, spatial prediction of reference samples in neighboring blocks in the same image or temporal prediction of reference samples in other reference pictures may be used. The image may be referred to as a frame, and the reference picture may be referred to as a reference frame.
The amount of video data needed to depict even a relatively short video can be substantial, which may result in difficulties when the data is to be streamed or otherwise communicated across a communications network with limited bandwidth capacity. Thus, video data is generally compressed before being communicated across modern day telecommunications networks. The size of a video could also be an issue when the video is stored on a storage device because memory resources may be limited. Video compression devices often use software and/or hardware at the source to code the video data prior to transmission or storage, thereby decreasing the quantity of data needed to represent digital video images. The compressed data is then received at the destination by a video decompression device that decodes the video data. With limited network resources and ever increasing demands of higher video quality, improved compression and decompression techniques that improve compression ratio with little to no sacrifice in picture quality are desirable.
SUMMARY
Embodiments of this application provide an inter prediction method and apparatus for a video image, and a corresponding encoder and decoder according to the independent claims, to improve prediction accuracy of motion information of a picture block to some  extent, thereby improving encoding and decoding performance.
The foregoing and other objects are achieved by the subject matter of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
Particular embodiments are outlined in the attached independent claims, with other embodiments in the dependent claims.
According to a first aspect, an embodiment of this application provides an inter prediction method, including:
obtaining initial motion information of at least two sub-blocks of a current picture block;
determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks;
determining motion information of the current picture block based on the motion information of the at least two sub-blocks; and
determining a prediction block of the current picture block based on the motion information of the current picture block.
Wherein the current picture block may comprise a current coding block.
Wherein the initial motion information is obtained from one or two reference picture lists of the current picture block.
Wherein the initial motion information of the at least two sub-blocks is the same.
Wherein the initial motion information may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
Wherein the initial motion information of the at least two sub-blocks may comprise initial motion information of each sub-block, wherein the initial motion information of each sub-block comprises one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
Wherein the motion information of the at least two sub-blocks may comprise motion information of each sub-block, wherein the motion information of each sub-block may comprise one or two motion vectors determined based on the initial motion information of the at least two sub-blocks.
Wherein the motion information may further comprise one or two reference picture indices of the one or two reference picture lists related to the one or more motion vectors, or one or two MVDs (motion vector differences) related to the one or more motion vectors.
In a possible implementation form of the method according to the first aspect as such, the current picture block includes the at least two sub-blocks.
In a possible implementation form of the method according to the first aspect as such, the current picture block consists of the at least two sub-blocks.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks includes: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks by using a decoder-side motion vector refinement (DMVR) method.
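Purely as an illustration of a decoder-side refinement of the kind referred to above, the following Python sketch shows one widely used form under stated assumptions: integer-pel bilateral matching, in which mirrored offsets around the initial motion vector pair of a sub-block are tested and the offset giving the smallest sum of absolute differences (SAD) between the two reference patches is kept. The search range, the SAD cost, and the helper names fetch and refine_subblock_mv are illustrative assumptions and are not a statement of the claimed refinement.

```python
import numpy as np

def fetch(ref, y, x, h, w):
    """Read an h-by-w patch from a reference picture, clamping coordinates to the picture borders."""
    ys = np.clip(np.arange(y, y + h), 0, ref.shape[0] - 1)
    xs = np.clip(np.arange(x, x + w), 0, ref.shape[1] - 1)
    return ref[np.ix_(ys, xs)]

def refine_subblock_mv(ref0, ref1, pos, size, mv0, mv1, search=2):
    """Bilateral-matching refinement of one sub-block: test mirrored integer
    offsets around (mv0, mv1) and keep the offset with the lowest SAD."""
    (y, x), (h, w) = pos, size
    best_offset, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            p0 = fetch(ref0, y + mv0[0] + dy, x + mv0[1] + dx, h, w)
            p1 = fetch(ref1, y + mv1[0] - dy, x + mv1[1] - dx, h, w)
            cost = int(np.abs(p0.astype(np.int32) - p1.astype(np.int32)).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_offset = cost, (dy, dx)
    dy, dx = best_offset
    return (mv0[0] + dy, mv0[1] + dx), (mv1[0] - dy, mv1[1] - dx)

# Illustrative use on random reference pictures.
rng = np.random.default_rng(0)
ref0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ref1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(refine_subblock_mv(ref0, ref1, pos=(16, 16), size=(8, 8), mv0=(1, -2), mv1=(-1, 2)))
```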
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the obtaining initial motion information of at least two sub-blocks of a current picture block includes: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
Wherein the initial motion information of the current picture block may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the obtaining initial motion information of at least two sub-blocks of a current picture block is performed when a size of the current picture block is greater than a preset size; and
the method further includes: when the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture  block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks is performed when a size of the current picture block is greater than a preset size.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the obtaining initial motion information of at least two sub-blocks of a current picture block is performed on condition that a size of the current picture block is greater than a preset size; and
the method further includes: on condition that the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks is performed on condition that a size of the current picture block is greater than a preset size.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the preset size is 32 times 32.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, the determining motion  information of the current picture block based on the motion information of the at least two sub-blocks includes: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, a clip operation or a rounding operation is performed in a process of determining/calculating the mean.
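As a minimal sketch of this mean computation, assuming the motion information of each sub-block is reduced to an integer motion vector and assuming an illustrative 16-bit representable range for the clip; the rounding rule shown (adding half of the divisor before integer division) is only one possible convention:

```python
def mean_motion_vector(sub_block_mvs, mv_min=-32768, mv_max=32767):
    """Average the sub-block motion vectors component-wise, with a simple
    rounding rule and a clip to an assumed representable range."""
    n = len(sub_block_mvs)
    # Rounding: add half of the divisor before the integer division.
    mean = tuple((sum(c) + n // 2) // n for c in zip(*sub_block_mvs))
    # Clip each component to the assumed representable range.
    return tuple(max(mv_min, min(mv_max, c)) for c in mean)

# Example: four sub-block vectors averaged into one block-level vector.
print(mean_motion_vector([(4, -3), (5, -2), (4, -4), (6, -3)]))  # -> (5, -3)
```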
In a possible implementation form of the method according to any preceding implementation of the first aspect or the first aspect as such, on condition that a size of the current picture block is greater than a preset size, the method further comprises: splitting the current picture block into the at least two sub-blocks, wherein the size of each sub-block is smaller than or equal to the preset size.
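A minimal sketch of this splitting step, assuming the preset size is 32 times 32 samples and that the split produces a regular grid of sub-blocks (the grid layout itself is an illustrative assumption):

```python
def split_into_sub_blocks(block_w, block_h, max_w=32, max_h=32):
    """Split a block of block_w x block_h samples into sub-blocks whose sizes
    do not exceed the preset size (assumed here to be 32 x 32)."""
    sub_blocks = []
    for y in range(0, block_h, max_h):
        for x in range(0, block_w, max_w):
            w = min(max_w, block_w - x)
            h = min(max_h, block_h - y)
            sub_blocks.append((x, y, w, h))
    return sub_blocks

# Example: a 64 x 64 block is split into four 32 x 32 sub-blocks.
print(split_into_sub_blocks(64, 64))
```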
According to a second aspect, an embodiment of this application provides an inter prediction apparatus, including several functional units configured to implement any one of the methods in the first aspect. For example, the inter prediction apparatus may include: a motion information determining unit, configured to: obtain initial motion information of at least two sub-blocks of a current picture block; determine motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks; and determine motion information of the current picture block based on the motion information of the at least two sub-blocks; and a prediction block determining unit, configured to determine a prediction block of the current picture block based on the motion information of the current picture block.
In a possible implementation form of the apparatus according to the second aspect as such, the current picture block includes the at least two sub-blocks.
In a possible implementation form of the apparatus according to the second aspect as such, the current picture block consists of the at least two sub-blocks.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the motion information determining unit is configured to: determine the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks by using a decoder-side motion vector refinement (DMVR) method.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the motion information determining unit is configured to: obtain initial motion information of the current picture block, and use the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
Wherein the initial motion information of the current picture block may comprise one or two motion vectors respectively from the one or two reference picture lists of the current picture block.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the motion information determining unit is configured to obtain the initial motion information of the at least two sub-blocks of the current picture block when a size of the current picture block is greater than a preset size; and
the motion information determining unit is further configured to: when the size of the current picture block is not greater than the preset size, obtain the initial motion information of the current picture block, and determine the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and the prediction block determining unit is further configured to determine the prediction block of the current picture block based on the motion information of the current picture block.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the motion information determining unit is configured to determine the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks when a size of the current picture block is greater than a preset size.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the preset size is 32 times  32.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, the motion information determining unit is configured to determine a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, a clip operation or a rounding operation is performed in a process of determining/calculating the mean.
In a possible implementation form of the apparatus according to any preceding implementation of the second aspect or the second aspect as such, on condition that a size of the current picture block is greater than a preset size, the apparatus further comprises a splitting unit, configured to split the current picture block into the at least two sub-blocks, wherein the size of each sub-block is smaller than or equal to the preset size.
In different application scenarios, the inter prediction apparatus is, for example, applied to a video encoding apparatus (a video encoder) or a video decoding apparatus (a video decoder) .
The method according to the first aspect of the application can be performed by the apparatus according to the second aspect of the application. Further features and implementation forms of the method according to the first aspect of the application correspond to the features and implementation forms of the apparatus according to the second aspect of the application.
According to a third aspect, an embodiment of this application provides an image prediction apparatus, where the apparatus includes a processor and a memory coupled to the processor, and the processor is configured to perform the method in any one of the implementations of the first aspect.
According to a fourth aspect, an embodiment of this application provides a video decoding device, including a non-volatile storage medium and a processor. The non-volatile storage medium stores an executable program, the processor and the non-volatile storage medium are coupled to each other, and the processor executes the executable program to implement the method in any one of the first aspect or the implementations of the first aspect.
According to a fifth aspect, an embodiment of this application provides a non-transitory machine-readable storage medium (or computer-readable storage medium) , where the computer-readable storage medium stores instructions, and when the instructions are executed by one or more processors of a computer, the computer is enabled to perform the method in any one of the first aspect or the implementations of the first aspect.
According to a sixth aspect, an embodiment of this application provides a computer program product including an instruction, and when the computer program product runs on a computer, the computer is enabled to perform the method in any one of the first aspect or the implementations of the first aspect.
According to a seventh aspect, an embodiment of this application provides a computer program comprising program code for performing the method according to the first or any possible embodiment of the first aspect when executed on a computer.
It should be understood that beneficial effects achieved by the aspects and corresponding implementable design manners are similar, and are not repeated.
Details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings, and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe technical solutions in embodiments of the present invention or in the background more clearly, the following describes accompanying drawings required for describing the embodiments of the present invention or the background.
FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 according to an embodiment;
FIG. 1B is a block diagram of an example of a video coding system 40 according to an embodiment;
FIG. 2 is a block diagram of an example structure of an encoder 20 according to an embodiment;
FIG. 3 is a block diagram of an example structure of a decoder 30 according to an  embodiment;
FIG. 4 is a block diagram of an example of a video coding device 400 according to an embodiment;
FIG. 5 is a block diagram of another example of an encoding apparatus or a decoding apparatus according to an embodiment;
FIG. 6 is a schematic flowchart of an inter prediction method according to an embodiment;
FIG. 7 is a block diagram showing an example structure of a content supply system 3100 which realizes a content delivery service; and
FIG. 8 is a block diagram showing a structure of an example of a terminal device.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The following describes embodiments of the present invention with reference to accompanying drawings in the embodiments of the present invention. In the following descriptions, reference is made to the accompanying drawings that form a part of this disclosure and that show, by way of illustration, specific aspects of the embodiments of the present invention or specific aspects in which the embodiments of the present invention may be used. It should be understood that the embodiments of the present invention may be used in other aspects, and may include structural or logical changes not depicted in the accompanying drawings. Therefore, the following detailed descriptions shall not be construed as limitation, and the scope of the present invention is defined by the appended claims. For example, it should be understood that disclosed content with reference to a described method may also hold true for a corresponding device or system configured to perform the method, and vice versa. For example, if one or more specific method operations are described, a corresponding device may include one or more units such as functional units for performing the described one or more method operations (for example, one unit performing the one or more operations; or a plurality of units, each of which performs one or more of the plurality of operations) , even if such one or more units are not explicitly described or illustrated in the accompanying drawings. Correspondingly, for example, if a specific apparatus is described  based on one or more units such as functional units, a corresponding method may include one or more operations for performing a functionality of the one or more units (for example, one operation performing the functionality of the one or more units; or a plurality of operations, each of which performs a functionality of one or more of the plurality of units) , even if such one or more operations are not explicitly described or illustrated in the accompanying drawings. Further, it should be understood that features of the various example embodiments and/or aspects described in this specification may be combined with each other, unless specifically noted otherwise.
The technical solutions in the embodiments of the present invention may not only be applied to existing video coding standards (such as the H. 264 standard and the HEVC standard) , but also be applied to future video coding standards (such as the H. 266 standard) . Terms used in the implementation part of the present invention are merely intended to explain specific embodiments of the present invention, but are not intended to limit the present invention. In the following, some concepts that may be used in the embodiments of the present invention are first described briefly.
Video coding typically refers to processing of a sequence of pictures that form a video or a video sequence. The term "frame" or "image" may be used as a synonym of the term "picture" in the field of video coding. Video coding used in this application (or this disclosure) indicates either video encoding or video decoding. Video encoding is performed at a source side, and typically includes: processing (for example, through compression) original video pictures to reduce an amount of data required for representing the video pictures (for more efficient storage and/or transmission) . Video decoding is performed at a destination side, and typically includes: inverse processing relative to an encoder, to reconstruct the video pictures. Embodiments referring to "coding" of video pictures (or pictures in general, as will be explained below) shall be understood as relating to either "encoding" or "decoding" for a video sequence. A combination of an encoding part and a decoding part is also referred to as a codec (Coding and Decoding) .
A video sequence includes a series of images (picture) , the image is further partitioned into slices (slice) , and the slice is further partitioned into blocks (block) . In video coding, coding processing is performed per block. In some new video coding standards, a  concept of block is further extended. For example, in the H. 264 standard, there is a macroblock (MB) , and the macroblock may be further partitioned into a plurality of prediction blocks (partition) that can be used for predictive coding. In the high efficiency video coding (HEVC) standard, basic concepts such as a coding unit (CU) , a prediction unit (PU) , and a transform unit (TU) are used, so that a plurality of types of block units are obtained through functional division, and the units are described with reference to a new tree-based structure. For example, a CU may be partitioned into smaller CUs based on a quadtree, and the smaller CU may continue to be partitioned, thereby forming a quadtree structure, and the CU is a basic unit for partitioning and coding a coded image. The PU and the TU also have a similar tree structure, and the PU may correspond to a prediction block and is a basic unit of predictive coding. The CU is further partitioned into a plurality of PUs according to a partitioning mode. The TU may correspond to a transform block, and is a basic unit for transforming a prediction residual. Essentially, all of the CU, the PU, and the TU are concepts of blocks (or picture blocks) .
For example, in HEVC, a coding tree unit (CTU) is split into a plurality of CUs by using a quadtree structure denoted as a coding tree. A decision on whether to code a picture area by using inter-picture (temporal) or intra-picture (spatial) prediction is made at a CU level. Each CU may be further split into one, two, or four PUs based on a PU splitting type. Inside one PU, a same prediction process is applied, and related information is transmitted to a decoder on a PU basis. After obtaining a residual block by applying the prediction process based on the PU splitting type, the CU may be partitioned into TUs based on another quadtree structure similar to the coding tree used for the CU. In the latest development of the video compression technologies, a quadtree and binary tree (QTBT) partitioning frame is used to partition a coding block. In a QTBT block structure, a CU may have a square or rectangular shape.
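For illustration of the quadtree idea only, the following sketch splits a square area recursively; in a real encoder the split decision is made by rate-distortion optimization, for which the split_decision callback used here is merely a hypothetical stand-in.

```python
def quadtree_split(x, y, size, min_size, split_decision):
    """Recursively split a square area into four quadrants whenever
    split_decision(x, y, size) is True, returning the leaf coding units."""
    if size > min_size and split_decision(x, y, size):
        half = size // 2
        leaves = []
        for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
            leaves += quadtree_split(x + ox, y + oy, half, min_size, split_decision)
        return leaves
    return [(x, y, size, size)]

# Example: split a 64x64 CTU whenever the area is larger than 32x32,
# yielding four 32x32 leaf CUs.
print(quadtree_split(0, 0, 64, 8, lambda x, y, s: s > 32))
```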
In this specification, for ease of description and understanding, a picture block to be coded in a current coded image may be referred to as a current block. For example, in encoding, the current block is a block currently being encoded, and in decoding, the current block is a block currently being decoded. A decoded picture block, in a reference picture, used for predicting the current block is referred to as a reference block. In other words, the reference block is a block that provides a reference signal for the current block, where the reference signal represents a pixel value within the picture block. A block that is in the reference picture and that provides a prediction signal for the current block may be referred to as a prediction block, where the prediction signal represents a pixel value, a sample value, or a sampling signal within the prediction block. For example, after a plurality of reference blocks are traversed, an optimal reference block is found; the optimal reference block provides a prediction for the current block, and this block is referred to as a prediction block.
In a case of lossless video coding, the original video pictures can be reconstructed, that is, reconstructed video pictures have same quality as the original video pictures (assuming no transmission loss or other data loss occurs during storage or transmission) . In a case of lossy video coding, further compression, for example, through quantization, is performed, to reduce the amount of data representing the video pictures, which cannot be completely reconstructed at a decoder, that is, quality of reconstructed video pictures is lower or worse than quality of the original video pictures.
Several video coding standards since H. 261 belong to the group of "lossy hybrid video codecs" (that is, spatial and temporal prediction in a sample domain is combined with 2D transform coding for applying quantization in a transform domain) . Each picture of a video sequence is typically partitioned into a set of non-overlapping blocks, and coding is typically performed at a block level. In other words, at an encoder, a video is typically processed, that is, encoded, at a block (video block) level, for example, by using spatial (intra picture) prediction and temporal (inter picture) prediction to generate a prediction block, subtracting the prediction block from a current block (block currently processed/to be processed) to obtain a residual block, transforming the residual block and quantizing the residual block in the transform domain to reduce an amount of data that is to be transmitted (compressed) , whereas at a decoder, inverse processing relative to the encoder is partially applied to the encoded or compressed block to reconstruct the current block for representation. Furthermore, the encoder duplicates a decoder processing loop so that both generate identical predictions (for example, intra and inter predictions) and/or reconstructions for processing, that is, coding, subsequent blocks.
The following describes a system architecture applied in the embodiments of the  present invention. FIG. 1A is a schematic block diagram of an example of a video encoding and decoding system 10 according to an embodiment. As shown in FIG. 1A, the video encoding and decoding system 10 may include a source device 12 and a destination device 14. The source device 12 generates encoded video data, and therefore the source device 12 may be referred to as a video encoding apparatus. The destination device 14 may decode the encoded video data generated by the source device 12, and therefore the destination device 14 may be referred to as a video decoding apparatus. Various implementation solutions of the source device 12, the destination device 14, or both the source device 12 and the destination device 14 may include one or more processors and a memory coupled to the one or more processors. The memory may include but is not limited to a RAM, a ROM, an EEPROM, a flash memory, or any other medium that can be used to store desired program code in a form of an instruction or a data structure accessible by a computer, as described in this specification. The source device 12 and the destination device 14 may include various apparatuses, including a desktop computer, a mobile computing apparatus, a notebook (for example, a laptop) computer, a tablet computer, a set-top box, a telephone handset such as a so-called "smart" phone, a television, a camera, a display apparatus, a digital media player, a video game console, an in-vehicle computer, a wireless communications device, or the like.
Although FIG. 1A depicts the source device 12 and the destination device 14 as separate devices, a device embodiment may alternatively include both the source device 12 and the destination device 14 or functionalities of both the source device 12 and the destination device 14, that is, the source device 12 or a corresponding functionality and the destination device 14 or a corresponding functionality. In such embodiments, the source device 12 or the corresponding functionality and the destination device 14 or the corresponding functionality may be implemented by using same hardware and/or software, separate hardware and/or software, or any combination thereof.
A communication connection may be performed between the source device 12 and the destination device 14 through a link 13, and the destination device 14 may receive encoded video data from the source device 12 through the link 13. The link 13 may include one or more media or apparatuses capable of moving the encoded video data from the source device 12 to the destination device 14. In one example, the link 13 may include one or more  communication media that enable the source device 12 to transmit the encoded video data directly to the destination device 14 in real time. In this example, the source device 12 may modulate the encoded video data according to a communications standard (for example, a wireless communications protocol) , and may transmit modulated video data to the destination device 14. The one or more communication media may include a wireless communication medium and/or a wired communication medium, for example, a radio frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form a part of a packet-based network, and the packet-based network is, for example, a local area network, a wide area network, or a global network (for example, the Internet) . The one or more communication media may include a router, a switch, a base station, or another device that facilitates communication from the source device 12 to the destination device 14.
The source device 12 includes an encoder 20, and optionally, the source device 12 may further include a picture source 16, a picture preprocessor 18, and/or a communications interface 22. In one embodiment, the encoder 20, the picture source 16, the picture preprocessor 18, and the communications interface 22 may be hardware components in the source device 12, or may be software programs in the source device 12. Separate descriptions are as follows:
The picture source 16 may include or be any type of picture capturing device configured to, for example, capture a real-world picture; and/or any type of device for generating a picture or comment (for screen content encoding, some text on a screen is also considered as a part of a to-be-encoded picture or image) , for example, a computer graphics processor configured to generate a computer animation picture; or any type of device configured to obtain and/or provide a real-world picture or a computer animation picture (for example, screen content or a virtual reality (VR) picture) ; and/or any combination thereof (for example, an augmented reality (AR) picture) . The picture source 16 may be a camera configured to capture a picture or a memory configured to store a picture. The picture source 16 may further include any type of (internal or external) interface for storing a previously captured or generated picture and/or for obtaining or receiving a picture. When the picture source 16 is a camera, the picture source 16 may be, for example, a local camera or an integrated camera integrated into the source device. When the picture source 16 is a memory,  the picture source 16 may be a local memory or, for example, an integrated memory integrated into the source device. When the picture source 16 includes an interface, the interface may be, for example, an external interface for receiving a picture from an external video source. The external video source is, for example, an external picture capturing device such as a camera, an external memory, or an external picture generating device. The external picture generating device is, for example, an external computer graphics processor, a computer, or a server. The interface may be any type of interface, for example, a wired or wireless interface or an optical interface, according to any proprietary or standardized interface protocol.
A picture may be regarded as a two-dimensional array or matrix of pixels (picture elements) . A pixel in the array may also be referred to as a sample. A quantity of samples in horizontal and vertical directions (or axes) of the array or picture defines a size and/or resolution of the picture. For representation of color, typically three color components are used, that is, the picture may be represented as or include three sample arrays. In an RGB format or color space, a picture includes corresponding red, green, and blue sample arrays. However, in video coding, each pixel is typically represented in a luminance/chrominance format or color space, for example, YCbCr, which includes a luminance component indicated by Y (sometimes L is used instead) and two chrominance components indicated by Cb and Cr. The luminance (or luma) component Y represents brightness or grey level intensity (for example, in a grey-scale picture) , while the two chrominance (or chroma) components Cb and Cr represent chromaticity or color information components. Accordingly, a picture in the YCbCr format includes a luminance sample array of luminance sample values (Y) , and two chrominance sample arrays of chrominance values (Cb and Cr) . Pictures in the RGB format may be converted or transformed into the YCbCr format and vice versa, and this process is also known as color transformation or conversion. If a picture is monochrome, the picture may include only a luminance sample array. In one embodiment, a picture transmitted by the picture source 16 to a picture processor (or pre-processor) may also be referred to as raw picture data 17.
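For illustration, one common definition of the RGB-to-YCbCr conversion is sketched below using the BT.601 full-range coefficients; the exact coefficients, value range, and rounding depend on the specification in use and are assumptions here.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB sample to full-range YCbCr (BT.601 coefficients)."""
    def clamp8(v):
        return max(0, min(255, round(v)))
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return clamp8(y), clamp8(cb), clamp8(cr)

# Example: a saturated red sample.
print(rgb_to_ycbcr(255, 0, 0))  # -> (76, 85, 255)
```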
The picture pre-processor 18 can be configured to receive the (raw) picture data 17, and perform pre-processing on the picture data 17 to obtain a pre-processed picture 19 or  pre-processed picture data 19. Pre-processing performed by the picture pre-processor 18 may include, for example, trimming, color format conversion (for example, from RGB to YCbCr) , color correction, or de-noising.
The encoder 20 (also referred to as a video encoder 20) is configured to receive the preprocessed picture data 19, and process the preprocessed picture data 19 by using a related prediction mode (such as a prediction mode in an embodiment of this specification) , to provide encoded picture data 21 (structural details of the encoder 20 are further described below based on FIG. 2, FIG. 4, or FIG. 5) . In some embodiments, the encoder 20 may be configured to perform various embodiments described below, to implement encoder-side application of a chroma block prediction method described in the present invention.
The communications interface 22 may be configured to receive the encoded picture data 21, and transmit the encoded picture data 21 to the destination device 14 or any other device (for example, a memory) through the link 13 for storage or direct reconstruction. The other device may be any device used for decoding or storage. The communications interface 22 may be, for example, configured to encapsulate the encoded picture data 21 into an appropriate format, for example, a data packet, for transmission over the link 13.
The destination device 14 includes a decoder 30, and optionally, the destination device 14 may further include a communications interface 28, a picture post processor 32, and/or a display device 34. Separate descriptions are as follows:
The communications interface 28 may be configured to receive the encoded picture data 21 from the source device 12 or any other source. The any other source is, for example, a storage device, and the storage device is, for example, an encoded picture data storage device. The communications interface 28 may be configured to transmit or receive the encoded picture data 21 through the link 13 between the source device 12 and the destination device 14 or through any type of network. The link 13 is, for example, a direct wired or wireless connection, and the any type of network is, for example, a wired or wireless network or any combination thereof, or any type of private or public network, or any combination thereof. The communications interface 28 may be, for example, configured to decapsulate the data packet transmitted through the communications interface 22, to obtain the encoded picture data 21.
Both the communications interface 22 and the communications interface 28 may be configured as unidirectional communications interfaces indicated by an arrow for the encoded picture data 13 in FIG. 1A pointing from the source device 12 to the destination device 14, or bidirectional communications interfaces, and may be configured, for example, to send and receive messages, for example, to set up a connection, to acknowledge and exchange any other information related to a communication link and/or data transmission, for example, encoded picture data transmission.
The decoder 30 may be configured to receive the encoded picture data 21 and provide decoded picture data 31 or a decoded picture 31 (further details will be described below, for example, based on FIG. 3 or FIG. 5) . In some embodiments, the decoder 30 may be configured to perform various embodiments described below, to implement decoder-side application of a chroma block prediction method described in the present invention.
The post-processor 32 of the destination device 14 may be configured to post-process the decoded picture data 31 (also referred to as reconstructed picture data) , for example, the decoded picture 31, to obtain post-processed picture data 33, for example, a post-processed picture 33. The post-processing performed by the post-processing unit 32 may include, for example, color format conversion (for example, from YCbCr to RGB) , color correction, trimming, re-sampling, or any other processing, for example, for preparing the decoded picture data 31 for displaying, for example, by display device 34.
The display device 34 of the destination device 14 may be configured to receive the post-processed picture data 33 for displaying the picture, for example, to a user or viewer. The display device 34 may be or include any type of display for presenting the reconstructed picture, for example, an integrated or external display or monitor. The displays may include, for example, a liquid crystal display (LCD) , an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, liquid crystal on silicon (LCoS) , a digital light processor (DLP) , or any type of other displays.
Although FIG. 1A depicts the source device 12 and the destination device 14 as separate devices, embodiments of devices may also include both or functionalities of both, that is, the source device 12 or a corresponding functionality and the destination device 14 or corresponding functionality. In such embodiments, the source device 12 or the corresponding  functionality and the destination device 14 or the corresponding functionality may be implemented by using same hardware and/or software, separate hardware and/or software, or any combination thereof.
As will be apparent for a skilled person based on the foregoing descriptions, existence and (exact) division of functionalities of the different units or functionalities within the source device 12 and/or the destination device 14 shown in FIG. 1A may vary with an actual device and application. The source device 12 and the destination device 14 may include any of a wide range of devices, including any type of handheld or stationary device, for example, a notebook or laptop computer, a mobile phone, a smartphone, a tablet or tablet computer, a camera, a desktop computer, a set-top box, a television, a camera, an in-vehicle device, a display device, a digital media player, a video game console, a video streaming device (such as a content service server or a content delivery server) , a broadcast receiver device, or a broadcast transmitter device, and may not use or may use any type of operating system.
The encoder 20 (for example, the video encoder 20) and the decoder 30 (for example, the video decoder 30) each may be implemented as any of a variety of suitable circuits, such as one or more microprocessors, digital signal processors (DSPs) , application-specific integrated circuits (ASICs) , field-programmable gate arrays (FPGAs) , discrete logic, hardware, or any combination thereof. If the techniques are implemented partially in software, a device may store instructions for the software in a suitable non-transitory computer-readable storage medium, and may execute the instructions in hardware by using one or more processors, to perform the techniques of this disclosure. Any of the foregoing (including hardware, software, a combination of hardware and software, and the like) may be considered as one or more processors.
In some cases, the video coding system 10 illustrated in FIG. 1A is merely an example and the techniques of this application may apply to video coding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices. In other examples, data is retrieved from a local memory, streamed over a network, or the like. A video encoding device may encode and store data into a memory, and/or a video decoding device may retrieve the  data from the memory and decode the data. In some examples, encoding and decoding are performed by devices that do not communicate with each other, but simply encode data and store the encoded data into a memory, and/or retrieve the data from the memory and decode the data.
FIG. 1B is an illustrative diagram of an example video coding system 40 including the encoder 20 in FIG. 2 and/or the decoder 30 in FIG. 3 according to an example embodiment. The system 40 can implement techniques in accordance with various examples described in this application. In the illustrated implementation, a video coding system 40 may include an imaging device (or imaging devices) 41, a video encoder 20, a video decoder 30 (and/or a video coder implemented by using a logic circuit 47 of a processing unit (or processing units) 46) , an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
As illustrated, the imaging device (s) 41, antenna 42, processing unit (s) 46, logic circuits 47, video encoder 20, video decoder 30, processor (s) 43, memory (or memories) 44, and/or display device 45 may be capable of communicating with each other. As discussed, although illustrated with both the video encoder 20 and the video decoder 30, the video coding system 40 may include only the video encoder 20 or only the video decoder 30 in various examples.
In some examples, the antenna 42 may be configured to transmit or receive, for example, an encoded bitstream of video data. Further, in some examples, the video coding system 40 may include a display device 45. The display device 45 may be configured to present video data. As shown, in some examples, the logic circuit 47 may be implemented by the processing unit (s) 46. The processing unit (s) 46 may include application-specific integrated circuit (ASIC) logic, graphics processor (s) , general purpose processor (s) , or the like. The video coding system 40 also may include optional processor (s) 43, which may similarly include ASIC logic, graphics processor (s) , general purpose processor (s) , or the like. In some examples, the logic circuit 47 may be implemented by hardware, video coding dedicated hardware, or the like, and the processor (s) 43 may be implemented by general purpose software, operating systems, or the like. In addition, the memory (or memories) 44 may be any type of memory such as a volatile memory (for example, a static random access  memory (SRAM) or a dynamic random access memory (DRAM) ) , or a nonvolatile memory (for example, a flash memory) . In a non-limiting example, the memory (or memories) 44 may be implemented by a cache memory. In some examples, the logic circuit 47 may access the memory (or memories) 44 (for implementation of, for example, an image buffer) . In other examples, the logic circuit 47 and/or the processing unit (s) 46 may include memories (for example, a cache) for implementation of an image buffer or the like.
In some examples, the video encoder 20 implemented by the logic circuit may include an image buffer (for example, implemented by either the processing unit (s) 46 or the memory (or memories) 44) and a graphics processing unit (for example, implemented by the processing unit (s) 46) . The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include the video encoder 20 implemented by the logic circuit 47, to embody the various modules discussed with respect to FIG. 2 and/or any other encoder system or subsystem described herein. The logic circuit may be configured to perform the various operations discussed herein.
In some examples, the video decoder 30 may be implemented in a similar manner implemented by using the logic circuit 47 to embody the various modules discussed with respect to the decoder 30 in FIG. 3 and/or any other decoder system or subsystem described herein. In some examples, the video decoder 30 implemented by the logic circuit may include an image buffer (for example, implemented by either the processing unit (s) 420 or the memory (or memories) 44) and a graphics processing unit (for example, implemented by the processing unit (s) 46) . The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include the video decoder 30 implemented by the logic circuit 47, to embody the various modules discussed with respect to FIG. 3 and/or any other decoder system or subsystem described herein.
In some examples, the antenna 42 of the video coding system 40 may be configured to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include information (e.g., data, indicators, index values, mode selection data, or the like) associated with video frame encoding discussed herein, such as data associated with coding partitioning (for example, transform coefficients or quantized transform coefficients, optional indicators (as discussed) , and/or data defining the coding partitioning) . The video coding  system 40 may also include the video decoder 30 coupled to the antenna 42 and configured to decode the encoded bitstream. The display device 45 can be configured to present video frames.
It should be understood that in this embodiment of the present invention, for the example described with regard to the encoder 20, the decoder 30 may be configured to perform a reverse process. With regard to signaling syntax elements, the decoder 30 may be configured to receive and parse such syntax elements and correspondingly decode related video data. In some examples, the encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such examples, the decoder 30 may parse such syntax elements and correspondingly decode related video data.
It should be noted that the method described in this embodiment of the present invention is mainly used for an inter prediction process, and the process exists in both the encoder 20 and the decoder 30. The encoder 20 and the decoder 30 in this embodiment of the present invention may be an encoder and a decoder corresponding to a video standard protocol such as H. 263, H. 264, HEVC, MPEG-2, MPEG-4, VP8, and VP9 or a next generation video standard protocol (such as H. 266) .
FIG. 2 shows a schematic/conceptual block diagram of an example video encoder 20 according to an embodiment of this application. In the example of FIG. 2, the video encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a decoded picture buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270. The prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262. The inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown) . The video encoder 20 shown in FIG. 2 may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
For example, the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form a forward signal path of the encoder 20, whereas, for example, the inverse  quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, and the prediction processing unit 260 form a backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of a decoder (refer to a decoder 30 in FIG. 3) .
The encoder 20 is configured to receive, for example, by using an input 202, a picture 201 or a block 203 of the picture 201, for example, a picture of a sequence of pictures forming a video or a video sequence. The picture block 203 may also be referred to as a current picture block or a picture block to be coded, and the picture 201 as the current picture or the picture to be coded (in particular in video coding, to distinguish the current picture from other pictures, the other pictures are, for example, previously encoded and/or decoded pictures of the same video sequence, that is, the video sequence which also includes the current picture) .
The encoder 20 in this embodiment may include a partitioning unit (not depicted in FIG. 2) configured to partition the picture 201 into a plurality of blocks, for example, blocks 203. In one embodiment, the plurality of blocks are non-overlapping. The partitioning unit may be configured to use a same block size for all pictures of a video sequence and a corresponding grid defining the block size, or to change a block size between pictures or subsets or groups of pictures, and partition each picture into corresponding blocks.
In one example, the prediction processing unit 260 of the video encoder 20 may be configured to perform any combination of the partitioning techniques described above.
Like the picture 201, the block 203 may be regarded as a two-dimensional array or matrix of samples with intensity values (sample values) , although of a smaller size than the picture 201. In other words, the block 203 may include, for example, one sample array (for example, luma array in a case of a monochrome picture 201) , three sample arrays (for example, one luma array and two chroma arrays in a case of a color picture 201) , or any other quantity and/or type of arrays depending on a color format applied. A quantity of samples in horizontal and vertical directions (or axes) of the block 203 defines a size of the block 203.
The encoder 20 shown in FIG. 2 is configured to encode the picture 201 block by block, for example, encoding and prediction are performed per block 203.
The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture block 203 and a prediction block 265 (further details about the prediction block 265 are provided below) , for example, by subtracting sample values of the prediction block 265 from sample values of the picture block 203, sample by sample (pixel by pixel) to obtain the residual block 205 in a sample domain.
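As a minimal sketch of this sample-by-sample subtraction, assuming 8-bit samples held in numpy arrays (the widened signed type merely avoids wrap-around):

```python
import numpy as np

def residual_block(picture_block, prediction_block):
    """Residual calculation: subtract the prediction from the original block,
    sample by sample, in the sample domain."""
    return picture_block.astype(np.int16) - prediction_block.astype(np.int16)

block = np.array([[120, 122], [119, 121]], dtype=np.uint8)
pred  = np.array([[118, 123], [119, 120]], dtype=np.uint8)
print(residual_block(block, pred))  # [[ 2 -1]
                                    #  [ 0  1]]
```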
The transform processing unit 206 is configured to apply a transform, for example, a discrete cosine transform (DCT) or discrete sine transform (DST) , to the sample values of the residual block 205 to obtain transform coefficients 207 in a transform domain. The transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
The transform processing unit 206 may be configured to apply integer approximations of DCTs/DSTs, such as transforms specified for HEVC/H. 265. Compared with an orthogonal DCT transform, such integer approximations are typically scaled by a specific factor. To preserve the norm of the residual block that is processed by forward and inverse transforms, additional scaling factors can be applied as a part of the transform process. The scaling factors are typically chosen based on specific constraints, such as scaling factors being a power of two for shift operations, a bit depth of the transform coefficients, and a tradeoff between accuracy and implementation costs. Specific scaling factors may be specified for the inverse transform, for example, by the inverse transform processing unit 212 at the decoder 30 (and the corresponding inverse transform, for example, by the inverse transform processing unit 212 at the encoder 20) , and corresponding scaling factors may be specified for the forward transform, for example, by the transform processing unit 206 at the encoder 20.
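As a floating-point illustration only (the standards use the scaled integer approximations described above, with the scaling folded into the transform and quantization stages), the 2-D transform of a small residual block and its exact inverse can be sketched with SciPy:

```python
import numpy as np
from scipy.fft import dctn, idctn

def forward_transform(residual):
    """Orthonormal 2-D DCT of a residual block (floating-point illustration)."""
    return dctn(residual.astype(np.float64), norm='ortho')

res = np.array([[2, -1, 0, 1],
                [0,  1, 1, 0],
                [1,  0, 0, -1],
                [0,  1, -1, 0]])
coeff = forward_transform(res)
print(np.round(coeff, 2))
# Without quantization the transform itself is lossless:
print(np.allclose(idctn(coeff, norm='ortho'), res))  # True
```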
The quantization unit 208 may be configured to quantize the transform coefficients 207 to obtain quantized transform coefficients 209, for example, by applying scalar quantization or vector quantization. The quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209. The quantization process may reduce a bit depth associated with some or all of the transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. A quantization degree may be modified by adjusting  a quantization parameter (QP) . For example, for scalar quantization, different scaling may be applied to achieve finer or coarser quantization. Smaller quantization operation sizes correspond to finer quantization, whereas larger quantization operation sizes correspond to coarser quantization. The applicable quantization operation size may be indicated by a quantization parameter (QP) . The quantization parameter may be an index to a predefined set of applicable quantization operation sizes. For example, small quantization parameters may correspond to fine quantization (small quantization operation sizes) and large quantization parameters may correspond to coarse quantization (large quantization operation sizes) , or vice versa. The quantization may include division by a quantization operation size and corresponding or inverse quantization, for example, by inverse quantization 210, or may include multiplication by the quantization operation size. Embodiments according to some standards, for example, HEVC, may be configured to use a quantization parameter to determine the quantization operation size. Generally, the quantization operation size may be calculated based on a quantization parameter by using a fixed point approximation of an equation including division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which may be modified because of the scaling used in the fixed point approximation of the equation for the quantization operation size and quantization parameter. In one embodiment, the scaling of the inverse transform and dequantization may be combined. In another embodiment, customized quantization tables may be used and signaled from an encoder to a decoder, for example, in a bitstream. The quantization is a lossy operation, where a loss increases with increasing quantization operation sizes.
The inverse quantization unit 210 can be configured to apply inverse quantization of the quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211 by applying the inverse of a quantization scheme applied by the quantization unit 208, based on, or by using, the same quantization operation size as the quantization unit 208. The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211, and correspond to the transform coefficients 207, although the dequantized coefficients 211 are typically not identical to the transform coefficients due to a loss caused by quantization.
The inverse transform processing unit 212 can be configured to apply the inverse  transform of the transform applied by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or inverse discrete sine transform (DST) , to obtain an inverse transform block 213 in the sample domain. The inverse transform block 213 may also be referred to as an inverse transform dequantized block 213 or an inverse transform residual block 213.
The reconstruction unit 214 (for example, a summer 214) is configured to add the inverse transform block 213 (that is, the reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, for example, by adding the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
In one embodiment, the buffer unit 216 ("buffer" 216 for short) or a line buffer 216 is configured to buffer or store the reconstructed block 215 and the respective sample values for intra prediction. In further embodiments, the encoder may be configured to use unfiltered reconstructed blocks and/or the respective sample values stored in the buffer unit 216 for any type of estimation and/or prediction, for example, intra prediction.
For example, the encoder 20 in this embodiment may be configured so that the buffer unit 216 is not only used for storing the reconstructed blocks 215 for the intra prediction unit 254 but is also used for the loop filter unit 220 (not shown in FIG. 2), such that the buffer unit 216 and the decoded picture buffer unit 230 form one buffer. In further embodiments, filtered blocks 221 and/or blocks or samples from the decoded picture buffer 230 (e.g., decoded picture 231) may be used as an input or a basis for the intra prediction unit 254.
The loop filter unit 220 (or "loop filter" 220 for short) is configured to filter the reconstructed block 215 to obtain a filtered block 221, for example, to smooth pixel transitions or improve video quality. The loop filter unit 220 is intended to represent one or more loop filters including a de-blocking filter, a sample-adaptive offset (SAO) filter, a bilateral filter, an adaptive loop filter (ALF) , a sharpening or smoothing filter, or a collaborative filter, etc. Although the loop filter unit 220 is shown in FIG. 2 as an in loop filter, in other configurations, the loop filter unit 220 may be implemented as a post loop filter. The filtered block 221 may also be referred to as a filtered reconstructed block 221. The  decoded picture buffer 230 may store reconstructed coding blocks after the loop filter unit 220 performs filtering operations on the reconstructed coding blocks.
The encoder 20 in this embodiment may be configured to output loop filter parameters (correspondingly, the loop filter unit 220, such as sample adaptive offset information) directly, or through entropy encoding performed by the entropy encoding unit 270 or any other entropy coding unit, so that a decoder 30 can receive and apply the same loop filter parameters for decoding.
The decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use in encoding video data by the video encoder 20. The DPB 230 may be formed by any of a variety of memory devices, such as a dynamic random access memory (DRAM), including a synchronous DRAM (SDRAM), a magnetoresistive RAM (MRAM), a resistive RAM (RRAM), or other types of memory devices. The DPB 230 and the buffer 216 may be provided by a same memory device or separate memory devices. In some examples, the decoded picture buffer (DPB) 230 is configured to store the filtered block 221. The decoded picture buffer 230 may be further configured to store other previously filtered blocks, for example, previously reconstructed and filtered blocks 221, of the same current picture or of different pictures, for example, previously reconstructed pictures, and may provide complete previously reconstructed, that is, decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), for example, for inter prediction. In some examples, if the reconstructed block 215 is reconstructed without in-loop filtering, the decoded picture buffer (DPB) 230 is configured to store the reconstructed block 215.
The prediction processing unit 260, also referred to as a block prediction processing unit 260, can be configured to receive or obtain the block 203 (the current block 203 of the current picture 201) and reconstructed picture data, for example, reference samples of the same (e.g., current) picture from the buffer 216 and/or reference picture data 231 from one or more previously decoded pictures from the decoded picture buffer 230, and to process such data for prediction, that is, to provide a prediction block 265, which may be an inter-predicted block 245 or an intra-predicted block 255.
The mode selection unit 262 may be configured to select a prediction mode (for  example, an intra or inter prediction mode) and/or a  corresponding prediction block  245 or 255 to be used as the prediction block 265 for calculation of the residual block 205 and for reconstruction of the reconstructed block 215.
The mode selection unit 262 in an embodiment may be configured to select the prediction mode (for example, from those supported by the prediction processing unit 260) that provides an optimal match, in other words, a minimum residual (the minimum residual means better compression for transmission or storage), or a minimum signaling overhead (the minimum signaling overhead means better compression for transmission or storage), or that considers or balances both. The mode selection unit 262 may be configured to determine the prediction mode based on rate-distortion optimization (RDO), that is, select the prediction mode that provides a minimum rate-distortion cost or for which the associated rate distortion at least fulfills a prediction mode selection criterion.
In the following, the prediction processing (for example, performed by the prediction processing unit 260) and mode selection (for example, performed by the mode selection unit 262) performed by an example encoder 20 are described in more detail.
As described above, the encoder 20 is configured to determine or select the optimal or optimum prediction mode from a set of (predetermined) prediction modes. The set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
The set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes such as a DC (or mean) mode and a planar mode, or directional modes, for example, as defined in H. 265, or may include 67 different intra prediction modes, for example, non-directional modes such as a DC (or mean) mode and a planar mode, or directional modes, for example, as defined in H. 266.
In one embodiment, a set of inter prediction modes depends on available reference pictures (that is, for example, at least partially decoded pictures stored in the DPB 230, as described above) and other inter prediction parameters. In one embodiment, the set of inter prediction modes depends on whether the entire reference picture or only a part of the reference picture, for example, a search window area around an area of the current block, is used for searching for an optimal matching reference block. In one embodiment, the set of  inter prediction modes depends on whether pixel interpolation such as half/semi-pel and/or quarter-pel interpolation is applied. The set of inter prediction modes may include, for example, an advanced motion vector prediction (AMVP) mode, decoder side motion vector refinement (DMVR) mode, and a merge mode. In one embodiment, the set of inter prediction modes may include an AMVP mode based on a control point and/or a merge mode based on a control point. In one example, the intra prediction unit 254 may be configured to perform any combination of intra prediction techniques described below.
In addition to the foregoing prediction modes, a skip mode and/or a direct mode may be applied in some embodiments.
The prediction processing unit 260 may be further configured to partition the block 203 into smaller block partitions or sub-blocks, for example, by iteratively using quadtree partitioning (QT) , binary partitioning (BT) , triple-tree partitioning (TT) , or any combination thereof, and to perform, for example, prediction for each of the block partitions or sub-blocks, where the mode selection includes selection of a tree structure of the partitioned block 203 and prediction modes applied to each of the block partitions or sub-blocks.
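As an illustration of the partitioning step, the following Python sketch performs only the quadtree part of the partitioning, recursing until a minimum block size is reached; the binary-tree and triple-tree splits and the actual split decision (which in an encoder would come from mode selection) are omitted, and all names are assumptions.

def quadtree_partition(x, y, w, h, min_size, should_split):
    # Recursively split a block into four quadrants while should_split() asks for it,
    # stopping at the minimum block size; returns the leaf block partitions.
    if w <= min_size or h <= min_size or not should_split(x, y, w, h):
        return [(x, y, w, h)]
    hw, hh = w // 2, h // 2
    leaves = []
    for qx, qy in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
        leaves.extend(quadtree_partition(qx, qy, hw, hh, min_size, should_split))
    return leaves

# Example: split a 64x64 block unconditionally down to 16x16 leaves (16 sub-blocks).
leaves = quadtree_partition(0, 0, 64, 64, 16, lambda x, y, w, h: True)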
The inter prediction unit 244 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2) . The motion estimation unit is configured to receive or obtain the picture block 203 (the current picture block 203 of the current picture 201) and a decoded picture 231, or at least one or more previously reconstructed blocks, for example, reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation. For example, a video sequence may include the current picture and the previously decoded pictures 231. In other words, the current picture and the previously decoded pictures 231 may be a part of, or form, a sequence of pictures forming a video sequence.
For example, the encoder 20 may be configured to select a reference block from a plurality of reference blocks of a same picture or different pictures of the plurality of other pictures and provide a reference picture (or a reference picture index or the like) and/or an offset (a spatial offset) between the position (coordinates X and Y) of the reference block and the position of the current block as inter prediction parameters, to the motion estimation unit  (not shown in FIG. 2) . This offset is also referred to as a motion vector (MV) .
The motion compensation unit can be configured to obtain or receive, an inter prediction parameter and to perform inter prediction based on, or by using, the inter prediction parameter, to obtain an inter prediction block 245. Motion compensation performed by the motion compensation unit (not shown in FIG. 2) may include fetching or generating the prediction block based on a motion/block vector determined through motion estimation. In one embodiment, motion compensation includes performing interpolation for sub-pixel precision. Interpolation or interpolation filtering may generate additional pixel samples from known pixel samples, thereby potentially increasing a quantity of candidate prediction blocks that may be used to code a picture block. Upon receiving a motion vector for a PU of the current picture block, the motion compensation unit 246 may locate a prediction block to which the motion vector points in one of reference picture lists. The motion compensation unit 246 may also generate syntax elements associated with the blocks and the video slice for use by the video decoder 30 in decoding the picture blocks of the video slice.
Specifically, the inter prediction unit 244 may transmit the syntax elements to the entropy encoding unit 270, and the syntax elements include the inter prediction parameter (such as indication information of selection of an inter prediction mode used for prediction of the current block after traversal of a plurality of inter prediction modes) . In one embodiment, if there is only one inter prediction mode, the inter prediction parameter may be alternatively not carried in the syntax elements. In this case, the decoder side 30 may perform decoding directly in a default prediction mode. It can be understood that the inter prediction unit 244 may be configured to perform any combination of inter prediction techniques.
The intra prediction unit 254 can be configured to obtain or receive the picture block 203 (current picture block) and one or more previously reconstructed blocks, such as reconstructed neighboring blocks, of the same picture for intra estimation. The encoder 20 may be configured to select an intra prediction mode from a plurality of (e.g., predetermined) intra prediction modes.
The encoder 20 in this embodiment may be configured to select an intra prediction mode based on an optimization criterion, such as a minimum residual (the intra prediction  mode to provide the prediction block 255 most similar to the current picture block 203) or minimum rate distortion.
The intra prediction unit 254 is further configured to determine the intra prediction block 255 based on an intra prediction parameter, for example, the selected intra prediction mode. In any case, after selecting an intra prediction mode for a block, the intra prediction unit 254 is also configured to provide the intra prediction parameter, that is, information indicative of the selected intra prediction mode for the block, to the entropy encoding unit 270. In one example, the intra prediction unit 254 may be configured to perform any combination of the intra prediction techniques.
Specifically, the intra prediction unit 254 may transmit the syntax elements to the entropy encoding unit 270, and the syntax elements include the intra prediction parameter (such as indication information of selection of an intra prediction mode used for prediction of the current block after traversal of a plurality of intra prediction modes) . In one embodiment, if there is only one intra prediction mode, the intra prediction parameter may be alternatively not carried in the syntax elements. In this case, the decoder side 30 may perform decoding directly in a default prediction mode.
The entropy encoding unit 270 can be configured to apply an entropy encoding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC scheme (CAVLC), an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding methodology or technique) on the quantized residual coefficients 209, inter prediction parameters, intra prediction parameters, and/or loop filter parameters, individually or jointly (or not at all), to obtain encoded picture data 21 that can be output by the output 272 in a form of an encoded bitstream 21. The encoded bitstream 21 may be transmitted to the video decoder 30, or archived for later transmission or retrieval by the video decoder 30. The entropy encoding unit 270 can be further configured to entropy encode the other syntax elements for the current video slice being coded.
Other structural variations of the video encoder 20 can be used to encode the video stream. For example, a non-transform-based encoder 20 can quantize the residual  signal directly without the transform processing unit 206 for specific blocks or frames. In another embodiment, an encoder 20 can have the quantization unit 208 and the inverse quantization unit 210 combined into a single unit.
Specifically, in this embodiment of the present invention, the encoder 20 can be configured to implement an inter prediction method described in the following embodiment.
It should be understood that other structural variants of the video encoder 20 can be used to encode a video stream. For example, for some picture blocks or image frames, the video encoder 20 may quantize the residual signal directly without processing by the transform processing unit 206, and/or the inverse-transform processing unit 212. In another embodiment, for some picture blocks or image frames, the video encoder 20 does not generate residual data, and correspondingly, there is no need for the transform processing unit 206, the quantization unit 208, the inverse-quantization unit 210, and the inverse-transform processing unit 212 to perform processing. In another embodiment, the video encoder 20 may directly store a reconstructed picture block as a reference block, without processing by the filter unit 220. In another embodiment, the quantization unit 208 and the inverse-quantization unit 210 in the video encoder 20 may be combined together. The loop filter unit 220 may be optional, and in a case of lossless compression encoding, the transform processing unit 206, the quantization unit 208, the inverse-quantization unit 210, and the inverse-transform processing unit 212 may be optional. It should be understood that in different application scenarios, the inter prediction unit 244 and the intra prediction unit 254 may be used selectively.
FIG. 3 is a schematic/conceptual block diagram of an example of a decoder 30 according to an embodiment. The video decoder 30 is configured to receive encoded picture data (for example, an encoded bitstream) 21, for example, encoded by the encoder 20, to obtain a decoded picture 231. In a decoding process, the video decoder 30 receives video data, for example, an encoded video bitstream that represents picture blocks of an encoded video slice and associated syntax elements, from the video encoder 20.
In the example of FIG. 3, the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (for example, a summer 314) , a buffer 316, a loop filter 320, a decoded picture  buffer 330, and a prediction processing unit 360. The prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362. The video decoder 30 may, in some examples, perform a decoding pass generally reciprocal to the encoding pass described with respect to the video encoder 20 from FIG. 2.
The entropy decoding unit 304 is configured to perform entropy decoding on the encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3) . The decoded coding parameters can include any one or all of (decoded) inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements. The entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters, and/or other syntax elements to the prediction processing unit 360. The video decoder 30 may receive the syntax elements at a video slice level and/or a video block level.
The inverse quantization unit 310 may be identical to the inverse quantization unit 210 in function, the inverse transform processing unit 312 may be identical to the inverse transform processing unit 212 in function, the reconstruction unit 314 may be identical to the reconstruction unit 214 in function, the buffer 316 may be identical to the buffer 216 in function, the loop filter 320 may be identical to the loop filter 220 in function, and the decoded picture buffer 330 may be identical to the decoded picture buffer 230 in function.
The prediction processing unit 360 may include an inter prediction unit 344 and an intra prediction unit 354, where the inter prediction unit 344 may resemble the inter prediction unit 244 in function, and the intra prediction unit 354 may resemble the intra prediction unit 254 in function. The prediction processing unit 360 is typically configured to perform block prediction and/or obtain the prediction block 365 from the encoded data 21, and to receive or obtain (explicitly or implicitly) prediction related parameters and/or information about the selected prediction mode from the entropy decoding unit 304.
When the video slice is coded as an intra coded (I) slice, the intra prediction unit 354 of the prediction processing unit 360 is configured to generate the prediction block 365 for a picture block of the current video slice based on a signaled intra prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter coded (that is, B, or P) slice, the inter prediction unit 344 (for example, the  motion compensation unit) of the prediction processing unit 360 is configured to produce prediction blocks 365 for a video block (or current block) of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 304. For inter prediction, the prediction blocks may be produced from one of reference pictures within one of the reference picture lists. The video decoder 30 may construct the reference frame lists, List 0 and List 1, by using default construction techniques based on reference pictures stored in the DPB 330.
The prediction processing unit 360 can be configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and use the prediction information to produce the prediction blocks for the current video block being decoded. For example, the prediction processing unit 360 can use some of the received syntax elements to determine a prediction mode (for example, the intra or inter prediction) used to code the video blocks of the video slice, an inter prediction slice type (for example, B slice, P slice, or GPB slice) , construction information for one or more of the reference picture lists for the slice, motion vectors for each inter encoded video block of the slice, an inter prediction status for each inter coded video block of the slice, and other information, to decode the video blocks in the current video slice. In another embodiment, the syntax elements received by the video decoder 30 from a bitstream include syntax elements in one or more of an adaptive parameter set (APS) , a sequence parameter set (SPS) , a picture parameter set (PPS) , or a slice header.
The inverse quantization unit 310 can be configured to inversely quantize, that is, de-quantize, the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 304. The inverse quantization process may include use of a quantization parameter calculated by the video encoder 20 for each video block in the video slice, to determine a quantization degree and, likewise, an inverse-quantization degree that should be applied.
The inverse transform processing unit 312 can be configured to apply an inverse transform, for example, an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients to produce residual blocks in a pixel domain.
The reconstruction unit 314 (for example, the summer 314) can be configured to add the inverse transform block 313 (that is, the reconstructed residual block 313) to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, for example, by adding the sample values of the reconstructed residual block 313 and the sample values of the prediction block 365.
The loop filter unit 320 (either in a coding loop or after a coding loop) can be configured to filter the reconstructed block 315 to obtain a filtered block 321, for example, to smooth pixel transitions or improve the video quality. In one example, the loop filter unit 320 may be configured to perform any combination of the filtering techniques described below. The loop filter unit 320 is intended to represent one or more loop filters including a de-blocking filter, a sample-adaptive offset (SAO) filter, and other filters, for example, a bilateral filter, an adaptive loop filter (ALF) , a sharpening or smoothing filter, or a collaborative filter. Although the loop filter unit 320 is shown in FIG. 3 as an in loop filter, in other configurations, the loop filter unit 320 may be implemented as a post loop filter.
The decoded video blocks 321 in a given frame or picture are then stored in the decoded picture buffer 330 that stores reference pictures used for subsequent motion compensation.
The decoder 30 can be configured to output the decoded picture 331, for example, by using an output 332, for presentation or viewing to a user.
Other variations of the video decoder 30 can be used to decode the compressed bitstream. In one embodiment, the decoder 30 can produce the output video stream without the loop filtering unit 320. In one embodiment, a non-transform-based decoder 30 can inversely quantize the residual signal directly without the inverse-transform processing unit 312 for specific blocks or frames. In another embodiment, the video decoder 30 can have the inverse-quantization unit 310 and the inverse-transform processing unit 312 combined into a single unit.
Specifically, in this embodiment, the decoder 30 can be configured to implement an inter prediction method described in the following embodiments.
It should be understood that other structural variants of the video decoder 30 can be used to decode the encoded video bitstream. For example, in one embodiment, the video  decoder 30 may generate an output video stream without processing by the filter unit 320. In another embodiment, for some picture blocks or image frames, the entropy decoding unit 304 of the video decoder 30 may not obtain a quantized coefficient through decoding, and correspondingly, there is no need for the inverse-quantization unit 310 and the inverse-transform processing unit 312 to perform processing. The loop filter unit 320 can be optional, and in a case of lossless compression, the inverse-quantization unit 310 and the inverse-transform processing unit 312 can be optional. It should be understood that in different application scenarios, the inter prediction unit and the intra prediction unit may be used selectively.
It should be understood that for the encoder 20 and the decoder 30 in this application, a processing result for a procedure may be further processed before it is outputted to a next procedure. For example, after a procedure such as interpolation filtering, motion vector derivation, or loop filtering, an operation such as clip or shift is further performed on a processing result of a corresponding procedure.
For example, a motion vector, derived based on a motion vector of a neighboring affine coding block, of a control point in a current picture block may be further processed. For example, a value range of the motion vector is restricted to be within a specific bit depth. Assuming that an allowed bit depth of a motion vector is bitDepth, the motion vector range is from –2^(bitDepth–1) to 2^(bitDepth–1)–1, where the symbol "^" represents exponentiation. If bitDepth is 16, the value range is from –32768 to 32767. If bitDepth is 18, the value range is from –131072 to 131071. The restriction may be performed in either of the following two manners.
Manner 1: An overflowing high-order bit of a motion vector is removed:
ux = ( vx + 2^bitDepth ) % 2^bitDepth
vx = ( ux >= 2^(bitDepth–1) ) ? ( ux – 2^bitDepth ) : ux
uy = ( vy + 2^bitDepth ) % 2^bitDepth
vy = ( uy >= 2^(bitDepth–1) ) ? ( uy – 2^bitDepth ) : uy
For example, if a value of vx is –32769, 32767 is obtained by using the foregoing formulas. A value is stored in a computer in two's complement form; the two's complement representation of –32769 is 1, 0111, 1111, 1111, 1111 (17 bits), and the computer handles the overflow by discarding the high-order bit. Therefore, the stored value of vx is 0111, 1111, 1111, 1111, that is, 32767, which is consistent with the result obtained through processing using the above formulas.
Manner 2: Clipping is performed on a motion vector, as shown in the following formulas:
vx = Clip3( –2^(bitDepth–1), 2^(bitDepth–1) – 1, vx )
vy = Clip3( –2^(bitDepth–1), 2^(bitDepth–1) – 1, vy )
where Clip3 is defined to indicate clipping a value of z to a range [x, y] :
Clip3( x, y, z ) = x if z < x; y if z > y; z otherwise
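The following Python sketch illustrates the two restriction manners and the Clip3 operation described above; the function names are illustrative.

def wrap_mv_component(v, bit_depth=16):
    # Manner 1: remove the overflowing high-order bits by modular wrap-around.
    u = (v + (1 << bit_depth)) % (1 << bit_depth)
    return u - (1 << bit_depth) if u >= (1 << (bit_depth - 1)) else u

def clip3(x, y, z):
    # Clip3(x, y, z): clip the value z to the range [x, y].
    return x if z < x else (y if z > y else z)

def clip_mv_component(v, bit_depth=16):
    # Manner 2: clip to the range [-2^(bitDepth-1), 2^(bitDepth-1) - 1].
    return clip3(-(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1, v)

# The example from the text: with bitDepth equal to 16, a value of -32769 wraps to 32767.
assert wrap_mv_component(-32769) == 32767
assert clip_mv_component(-32769) == -32768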
FIG. 4 is a schematic diagram of a video coding device 400 according to an embodiment of this disclosure. The video coding device 400 is suitable for implementing the disclosed embodiments as described herein. In an embodiment, the video coding device 400 may be a decoder such as the video decoder 30 in FIG. 1A or an encoder such as the video encoder 20 in FIG. 1A. In an embodiment, the video coding device 400 may be one or more components of the video decoder 30 in FIG. 1A or the video encoder 20 in FIG. 1A as described above.
The video coding device 400 can include ingress ports 410 and receiver units (Rx) 420 for receiving data; a processor, a logic unit, or a central processing unit (CPU) 430 for processing the data; transmitter units (Tx) 440 and egress ports 450 for transmitting the data; and a memory 460 for storing the data. The video coding device 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 410, the receiver units 420, the transmitter units 440, and/or the egress ports 450 for egress or ingress of optical or electrical signals.
The processor 430 can be implemented by hardware and software. The processor 430 may be implemented as one or more CPU chips, cores (for example, as a multi-core processor) , FPGAs, ASICs, and DSPs. The processor 430 is in communication with the ingress ports 410, receiver units 420, transmitter units 440, egress ports 450, and memory 460. The processor 430 includes a coding module 470. The coding module 470 implements the disclosed embodiments described above. For example, the coding module 470 implements, processes, prepares, or provides the various coding operations. Inclusion of the coding  module 470 therefore provides substantial improvement to the functionality of the video coding device 400 and affects a transformation of the video coding device 400 to a different state. In another embodiment, the coding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
The memory 460 includes one or more disks, tape drives, and solid state drives and may be used as an overflow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory 460 may be volatile and/or nonvolatile and may be a read-only memory (ROM) , a random access memory (RAM) , a ternary content-addressable memory (TCAM) , and/or a static random access memory (SRAM) .
FIG. 5 is a simplified block diagram of an apparatus 500 that can be used as any one or two of the source device 12 and the destination device 14 in FIG. 1A according to an example embodiment. The apparatus 500 can implement the techniques of this application. In other words, FIG. 5 is a schematic block diagram of an implementation of an encoding device or a decoding device (coding device 500 for short) according to an embodiment of this application. The coding device 500 may include a processor 510, a memory 530, and a bus system 550. The processor is connected to the memory via the bus system. The memory is configured to store an instruction, and the processor is configured to execute the instruction stored in the memory. The memory of the coding device stores program code. The processor can invoke the program code stored in the memory, to perform the video encoding or decoding methods described in this application, and in particular, various inter and/or intra prediction methods. To avoid repetition, details are not described herein again.
In an embodiment of this application, the processor 510 may be a central processing unit ( "CPU" for short) , or the processor 510 may be another general purpose processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 510 may be a microprocessor, or any conventional processor or the like.
The memory 530 may include a read-only memory (ROM) device or a random access memory (RAM) device. Any other proper type of storage device may be alternatively  used as the memory 530. The memory 530 may include code and data 531 accessed by the processor 510 by using the bus 550. The memory 530 may further include an operating system 533 and an application program 535. The application program 535 includes at least one program that allows the processor 510 to perform the video encoding or decoding method (in particular, the inter and/or intra prediction methods described in this application) described in this application. For example, the application program 535 may include applications 1 to N, and further includes a video encoding or decoding application (video coding application for short) that performs the video encoding or decoding method described in this application.
The bus system 550 may not only include a data bus, but also include a power bus, a control bus, a status signal bus, and the like. However, for clear description, various types of buses in the figure are marked as the bus system 550.
In one embodiment, the coding device 500 may further include one or more output devices, for example, a display 570. In an example, the display 570 may be a touch display that combines a display and a touch unit that operably senses touch input. The display 570 may be connected to the processor 510 by using the bus 550.
FIG. 6 is a schematic flowchart of an inter prediction method according to an embodiment. The method of FIG. 6 enables a coder to process picture blocks whose size is bigger than a preset size associated with the coder (such as a buffer size). The method can be implemented by hardware, software, or any combination thereof, for example, by the inter prediction unit 244 or 344, and the method can be a decoding method or an encoding method. As shown in FIG. 6, the method includes the following operations.
Operation S601. (A coder (such as the encoder 20 or the decoder 30 of FIG. 1) or a video coding system) obtains initial motion information of at least two sub-blocks of a current picture block. The current picture block can be a coding block, a CU, a PU, or a TU, etc. The current picture block can be of any size and dimensions. Here, the picture block is divided/split into a number of sub-blocks, and initial motion information is determined for at least two of the sub-blocks (e.g., a subset, or all of the sub-blocks) based on initial motion information for the current picture block.
Operation S602. (The system) determines motion information of the at least two  sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks. The positions of the at least two sub-blocks may be pixel positions relative to a position of the current picture block.
Operation S603. (The system) determines motion information of the current picture block based on the motion information of the at least two sub-blocks.
Operation S604. (The system) determines a prediction block of the current picture block based on the motion information of the current picture block.
In one embodiment, the current picture block consists of the at least two sub-blocks. In one embodiment, the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks includes: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks using a decoder-side motion vector refinement (DMVR) method. DMVR is an inter prediction method.
The following is the description of DMVR:
In order to increase the accuracy of the MVs of the merge mode, a bilateral-matching (BM) based decoder side motion vector refinement is applied (in VTM6). In the bi-prediction operation, a refined MV is searched around the initial MVs in the reference picture list L0 and the reference picture list L1. The BM method calculates the distortion between the two candidate blocks in the reference picture list L0 and list L1. The SAD between the blocks based on each MV candidate around the initial MV is calculated. The MV candidate with the lowest SAD becomes the refined MV and is used to generate the bi-predicted signal.
DMVR can be applied to the CUs or sub-blocks that are coded with the following modes and features (a sketch of such an applicability check follows the list):
CU level or sub-block level merge mode with bi-prediction MV
One reference picture is in the past and another reference picture is in the future with respect to the current picture
The distances (i.e. POC differences) from both reference pictures to the current picture are the same
CU or sub-block has more than 64 luma samples
Both CU or sub-block height and CU or sub-block width are larger than or equal to 8 luma samples
BCW weight index indicates equal weight
WP is not enabled for the current block
CIIP mode is not used for the current block
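The following Python sketch gathers the listed conditions into a single check; the attribute names on the block object are illustrative assumptions, not fields of any particular codec implementation.

def dmvr_applicable(blk):
    # Returns True when all of the DMVR enabling conditions listed above hold
    # for one CU or sub-block coded in merge mode with bi-prediction.
    poc_cur, poc0, poc1 = blk.poc_current, blk.poc_ref_l0, blk.poc_ref_l1
    return (blk.is_merge_mode and blk.is_bi_prediction
            and (poc0 - poc_cur) * (poc1 - poc_cur) < 0        # one reference in the past, one in the future
            and abs(poc_cur - poc0) == abs(poc_cur - poc1)     # equal POC distances to the current picture
            and blk.width * blk.height > 64                    # more than 64 luma samples
            and blk.width >= 8 and blk.height >= 8             # width and height at least 8 luma samples
            and blk.bcw_equal_weight                           # BCW weight index indicates equal weight
            and not blk.weighted_prediction_enabled            # WP is not enabled for the block
            and not blk.ciip_mode)                             # CIIP mode is not used for the block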
The refined MV derived by the DMVR process is used to generate the inter prediction samples and is also used in temporal motion vector prediction for future picture coding, while the original (initial) MV is used in the deblocking process and in spatial motion vector prediction for future CU or sub-block coding.
The additional features of DMVR are mentioned in the following sub-clauses.
Searching scheme
In DMVR, the search points surround the initial MV, and the MV offsets obey the MV difference mirroring rule. In other words, any points that are checked by DMVR, denoted by a candidate MV pair (MV0, MV1), obey the following two equations:
MV0'=MV0+MV_offset    (3-31)
MV1'=MV1-MV_offset     (3-32)
where MV_offset represents the refinement offset between the initial MV and the refined MV in one of the reference pictures. The refinement search range is two integer luma samples from the initial MV. The search includes an integer sample offset search stage and a fractional sample refinement stage.
A 25-point full search is applied for integer sample offset searching. The SAD of the initial MV pair is first calculated. If the SAD of the initial MV pair is smaller than a threshold, the integer sample stage of DMVR is terminated. Otherwise, the SADs of the remaining 24 points are calculated and checked in raster scanning order. The point with the smallest SAD is selected as the output of the integer sample offset searching stage. To reduce the penalty of the uncertainty of DMVR refinement, the original MV is favored during the DMVR process: the SAD between the reference blocks referred to by the initial MV candidates is decreased by 1/4 of the SAD value.
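A minimal Python sketch of the integer sample offset search, combining the mirroring rule of equations (3-31)/(3-32) with the 25-point SAD search and the 1/4 SAD bias toward the initial MV; cost_fn is an assumed callable that returns the integer SAD between the two reference blocks addressed by a candidate MV pair.

def dmvr_integer_search(mv0, mv1, cost_fn, early_exit_threshold):
    # Evaluate the initial MV pair first; its SAD is reduced by 1/4 to favor the original MV.
    best_offset = (0, 0)
    best_cost = cost_fn(mv0, mv1)
    best_cost -= best_cost // 4
    if best_cost < early_exit_threshold:
        return best_offset, best_cost                # integer stage terminated early
    for dy in range(-2, 3):                          # raster scanning order over the
        for dx in range(-2, 3):                      # remaining 24 integer offsets
            if (dx, dy) == (0, 0):
                continue
            cand0 = (mv0[0] + dx, mv0[1] + dy)       # MV0' = MV0 + MV_offset
            cand1 = (mv1[0] - dx, mv1[1] - dy)       # MV1' = MV1 - MV_offset (mirrored)
            cost = cost_fn(cand0, cand1)
            if cost < best_cost:
                best_cost, best_offset = cost, (dx, dy)
    return best_offset, best_cost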
The integer sample search is followed by fractional sample refinement. To reduce the computational complexity, the fractional sample refinement is derived by using a parametric error surface equation, instead of an additional search with SAD comparison. The fractional sample refinement is conditionally invoked based on the output of the integer sample search stage. When the integer sample search stage is terminated with the center having the smallest SAD in either the first iteration or the second iteration search, the fractional sample refinement is further applied.
In parametric error surface based sub-pixel offsets estimation, the center position cost and the costs at four neighboring positions from the center are used to fit a 2-D parabolic error surface equation of the following form
E(x, y) = A(x – x_min)^2 + B(y – y_min)^2 + C    (3-33)
where (x_min, y_min) corresponds to the fractional position with the least cost and C corresponds to the minimum cost value. By solving the above equations by using the cost value of the five search points, the (x_min, y_min) is computed as:
x_min = (E(–1, 0) – E(1, 0)) / (2(E(–1, 0) + E(1, 0) – 2E(0, 0)))    (3-34)
y_min = (E(0, –1) – E(0, 1)) / (2(E(0, –1) + E(0, 1) – 2E(0, 0)))    (3-35)
The values of x_min and y_min are automatically constrained to be between –8 and 8, since all cost values are positive and the smallest value is E(0, 0). This corresponds to a half-pel offset with 1/16th-pel MV accuracy (in VTM6). The computed fractional (x_min, y_min) is added to the integer distance refinement MV to get the sub-pixel accurate refinement delta MV.
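A minimal Python sketch of the fractional refinement from equations (3-33) to (3-35); costs is an assumed mapping from the five integer offsets around the best integer position to their SAD values, and the result is expressed both in pel units and in 1/16-pel units.

def parametric_subpel_offset(costs):
    # costs maps (x, y) integer offsets around the best integer position to E(x, y).
    e = costs
    denom_x = 2 * (e[(-1, 0)] + e[(1, 0)] - 2 * e[(0, 0)])
    denom_y = 2 * (e[(0, -1)] + e[(0, 1)] - 2 * e[(0, 0)])
    x_min = (e[(-1, 0)] - e[(1, 0)]) / denom_x if denom_x else 0.0   # equation (3-34)
    y_min = (e[(0, -1)] - e[(0, 1)]) / denom_y if denom_y else 0.0   # equation (3-35)
    # With positive costs and E(0, 0) smallest, the offsets stay within half a pel,
    # i.e. within [-8, 8] at 1/16-pel accuracy.
    x_min = max(-0.5, min(0.5, x_min))
    y_min = max(-0.5, min(0.5, y_min))
    return (x_min, y_min), (round(16 * x_min), round(16 * y_min))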
Bilinear-interpolation and sample padding
In VVC, the resolution of the MVs is 1/16 luma sample. The samples at fractional positions are interpolated using an 8-tap interpolation filter. In DMVR, the search points surround the initial fractional-pel MV with integer sample offsets; therefore, the samples at those fractional positions need to be interpolated for the DMVR search process. To reduce the calculation complexity, a bi-linear interpolation filter is used to generate the fractional samples for the searching process in DMVR. Another important effect of using the bi-linear filter is that, with the 2-sample search range, DMVR does not access more reference samples compared to the normal motion compensation process. After the refined MV is attained with the DMVR search process, the normal 8-tap interpolation filter is applied to generate the final prediction. In order not to access more reference samples than the normal MC process, the samples that are not needed for the interpolation process based on the original MV but are needed for the interpolation process based on the refined MV are padded from the available samples.
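A minimal Python sketch of 2-tap bilinear interpolation of one fractional-position luma sample for the DMVR search stage; the reference picture is assumed to be a 2-D list of samples, and boundary padding is not shown.

def bilinear_sample(ref, x, y):
    # x and y are fractional luma positions (e.g. 1/16-pel coordinates divided by 16).
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = ref[y0][x0] * (1 - fx) + ref[y0][x0 + 1] * fx               # horizontal filtering, top row
    bottom = ref[y0 + 1][x0] * (1 - fx) + ref[y0 + 1][x0 + 1] * fx    # horizontal filtering, bottom row
    return top * (1 - fy) + bottom * fy                               # vertical filtering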
Maximum DMVR processing unit
When the width and/or height of a CU or sub-block is larger than 16 luma samples, it will be further split into sub-blocks with width and/or height equal to 16 luma samples. The maximum unit size for the DMVR searching process is limited to 16×16.
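A minimal Python sketch of splitting a CU or sub-block into DMVR processing units of at most 16×16 luma samples; the function name is illustrative.

def dmvr_processing_units(width, height, max_size=16):
    # Tile the block into units whose width and height do not exceed max_size luma samples.
    units = []
    for y in range(0, height, max_size):
        for x in range(0, width, max_size):
            units.append((x, y, min(max_size, width - x), min(max_size, height - y)))
    return units

# A 32x64 CU is processed as eight 16x16 DMVR units.
assert len(dmvr_processing_units(32, 64)) == 8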
In one embodiment, the obtaining initial motion information of at least two sub-blocks of a current picture block includes: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block. In one embodiment, the obtaining initial motion information of at least two sub-blocks of a current picture block is performed when a size of the current picture block is greater than a preset size; and the method further includes: when the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block. In another embodiment, the preset size is 32×32.
In one embodiment, the determining motion information of the current picture block based on the motion information of the at least two sub-blocks includes: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block. In one embodiment, a clip operation or a rounding operation is performed in a process of determining the mean. In one embodiment, if a size of the current picture block is greater than a preset size, the method further splits the current picture block into at least two sub-blocks, where the size of each sub-block is smaller than or equal to the preset size.
Forward prediction means that a reference picture is selected for a current coding block from a forward reference-picture set to obtain a reference block. Backward prediction  means that a reference picture is selected for a current coding block from a backward reference-picture set to obtain a reference block. Bidirectional prediction means that a reference picture is selected from each of a forward reference-picture set and a backward reference-picture set to obtain a reference block. When the bidirectional prediction method is used, there are two reference blocks for a current coding block. Each reference block needs to be indicated by a motion vector and a reference frame index, and a predicted value of a pixel value of a pixel in the current block is determined based on pixel values of pixels in the two reference blocks.
During a bidirectional prediction operation, to predict a block area, in one embodiment, two prediction blocks formed by using an MV in a list 0 and an MV in a list 1 respectively are combined to generate a single prediction signal. In a decoder-side motion vector refinement (DMVR) method, two bidirectional prediction motion vectors are further refined in a bilateral-template matching process. Bilateral-template matching is performed on a decoder, to perform distortion-based search between a bilateral template and a reconstructed sample in a reference picture, so as to obtain a refined MV with no need to send additional motion information.
In DMVR, a bilateral template is generated from the initial MV 0 in the list 0 and MV 1 in the list 1 separately to serve as a weighted combination (namely, a mean) of the two prediction blocks. The template matching operation includes cost measurement of a template generated through calculation and cost measurement of a sample area (surrounding an initial prediction block) in a reference picture. For each of the two reference pictures, an MV that causes minimum template costs is considered as an updated MV in the list to replace an original MV. In current development, each list is searched for nine candidate MVs. The nine candidate MVs include an original MV and eight surrounding MVs that are offset from the original MV by one luma sample in the horizontal direction, the vertical direction, or both. Finally, two new MVs, namely, MV 0' and MV 1', are used to generate a final bidirectional prediction result. The sum of absolute differences (SAD) is used for cost measurement.
DMVR can be applied to a merge mode of bidirectional prediction in which one MV comes from a past reference picture and the other MV comes from a future reference picture, with no need to send additional syntax elements.
In one embodiment, a DMVR process may be: obtaining initial motion information of a current picture block; determining positions of N forward reference blocks and positions of N backward reference blocks based on the initial motion information and a position of the current picture block, where the N forward reference blocks are in a forward reference picture, the N backward reference blocks are in a backward reference picture, and N is an integer greater than 1; based on a matching cost criterion, determining positions of a pair of reference blocks from positions of M pairs of reference blocks as a position of a target forward reference block of the current picture block and a position of a target backward reference block of the current picture block, where positions of each pair of reference blocks include a position of one forward reference block and a position of one backward reference block, and for the positions of each pair of reference blocks, a mirror relationship is formed between a first position offset and a second position offset, where the first position offset indicates a position offset of the position of the forward reference block relative to a position of an initial forward reference block, the second position offset indicates a position offset of the position of the backward reference block relative to a position of an initial backward reference block, M is an integer greater than or equal to 1, and M is less than or equal to N; and obtaining a predicted value of a pixel value of the current picture block based on a pixel value of the target forward reference block and a pixel value of the target backward reference block. The positions of the N forward reference blocks in the forward reference picture and the positions of the N backward reference blocks in the backward reference picture form positions of N pairs of reference blocks. For the positions of each pair of reference blocks in the positions of N pairs of reference blocks, the mirror relationship is formed between the first position offset of the position of the forward reference block relative to the position of the initial forward reference block, and the second position offset of the position of the backward reference block relative to the position of the initial backward reference block. On such a basis, positions of a pair of reference blocks (for example, a pair of reference blocks with minimum matching costs) are determined from the positions of the N pairs of reference blocks as a position of a target forward reference block (that is, an optimal forward reference block/forward prediction block) of the current picture block and a position of a target  backward reference block (that is, an optimal backward reference block/backward prediction block) to obtain the predicted value of the pixel value of the current picture block based on the pixel value of the target forward reference block and the pixel value of the target backward reference block. Compared with the former DMVR technology, this method avoids a calculation process of calculating a template matching block in advance and avoids a forward search and matching process and a backward search and matching process that are performed by using a template matching block separately, thereby simplifying an image prediction process.
For hardware implementation of inter prediction, a cache memory is a key component. A cache size of a current video codec chip is 2×32×32. However, a current DMVR may be applied to a coding block with a maximum size of 128×128. That is, in a comparison between two blocks, a 2×128×128 cache size is required, so hardware implementation costs are excessively high. In this case, when a size of a coding block is greater than 32×32, in one embodiment, the coding block is divided into sub-blocks, and DMVR processing is performed on the sub-blocks separately, to obtain MV information of each sub-block. In one embodiment, a mean MV of the MV information of all the sub-blocks is obtained and used as the motion vector information of the coding block (an operation such as a clip operation or a rounding operation may be performed in the process of obtaining the mean). The mean processing resolves problems such as blocking artifacts caused by de-blocking that may occur due to different MVs of the sub-blocks, and inconsistency with H.266.
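The following Python sketch illustrates, under stated assumptions, the flow described in this embodiment: when the coding block exceeds the preset size (for example 32×32), it is divided into sub-blocks, DMVR is run on each sub-block (refine_subblock_mv is a placeholder for that search, not an existing function), and the rounded, clipped mean of the sub-block MVs is used as the motion vector information of the coding block.

def block_motion_info(block_w, block_h, initial_mv, refine_subblock_mv,
                      preset=32, bit_depth=16):
    # Blocks not larger than the preset size are refined directly with the initial motion information.
    if block_w <= preset and block_h <= preset:
        return refine_subblock_mv(0, 0, block_w, block_h, initial_mv)
    # Larger blocks: split into sub-blocks no larger than the preset size and refine each,
    # with every sub-block starting from the block's initial motion information.
    sub_mvs = []
    for y in range(0, block_h, preset):
        for x in range(0, block_w, preset):
            w, h = min(preset, block_w - x), min(preset, block_h - y)
            sub_mvs.append(refine_subblock_mv(x, y, w, h, initial_mv))
    # Rounded mean of the sub-block MVs, clipped to the allowed motion vector bit depth.
    n = len(sub_mvs)
    mean_x = (sum(mv[0] for mv in sub_mvs) + n // 2) // n
    mean_y = (sum(mv[1] for mv in sub_mvs) + n // 2) // n
    lo, hi = -(1 << (bit_depth - 1)), (1 << (bit_depth - 1)) - 1
    return max(lo, min(hi, mean_x)), max(lo, min(hi, mean_y))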
Based on an inventive idea that is the same as that of the foregoing method, an embodiment further provides an inter prediction apparatus. The inter prediction apparatus includes a motion information determining unit and a prediction block determining unit. It should be noted that the motion information determining unit and the prediction block determining unit may be applied to an inter prediction process at an encoder side or a decoder side. Specifically, at the encoder side, the units can be applied to the inter prediction unit 244 in the prediction processing unit 260 of the encoder 20. At the decoder side, the units can be applied to the inter prediction unit 344 in the prediction processing unit 360 of the decoder 30. The motion information determining unit and the prediction block determining unit can be implemented by hardware, software, or any combination thereof.
In one embodiment, the motion information determining unit is configured to: obtain initial motion information of at least two sub-blocks of a current picture block; determine motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks; and determine motion information of the current picture block based on the motion information of the at least two sub-blocks. The prediction block determining unit is configured to determine a prediction block of the current picture block based on the motion information of the current picture block.
It should be further noted that for specific content of the motion information determining unit and the prediction block determining unit, refer to descriptions of the foregoing content including the Summary. For conciseness of this specification, details are not described herein again.
The following is an explanation of applications of the encoding method and the decoding method shown in the above-mentioned embodiments, and of a system using them.
FIG. 7 is a block diagram showing a content supply system 3100 for realizing a content distribution service. This content supply system 3100 includes a capture device 3102 and a terminal device 3106, and optionally includes a display 3126. The capture device 3102 communicates with the terminal device 3106 over a communication link 3104. The communication link may include the communication channel 13 described above. The communication link 3104 includes, but is not limited to, Wi-Fi, Ethernet, cable, wireless (3G/4G/5G), USB, any kind of combination thereof, or the like.
The capture device 3102 generates data, and may encode the data by the encoding method as shown in the above embodiments. Alternatively, the capture device 3102 may distribute the data to a streaming server (not shown in the figures), and the server encodes the data and transmits the encoded data to the terminal device 3106. The capture device 3102 includes, but is not limited to, a camera, a smartphone or pad, a computer or laptop, a video conference system, a PDA, a vehicle-mounted device, a combination of any of them, or the like. For example, the capture device 3102 may include the source device 12 as described above. When the data includes video, the video encoder 20 included in the capture device 3102 may actually perform video encoding processing. When the data includes audio (i.e., voice), an audio encoder included in the capture device 3102 may actually perform audio encoding processing. For some practical scenarios, the capture device 3102 distributes the encoded video and audio data by multiplexing them together. For other practical scenarios, for example in the video conference system, the encoded audio data and the encoded video data are not multiplexed. The capture device 3102 distributes the encoded audio data and the encoded video data to the terminal device 3106 separately.
In the content supply system 3100, the terminal device 3106 receives and reproduces the encoded data. The terminal device 3106 could be a device with data receiving and recovering capability, such as a smart phone or pad 3108, a computer or laptop 3110, a network video recorder (NVR) /digital video recorder (DVR) 3112, a TV 3114, a set top box (STB) 3116, a video conference system 3118, a video surveillance system 3120, a personal digital assistant (PDA) 3122, a vehicle mounted device 3124, a combination of any of them, or the like capable of decoding the above-mentioned encoded data. For example, the terminal device 3106 may include the destination device 14 as described above. When the encoded data includes video, the video decoder 30 included in the terminal device is prioritized to perform video decoding. When the encoded data includes audio, an audio decoder included in the terminal device is prioritized to perform audio decoding processing.
For a terminal device with its own display, for example, the smart phone or pad 3108, the computer or laptop 3110, the network video recorder (NVR) /digital video recorder (DVR) 3112, the TV 3114, the personal digital assistant (PDA) 3122, or the vehicle mounted device 3124, the terminal device can feed the decoded data to its display. For a terminal device equipped with no display, for example, the STB 3116, the video conference system 3118, or the video surveillance system 3120, an external display 3126 is connected to it to receive and show the decoded data.
When each device in this system performs encoding or decoding, the picture encoding device or the picture decoding device, as shown in the above-mentioned embodiments, can be used.
FIG. 8 is a diagram showing a structure of an example of the terminal device 3106. After the terminal device 3106 receives a stream from the capture device 3102, the protocol proceeding unit 3202 analyzes the transmission protocol of the stream. The protocol includes, but is not limited to, Real Time Streaming Protocol (RTSP), Hyper Text Transfer Protocol (HTTP), HTTP Live streaming protocol (HLS), MPEG-DASH, Real-time Transport protocol (RTP), Real Time Messaging Protocol (RTMP), any kind of combination thereof, or the like.
After the protocol proceeding unit 3202 processes the stream, a stream file is generated. The file is output to a demultiplexing unit 3204. The demultiplexing unit 3204 can separate the multiplexed data into the encoded audio data and the encoded video data. As described above, for some practical scenarios, for example in the video conference system, the encoded audio data and the encoded video data are not multiplexed. In this situation, the encoded data is transmitted to the video decoder 3206 and the audio decoder 3208 without passing through the demultiplexing unit 3204.
Via the demultiplexing processing, a video elementary stream (ES), an audio ES, and optionally a subtitle ES are generated. The video decoder 3206, which includes the video decoder 30 as explained in the above-mentioned embodiments, decodes the video ES by the decoding method as shown in the above-mentioned embodiments to generate video frames, and feeds this data to the synchronous unit 3212. The audio decoder 3208 decodes the audio ES to generate audio frames, and feeds this data to the synchronous unit 3212. Alternatively, the video frames may be stored in a buffer (not shown in FIG. 8) before being fed to the synchronous unit 3212. Similarly, the audio frames may be stored in a buffer (not shown in FIG. 8) before being fed to the synchronous unit 3212.
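As an illustration only, the following minimal sketch shows how such a demultiplexing step might route a multiplexed stream into video, audio, and subtitle elementary streams. The packet layout and the names MuxedPacket, ElementaryStreams, and demultiplex are simplified assumptions made for this example and are not taken from the embodiments above; a real container format such as MPEG-TS is considerably more involved.

```cpp
// Illustrative only: a simplified demultiplexer that routes packets to video,
// audio, or subtitle elementary streams (ES) based on a stream identifier.
#include <cstdint>
#include <vector>

struct MuxedPacket {
    enum class Kind { Video, Audio, Subtitle } kind;  // stream identifier
    std::vector<uint8_t> payload;                     // encoded bytes
};

struct ElementaryStreams {
    std::vector<uint8_t> videoEs;     // fed to the video decoder
    std::vector<uint8_t> audioEs;     // fed to the audio decoder
    std::vector<uint8_t> subtitleEs;  // fed to the subtitle decoder, if present
};

ElementaryStreams demultiplex(const std::vector<MuxedPacket>& packets) {
    ElementaryStreams out;
    for (const auto& p : packets) {
        std::vector<uint8_t>& dst =
            (p.kind == MuxedPacket::Kind::Video) ? out.videoEs :
            (p.kind == MuxedPacket::Kind::Audio) ? out.audioEs : out.subtitleEs;
        dst.insert(dst.end(), p.payload.begin(), p.payload.end());
    }
    return out;
}
```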
The synchronous unit 3212 synchronizes the video frame and the audio frame, and supplies the video/audio to a video/audio display 3214. For example, the synchronous unit 3212 synchronizes the presentation of the video and audio information. Information may be coded in the syntax using time stamps concerning the presentation of coded audio and visual data and time stamps concerning the delivery of the data stream itself.
If a subtitle is included in the stream, the subtitle decoder 3210 decodes the subtitle, synchronizes it with the video frame and the audio frame, and supplies the video/audio/subtitle to a video/audio/subtitle display 3216.
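As an illustration only, a minimal sketch of time-stamp-based synchronization, in the spirit of the synchronous unit 3212, is given below. The type and function names, the 90 kHz tick unit, the tolerance value, and the choice of the audio clock as the master clock are assumptions made solely for this example.

```cpp
// Illustrative only: time-stamp-driven decision logic for keeping video in
// step with audio. The audio clock is assumed to act as the master clock.
#include <cstdint>

struct DecodedFrame {
    int64_t pts;  // presentation time stamp in 90 kHz ticks (assumed unit)
    // decoded pixel or sample data would accompany the time stamp
};

enum class SyncAction { Present, Wait, Drop };

SyncAction syncVideoToAudio(const DecodedFrame& video, int64_t audioClockPts,
                            int64_t toleranceTicks = 900 /* roughly 10 ms */) {
    if (video.pts > audioClockPts + toleranceTicks) return SyncAction::Wait;  // video is early
    if (video.pts < audioClockPts - toleranceTicks) return SyncAction::Drop;  // video is late
    return SyncAction::Present;                                               // within tolerance
}
```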
The present invention is not limited to the above-mentioned system, and either the picture encoding device or the picture decoding device in the above-mentioned embodiments can be incorporated into another system, for example, a car system.
A person skilled in the art can understand that the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium including any medium that facilitates transfer of a computer program from one place to another, for example, according to a communications protocol. In this manner, the computer-readable medium generally may correspond to (1) a tangible computer-readable storage medium that is non-transitory or (2) a communication medium such as a signal or a carrier wave. The data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include the computer-readable medium.
By way of example rather than limitation, such computer-readable storage media may include a RAM, a ROM, an EEPROM, a CD-ROM or another compact disc storage, a magnetic disk storage or another magnetic storage device, a flash memory, or any other medium that can be used to store desired program code in a form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, a server, or another remote source by using a coaxial cable, a fiber optic cable, a twisted pair, a digital subscriber line (DSL) , or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in a definition of medium. It should be understood, however, that the computer-readable storage medium and data storage medium do not include connections, carrier waves, signals, or other transitory media, but are non-transitory tangible storage media. Disks and discs, as used herein, include a compact disc (CD) , a laser disc, an optical disc, a digital versatile disc (DVD) , a floppy disk, and a Blu-ray  disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the foregoing should also be included within the scope of the computer-readable medium.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs) , general purpose microprocessors, application-specific integrated circuits (ASICs) , field-programmable gate arrays (FPGAs) , or other equivalent integrated or discrete logic circuits. Accordingly, the term "processor" used herein may refer to any of the foregoing structures or any other structures suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. Further, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) , or a set of ICs (for example, a chip set) . Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require implementation by different hardware units. Actually, as described above, various units may be combined, in combination with suitable software and/or firmware, into a codec hardware unit, or be provided by interoperative hardware units (including one or more processors described above) .
In the foregoing embodiments, the descriptions of each embodiment have respective focuses. For a part that is not described in detail in an embodiment, refer to related descriptions in other embodiments.
The foregoing descriptions are merely examples of specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (20)

  1. An inter prediction method, wherein the method comprises:
    obtaining initial motion information of at least two sub-blocks of a current picture block;
    determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks;
    determining motion information of the current picture block based on the motion information of the at least two sub-blocks; and
    determining a prediction block of the current picture block based on the motion information of the current picture block.
  2. The method according to claim 1, wherein the current picture block consists of the at least two sub-blocks.
  3. The method according to claim 1 or 2, wherein the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks comprises: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks using a decoder-side motion vector refinement (DMVR) method.
  4. The method according to any one of claims 1 to 3, wherein the obtaining initial motion information of at least two sub-blocks of a current picture block comprises: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
  5. The method according to any one of claims 1 to 4, wherein the obtaining initial motion information of at least two sub-blocks of a current picture block is performed when a  size of the current picture block is greater than a preset size; and
    the method further comprises: when the size of the current picture block is not greater than the preset size, obtaining the initial motion information of the current picture block; determining the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and determining the prediction block of the current picture block based on the motion information of the current picture block.
  6. The method according to claim 5, wherein the preset size is 32 x 32.
  7. The method according to any one of claims 1 to 6, wherein the determining motion information of the current picture block based on the motion information of the at least two sub-blocks comprises: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
  8. The method according to claim 7, wherein a clip operation or a rounding operation is performed in a process of determining the mean.
  9. The method according to claim 1, further comprising: on condition that a size of the current picture block is greater than a preset size, splitting the current picture block into at least two sub-blocks, wherein the size of each sub-block is smaller than or equal to the preset size.
  10. An inter prediction apparatus comprising:
    a processor;
    a motion information determining unit coupled to the processor and configured to:
    obtain initial motion information of at least two sub-blocks of a current picture block;
    determine motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks; and
    determine motion information of the current picture block based on the motion information of the at least two sub-blocks;
    and a prediction block determining unit coupled to the processor and configured to: determine a prediction block of the current picture block based on the motion information of the current picture block.
  11. The apparatus according to claim 10, wherein the current picture block consists of the at least two sub-blocks.
  12. The apparatus according to claim 10, wherein the determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks comprises: determining the motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and the positions of the at least two sub-blocks using a decoder-side motion vector refinement (DMVR) method.
  13. The apparatus according to claim 10, wherein the obtaining initial motion information of at least two sub-blocks of a current picture block comprises: obtaining initial motion information of the current picture block, and using the initial motion information of the current picture block as the initial motion information of the at least two sub-blocks of the current picture block.
  14. The apparatus according to claim 10, wherein the obtaining initial motion information of at least two sub-blocks of a current picture block is performed only when a size of the current picture block is greater than a preset size; and
    the motion information determining unit is further configured to: when the size of the current picture block is not greater than the preset size, obtain the initial motion information of the current picture block; determine the motion information of the current picture block based on the initial motion information of the current picture block and a position of the current picture block; and
    the prediction block determining unit is further configured to determine the prediction block of the current picture block based on the motion information of the current picture block.
  15. The apparatus according to claim 14, wherein the preset size is 32 x 32.
  16. The apparatus according to claim 10, wherein the determining motion information of the current picture block based on the motion information of the at least two sub-blocks comprises: determining a mean of the motion information of the at least two sub-blocks as the motion information of the current picture block.
  17. The apparatus according to claim 16, wherein a clip operation or a rounding operation is performed in a process of determining the mean.
  18. The apparatus according to claim 10, wherein the motion information determining unit is further configured to: if a size of the current picture block is greater than a preset size, split the current picture block into at least two sub-blocks, each smaller than or equal to the preset size.
  19. A non-transitory machine-readable storage medium including instructions which, when executed by one or more processors, cause the one or more processors to perform a method, the method comprising:
    obtaining initial motion information of at least two sub-blocks of a current picture block;
    determining motion information of the at least two sub-blocks based on the initial motion information of the at least two sub-blocks and positions of the at least two sub-blocks;
    determining motion information of the current picture block based on the motion information of the at least two sub-blocks; and
    determining a prediction block of the current picture block based on the motion information of the current picture block.
  20. The non-transitory machine-readable storage medium according to claim 19, wherein the current picture block consists of the at least two sub-blocks.
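For illustration only and without limiting the claims, the following sketch outlines one possible realization of the method of claims 1 to 9, assuming that the motion information is a two-dimensional motion vector. The helper names refineWithDmvr and motionCompensate are hypothetical placeholders standing in for decoder-side motion vector refinement and motion-compensated prediction.

```cpp
// Illustrative, non-normative sketch of the claimed inter prediction method.
#include <cmath>
#include <cstdint>
#include <vector>

struct MotionVector { int32_t x; int32_t y; };

constexpr int kPresetSize = 32;  // preset size of claims 5 and 6: 32 x 32

// Placeholder for decoder-side motion vector refinement (claim 3); a real
// implementation would search around the initial vector using a matching cost.
MotionVector refineWithDmvr(const MotionVector& initial, int /*posX*/, int /*posY*/) {
    return initial;  // stub: returns the initial motion information unchanged
}

// Placeholder for motion-compensated prediction of a block.
void motionCompensate(const MotionVector& /*mv*/, int /*x*/, int /*y*/,
                      int /*width*/, int /*height*/) {}

void interPredict(int blockX, int blockY, int blockW, int blockH,
                  const MotionVector& initialMv) {
    if (blockW <= kPresetSize && blockH <= kPresetSize) {
        // Claim 5: a block not greater than the preset size is handled as a whole.
        motionCompensate(refineWithDmvr(initialMv, blockX, blockY),
                         blockX, blockY, blockW, blockH);
        return;
    }

    // Claim 9: split into sub-blocks no larger than the preset size; each
    // sub-block inherits the block's initial motion information (claim 4)
    // and is refined based on its own position (claims 1 to 3).
    std::vector<MotionVector> subMvs;
    for (int y = blockY; y < blockY + blockH; y += kPresetSize)
        for (int x = blockX; x < blockX + blockW; x += kPresetSize)
            subMvs.push_back(refineWithDmvr(initialMv, x, y));

    // Claim 7: the block's motion information is the mean of the sub-block
    // motion information; claim 8: a rounding operation is applied to the mean.
    int64_t sumX = 0, sumY = 0;
    for (const auto& mv : subMvs) { sumX += mv.x; sumY += mv.y; }
    const double n = static_cast<double>(subMvs.size());
    MotionVector blockMv{static_cast<int32_t>(std::lround(sumX / n)),
                         static_cast<int32_t>(std::lround(sumY / n))};

    // Claim 1: the prediction block is determined from the block's motion information.
    motionCompensate(blockMv, blockX, blockY, blockW, blockH);
}
```

In this sketch a rounding operation is used when forming the mean; a clip operation, as also permitted by claim 8, could be applied to blockMv in the same place.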
PCT/CN2019/110194 2018-10-09 2019-10-09 Inter prediction method and apparatus WO2020073928A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862743533P 2018-10-09 2018-10-09
US62/743,533 2018-10-09

Publications (1)

Publication Number Publication Date
WO2020073928A1 true WO2020073928A1 (en) 2020-04-16

Family

ID=70164840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110194 WO2020073928A1 (en) 2018-10-09 2019-10-09 Inter prediction method and apparatus

Country Status (1)

Country Link
WO (1) WO2020073928A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170085913A1 (en) * 2014-04-01 2017-03-23 Mediatek Inc. Method of Adaptive Interpolation Filtering in Video Coding
US20180192071A1 (en) * 2017-01-05 2018-07-05 Mediatek Inc. Decoder-side motion vector restoration for video coding
CN108271023A (en) * 2017-01-04 2018-07-10 华为技术有限公司 Image prediction method and relevant device
CN108271022A (en) * 2016-12-30 2018-07-10 展讯通信(上海)有限公司 A kind of method and device of estimation
US20180270500A1 (en) * 2017-03-14 2018-09-20 Qualcomm Incorporated Affine motion information derivation

Similar Documents

Publication Publication Date Title
US11765343B2 (en) Inter prediction method and apparatus
WO2020253858A1 (en) An encoder, a decoder and corresponding methods
US11895292B2 (en) Encoder, decoder and corresponding methods of boundary strength derivation of deblocking filter
WO2020224545A1 (en) An encoder, a decoder and corresponding methods using an adaptive loop filter
JP7314300B2 (en) Method and apparatus for intra prediction
AU2019386917B2 (en) Encoder, decoder and corresponding methods of most probable mode list construction for blocks with multi-hypothesis prediction
US20210360275A1 (en) Inter prediction method and apparatus
US11889109B2 (en) Optical flow based video inter prediction
US20240121433A1 (en) Method and apparatus for chroma intra prediction in video coding
CN112088534A (en) Inter-frame prediction method and device
WO2020173196A1 (en) An encoder, a decoder and corresponding methods for inter prediction
WO2020073928A1 (en) Inter prediction method and apparatus
US11722668B2 (en) Video encoder, video decoder, and corresponding method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19870143

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19870143

Country of ref document: EP

Kind code of ref document: A1