WO2020228833A1 - Adaptive resolution change in video coding

Adaptive resolution change in video coding

Info

Publication number: WO2020228833A1
Application number: PCT/CN2020/090799
Authority: WO (WIPO (PCT))
Prior art keywords: picture, resolution, samples, video block, current
Other languages: French (fr)
Inventors: Kai Zhang, Li Zhang, Hongbin Liu, Yue Wang
Original assignees: Beijing Bytedance Network Technology Co., Ltd.; Bytedance Inc.
Application filed by: Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority application: CN202080036230.7A (granted as CN113841395B)
Publication: WO2020228833A1

Classifications

    • H04N 19/70: Methods or arrangements for coding or decoding digital video signals, characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/117: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding; filters, e.g. for pre-processing or post-processing
    • H04N 19/136: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding; incoming video signal characteristics or properties
    • H04N 19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/59: Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • This patent document relates to video coding techniques, devices and systems.
  • Devices, systems and methods related to digital video coding, and specifically to bit-depth and color format conversions for video coding, may be applied to both existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards or video codecs.
  • A method for video processing comprises: determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; determining, in response to the relationship, whether and/or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
  • Another method for video processing comprises: determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the video block and the video block based on the specific operation.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • FIG. 1 shows an example of adaptive streaming of two representations of the same content coded at different resolutions.
  • FIG. 2 shows another example of adaptive streaming of two representations of the same content coded at different resolutions, where segments use either closed Group of Pictures (GOP) or open GOP prediction structures.
  • FIG. 3 shows an example of open GOP prediction structures of the two representations.
  • FIG. 4 shows an example of representation switching at an open GOP position.
  • FIG. 5 shows an example of using resampled reference pictures from another bitstream as a reference for decoding Random Access Skipped Leading (RASL) pictures.
  • FIGS. 6A-6C show examples of motion-constrained tile set (MCTS)-based region-wise mixed-resolution (RWMR) viewport-dependent 360° streaming.
  • FIG. 7 shows an example of collocated sub-picture representations of different intra random access point (IRAP) intervals and different sizes.
  • FIG. 8 shows an example of segments received when a viewing orientation change causes a resolution change at the start of a segment.
  • FIG. 9 shows an example of a viewing orientation change.
  • FIG. 10 shows an example of sub-picture representations for two sub-picture locations.
  • FIG. 11 shows an example of encoder modifications for adaptive resolution conversion (ARC) .
  • FIG. 12 shows an example of decoder modifications for ARC.
  • FIG. 13 shows an example of tile group based resampling for ARC.
  • FIG. 14 shows an example of an ARC process.
  • FIG. 15 shows an example of alternative temporal motion vector prediction (ATMVP) for a coding unit.
  • FIGS. 16A-16B show an example of a simplified affine motion model.
  • FIG. 17 shows an example of an affine motion vector field (MVF) per sub-block.
  • FIGS. 18A and 18B show an example of the 4-parameter affine model and the 6-parameter affine model, respectively.
  • FIG. 19 shows an example of a motion vector prediction (MVP) for AF_INTER for inherited affine candidates.
  • FIG. 20 shows an example of an MVP for AF_INTER for constructed affine candidates.
  • FIGS. 21A and 21B show examples of candidates for AF_MERGE.
  • FIG. 22 shows an example of candidate positions for affine merge mode.
  • FIG. 23 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
  • FIG. 24 shows a flowchart of an example method for video processing.
  • FIG. 25 shows a flowchart of another example method for video processing.
  • Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC/H.265) and future standards to improve compression performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
  • Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency.
  • a video codec converts uncompressed video to a compressed format or vice versa.
  • The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
  • Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards.
  • The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards.
  • the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized.
  • The Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015; its reference software is known as the Joint Exploration Model (JEM).
  • AVC and HEVC do not have the ability to change resolution without introducing an IDR or intra random access point (IRAP) picture; such an ability can be referred to as adaptive resolution change (ARC).
  • Rate adaptation in video telephony and conferencing: to adapt the coded video to changing network conditions, when the network condition worsens and the available bandwidth becomes lower, the encoder may adapt by encoding pictures at a smaller resolution.
  • Currently, changing the picture resolution can be done only after an IRAP picture; this has several issues.
  • An IRAP picture at reasonable quality will be much larger than an inter-coded picture and will be correspondingly more complex to decode: this costs time and resources. This is a problem if the resolution change is requested by the decoder for loading reasons. It can also break low-latency buffer conditions, forcing an audio re-sync, and the end-to-end delay of the stream will increase, at least temporarily. This can give a poor user experience.
  • Active speaker changes in multi-party video conferencing: for multi-party video conferencing, it is common that the active speaker is shown at a bigger video size than the video for the rest of the conference participants. When the active speaker changes, the picture resolution for each participant may also need to be adjusted. The need for an ARC feature becomes more important when such changes in the active speaker happen frequently.
  • the Dynamic Adaptive Streaming over HTTP (DASH) specification includes a feature named @mediaStreamStructureId. This enables switching between different representations at open-GOP random access points with non-decodable leading pictures, e.g., CRA pictures with associated RASL pictures in HEVC.
  • Switching between the two representations at a CRA picture with associated RASL pictures can be performed, and the RASL pictures associated with the switch-at CRA pictures can be decoded with acceptable quality, hence enabling seamless switching.
  • the @mediaStreamStructureId feature would also be usable for switching between DASH representations with different spatial resolutions.
  • ARC is also known as dynamic resolution conversion.
  • ARC may also be regarded as a special case of Reference Picture Resampling (RPR), such as H.263 Annex P.
  • This mode describes an algorithm to warp the reference picture prior to its use for prediction. It can be useful for resampling a reference picture having a different source format than the picture being predicted. It can also be used for global motion estimation, or estimation of rotating motion, by warping the shape, size, and location of the reference picture.
  • the syntax includes warping parameters to be used as well as a resampling algorithm.
  • The simplest level of operation for the reference picture resampling mode is an implicit factor-of-4 resampling, as only an FIR filter needs to be applied for the upsampling and downsampling processes. In this case, no additional signaling overhead is required, as its use is understood when the size of a new picture (indicated in the picture header) is different from that of the previous picture.
  • the spatial resolution may differ from the nominal resolution by a factor 0.5, applied to both dimensions.
  • the spatial resolution may increase or decrease, yielding scaling ratios of 0.5 and 2.0.
  • the cropping areas are scaled in proportion to spatial resolutions.
  • the down-sampling points are at even sample positions and are co-sited.
  • the same filter is used for luma and chroma.
  • the combined up- and down-sampling will not change the phase or the position of chroma sampling points.
  • pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples. pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples. pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • num_pic_size_in_luma_samples_minus1 plus 1 specifies the number of picture sizes (width and height) in units of luma samples that may be present in the coded video sequence.
  • pic_width_in_luma_samples [i] specifies the i-th width of decoded pictures in units of luma samples that may be present in the coded video sequence.
  • pic_width_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • pic_height_in_luma_samples [i] specifies the i-th height of decoded pictures in units of luma samples that may be present in the coded video sequence.
  • pic_height_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • pic_size_idx specifies the index to the i-th picture size in the sequence parameter set.
  • the width of pictures that refer to the picture parameter set is pic_width_in_luma_samples [pic_size_idx] in luma samples.
  • the height of pictures that refer to the picture parameter set is pic_height_in_luma_samples [pic_size_idx] in luma samples.
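  • As an illustration of how a decoder might resolve the picture size from this two-level signalling (a size list in the SPS, an index in the PPS), consider the sketch below; the struct layout and function name are hypothetical, not part of the proposal.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical parsed parameter sets; the field names mirror the syntax
 * elements above, but the structures themselves are illustrative. */
typedef struct {
    uint32_t num_pic_sizes;                  /* num_pic_size_in_luma_samples_minus1 + 1 */
    uint32_t pic_width_in_luma_samples[16];  /* each a nonzero multiple of MinCbSizeY */
    uint32_t pic_height_in_luma_samples[16];
} Sps;

typedef struct {
    uint32_t pic_size_idx;                   /* index into the SPS picture-size list */
} Pps;

/* Resolve the decoded picture size for a picture that refers to this PPS. */
static void resolve_pic_size(const Sps *sps, const Pps *pps,
                             uint32_t *width, uint32_t *height)
{
    assert(pps->pic_size_idx < sps->num_pic_sizes);
    *width  = sps->pic_width_in_luma_samples[pps->pic_size_idx];
    *height = sps->pic_height_in_luma_samples[pps->pic_size_idx];
}
```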
  • A sub-picture track is defined as follows in Omnidirectional Media Format (OMAF): a track that has spatial relationships to other track(s) and that represents a spatial subset of the original video content, which has been split into spatial subsets before video encoding at the content production side.
  • a sub-picture track for HEVC can be constructed by rewriting the parameter sets and slice segment headers for a motion-constrained tile set so that it becomes a self-standing HEVC bitstream.
  • a sub-picture Representation can be defined as a DASH Representation that carries a sub-picture track.
  • JVET-M0261 used the term sub-picture as a spatial partitioning unit for VVC, summarized as follows:
  • Pictures are divided into sub-pictures, tile groups and tiles.
  • a sub-picture is a rectangular set of tile groups that starts with a tile group that has tile_group_address equal to 0.
  • Each sub-picture may refer to its own PPS and may hence have its own tile partitioning.
  • Sub-pictures are treated like pictures in the decoding process.
  • the reference pictures for decoding the sub-picture are generated by extracting the area collocating with the current sub-picture from the reference pictures in the decoded picture buffer.
  • the extracted area shall be a decoded sub-picture, i.e. inter prediction takes place between sub-pictures of the same size and the same location within the picture.
  • a tile group is a sequence of tiles in tile raster scan of a sub-picture.
  • Since a track that encapsulates a sub-picture sequence as defined in JVET-M0261 has very similar properties to a sub-picture track defined in OMAF, the examples given below apply in both cases.
  • Section 5.13 ( "Support for Adaptive Streaming" ) of MPEG N17074 includes the following requirement for VVC:
  • the standard shall support fast representation switching in the case of adaptive streaming services that offer multiple representations of the same content, each having different properties (e.g. spatial resolution or sample bit depth) .
  • the standard shall enable the use of efficient prediction structures (e.g. so-called open groups of pictures) without compromising the fast and seamless representation switching capability between representations of different properties, such as different spatial resolutions.
  • Content generation for adaptive bitrate streaming includes generating different Representations, which can have different spatial resolutions.
  • the client requests Segments from the Representations and can hence decide at which resolution and bitrate the content is received.
  • the Segments of different Representations are concatenated, decoded, and played.
  • the client should be able to achieve seamless playout with one decoder instance. Closed GOP structures (starting with an IDR picture) are conventionally used as illustrated in FIG. 1.
  • Open GOP prediction structures (starting with CRA pictures) reportedly also reduce subjectively visible quality pumping.
  • the Segments starting with a CRA picture contain RASL pictures for which at least one reference picture is in the previous Segment. This is illustrated in FIG. 3, where picture 0 in both bitstreams resides in the previous Segment and is used as reference for predicting the RASL pictures.
  • The Representation switching marked with a dashed rectangle in FIG. 2 is illustrated in FIG. 4. It can be observed that the reference picture ("picture 0") for the RASL pictures has not been decoded. Consequently, the RASL pictures are not decodable and there will be a gap in the playout of the video.
  • RWMR 360° streaming offers an increased effective spatial resolution on the viewport.
  • Schemes where tiles covering the viewport originate from a 6K (6144×3072) ERP picture or an equivalent CMP resolution, illustrated in FIG. 6, with "4K" decoding capacity (HEVC Level 5.1), were included in clauses D.6.3 and D.6.4 of OMAF and also adopted in the VR Industry Forum Guidelines. Such resolutions are asserted to be suitable for head-mounted displays using quad-HD (2560×1440) display panels.
  • Encoding: the content is encoded at two spatial resolutions, with cube face sizes 1536×1536 and 768×768, respectively. In both bitstreams, a 6×4 tile grid is used and a motion-constrained tile set (MCTS) is coded for each tile position.
  • Each MCTS sequence is encapsulated as a sub-picture track and made available as a sub-picture Representation in DASH.
  • Merging MCTSs into a bitstream to be decoded: the received MCTSs of a single time instance are merged into a coded picture of 1920×4608, which conforms to HEVC Level 5.1. Another option for the merged picture is to have four tile columns of width 768, two tile columns of width 384, and three tile rows of height 768 luma samples, resulting in a picture of 3840×2304 luma samples.
  • Sub-picture Representations are merged to coded pictures for decoding, and hence the VCL NAL unit types are aligned in all selected sub-picture Representations.
  • multiple versions of the content can be coded at different IRAP intervals. This is illustrated in FIG. 7 for one set of collocated sub-picture Representations for encoding presented in FIG. 6.
  • FIG. 8 presents an example where the sub-picture location is first selected to be received at the lower resolution (384×384).
  • A change in the viewing orientation causes a new selection of the sub-picture locations to be received at the higher resolution (768×768).
  • the viewing orientation change happens so that Segment 4 is received from the short-IRAP-interval sub-picture Representations. After that, the viewing orientation is stable and thus, the long-IRAP-interval version can be used starting from Segment 5 onwards.
  • FIG. 9 illustrates a viewing orientation change from FIG. 6 slightly upwards and towards the right cube face. Cube face partitions that have a different resolution than earlier are indicated with "C". It can be observed that the resolution changed in 6 out of 24 cube face partitions. However, as discussed above, Segments starting with an IRAP picture need to be received for all 24 cube face partitions in response to the viewing orientation change. Updating all sub-picture locations with Segments starting with an IRAP picture is inefficient in terms of streaming rate-distortion performance.
  • the ability to use open GOP prediction structures with sub-picture Representations of RWMR 360° streaming is desirable to improve rate-distortion performance and to avoid visible picture quality pumping caused by closed GOP prediction structures.
  • the VVC design should allow merging of a sub-picture originating from a random-access picture and another sub-picture originating from a non-random-access picture into the same coded picture conforming to VVC.
  • The VVC design should enable the use of open GOP prediction structures in sub-picture representations without compromising the fast and seamless representation switching capability between sub-picture representations of different properties, such as different spatial resolutions, while enabling merging of sub-picture representations into a single VVC bitstream.
  • The design goals can be illustrated with FIG. 10, in which sub-picture Representations for two sub-picture locations are presented. For both sub-picture locations, a separate version of the content is coded for each combination of two resolutions and two random access intervals. Some of the Segments start with an open GOP prediction structure. A viewing orientation change causes the resolution of sub-picture location 1 to be switched at the start of Segment 4. Since Segment 4 starts with a CRA picture, which is associated with RASL pictures, those reference pictures of the RASL pictures that are in Segment 3 need to be resampled. It is remarked that this resampling applies to sub-picture location 1, while decoded sub-pictures of some other sub-picture locations are not resampled.
  • the viewing orientation change does not cause changes in the resolution of sub-picture location 2 and thus decoded sub-pictures of sub-picture location 2 are not resampled.
  • the Segment for sub-picture location 1 contains a sub-picture originating from a CRA picture
  • the Segment for sub-picture location 2 contains a sub-picture originating from a non-random-access picture. It is suggested that merging of these sub-pictures into a coded picture is allowed in VVC.
  • JCTVC-F158 proposed adaptive resolution change mainly for video conferencing.
  • the following sub-sections are copied from JCTVC-F158 and present the use cases where adaptive resolution change is asserted to be useful.
  • the IDR is typically sent at low quality, using a similar number of bits to a P frame, and it takes a significant time to return to full quality for the given resolution.
  • The quality can be very low indeed and there is often a visible blurring before the image is "refocused".
  • the Intra frame is doing very little useful work in compression terms: it is just a method of re-starting the stream.
  • Video conferences also often have a feature whereby the person speaking is shown full-screen and other participants are shown in smaller resolution windows. To support this efficiently, often the smaller pictures are sent at lower resolution. This resolution is then increased when the participant becomes the speaker and is full-screened. Sending an intra frame at this point causes an unpleasant hiccup in the video stream. This effect can be quite noticeable and unpleasant if speakers alternate rapidly.
  • The following high-level design choices are proposed for VVC version 1:
  • 1. It is proposed to include a reference picture resampling process in VVC version 1 for the following use cases:
  • 2. The VVC design is proposed to allow merging of a sub-picture originating from a random-access picture and another sub-picture originating from a non-random-access picture into the same coded picture conforming to VVC. This is asserted to enable efficient handling of viewing orientation changes in mixed-quality and mixed-resolution viewport-adaptive 360° streaming.
  • 3. It is proposed to include a sub-picture-wise resampling process in VVC version 1. This is asserted to enable an efficient prediction structure for more efficient handling of viewing orientation changes in mixed-resolution viewport-adaptive 360° streaming.
  • Section 5.13 ( "Support for Adaptive Streaming" ) of MPEG N17074 includes the following requirement for VVC:
  • the standard shall support fast representation switching in the case of adaptive streaming services that offer multiple representations of the same content, each having different properties (e.g. spatial resolution or sample bit depth) .
  • the standard shall enable the use of efficient prediction structures (e.g. so-called open groups of pictures) without compromising the fast and seamless representation switching capability between representations of different properties, such as different spatial resolutions.
  • JVET-M0259 discusses how to meet this requirement by resampling of reference pictures of leading pictures.
  • JVET-M0259 discusses how to address this use case by resampling certain independently coded picture regions of reference pictures of leading pictures.
  • sps_max_rpr specifies the maximum number of active reference pictures in reference picture list 0 or 1 for any tile group in the CVS that have pic_width_in_luma_samples and pic_height_in_luma_samples not equal to pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, of the current picture.
  • max_width_in_luma_samples specifies that it is a requirement of bitstream conformance that pic_width_in_luma_samples in any active PPS for any picture of a CVS for which this SPS is active is less than or equal to max_width_in_luma_samples.
  • max_height_in_luma_samples specifies that it is a requirement of bitstream conformance that pic_height_in_luma_samples in any active PPS for any picture of a CVS for which this SPS is active is less than or equal to max_height_in_luma_samples.
  • the decoding process operates as follows for the current picture CurrPic:
  • Variables and functions relating to picture order count are derived as specified in clause 8.3.1. This needs to be invoked only for the first tile group of a picture.
  • The reference picture used as input to the resampling process is marked as "unused for reference".
  • the current decoded picture is marked as "used for short-term reference" .
  • The SHVC resampling process (HEVC clause H.8.1.4.2) is proposed, with certain additions.
  • Adaptive resolution change, as a concept in video compression standards, has been around since at least 1996, in particular in H.263+ related proposals towards reference picture resampling (RPR, Annex P) and reduced resolution update (Annex Q). It has recently gained a certain prominence, first with proposals by Cisco during the JCT-VC time, then in the context of VP9 (where it is moderately widely deployed nowadays), and more recently in the context of VVC.
  • ARC allows reducing the number of samples required to be coded for a given picture, and upsampling the resulting reference picture to a higher resolution when such is desirable.
  • Intra coded pictures such as IDR pictures are often considerably larger than inter pictures. Downsampling pictures intended to be intra coded, regardless of reason, may provide a better input for future prediction. It’s also clearly advantageous from a rate control viewpoint, at least in low delay applications.
  • ARC may become handy even for non-intra coded pictures, such as in scene transitions without a hard transition point.
  • ARC can be implemented as reference picture resampling.
  • Implementing reference picture resampling has two major aspects: the resampling filters, and the signaling of the resampling information in the bitstream. This document focuses on the latter and touches the former only to the extent we have implementation experience. More study of suitable filter design is encouraged.
  • FIGS. 11 and 12 illustrate an existing ARC encoder and decoder implementation, respectively.
  • the input image data is down-sampled to the selected picture size for the current picture encoding.
  • the decoded picture is stored in the decoded picture buffer (DPB) .
  • The reference picture(s) in the DPB is/are up-/down-scaled according to the spatial ratio between the picture size of the reference and the current picture size.
  • the decoded picture is stored in the DPB without resampling.
  • the reference picture in the DPB is up-/down-scaled in relation to the spatial ratio between the currently decoded picture and the reference, when used for motion compensation.
  • the decoded picture is up-sampled to the original picture size or the desired output picture size when bumped out for display.
  • motion vectors are scaled in relation to picture size ratio as well as picture order count difference.
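  • A minimal sketch of such combined scaling is shown below. The 1.14 fixed-point arithmetic and the rounding offsets are assumptions for illustration, not the normative process of any proposal.

```c
#include <stdint.h>

typedef struct { int32_t x, y; } Mv;

/* Scale a motion vector from the grid of a (src_w, src_h) picture onto the
 * current (cur_w, cur_h) picture grid, then by the ratio of POC distances
 * (as in TMVP). The 1.14 fixed-point format mirrors the scale factors used
 * later in this document. */
static Mv scale_mv(Mv mv, int cur_w, int cur_h, int src_w, int src_h,
                   int poc_dist_cur, int poc_dist_src)
{
    int32_t sx = (int32_t)((((int64_t)cur_w << 14) + src_w / 2) / src_w);
    int32_t sy = (int32_t)((((int64_t)cur_h << 14) + src_h / 2) / src_h);
    Mv out;
    out.x = (int32_t)(((int64_t)mv.x * sx + (1 << 13)) >> 14);
    out.y = (int32_t)(((int64_t)mv.y * sy + (1 << 13)) >> 14);
    if (poc_dist_src != 0) {  /* temporal part: ratio of POC distances */
        out.x = (int32_t)((int64_t)out.x * poc_dist_cur / poc_dist_src);
        out.y = (int32_t)((int64_t)out.y * poc_dist_cur / poc_dist_src);
    }
    return out;
}
```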
  • The term "ARC parameters" is used herein for the combination of any parameters required to make ARC work.
  • Different tile groups (TGs) may have different ARC parameters.
  • The appropriate place for ARC parameters would be either the TG header, or a parameter set with the scope of a TG referenced by the TG header (the Adaptation Parameter Set in the current VVC draft), or a more detailed reference (an index) into a table in a higher parameter set.
  • The use of the PPS for the reference is counter-indicated if, as we do, one takes per-tile-group signaling of ARC parameters as a design criterion.
  • Down-sampling per tile group is preferred to allow for picture composition/extraction. However, it is not critical from a signaling viewpoint. If the group were making the unwise decision of allowing ARC only at picture granularity, we could always include a requirement for bitstream conformance that all TGs use the same ARC parameters.
  • max_pic_width_in_luma_samples specifies the maximum width of decoded pictures in units of luma samples in the bitstream.
  • max_pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • the value of dec_pic_width_in_luma_samples [i] cannot be greater than the value of max_pic_width_in_luma_samples.
  • max_pic_height_in_luma_samples specifies the maximum height of decoded pictures in units of luma samples.
  • max_pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • the value of dec_pic_height_in_luma_samples [i] cannot be greater than the value of max_pic_height_in_luma_samples.
  • adaptive_pic_resolution_change_flag equal to 1 specifies that an output picture size (output_pic_width_in_luma_samples, output_pic_height_in_luma_samples), an indication of the number of decoded picture sizes (num_dec_pic_size_in_luma_samples_minus1) and at least one decoded picture size (dec_pic_width_in_luma_samples [i], dec_pic_height_in_luma_samples [i]) are present in the SPS.
  • A reference picture size (reference_pic_width_in_luma_samples, reference_pic_height_in_luma_samples) is present conditioned on the value of reference_pic_size_present_flag.
  • output_pic_width_in_luma_samples specifies the width of the output picture in units of luma samples. output_pic_width_in_luma_samples shall not be equal to 0.
  • output_pic_height_in_luma_samples specifies the height of the output picture in units of luma samples. output_pic_height_in_luma_samples shall not be equal to 0.
  • reference_pic_size_present_flag equal to 1 specifies that reference_pic_width_in_luma_samples and reference_pic_height_in_luma_samples are present.
  • reference_pic_width_in_luma_samples specifies the width of the reference picture in units of luma samples. reference_pic_width_in_luma_samples shall not be equal to 0. When not present, the value of reference_pic_width_in_luma_samples is inferred to be equal to dec_pic_width_in_luma_samples [i].
  • reference_pic_height_in_luma_samples specifies the height of the reference picture in units of luma samples. reference_pic_height_in_luma_samples shall not be equal to 0. When not present, the value of reference_pic_height_in_luma_samples is inferred to be equal to dec_pic_height_in_luma_samples [i].
  • the size of the output picture shall be equal to the values of output_pic_width_in_luma_samples and output_pic_height_in_luma_samples.
  • The size of the reference picture shall be equal to the values of reference_pic_width_in_luma_samples and reference_pic_height_in_luma_samples, when the reference picture is used for motion compensation.
  • num_dec_pic_size_in_luma_samples_minus1 plus 1 specifies the number of decoded picture sizes (dec_pic_width_in_luma_samples [i], dec_pic_height_in_luma_samples [i]) in units of luma samples in the coded video sequence.
  • dec_pic_width_in_luma_samples [i] specifies the i-th width of the decoded picture sizes in units of luma samples in the coded video sequence.
  • dec_pic_width_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • dec_pic_height_in_luma_samples [i] specifies the i-th height of the decoded picture sizes in units of luma samples in the coded video sequence.
  • dec_pic_height_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • the i-th decoded picture size (dec_pic_width_in_luma_samples [i] , dec_pic_height_in_luma_samples [i] ) may be equal to the decoded picture size of the decoded picture in the coded video sequence.
  • dec_pic_size_idx specifies that the width of the decoded picture shall be equal to dec_pic_width_in_luma_samples [dec_pic_size_idx] and the height of the decoded picture shall be equal to dec_pic_height_in_luma_samples [dec_pic_size_idx].
  • The proposed design conceptually includes four different filter sets: a down-sampling filter from the original picture to the input picture, up-/down-sampling filters to rescale reference pictures for motion estimation/compensation, and an up-sampling filter from the decoded picture to the output picture.
  • the first and last ones can be left as non-normative matters.
  • up-/down-sampling filters need to be explicitly signaled in an appropriate parameter set, or pre-defined.
  • The implemented resampling filters were taken from SHVC (SHM ver. 12.4). For down-sampling, a 12-tap 2D separable filter is used.
  • the phase of the down-sampling filter is set equal to zero by default.
  • For up-sampling, 8-tap interpolation filters with 16 phases are used to shift the phase and align the luma and chroma pixel positions to the original positions.
  • Table 3 provides the 12-tap filter coefficients for down-sampling process. The same filter coefficients are used for both luma and chroma for down-sampling.
  • The parameter dec_pic_size_idx can be moved into whatever header starts a sub-picture. Our current feeling is that most likely it will continue to be a tile group header.
  • The picture in FIG. 13 is made up of four sub-pictures (expressed perhaps as four rectangular tile groups in the bitstream syntax). To the left, the bottom-right TG is subsampled to half the size. What do we do with the samples outside the relevant area, marked as "Half"?
  • a list of picture resolutions is signalled in the SPS, and an index to the list is signalled in the PPS to specify the size of an individual picture.
  • the decoded picture before resampling is cropped (as necessary) and outputted, i.e., a resampled picture is not for output, only for inter prediction reference.
  • Both the resampled version and the original, un-resampled version of the reference picture are stored in the DPB, and thus both would affect the DPB fullness.
  • A resampled reference picture is marked as "unused for reference" when the corresponding un-resampled reference picture is marked as "unused for reference".
  • the RPL signalling syntax is kept unchanged, while the RPL construction process is modified as follows: When a reference picture needs to be included into a RPL entry, and a version of that reference picture with the same resolution as the current picture is not in the DPB, the picture resampling process is invoked and the resampled version of that reference picture is included into the RPL entry.
  • the number of resampled reference pictures that may be present in the DPB should be limited, e.g., to be less than or equal to 2.
  • Another option is that, when resampling and quarter-pel interpolation need to be done, the two filters are combined and the operation is applied at once.
  • temporal motion vector scaling is applied as needed.
  • the ARC software was implemented on top of VTM-4.0.1, with the following changes:
  • the spatial resolution signalling was moved from SPS to PPS.
  • a picture-based resampling scheme was implemented for resampling reference pictures. After a picture is decoded, the reconstructed picture may be resampled to a different spatial resolution. The original reconstructed picture and the resampled reconstructed picture are both stored in the DPB and are available for reference by future pictures in decoding order.
  • The up-sampling filter: a 4-tap +/- quarter-phase DCTIF with taps (-4, 54, 16, -2)/64.
  • The down-sampling filter: the h11 filter with taps (1, 0, -3, 0, 10, 16, 10, 0, -3, 0, 1)/32 (a sketch of applying this filter follows this list).
  • The reference picture lists of the current picture (i.e., L0 and L1) may contain reference pictures available in their original sizes, in the resampled sizes, or in both.
  • TMVP and ATMVP may be enabled; however, when the original coding resolutions of the current picture and a reference picture are different, TMVP and ATMVP are disabled for that reference picture.
  • When outputting a picture, the decoder outputs the highest available resolution.
  • the decoded picture before resampling is cropped (as necessary) and outputted, i.e., a resampled picture is not for output, only for inter prediction reference.
  • the ARC resampling filters should be designed to optimize the use of the resampled pictures for inter prediction, and such filters may not be optimal for picture outputting/displaying purpose, while video terminal devices usually have optimized output zooming/scaling functionalities already implemented.
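  • The sketch below shows one horizontal pass of 2:1 down-sampling with the h11 filter listed above, assuming 8-bit samples, co-sited even-position sampling points, and edge clamping; the border policy and the separate vertical pass are left as assumptions.

```c
#include <stdint.h>

/* 11-tap "h11" down-sampling filter from the list above; taps sum to 32. */
static const int H11[11] = { 1, 0, -3, 0, 10, 16, 10, 0, -3, 0, 1 };

/* One horizontal pass of 2:1 down-sampling. A full implementation would add
 * the matching vertical pass and a real border-extension policy. */
static void downsample_row_2to1(const uint8_t *src, int src_w, uint8_t *dst)
{
    for (int xo = 0; xo < src_w / 2; xo++) {
        int center = 2 * xo;  /* co-sited, even-position sampling point */
        int acc = 0;
        for (int k = 0; k < 11; k++) {
            int xi = center + k - 5;          /* tap offset around the center */
            if (xi < 0) xi = 0;               /* clamp at the left border  */
            if (xi >= src_w) xi = src_w - 1;  /* clamp at the right border */
            acc += H11[k] * src[xi];
        }
        acc = (acc + 16) >> 5;                /* divide by 32 with rounding */
        dst[xo] = (uint8_t)(acc < 0 ? 0 : (acc > 255 ? 255 : acc));
    }
}
```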
  • Resampling of a decoded picture can be either picture-based or block-based.
  • For the final ARC design in VVC, block-based resampling is preferred over picture-based resampling.
  • It is suggested that JVET make a decision on which of these two should be specified for ARC support in VVC.
  • A reference picture may need to be resampled multiple times, since multiple pictures may refer to the same reference picture.
  • Both the resampled version and the original, un-resampled version of the reference picture are stored in the DPB, and thus both would affect the DPB fullness.
  • A resampled reference picture is marked as "unused for reference" when the corresponding un-resampled reference picture is marked as "unused for reference".
  • the reference picture lists (RPLs) of each tile group contain reference pictures that have the same resolution as the current picture. While there is no need for a change to the RPL signalling syntax, the RPL construction process is modified to ensure what is said in the previous sentence, as follows: When a reference picture needs to be included into a RPL entry while a version of that reference picture with the same resolution as the current picture is not yet available, the picture resampling process is invoked and the resampled version of that reference picture is included.
  • the number of resampled reference pictures that may be present in the DPB should be limited, e.g., to be less than or equal to 2.
  • For temporal MV usage (e.g., merge mode and ATMVP), temporal MVs are scaled to the current resolution as needed.
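  • A sketch of the resample-on-demand RPL construction described above follows; every type and routine here is an illustrative placeholder, not a proposed API.

```c
#include <stddef.h>

/* Illustrative DPB bookkeeping: each entry is one decoded (or resampled)
 * picture version, identified by POC and size. */
typedef struct {
    int poc, width, height;
    int is_resampled;  /* a resampled copy follows the marking of its original */
} Picture;

typedef struct {
    Picture pics[32];
    int     count;
} Dpb;

static Picture *find_version(Dpb *dpb, int poc, int w, int h)
{
    for (int i = 0; i < dpb->count; i++)
        if (dpb->pics[i].poc == poc &&
            dpb->pics[i].width == w && dpb->pics[i].height == h)
            return &dpb->pics[i];
    return NULL;
}

/* Placeholder for the picture resampling process: allocate a copy of `src`
 * at (w, h). No eviction or fullness handling in this sketch. */
static Picture *resample_into_dpb(Dpb *dpb, const Picture *src, int w, int h)
{
    Picture *p = &dpb->pics[dpb->count++];
    p->poc = src->poc;
    p->width = w;
    p->height = h;
    p->is_resampled = 1;
    return p;
}

/* Build one RPL entry for a current picture of size (cur_w, cur_h): if no
 * same-resolution version of the reference exists, resample on demand. */
static Picture *rpl_entry(Dpb *dpb, const Picture *ref, int cur_w, int cur_h)
{
    Picture *p = find_version(dpb, ref->poc, cur_w, cur_h);
    return p ? p : resample_into_dpb(dpb, ref, cur_w, cur_h);
}
```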
  • a reference block is resampled whenever needed, and no resampled picture is stored in the DPB.
  • the main issue here is the additional decoder complexity. This is because a block in a reference picture may be referred to multiple times by multiple blocks in another picture and by blocks in multiple pictures.
  • The reference block is resampled by invocation of the interpolation filter such that the reference block has the integer-pel resolution.
  • the interpolation process is invoked again to obtain the resampled reference block in the quarter-pel resolution. Therefore, for each motion compensation operation for the current block from a reference block involving different resolutions, up to two, instead of one, interpolation filtering operations are needed. Without ARC support, up to only one interpolation filter operation (i.e., for generation of the reference block in the quarter-pel resolution) is needed.
  • When a block blkA in the current picture picA refers to a reference block blkB in a reference picture picB of a different resolution, block blkA shall be a uni-predicted block.
  • the worst-case number of interpolation operations needed to decode a block is limited to two. If a block refers to a block from a different-resolution picture, the number of interpolation operations needed is two as discussed above. This is the same as in the case when the block refers to a reference block from a same-resolution picture and coded as a bi-predicted block since the number of interpolation operations is also two (i.e., one for getting the quarter-pel resolution for each reference block) .
  • the corresponding positions of every pixel of predictors are calculated first, and then the interpolation is applied only one time. That is, two interpolation operations (i.e. one for resampling and one for quarter-pel interpolation) are combined into only one interpolation operation.
  • The sub-pel interpolation filters in the current VVC can be reused; in this case, the granularity of interpolation should be enlarged, but the number of interpolation operations is reduced from two to one.
  • For temporal MV usage (e.g., merge mode and ATMVP), temporal MVs are scaled to the current resolution as needed.
  • The DPB may contain decoded pictures of different spatial resolutions within the same CVS.
  • Counting DPB size and fullness in units of decoded pictures no longer works.
  • PicSizeInSamplesY = pic_width_in_luma_samples * pic_height_in_luma_samples
  • MaxDpbSize is the maximum number of reference pictures that may be present in the DPB.
  • MinPicSizeInSamplesY = (width of the smallest picture resolution in the bitstream) * (height of the smallest picture resolution in the bitstream)
  • MaxDpbSize is modified as follows (based on the HEVC equation) :
  • MaxDpbSize = Min(4 * maxDpbPicBuf, 16)
  • PictureSizeUnit is an integer value that specifies how big a decoded picture size is relative to MinPicSizeInSamplesY.
  • the definition of PictureSizeUnit depends on what resampling ratios are supported for ARC in VVC.
  • If ARC supports only a resampling ratio of 2, the PictureSizeUnit is defined as follows: decoded pictures having the smallest resolution in the bitstream are associated with a PictureSizeUnit of 1, and decoded pictures having the resolution that is 2-by-2 of the smallest resolution are associated with a PictureSizeUnit of 4 (i.e., 1 * 4).
  • If ARC supports resampling ratios of both 1.5 and 2, the PictureSizeUnit is defined as follows: decoded pictures having the smallest resolution in the bitstream are associated with a PictureSizeUnit of 4, decoded pictures having the resolution that is 1.5-by-1.5 of the smallest resolution are associated with a PictureSizeUnit of 9 (i.e., 2.25 * 4), and decoded pictures having the resolution that is 2-by-2 of the smallest resolution are associated with a PictureSizeUnit of 16 (i.e., 4 * 4).
  • Let MinPictureSizeUnit be the smallest possible value of PictureSizeUnit. That is, if ARC supports only a resampling ratio of 2, MinPictureSizeUnit is 1; if ARC supports resampling ratios of 1.5 and 2, MinPictureSizeUnit is 4; likewise, the same principle is used to determine the value of MinPictureSizeUnit for other sets of supported ratios.
  • the value range of sps_max_dec_pic_buffering_minus1 [i] is specified to range from 0 to (MinPictureSizeUnit * (MaxDpbSize –1) ) .
  • MinPictureSizeUnit is the smallest possible value of PictureSizeUnit.
  • the DPB fullness operation is specified based on PictureSizeUnit as follows:
  • the HRD is initialized at decoding unit 0, with both the CPB and the DPB being set to be empty (the DPB fullness is set equal to 0) .
  • the DPB fullness is set equal to 0.
  • The DPB fullness is decremented by the value of PictureSizeUnit associated with the removed picture.
  • The DPB fullness is incremented by the value of PictureSizeUnit associated with the inserted picture.
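  • The sketch below illustrates this accounting for the {1.5, 2} ratio set described above (smallest picture = 4, 1.5-by-1.5 = 9, 2-by-2 = 16); the helper and its rounding are assumptions.

```c
/* PictureSizeUnit for the {1.5, 2} ratio set: 4 * (area / smallest area),
 * rounded; yields exactly 4, 9 and 16 for the supported resolutions. */
static int picture_size_unit(long pic_samples, long min_pic_samples)
{
    return (int)((4 * pic_samples + min_pic_samples / 2) / min_pic_samples);
}

typedef struct { int fullness; } DpbFullness;

/* Insertion and removal change fullness by the picture's PictureSizeUnit. */
static void dpb_insert(DpbFullness *d, long pic_samples, long min_pic_samples)
{
    d->fullness += picture_size_unit(pic_samples, min_pic_samples);
}

static void dpb_remove(DpbFullness *d, long pic_samples, long min_pic_samples)
{
    d->fullness -= picture_size_unit(pic_samples, min_pic_samples);
}
```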
  • The implemented resampling filters were simply taken from previously available filters described in JCTVC-H0234. Other resampling filters should be tested and used if they provide better performance and/or lower complexity. We propose that various resampling filters be tested to strike a trade-off between complexity and performance. Such tests can be done in a CE.
  • the standard shall support fast representation switching in the case of adaptive streaming services that offer multiple representations of the same content, each having different properties (e.g. spatial resolution or sample bit depth).
  • allowing resolution change within a coded video sequence without inserting an I picture can not only adapt the video data to dynamic channel conditions or user preference seamlessly, but also remove the beating effect caused by I pictures.
  • A hypothetical example of adaptive resolution change is shown in FIG. 14, where the current picture is predicted from reference pictures of different sizes.
  • This contribution proposes high level syntax to signal adaptive resolution change as well as modifications to the current motion compensated prediction process in the VTM. These modifications are limited to motion vector scaling and subpel location derivations with no changes in the existing motion compensation interpolators. This would allow the existing motion compensation interpolators to be reused and not require new processing blocks to support adaptive resolution change which would introduce additional cost.
  • pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples. pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples. pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • max_pic_width_in_luma_samples specifies the maximum width of decoded pictures referring to the SPS in units of luma samples.
  • max_pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • max_pic_height_in_luma_samples specifies the maximum height of decoded pictures referring to the SPS in units of luma samples.
  • max_pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • pic_size_different_from_max_flag equal to 1 specifies that the PPS signals a picture width or picture height different from max_pic_width_in_luma_samples and max_pic_height_in_luma_samples in the referred SPS.
  • pic_size_different_from_max_flag equal to 0 specifies that pic_width_in_luma_samples and pic_height_in_luma_samples are the same as max_pic_width_in_luma_samples and max_pic_height_in_luma_samples in the referred SPS.
  • pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples.
  • pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • When pic_width_in_luma_samples is not present, it is inferred to be equal to max_pic_width_in_luma_samples.
  • pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples.
  • pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
  • When pic_height_in_luma_samples is not present, it is inferred to be equal to max_pic_height_in_luma_samples.
  • The horizontal and vertical scaling ratios shall be in the range of 1/8 to 2, inclusive, for every active reference picture.
  • The scaling ratios are defined as follows:
  • hori_scale_fp = ((reference_pic_width_in_luma_samples << 14) + (pic_width_in_luma_samples / 2)) / pic_width_in_luma_samples
  • vert_scale_fp = ((reference_pic_height_in_luma_samples << 14) + (pic_height_in_luma_samples / 2)) / pic_height_in_luma_samples
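  • Expressed as code, these 1.14 fixed-point factors can be computed directly from the formulas above (the helper name is made up; a ratio of 1.0 yields 1 << 14 = 16384):

```c
#include <stdint.h>

/* 1.14 fixed-point scale factor with rounding, per the definition above. */
static uint32_t scale_fp(uint32_t ref_samples, uint32_t cur_samples)
{
    return (uint32_t)((((uint64_t)ref_samples << 14) + cur_samples / 2)
                      / cur_samples);
}

/* hori_scale_fp = scale_fp(reference_pic_width_in_luma_samples,
 *                          pic_width_in_luma_samples);
 * vert_scale_fp = scale_fp(reference_pic_height_in_luma_samples,
 *                          pic_height_in_luma_samples); */
```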
  • When there is a resolution change within a CVS, a picture may have a different size from one or more of its reference pictures.
  • This proposal normalizes all motion vectors to the current picture grid instead of their corresponding reference picture grids. This is asserted to be beneficial to keep the design consistent and make resolution changes transparent to the motion vector prediction process. Otherwise, neighboring motion vectors pointing to reference pictures with different sizes cannot be used directly for spatial motion vector prediction due to the different scale.
  • The scaling range is limited to [1/8, 2], i.e. the upscaling is limited to 1:8 and the downscaling is limited to 2:1. Note that upscaling refers to the case where the reference picture is smaller than the current picture, while downscaling refers to the case where the reference picture is larger than the current picture. In the following sections, the scaling process is described in more detail.
  • the scaling process includes two parts:
  • The subpel location (x′, y′) in the reference picture pointed to by a motion vector (mvX, mvY) in units of 1/16th-pel is specified as follows:
  • x′ = ((x << 4) + mvX) * hori_scale_fp, (3)
  • y′ = ((y << 4) + mvY) * vert_scale_fp. (4)
  • the reference location of the upper left corner pixel of the current block is at (x′, y′) .
  • The other reference subpel/pel locations are calculated relative to (x′, y′) with horizontal and vertical step sizes. Those step sizes are derived with 1/1024-pel accuracy from the above horizontal and vertical scaling factors.
  • x′_i and y′_j then have to be broken up into full-pel parts and fractional-pel parts:
  • the existing motion compensation interpolators can be used without any additional changes.
  • the full-pel location will be used to fetch the reference block patch from the reference picture and the fractional-pel location will be used to select the proper interpolation filter.
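  • A sketch of the horizontal half of this derivation is shown below, using the normalization implied by the stated accuracies (1/16-pel MVs, 1.14 fixed-point scale factors, 1/1024-pel step sizes); the exact rounding offsets in the proposal may differ.

```c
#include <stdint.h>

/* For column i of the current block, derive the full-pel fetch position and
 * the 1/16-pel fractional position in the reference picture. */
static void ref_position_x(int x, int mvX, uint32_t hori_scale_fp, int i,
                           int *x_full, int *x_frac_1_16)
{
    /* Upper-left reference location: 1/16-pel MV times 1.14 scale
     * = 1/2^18-pel accuracy. */
    int64_t x0 = (int64_t)((x << 4) + mvX) * hori_scale_fp;

    /* Per-column step at 1/1024-pel accuracy (14 - 4 = 10 fractional bits). */
    int64_t x_step = (hori_scale_fp + 8) >> 4;

    /* Column i at 1/1024-pel accuracy (drop 8 bits: 2^18 -> 2^10 per pel). */
    int64_t xi = (x0 >> 8) + (int64_t)i * x_step;

    *x_full      = (int)(xi >> 10);       /* addresses the reference patch    */
    *x_frac_1_16 = (int)((xi >> 6) & 15); /* selects the interpolation filter */
}
```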
  • When the chroma format is 4:2:0, chroma motion vectors have 1/32-pel accuracy.
  • The scaling process for chroma motion vectors and chroma reference blocks is almost the same as for luma blocks, except for a chroma-format-related adjustment.
  • x_c′ = ((x_c << 5) + mvX) * hori_scale_fp,
  • y_c′ = ((y_c << 5) + mvY) * vert_scale_fp,
  • where mvX and mvY are the original luma motion vector components, now interpreted with 1/32-pel accuracy.
  • x_c′ and y_c′ are further scaled down to keep 1/1024-pel accuracy.
  • For a chroma pixel at (i, j) relative to the upper-left corner pixel, its reference pixel's horizontal and vertical coordinates are derived analogously to the luma case.
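  • The chroma analogue of the luma sketch only changes the normalization (1/32-pel MVs for 4:2:0), e.g.:

```c
#include <stdint.h>

/* Chroma reference x-coordinate at 1/1024-pel accuracy: the MV shift becomes
 * 5 (1/32-pel) and one extra bit is dropped. Rounding is an assumption. */
static int64_t chroma_ref_x_1_1024(int xc, int mvX, uint32_t hori_scale_fp)
{
    int64_t x0 = (int64_t)((xc << 5) + mvX) * hori_scale_fp; /* 1/2^19 pel */
    return x0 >> 9;                                          /* 1/1024 pel */
}
```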
  • When tile_group_temporal_mvp_enabled_flag is equal to 1, the current picture and its collocated picture shall have the same size.
  • decoder motion vector refinement shall be turned off.
  • sps_bdof_enabled_flag shall be equal to 0.
  • The adaptation parameter set (APS) was adopted in VTM4.
  • Each APS contains one set of signalled ALF filters; up to 32 APSs are supported.
  • A slice-level temporal filter is tested.
  • a tile group can re-use the ALF information from an APS to reduce the overhead.
  • the APSs are updated as a first-in-first-out (FIFO) buffer.
  • For the luma component, when ALF is applied to a luma CTB, the choice among 16 fixed, 5 temporal or 1 signaled filter sets is indicated. Only the filter set index is signalled. For one slice, only one new set of 25 filters can be signaled. If a new set is signalled for a slice, all the luma CTBs in the same slice share that set. Fixed filter sets can be used to predict the new slice-level filter set and can be used as candidate filter sets for a luma CTB as well. The number of filters is 64 in total.
  • For the chroma component, when ALF is applied to a chroma CTB, if a new filter is signalled for a slice, the CTB uses the new filter; otherwise, the most recent temporal chroma filter satisfying the temporal scalability constraint is applied.
  • the APSs are updated as a first-in-first-out (FIFO) buffer.
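  • A sketch of such a FIFO update of the APS buffer (structures illustrative; 32 slots per the limit above):

```c
#define MAX_APS 32

typedef struct {
    int aps_id;
    /* ... signalled ALF filter coefficients would live here ... */
} Aps;

typedef struct {
    Aps slots[MAX_APS];
    int count;
} ApsBuffer;

/* Append a newly signalled APS; once full, evict the oldest entry (FIFO). */
static void aps_push(ApsBuffer *buf, Aps aps)
{
    if (buf->count == MAX_APS) {
        for (int i = 1; i < MAX_APS; i++)
            buf->slots[i - 1] = buf->slots[i];
        buf->count--;
    }
    buf->slots[buf->count++] = aps;
}
```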
  • In the alternative temporal motion vector prediction (ATMVP) method, temporal motion vector prediction is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU.
  • The sub-CUs are square N×N blocks (N is set to 8 by default).
  • ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps.
  • the first step is to identify the corresponding block in a reference picture with a so-called temporal vector.
  • the reference picture is called the motion source picture.
  • the second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in FIG. 15.
  • In the first step, a reference picture and the corresponding block are determined by the motion information of the spatial neighbouring blocks of the current CU.
  • the merge candidate from block A0 (the left block) in the merge candidate list of the current CU is used.
  • The first available motion vector from block A0 referring to the collocated reference picture is set to be the temporal vector. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to the current CU.
  • In the second step, a corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinate of the current CU.
  • For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU.
  • After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply.
• A simplified affine transform motion compensation prediction is applied, with a 4-parameter affine model and a 6-parameter affine model.
• As shown in FIGS. 16A-16B, the affine motion field of the block is described by two control point motion vectors (CPMVs) for the 4-parameter affine model and three CPMVs for the 6-parameter affine model.
• The motion vector field (MVF) of a block is described by the following equations, with the 4-parameter affine model (wherein the four parameters are defined as the variables a, b, e and f) in equation (1) and the 6-parameter affine model (wherein the six parameters are defined as the variables a, b, c, d, e and f) in equation (2), respectively:

$$\left\{\begin{aligned} mv^h(x,y) &= ax-by+e=\frac{(mv_1^h-mv_0^h)}{w}x-\frac{(mv_1^v-mv_0^v)}{w}y+mv_0^h\\ mv^v(x,y) &= bx+ay+f=\frac{(mv_1^v-mv_0^v)}{w}x+\frac{(mv_1^h-mv_0^h)}{w}y+mv_0^v \end{aligned}\right. \quad (1)$$

$$\left\{\begin{aligned} mv^h(x,y) &= ax+cy+e=\frac{(mv_1^h-mv_0^h)}{w}x+\frac{(mv_2^h-mv_0^h)}{h}y+mv_0^h\\ mv^v(x,y) &= bx+dy+f=\frac{(mv_1^v-mv_0^v)}{w}x+\frac{(mv_2^v-mv_0^v)}{h}y+mv_0^v \end{aligned}\right. \quad (2)$$

where (mv_0^h, mv_0^v) is the motion vector of the top-left corner control point, (mv_1^h, mv_1^v) is the motion vector of the top-right corner control point, and (mv_2^h, mv_2^v) is the motion vector of the bottom-left corner control point.
  • (x, y) represents the coordinate of a representative point relative to the top-left sample within current block
• (mv^h (x, y), mv^v (x, y)) is the motion vector derived for a sample located at (x, y).
  • the CP motion vectors may be signaled (like in the affine AMVP mode) or derived on-the-fly (like in the affine merge mode) .
  • w and h are the width and height of the current block.
  • the division is implemented by right-shift with a rounding operation.
  • the representative point is defined to be the center position of a sub-block, e.g., when the coordinate of the left-top corner of a sub-block relative to the top-left sample within current block is (xs, ys) , the coordinate of the representative point is defined to be (xs+2, ys+2) .
  • the representative point is utilized to derive the motion vector for the whole sub-block.
  • sub-block based affine transform prediction is applied.
• The motion vector of the center sample of each sub-block is calculated according to Equations (1) and (2), and rounded to 1/16 fractional accuracy.
  • the motion compensation interpolation filters for 1/16-pel are applied to generate the prediction of each sub-block with derived motion vector.
  • the interpolation filters for 1/16-pel are introduced by the affine mode.
  • the high accuracy motion vector of each sub-block is rounded and saved as the same accuracy as the normal motion vector.
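To illustrate the sub-block derivation, the following non-normative sketch computes per-sub-block motion vectors from the two CPMVs of the 4-parameter model in equation (1), taking the center of each sub-block as the representative point. It uses floating-point division where a real codec would use a right-shift with rounding.

    # Non-normative sketch: per-sub-block MVs from a 4-parameter affine model.

    def affine_subblock_mvs(mv0, mv1, w, h, sb=4):
        # mv0, mv1: top-left / top-right CPMVs as (mvx, mvy) in 1/16-pel units;
        # w, h: block width/height in luma samples; sb: sub-block size.
        a = (mv1[0] - mv0[0]) / w   # right-shift with rounding in practice
        b = (mv1[1] - mv0[1]) / w
        mvs = {}
        for ys in range(0, h, sb):
            for xs in range(0, w, sb):
                x, y = xs + sb // 2, ys + sb // 2  # representative (center) point
                mvx = a * x - b * y + mv0[0]
                mvy = b * x + a * y + mv0[1]
                mvs[(xs, ys)] = (round(mvx), round(mvy))  # keep 1/16-pel accuracy
        return mvs

    # Example: CPMVs (in 1/16-pel units) for a 16x16 block
    print(affine_subblock_mvs((32, 16), (48, 16), 16, 16))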
• Similar to the translational motion model, there are also two modes for signalling the side information for affine prediction: the AFFINE_INTER and AFFINE_MERGE modes.
  • AF_INTER mode can be applied.
  • An affine flag in CU level is signalled in the bitstream to indicate whether AF_INTER mode is used.
  • an affine AMVP candidate list is constructed with three types of affine motion predictors in the following order, wherein each candidate includes the estimated CPMVs of the current block.
• The differences between the best CPMVs found at the encoder side (such as mv_0, mv_1, mv_2 in FIG. 18) and the estimated CPMVs are signalled.
  • the index of affine AMVP candidate from which the estimated CPMVs are derived is further signalled.
  • the checking order is similar to that of spatial MVPs in HEVC AMVP list construction.
• A left inherited affine motion predictor is derived from the first block in {A1, A0} that is affine coded and has the same reference picture as the current block.
• An above inherited affine motion predictor is derived from the first block in {B1, B0, B2} that is affine coded and has the same reference picture as the current block.
  • the five blocks A1, A0, B1, B0, B2 are depicted in FIG. 19.
• The CPMVs of the coding unit covering the neighbouring block are used to derive predictors of the CPMVs of the current block. For example, if A1 is coded with non-affine mode and A0 is coded with 4-parameter affine mode, the left inherited affine MV predictor will be derived from A0. In this case, the CPMVs of the CU covering A0, denoted by the top-left CPMV and the top-right CPMV in FIG. 21B, are utilized to derive the estimated CPMVs of the current block for the top-left (with coordinate (x0, y0)), top-right (with coordinate (x1, y1)) and bottom-right (with coordinate (x2, y2)) positions of the current block.
• A constructed affine motion predictor consists of control-point motion vectors (CPMVs) that are derived from neighbouring inter-coded blocks, as shown in FIG. 20, that have the same reference picture.
• If the current affine motion model is 4-parameter affine, the number of CPMVs is 2; otherwise, if the current affine motion model is 6-parameter affine, the number of CPMVs is 3.
• The top-left CPMV is derived from the MV at the first block in the group {A, B, C} that is inter coded and has the same reference picture as the current block.
• The top-right CPMV is derived from the MV at the first block in the group {D, E} that is inter coded and has the same reference picture as the current block.
• The bottom-left CPMV is derived from the MV at the first block in the group {F, G} that is inter coded and has the same reference picture as the current block.
• For the 4-parameter affine model, a constructed affine motion predictor is inserted into the candidate list only if both the top-left and top-right CPMVs are found; they are then used as the estimated CPMVs for the top-left (with coordinate (x0, y0)) and top-right (with coordinate (x1, y1)) positions of the current block.
• For the 6-parameter affine model, a constructed affine motion predictor is inserted into the candidate list only if the top-left, top-right and bottom-left CPMVs are all found; they are then used as the estimated CPMVs for the top-left (with coordinate (x0, y0)), top-right (with coordinate (x1, y1)) and bottom-right (with coordinate (x2, y2)) positions of the current block.
• In AF_INTER mode, when the 4/6-parameter affine mode is used, 2/3 control points are required, and therefore 2/3 MVDs need to be coded for these control points, as shown in FIGS. 18A and 18B.
• In JVET-K0337, it is proposed to derive the MVs as follows, i.e., mvd_1 and mvd_2 are predicted from mvd_0: mv_0 = mvp_0 + mvd_0, mv_1 = mvp_1 + mvd_1 + mvd_0 and mv_2 = mvp_2 + mvd_2 + mvd_0, where mvp_i denotes the estimated CPMV of the i-th control point.
• The addition of two motion vectors, e.g., mvA (xA, yA) and mvB (xB, yB), is defined as newMV = mvA + mvB, with the two components of newMV set to (xA + xB) and (yA + yB), respectively.
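A minimal sketch of this derivation follows, assuming the estimated CPMVs (mvp_0, mvp_1, mvp_2) come from the selected affine AMVP candidate; it applies the component-wise addition defined above, with mvd_1 and mvd_2 predicted from mvd_0.

    # Non-normative sketch of the JVET-K0337 MVD prediction.

    def add_mv(mvA, mvB):
        # newMV = mvA + mvB; components are (xA + xB) and (yA + yB)
        return (mvA[0] + mvB[0], mvA[1] + mvB[1])

    def derive_cpmvs(mvp0, mvp1, mvp2, mvd0, mvd1, mvd2):
        mv0 = add_mv(mvp0, mvd0)
        mv1 = add_mv(add_mv(mvp1, mvd1), mvd0)  # mvd1 predicted from mvd0
        mv2 = add_mv(add_mv(mvp2, mvd2), mvd0)  # mvd2 predicted from mvd0
        return mv0, mv1, mv2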
• When a CU is coded in AF_MERGE mode, it gets the first block coded with affine mode from the valid neighbouring reconstructed blocks. The selection order for the candidate block is from left, above, above-right, left-bottom to above-left, as shown in FIG. 21A (denoted by A, B, C, D, E in order).
• If the neighbouring left-bottom block is coded in affine mode, as denoted by A0 in FIG. 21B, the control point (CP) motion vectors mv_0^N, mv_1^N and mv_2^N of the top-left corner, above-right corner and left-bottom corner of the neighbouring CU/PU which contains block A0 are fetched.
• The motion vectors mv_0^C, mv_1^C and mv_2^C (the latter being used only for the 6-parameter affine model) of the top-left, top-right and bottom-left corners of the current CU/PU are calculated based on mv_0^N, mv_1^N and mv_2^N.
• The sub-block (e.g., a 4×4 block in VTM) located at the top-left corner stores mv0.
• The sub-block located at the top-right corner stores mv1 if the current block is affine coded.
• With the 6-parameter affine model, the sub-block located at the bottom-left corner (LB) stores mv2; otherwise (with the 4-parameter affine model), LB stores mv2’.
• Other sub-blocks store the MVs used for MC.
  • the MVF of the current CU is generated.
• An affine flag is signalled in the bitstream when at least one neighbouring block is coded in affine mode.
  • an affine merge candidate list is constructed with following steps:
  • Inherited affine candidate means that the candidate is derived from the affine motion model of its valid neighbor affine coded block.
• At most two inherited affine candidates are derived from the affine motion models of the neighboring blocks and inserted into the candidate list.
• For the left predictor, the scan order is {A0, A1}; for the above predictor, the scan order is {B0, B1, B2}.
  • Constructed affine candidate means the candidate is constructed by combining the neighbor motion information of each control point.
• T is the temporal position for predicting CP4.
• The coordinates of CP1, CP2, CP3 and CP4 are (0, 0), (W, 0), (0, H) and (W, H), respectively, where W and H are the width and height of the current block.
  • the motion information of each control point is obtained according to the following priority order:
• For CP1, the checking priority is B2->B3->A2.
• B2 is used if it is available. Otherwise, B3 is used if it is available. If both B2 and B3 are unavailable, A2 is used. If all three candidates are unavailable, the motion information of CP1 cannot be obtained.
• For CP2, the checking priority is B1->B0.
• For CP3, the checking priority is A1->A0.
• For CP4, T is used.
• Motion information of three control points is needed to construct a 6-parameter affine candidate.
• The three control points can be selected from one of the following four combinations: {CP1, CP2, CP4}, {CP1, CP2, CP3}, {CP2, CP3, CP4}, {CP1, CP3, CP4}.
• The combinations {CP1, CP2, CP3}, {CP2, CP3, CP4} and {CP1, CP3, CP4} will be converted to a 6-parameter motion model represented by the top-left, top-right and bottom-left control points.
• Motion information of two control points is needed to construct a 4-parameter affine candidate.
• The two control points can be selected from one of the two combinations: {CP1, CP2}, {CP1, CP3}.
• The two combinations will be converted to a 4-parameter motion model represented by the top-left and top-right control points.
• The reference indices of list X for each CP are checked; if they are all the same, then this combination has valid CPMVs for list X. If the combination does not have valid CPMVs for both list 0 and list 1, then this combination is marked as invalid. Otherwise, it is valid, and the CPMVs are put into the sub-block merge list.
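The control-point derivation and the validity rule above can be sketched as follows; the neighbour positions are modelled as a dictionary of available motion information, and these simplified structures are assumptions rather than the normative process.

    # Non-normative sketch of constructed affine candidate derivation.

    CP_PRIORITY = {1: ['B2', 'B3', 'A2'],  # CP1: B2 -> B3 -> A2
                   2: ['B1', 'B0'],        # CP2: B1 -> B0
                   3: ['A1', 'A0'],        # CP3: A1 -> A0
                   4: ['T']}               # CP4: temporal position T

    def get_cp_motion(cp, neighbours):
        # neighbours: dict mapping position name -> motion info or None
        for pos in CP_PRIORITY[cp]:
            if neighbours.get(pos) is not None:
                return neighbours[pos]
        return None  # motion information of this CP cannot be obtained

    def valid_for_list(cps, lst):
        # valid for list X only if all CPs share one reference index in X
        refs = [cp['ref_idx'][lst] for cp in cps]
        return None not in refs and len(set(refs)) == 1

    def combination_usable(cps):
        # a combination is invalid only if it is valid for neither list
        return valid_for_list(cps, 0) or valid_for_list(cps, 1)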
• When applied in VVC, ARC may have the following problems:
• Bit-depth and color format (such as 4:0:0, 4:2:0 or 4:4:4) may also be changed from one picture to another picture in one sequence.
• Shift (x, n) is defined as (x + offset0) >> n.
• In one example, offset0 and/or offset1 are set to (1 << n) >> 1 or (1 << (n-1)). In another example, offset0 and/or offset1 are set to 0.
• Clip3 (min, max, x) is defined as min if x < min, max if x > max, and x otherwise.
• Floor (x) is defined as the largest integer less than or equal to x.
• Ceil (x) is defined as the smallest integer greater than or equal to x.
  • Log2 (x) is defined as the base-2 logarithm of x.
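These definitions map directly onto small integer helpers; a minimal sketch follows. The default rounding offset picks one of the two options given above, and min/max are renamed lo/hi to avoid shadowing Python built-ins.

    # Non-normative sketch of the utility functions defined above.
    import math

    def Shift(x, n, offset0=None):
        if offset0 is None:
            offset0 = (1 << n) >> 1 if n > 0 else 0  # alternatively 0
        return (x + offset0) >> n

    def Clip3(lo, hi, x):
        return lo if x < lo else hi if x > hi else x

    def Floor(x):
        return math.floor(x)  # largest integer <= x

    def Ceil(x):
        return math.ceil(x)   # smallest integer >= x

    def Log2(x):
        return math.log2(x)   # base-2 logarithm of x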
  • Whether to enable DMVR/BIO or other kinds of motion derivation/refinement at the decoder side may depend on whether the two reference pictures are in the same resolution or not.
• If the two reference pictures are in different resolutions, motion derivation/refinement such as DMVR/BIO is disabled.
  • how to apply motion derivation/refinement at the decoder side may depend on whether the two reference pictures are in the same resolution or not.
  • the allowed MVDs associated with each reference picture may be scaled, e.g., according to resolutions.
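Both ideas can be sketched together: gating decoder-side refinement on equal reference resolutions, and scaling the allowed MVDs by the resolution ratio. The scaling direction used here (reference over current) is an assumption for illustration; the text leaves the exact rule open.

    # Non-normative sketch; the exact MVD scaling rule is an assumption.

    def refinement_allowed(ref0_size, ref1_size):
        # enable DMVR/BIO-style refinement only for equal resolutions
        return ref0_size == ref1_size

    def scale_mvd(mvd, cur_size, ref_size):
        (cw, ch), (rw, rh) = cur_size, ref_size
        return (mvd[0] * rw // cw, mvd[1] * rh // ch)

    print(refinement_allowed((1920, 1080), (960, 540)))  # False -> disabled
    print(scale_mvd((8, 4), (1920, 1080), (960, 540)))   # (4, 2)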
• All reference pictures stored in the decoded picture buffer are in the same resolution (denoted as a first resolution), such as the maximally/minimally allowed resolution.
  • the samples in a decoded picture may be firstly modified, (e.g., via up-sampling or down-sampling) , before being stored in the decoded picture buffer.
  • the modification may be according to the first resolution.
  • the modification may be according to the resolution for the reference picture, and that for the current picture.
• All reference pictures stored in the decoded picture buffer may be in the resolution that each picture has been coded with.
  • conversion of reference samples may be firstly applied (e.g., via up-sampling or down-sampling) , before invoking the motion compensation process.
  • the motion compensation (MC) may be done directly using reference samples in the reference pictures.
  • the prediction block generated from the MC process may be further modified (e.g., via up-sampling or down-sampling) , and the final prediction block for current block may depend on the modified prediction block.
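The two alternatives above, converting reference samples before motion compensation versus modifying the prediction block after it, can be sketched as follows; resample and motion_compensate are hypothetical helpers.

    # Non-normative sketch of the two ARC reference-handling alternatives.

    def predict_with_arc(ref_pic, cur_size, mv, convert_before_mc=True):
        sx = cur_size[0] / ref_pic['width']   # horizontal scaling ratio
        sy = cur_size[1] / ref_pic['height']  # vertical scaling ratio
        if convert_before_mc:
            # convert reference samples, then run motion compensation
            samples = resample(ref_pic['samples'], sx, sy)  # hypothetical
            return motion_compensate(samples, mv)           # hypothetical
        # otherwise run MC on stored samples, then modify the prediction
        pred = motion_compensate(ref_pic['samples'], mv)
        return resample(pred, sx, sy)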
• ABC: Adaptive Bit-depth Conversion
  • one or multiple sets of sample bit-depths for one or multiple components may be signaled in a video unit such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header.
  • one or multiple sets of sample bit-depths for one or multiple components may be signaled in a Supplemental Enhancement Information (SEI) message.
  • SEI Supplemental Enhancement Information
  • one or multiple sets of sample bit-depths for one or multiple components may be signaled in an individual video unit for Adaptive bit-depth conversion.
• A set of sample bit-depths for one or multiple components may be coupled with the dimensions of the picture.
  • one or multiple combinations of sample bit-depths for one or multiple components and the corresponding dimensions/down sampling ratios/up sampling ratios of the picture may be signaled in the same video unit.
  • indication of the ABCMaxBD and/or ABCMinBD may be signaled.
  • differences of other bit-depth values compared to ABCMaxBD and/or ABCMinBD may be signaled.
• All reference pictures stored in the decoded picture buffer are in the same bit-depth (denoted as a first bit-depth), such as ABCMaxBD [i] or ABCMinBD [i] (with i being 0..2, indicating the color component indices).
  • the samples in a decoded picture may be firstly modified, via left-shift or right-shift, before being stored in the decoded picture buffer.
  • the modification may be according to the first bit-depth.
• The modification may be according to the defined bit-depth for the reference picture and that for the current picture.
• All reference pictures stored in the decoded picture buffer are in the bit-depth that they have been coded with.
  • conversion of reference samples may be firstly applied, before invoking the motion compensation process.
  • the motion compensation is done directly using reference samples.
  • the prediction block generated from the MC process may be further modified (e.g., via shifting) , and the final prediction block for current block may depend on the modified prediction block.
• S’ = (S << (BD0-BD1)) + (1 << (BD0-BD1-1)).
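The left-shift form above converts a sample to a higher bit-depth; a sketch of it, together with a rounding right-shift for the opposite direction (the down-shift rule is an assumed counterpart, not given in the text), follows.

    # Non-normative sketch of ABC sample conversion between bit-depths.

    def abc_up(S, bd_from, bd_to):
        # S' = (S << n) + (1 << (n - 1)), per the formula above
        n = bd_to - bd_from
        return (S << n) + (1 << (n - 1))

    def abc_down(S, bd_from, bd_to):
        # rounding right-shift (an assumption, not from the text)
        n = bd_from - bd_to
        return (S + (1 << (n - 1))) >> n

    print(abc_up(200, 8, 10))    # 802
    print(abc_down(802, 10, 8))  # 201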
  • the reference picture with samples not converted may be removed after a reference picture is converted from it.
  • the reference picture with samples not converted may be kept but marked as unavailable after a reference picture is converted from it.
  • the reference picture with samples not converted may be put in the reference picture list.
• The reference samples are converted when they are used in the inter-prediction.
• ARC is conducted first, then ABC is conducted.
• Samples are up-sampled/down-sampled according to the different picture dimensions first, and then left-shifted/right-shifted according to the different bit-depths in the ARC+ABC conversion.
• ABC is conducted first, then ARC is conducted.
• Samples are left-shifted/right-shifted according to the different bit-depths first, and then up-sampled/down-sampled according to the different picture dimensions in the ABC+ARC conversion.
  • a merge candidate referring to a reference picture with a higher sample bit-depth may have a higher priority than a merge candidate referring to a reference picture with a lower bit-depth.
  • a merge candidate referring to a reference picture with a higher sample bit-depth may be put before a merge candidate referring to a reference picture with a lower sample bit-depth in the merge candidate list.
  • a motion vector referring to a reference picture with a sample bit-depth lower than the sample bit-depth of the current picture cannot be in the merge candidate list.
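A sketch of the merge-list handling above: candidates whose reference bit-depth is below the current picture's are excluded, and the remainder are ordered so that higher reference bit-depths come first. The candidate structure is an assumption.

    # Non-normative sketch of bit-depth-aware merge candidate ordering.

    def build_merge_list(candidates, cur_bd):
        # candidates: list of dicts with a 'ref_bd' (reference bit-depth) key
        allowed = [c for c in candidates if c['ref_bd'] >= cur_bd]
        # stable sort: equal bit-depths keep their original relative order
        return sorted(allowed, key=lambda c: -c['ref_bd'])

    cands = [{'idx': 0, 'ref_bd': 8},
             {'idx': 1, 'ref_bd': 10},
             {'idx': 2, 'ref_bd': 10}]
    print(build_merge_list(cands, 10))  # the 8-bit candidate is dropped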
  • ALF parameters signaled in a video unit such as APS may be associated with one or multiple sample bit-depths.
  • a video unit such as APS signaling ALF parameters may be associated with one or multiple sample bit-depths.
• A picture may only apply ALF parameters signaled in a video unit, such as an APS, that is associated with the same sample bit-depth.
  • a picture may use ALF parameters associated with a different sample bit-depth.
  • ALF parameters associated with a first corresponding sample bit-depth may inherit or be predicted from ALF parameters associated with a second corresponding sample bit-depth.
  • the first corresponding sample bit-depth must be the same as the second corresponding sample bit-depth.
  • the first corresponding sample bit-depth may be different to the second corresponding sample bit-depth.
  • LMCS parameters signaled in a video unit such as APS may be associated with one or multiple sample bit-depths.
  • a video unit such as APS signaling LMCS parameters may be associated with one or multiple sample bit-depths.
• A picture may only apply LMCS parameters signaled in a video unit, such as an APS, that is associated with the same sample bit-depth.
  • LMCS parameters associated with a first corresponding sample bit-depth may inherit or be predicted from LMCS parameters associated with a second corresponding sample bit-depth.
• The first corresponding sample bit-depth must be the same as the second corresponding sample bit-depth.
  • the first corresponding sample bit-depth may be different to the second corresponding sample bit-depth.
  • coding tool X may be disabled for a block if the block refers to at least one reference picture with a different sample bit-depth to the current picture.
  • the information related to the coding tool X may not be signaled.
  • the block cannot refer to a reference picture with a different sample bit-depth to the current picture.
  • a merge candidate referring to a reference picture with a different sample bit-depth to the current picture may be skipped or not put into the merge candidate list.
• The reference index corresponding to a reference picture with a different sample bit-depth from the current picture may be skipped or not allowed to be signaled.
• The coding tool X may be any one of the tools listed below.
• Color format may refer to 4:4:4, 4:2:2, 4:2:0 or 4:0:0.
• Color format may refer to YCbCr or RGB.
• One or multiple color formats may be signaled in a video unit such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header.
  • one or multiple color formats may be signaled in a Supplemental Enhancement Information (SEI) message.
• One or multiple color formats may be signaled in an individual video unit for adaptive color-format conversion (ACC).
• A color format may be coupled with the dimensions and/or sample bit-depth of the picture.
  • one or multiple combinations of color formats, and/or sample bit-depth for one or multiple components and/or the corresponding dimensions of the picture may be signaled in the same video unit.
• Pictures with a color format different from that of the current picture are disallowed to be put in a reference picture list for a block in the current picture.
  • the reference samples may be converted accordingly.
• Samples of chroma components in the reference picture may be up-sampled vertically by a ratio of 1:2.
• Samples of chroma components in the reference picture may be up-sampled horizontally by a ratio of 1:2.
• Samples of chroma components in the reference picture may be up-sampled vertically by a ratio of 1:2 and up-sampled horizontally by a ratio of 1:2.
• Samples of chroma components in the reference picture may be down-sampled vertically by a ratio of 1:2.
• Samples of chroma components in the reference picture may be down-sampled horizontally by a ratio of 1:2.
• Samples of chroma components in the reference picture may be down-sampled vertically by a ratio of 1:2 and down-sampled horizontally by a ratio of 1:2.
• Samples of the luma component in the reference picture may be used to perform inter-prediction for the current picture.
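The 1:2 ratios listed above follow from the chroma sampling grids of the formats involved. The following sketch derives the horizontal/vertical chroma scaling for a reference/current format pair; the grid table is an assumption consistent with the bullets above.

    # Non-normative sketch: chroma scaling implied by a format pair.

    CHROMA_GRID = {'4:4:4': (1, 1),   # (horizontal, vertical) subsampling
                   '4:2:2': (2, 1),
                   '4:2:0': (2, 2)}

    def chroma_scale(ref_fmt, cur_fmt):
        # returns (horizontal, vertical) factors for the reference chroma:
        # 2.0 means up-sample by 1:2, 0.5 means down-sample by 1:2
        rh, rv = CHROMA_GRID[ref_fmt]
        ch, cv = CHROMA_GRID[cur_fmt]
        return rh / ch, rv / cv

    print(chroma_scale('4:2:0', '4:4:4'))  # (2.0, 2.0): up-sample both ways
    print(chroma_scale('4:2:2', '4:2:0'))  # (1.0, 0.5): down-sample vertically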
  • the reference picture with samples not converted may be removed after a reference picture is converted from it.
  • the reference picture with samples not converted may be kept but marked as unavailable after a reference picture is converted from it.
  • the reference picture with samples not converted may be put in the reference picture list.
• The reference samples are converted when they are used in the inter-prediction.
  • ARC is conducted first, then ACC is conducted.
• The samples are first down-sampled/up-sampled according to the different picture dimensions, and then down-sampled/up-sampled according to the different color formats in the ARC+ACC conversion.
  • ACC is conducted first, then ARC is conducted.
• The samples are first down-sampled/up-sampled according to the different color formats, and then down-sampled/up-sampled according to the different picture dimensions in the ACC+ARC conversion.
  • ACC and ARC may be conducted together.
  • the samples are down-sampled/up-sampled according to the scaling ratio derived from different color formats and different picture dimensions in the ARC+ACC or ACC+ARC conversion.
  • ACC is conducted first, then ABC is conducted.
• Samples are up-sampled/down-sampled according to the different color formats first, and then left-shifted/right-shifted according to the different bit-depths in the ACC+ABC conversion.
  • ABC is conducted first, then ACC is conducted.
• Samples are left-shifted/right-shifted according to the different bit-depths first, and then up-sampled/down-sampled according to the different color formats in the ABC+ACC conversion.
  • ALF parameters signaled in a video unit such as APS may be associated with one or multiple color formats.
  • a video unit such as APS signaling ALF parameters may be associated with one or multiple color formats.
• A picture may only apply ALF parameters signaled in a video unit, such as an APS, that is associated with the same color format.
  • ALF parameters associated with a first corresponding color format may inherit or be predicted from ALF parameters associated with a second corresponding color format.
  • the first corresponding color format must be the same as the second corresponding color format.
  • the first corresponding color format may be different to the second corresponding color format.
• Different default ALF parameters may be designed for the YCbCr and RGB formats.
• Different default ALF parameters may be designed for the 4:4:4, 4:2:2, 4:2:0 and 4:0:0 formats.
  • LMCS parameters signaled in a video unit such as APS may be associated with one or multiple color formats.
  • a video unit such as APS signaling LMCS parameters may be associated with one or multiple color formats.
• A picture may only apply LMCS parameters signaled in a video unit, such as an APS, that is associated with the same color format.
• LMCS parameters associated with a first corresponding color format may inherit or be predicted from LMCS parameters associated with a second corresponding color format.
• The first corresponding color format must be the same as the second corresponding color format.
  • the first corresponding color format may be different to the second corresponding color format.
• The indication of whether the chroma residue scaling is applied may not be signaled and may be inferred to be “not used” if the color format is 4:0:0.
• The indication of whether the chroma residue scaling is applied must be “not used” in a conformance bit-stream if the color format is 4:0:0.
• The signaled indication of whether the chroma residue scaling is applied is ignored and set to “not used” by the decoder if the color format is 4:0:0.
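All three options above converge on the same effective decoder behaviour for 4:0:0, which the following sketch captures.

    # Non-normative sketch: chroma residue scaling is never used for 4:0:0.

    def chroma_residue_scaling_enabled(color_fmt, signalled_flag):
        if color_fmt == '4:0:0':
            # not signalled / constrained / ignored: effectively "not used"
            return False
        return bool(signalled_flag)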
  • coding tool X may be disabled for a block if the block refers to at least one reference picture with a different color format to the current picture.
  • the information related to the coding tool X may not be signaled.
  • the block cannot refer to a reference picture with a different color format to the current picture.
  • a merge candidate referring to a reference picture with a different color format to the current picture may be skipped or not put into the merge candidate list.
• The reference index corresponding to a reference picture with a different color format from the current picture may be skipped or not allowed to be signaled.
• The coding tool X may be any one of the tools listed below.
  • method 2400 or 2500 may be implemented at a video decoder or a video encoder.
  • FIG. 24 shows a flowchart of an exemplary method for video processing.
  • the method 2400 includes, at step 2402, determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; at step 2404, determining, in response to the relationship, whether and /or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and at step 2406, performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
  • FIG. 25 shows a flowchart of another exemplary method for video processing.
  • the method 2500 includes, at step 2502, determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; at step 2504, performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and at step 2506, performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
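A schematic, non-normative rendering of the step structure of methods 2400 and 2500 follows; decide_operation, apply_operation and code_block are hypothetical placeholders for the decision logic and the encode/decode conversion.

    # Non-normative sketch of the flows of FIGS. 24 and 25.

    def method_2400(block, bitstream, ref0_size, ref1_size):
        rel = (ref0_size == ref1_size)           # step 2402: relationship
        op = decide_operation(rel)               # step 2404: whether/how
        return code_block(block, bitstream, op)  # step 2406: conversion

    def method_2500(block, bitstream, cur_size, ref_size):
        rel = (cur_size == ref_size)             # step 2502: relationship
        op = apply_operation(rel)                # step 2504: on reference
                                                 # samples or prediction block
        return code_block(block, bitstream, op)  # step 2506: conversion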
  • a method for video processing comprising: determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; determining, in response to the relationship, whether and /or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
  • the specific operation comprises a bi-prediction from the two reference pictures for the current video block.
  • the specific operation comprises at least one of a motion derivation and a motion refinement for the current video block.
  • the motion refinement comprises a decoder-side motion vector refinement (DMVR) .
  • the motion derivation comprises a bi-directional optical flow (BIO) process.
  • the specific operation is disabled.
  • motion vector differences associated with at least one of the two reference pictures are scaled in the specific operation based on the relationship between resolutions of two reference pictures.
  • a method for video processing comprising: determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
  • the specific operation comprises: performing a modification on samples within the reference picture before invoking motion compensation process of the current video block.
  • the specific operation comprises: performing a motion compensation for the video block by using the samples within the reference picture to generate a predictive block for the video block, and performing a modification on samples within the predictive block.
  • the reference picture is stored in a decoded picture buffer in a resolution that the reference picture is coded with.
  • the reference picture is stored in a decoded picture buffer in a resolution which is a maximally allowed resolution, a minimally allowed resolution, or a predefined resolution.
  • the reference picture is stored in a decoded picture buffer in a resolution which is based on a resolution of the current picture and that of the reference picture.
  • the method further comprises: storing a decoded picture comprising the video block in a decoded picture buffer in a resolution that the decoded picture is coded with.
  • the method further comprises: storing a decoded picture comprising the video block in a decoded picture buffer in a resolution which is a maximally allowed resolution, minimally allowed resolution, or a predefined resolution.
  • pictures in the decoded picture buffer are in a same resolution.
  • the modification comprises at least one of an up-sampling or a down-sampling.
  • the conversion includes encoding the current video block into the bitstream representation of the current video block and decoding the current video block from the bitstream representation of the current video block.
  • an apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method as described above.
  • a non-transitory computer readable media having program code stored thereupon, the program code, when executed, causing a processor to implement the method as described above.
  • FIG. 23 is a block diagram of a video processing apparatus 2300.
  • the apparatus 2300 may be used to implement one or more of the methods described herein.
  • the apparatus 2300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 2300 may include one or more processors 2302, one or more memories 2304 and video processing hardware 2306.
• The processor(s) 2302 may be configured to implement one or more methods (including, but not limited to, methods 2400 and 2500) described in the present document.
  • the memory (memories) 2304 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 2306 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 23.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
• The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Abstract

Devices, systems and methods for digital video coding, which include bit-depth and color format conversions for video coding, are described. An exemplary method for video processing includes: determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; determining, in response to the relationship, whether and/or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.

Description

ADAPTIVE RESOLUTION CHANGE IN VIDEO CODING
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2019/087209, filed on May 16, 2019. The entire disclosures thereof are incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
This patent document relates to video coding techniques, devices and systems.
BACKGROUND
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
Devices, systems and methods related to digital video coding, and specifically, to bit-depth and color format conversions for video coding, are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards or video codecs.
In one representative aspect, there is disclosed a method for video processing, comprising: determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; determining, in response to the relationship, whether and /or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
In another representative aspect, there is disclosed a method for video processing, comprising: determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example of adaptive streaming of two representations of the same content coded at different resolutions.
FIG. 2 shows another example of adaptive streaming of two representations of the same content coded at different resolutions, where segments use either closed Group of Pictures (GOP) or open GOP prediction structures.
FIG. 3 shows an example of open GOP prediction structures of the two representations.
FIG. 4 shows an example of representation switching at an open GOP position.
FIG. 5 shows an example of using resampled reference pictures from another bitstream as a reference for decoding Random Access Skipped Leading (RASL) pictures.
FIGS. 6A-6C show examples of motion-constrained tile set (MCTS) -based region-wise mixed-resolution (RWMR) viewport-dependent 360 streaming.
FIG. 7 shows an example of collocated sub-picture representations of different intra random access point (IRAP) intervals and different sizes.
FIG. 8 shows an example of segments received when a viewing orientation change causes a resolution change at the start of a segment.
FIG. 9 shows an example of a viewing orientation change.
FIG. 10 shows an example of sub-picture representations for two sub-picture locations.
FIG. 11 shows an example of encoder modifications for adaptive resolution conversion (ARC) .
FIG. 12 shows an example of decoder modifications for ARC.
FIG. 13 shows an example of tile group based resampling for ARC.
FIG. 14 shows an example of an ARC process.
FIG. 15 shows an example of alternative temporal motion vector prediction (ATMVP) for a coding unit.
FIGS. 16A-16B show an example of a simplified affine motion model.
FIG. 17 shows an example of an affine motion vector field (MVF) per sub-block.
FIGS. 18A and 18B show an example of the 4-parameter affine model and the 6-parameter affine model, respectively.
FIG. 19 shows an example of a motion vector prediction (MVP) for AF_INTER for inherited affine candidates.
FIG. 20 shows an example of an MVP for AF_INTER for constructed affine candidates.
FIGS. 21A and 21B show examples of candidates for AF_MERGE.
FIG. 22 shows an example of candidate positions for affine merge mode.
FIG. 23 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
FIG. 24 shows a flowchart of an example method for video processing.
FIG. 25 shows a flowchart of another example method for video processing.
DETAILED DESCRIPTION
Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H. 265) and future standards to improve compression performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
1. Video coding introduction
Due to the increasing demand of higher resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video to a compressed format or vice versa. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate) , the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency) . The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H. 265 or MPEG-H Part 2) , the Versatile Video Coding standard to be finalized, or other current and/or future video coding standards.
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H. 261 and H. 263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H. 262/MPEG-2 Video and H. 264/MPEG-4 Advanced Video Coding (AVC) and H. 265/HEVC standards. Since H. 262, the video coding standards are based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore the future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard targeting a 50% bitrate reduction compared to HEVC.
AVC and HEVC do not have the ability to change resolution without having to introduce an IDR or intra random access point (IRAP) picture; such ability can be referred to as adaptive resolution change (ARC). There are use cases or application scenarios that would benefit from an ARC feature, including the following:
– Rate adaption in video telephony and conferencing: For adapting the coded video to the changing network conditions, when the network condition gets worse so that available  bandwidth becomes lower, the encoder may adapt to it by encoding smaller resolution pictures. Currently, changing picture resolution can be done only after an IRAP picture; this has several issues. An IRAP picture at reasonable quality will be much larger than an inter-coded picture and will be correspondingly more complex to decode: this costs time and resource. This is a problem if the resolution change is requested by the decoder for loading reasons. It can also break low-latency buffer conditions, forcing an audio re-sync, and the end-to-end delay of the stream will increase, at least temporarily. This can give a poor user experience.
– Active speaker changes in multi-party video conferencing: For multi-party video conferencing, it is common that the active speaker is shown in bigger video size than the video for the rest of conference participants. When the active speaker changes, picture resolution for each participant may also need to be adjusted. The need to have ARC feature becomes more important when such change in active speaker happens frequently.
– Fast start in streaming: For streaming application, it is common that the application would buffer up to certain length of decoded picture before start displaying. Starting the bitstream with smaller resolution would allow the application to have enough pictures in the buffer to start displaying faster.
– Adaptive stream switching in streaming: The Dynamic Adaptive Streaming over HTTP (DASH) specification includes a feature named @mediaStreamStructureId. This enables switching between different representations at open-GOP random access points with non-decodable leading pictures, e.g., CRA pictures with associated RASL pictures in HEVC. When two different representations of the same video have different bitrates but the same spatial resolution while they have the same value of @mediaStreamStructureId, switching between the two representations at a CRA picture with associated RASL pictures can be performed, and the RASL pictures associated with the CRA pictures at the switch point can be decoded with acceptable quality, hence enabling seamless switching. With ARC, the @mediaStreamStructureId feature would also be usable for switching between DASH representations with different spatial resolutions.
ARC is also known as Dynamic resolution conversion.
ARC may also be regarded as a special case of Reference Picture Resampling (RPR) such as H. 263 Annex P.
1.1. Reference picture resampling in H. 263 Annex P
This mode describes an algorithm to warp the reference picture prior to its use for prediction. It can be useful for resampling a reference picture having a different source format than the picture being predicted. It can also be used for global motion estimation, or estimation of rotating motion, by warping the shape, size, and location of the reference picture. The syntax includes warping parameters to be used as well as a resampling algorithm. The simplest level of operation for the reference picture resampling mode is an implicit factor of 4 resampling as only an FIR filter needs to be applied for the upsampling and downsampling processes. In this case, no additional signaling overhead is required as its use is understood when the size of a new picture (indicated in the picture header) is different from that of the previous picture.
1.2. Contributions on ARC to VVC
1.2.1. JVET-M0135
A preliminary design of ARC as described below, with some parts taken from JCTVC-F158, is suggested to be a place holder just to trigger the discussion.
1.2.1.1 Description of basic tools
The basic tools constraints for supporting ARC are as follows:
● The spatial resolution may differ from the nominal resolution by a factor 0.5, applied to both dimensions. The spatial resolution may increase or decrease, yielding scaling ratios of 0.5 and 2.0.
● The aspect ratios and chroma formats of the video format are not changed.
● The cropping areas are scaled in proportion to spatial resolutions.
● The reference pictures are simply re-scaled as needed and inter prediction is applied as usual.
1.2.1.2 Scaling operations
It is proposed to use simple, zero-phase separable down-and up-scaling filters. Note that these filters are for prediction only; a decoder may use more sophisticated scaling for output purposes. The following 1: 2 down-scaling filter is used, which has zero phase and 5 taps:
(-1, 9, 16, 9, -1) /32
The down-sampling points are at even sample positions and are co-sited. The same filter is used for luma and chroma.
For 2: 1 up-sampling, additional samples at odd grid positions are generated using the half-pel motion compensation interpolation filter coefficients in the latest VVC WD.
The combined up-and down-sampling will not change phase or the position of chroma sampling points.
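As a concrete illustration, the following sketch applies the 1:2 down-scaling to one row of samples with the 5-tap kernel above. Border clamping and the rounding offset are assumptions, since the text does not specify boundary handling.

    # Non-normative sketch: 1:2 down-scaling with (-1, 9, 16, 9, -1)/32,
    # taking even, co-sited positions as the down-sampling points.

    TAPS = (-1, 9, 16, 9, -1)

    def downscale_1d(row):
        out = []
        n = len(row)
        for c in range(0, n, 2):  # even, co-sited positions
            acc = 0
            for k, t in enumerate(TAPS):
                idx = min(max(c + k - 2, 0), n - 1)  # clamp at borders
                acc += t * row[idx]
            out.append((acc + 16) >> 5)  # divide by 32 with rounding
        return out

    print(downscale_1d([100, 102, 104, 106, 108, 110, 112, 114]))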
1.2.1.3 Resolution description in parameter set
The signaling of picture resolution in the SPS is changed as shown below, with deletions marked with double brackets (e.g., [ [a] ] denotes the deletion of the character “a” ) both in the description below and in the remainder of the document.
Sequence parameter set RBSP syntax and semantics
(Sequence parameter set RBSP syntax table, not reproduced here.)
[ [pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples. pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples. pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY. ] ]
num_pic_size_in_luma_samples_minus1 plus 1 specifies the number of picture sizes (width and height) in units of luma samples that may be present in the coded video sequence.
pic_width_in_luma_samples [i] specifies the i-th width of decoded pictures in units of luma samples that may be present in the coded video sequence. pic_width_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
pic_height_in_luma_samples [i] specifies the i-th height of decoded pictures in units of luma samples that may be present in the coded video sequence. pic_height_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
Picture parameter set RBSP syntax and semantics
(Picture parameter set RBSP syntax table, not reproduced here.)
pic_size_idx specifies the index to the i-th picture size in the sequence parameter set. The width of pictures that refer to the picture parameter set is pic_width_in_luma_samples [pic_size_idx] in luma samples. Likewise, the height of pictures that refer to the picture parameter set is pic_height_in_luma_samples [pic_size_idx] in luma samples.
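These semantics amount to an index lookup into the SPS-signalled size list; a minimal sketch, with the parameter sets modelled as plain dictionaries, follows.

    # Non-normative sketch: resolving picture dimensions via pic_size_idx.

    def picture_size(sps, pps):
        i = pps['pic_size_idx']
        return (sps['pic_width_in_luma_samples'][i],
                sps['pic_height_in_luma_samples'][i])

    sps = {'pic_width_in_luma_samples':  [1920, 960],
           'pic_height_in_luma_samples': [1080, 540]}
    print(picture_size(sps, {'pic_size_idx': 1}))  # (960, 540)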
1.2.2. JVET-M0259
1.2.2.1. Background: sub-pictures
The term sub-picture track is defined as follows in Omnidirectional Media Format (OMAF): a track that has spatial relationships to other track(s) and that represents a spatial subset of the original video content, which has been split into spatial subsets before video encoding at the content production side. A sub-picture track for HEVC can be constructed by rewriting the parameter sets and slice segment headers for a motion-constrained tile set so that it becomes a self-standing HEVC bitstream. A sub-picture Representation can be defined as a DASH Representation that carries a sub-picture track.
JVET-M0261 used the term sub-picture as a spatial partitioning unit for VVC, summarized as follows:
1. Pictures are divided into sub-pictures, tile groups and tiles.
2. A sub-picture is a rectangular set of tile groups that starts with a tile group that has tile_group_address equal to 0.
3. Each sub-picture may refer to its own PPS and may hence have its own tile partitioning.
4. Sub-pictures are treated like pictures in the decoding process.
5. The reference pictures for decoding the sub-picture are generated by extracting the area collocating with the current sub-picture from the reference pictures in the decoded picture buffer. The extracted area shall be a decoded sub-picture, i.e. inter prediction takes place between sub-pictures of the same size and the same location within the picture.
6. A tile group is a sequence of tiles in tile raster scan of a sub-picture.
In this contribution, we refer to the term sub-picture as defined in JVET-M0261. A track that encapsulates a sub-picture sequence as defined in JVET-M0261 has very similar properties to a sub-picture track defined in OMAF, so the examples given below apply in both cases.
1.2.2.2. Use cases
1.2.2.2.1. Adaptive resolution change in streaming
Requirement for the support of adaptive streaming
Section 5.13 ( "Support for Adaptive Streaming" ) of MPEG N17074 includes the following requirement for VVC:
The standard shall support fast representation switching in the case of adaptive streaming services that offer multiple representations of the same content, each having different properties (e.g., spatial resolution or sample bit depth). The standard shall enable the use of efficient prediction structures (e.g., so-called open groups of pictures) without compromising the fast and seamless representation switching capability between representations of different properties, such as different spatial resolutions.
Example of the open GOP prediction structure with representation switching
Content generation for adaptive bitrate streaming includes generations of different Representations, which can have different spatial resolutions. The client requests Segments from the Representations and can hence decide at which resolution and bitrate the content is received. At the client, the Segments of different Representations are concatenated, decoded, and played. The client should be able to achieve seamless playout with one decoder instance. Closed GOP structures (starting with an IDR picture) are conventionally used as illustrated in FIG. 1.
Open GOP prediction structures (starting with CRA pictures) enable better compression performance than the respective closed GOP prediction structures. For example, an average bitrate reduction of 5.6% in terms of luma Bjontegaard delta bitrate was achieved with an IRAP picture interval of 24 pictures.
Open GOP prediction structures reportedly also reduce subjectively visible quality pumping.
A challenge in the use of open GOPs in streaming is that RASL pictures cannot be decoded with correct reference pictures after switching Representations. We describe this challenge below in relation to the Representations presented in FIG. 2.
The Segments starting with a CRA picture contain RASL pictures for which at least one reference picture is in the previous Segment. This is illustrated in FIG. 3, where picture 0 in both bitstreams resides in the previous Segment and is used as reference for predicting the RASL pictures.
The Representation switching marked with a dashed rectangle in FIG. 2 is illustrated in FIG. 4. It can be observed that the reference picture ( "picture 0" ) for RASL pictures has not been decoded. Consequently, RASL pictures are not decodable and there will be a gap in the playout of the video.
However, it has been found to be subjectively acceptable to decode RASL pictures with resampled reference pictures, as described by embodiments of the present invention. Resampling of "picture 0" and using it as a reference picture for decoding the RASL pictures is illustrated in FIG. 5.
1.2.2.2.2. Viewport change in region-wise mixed-resolution (RWMR) 360° video streaming
Background: HEVC-based RWMR streaming
RWMR 360° streaming offers an increased effective spatial resolution on the viewport. Schemes where tiles covering the viewport originate from a 6K (6144×3072) ERP picture or an equivalent CMP resolution, illustrated in FIG. 6, with "4K" decoding capacity (HEVC Level 5.1), were included in clauses D.6.3 and D.6.4 of OMAF and also adopted in the VR Industry Forum Guidelines. Such resolutions are asserted to be suitable for head-mounted displays using a quad-HD (2560×1440) display panel.
Encoding: The content is encoded at two spatial resolutions with cube face size 1536×1536 and 768×768, respectively. In both bitstreams a 6×4 tile grid is used and a motion-constrained tile set (MCTS) is coded for each tile position.
Encapsulation: Each MCTS sequence is encapsulated as a sub-picture track and made available as a sub-picture Representation in DASH.
Selection of streamed MCTSs: 12 MCTSs from the high-resolution bitstream are selected and the complementary 12 MCTSs are extracted from the low-resolution bitstream. Thus, a hemi-sphere (180°×180°) of the streamed content originates from the high-resolution bitstream.
Merging MCTSs to a bitstream to be decoded: The received MCTSs of a single time instance are merged into a coded picture of 1920×4608, which conforms to HEVC Level 5.1. Another option for the merged picture is to have 4 tile columns of width 768, two tile columns of width 384, and three tile rows of height 768 luma samples, resulting in a picture of 3840×2304 luma samples.
Background: several Representations of different IRAP intervals for viewport-dependent 360° streaming
When viewing orientation changes in HEVC-based viewport-dependent 360° streaming, a new selection of sub-picture Representations can take effect at the next IRAP-aligned Segment boundary. Sub-picture Representations are merged to coded pictures for decoding, and hence the VCL NAL unit types are aligned in all selected sub-picture Representations.
To provide a trade-off between the response time to react to viewing orientation changes and the rate-distortion performance when the viewing orientation is stable, multiple versions of the content can be coded at different IRAP intervals. This is illustrated in FIG. 7 for one set of collocated sub-picture Representations for encoding presented in FIG. 6.
FIG. 8 presents an example where the sub-picture location is first selected to be received at the lower resolution (384×384) . A change in the viewing orientation causes a new selection of the sub-picture locations to be received at the higher resolution (768×768) . In this example the viewing orientation change happens so that Segment 4 is received from the short-IRAP-interval sub-picture Representations. After that, the viewing orientation is stable and thus, the long-IRAP-interval version can be used starting from Segment 5 onwards.
Problem statement
Since the viewing orientation moves gradually in typical viewing situations, the resolution changes in only a subset of the sub-picture locations in RWMR viewport-dependent streaming. FIG. 9 illustrates a viewing orientation change from FIG. 6 slightly upwards and towards the right cube face. Cube face partitions that have a different resolution than earlier are indicated with "C". It can be observed that the resolution changed in 6 out of 24 cube face partitions. However, as discussed above, Segments starting with an IRAP picture need to be received for all 24 cube face partitions in response to the viewing orientation change. Updating all sub-picture locations with Segments starting with an IRAP picture is inefficient in terms of streaming rate-distortion performance.
In addition, the ability to use open GOP prediction structures with sub-picture Representations of RWMR 360° streaming is desirable to improve rate-distortion performance and to avoid visible picture quality pumping caused by closed GOP prediction structures.
Proposed design goals
The following design goals are proposed:
1. The VVC design should allow merging of a sub-picture originating from a random-access picture and another sub-picture originating from a non-random-access picture into the same coded picture conforming to VVC.
2. The VVC design should enable the use of open GOP prediction structures in sub-picture representations without compromising the fast and seamless representation switching capability between sub-picture representations of different properties, such as different spatial resolutions, while enabling merging of sub-picture representations into a single VVC bitstream.
The design goals can be illustrated with FIG. 10, in which sub-picture Representations for two sub-picture locations are presented. For both sub-picture locations, a separate version of the content is coded for each combination among two resolutions and two random access intervals. Some of the Segments start with an open GOP prediction structure. A viewing orientation change causes the resolution of sub-picture location 1 to be switched at the start of Segment 4. Since Segment 4 starts with a CRA picture, which is associated with RASL pictures, those reference pictures of the RASL pictures that are in Segment 3 need to be resampled. It is  remarked that this resampling applies to sub-picture location 1 while decoded sub-pictures of some other sub-picture locations are not resampled. In this example, the viewing orientation change does not cause changes in the resolution of sub-picture location 2 and thus decoded sub-pictures of sub-picture location 2 are not resampled. In the first picture of Segment 4, the Segment for sub-picture location 1 contains a sub-picture originating from a CRA picture, while the Segment for sub-picture location 2 contains a sub-picture originating from a non-random-access picture. It is suggested that merging of these sub-pictures into a coded picture is allowed in VVC.
1.2.2.2.3. Adaptive resolution change in video conferencing
JCTVC-F158 proposed adaptive resolution change mainly for video conferencing. The following sub-sections are copied from JCTVC-F158 and present the use cases where adaptive resolution change is asserted to be useful.
Seamless network adaption and error resilience
Applications such as video conferencing and streaming over packet networks frequently require that the encoded stream adapt to changing network conditions, especially when bit rates are too high and data is being lost. Such applications typically have a return channel allowing the encoder to detect errors and perform adjustments. The encoder has two main tools at its disposal: bit rate reduction and changing the resolution, either temporal or spatial. Temporal resolution changes can be achieved effectively by coding with hierarchical prediction structures. However, for best quality, spatial resolution changes are also needed as part of a well-designed encoder for video communications.
Changing spatial resolution within AVC requires that an IDR frame be sent and the stream be reset. This causes significant problems. An IDR frame at reasonable quality will be much larger than an inter picture, and will be correspondingly more complex to decode: this costs time and resources. This is a problem if the resolution change is requested by the decoder for loading reasons. It can also break low-latency buffer conditions, forcing an audio re-sync, and the end-to-end delay of the stream will increase, at least temporarily. This gives a poor user experience.
To minimize these problems, the IDR is typically sent at low quality, using a similar number of bits to a P frame, and it takes a significant time to return to full quality for the given resolution. To get low enough delay, the quality can be very low indeed and there is often a visible blurring  before the image is “refocused” . In effect, the Intra frame is doing very little useful work in compression terms: it is just a method of re-starting the stream.
So there is a requirement for methods in HEVC which allow resolution to be changed, especially in challenging network conditions, with minimal impact on subjective experience.
Fast start
It would be useful to have a “fast start” mode where the first frame is sent at reduced resolution and resolution is increased over the next few frames, in order to reduce delay and get to normal quality more quickly without unacceptable image blurring at the beginning.
Conference “compose”
Video conferences also often have a feature whereby the person speaking is shown full-screen and other participants are shown in smaller resolution windows. To support this efficiently, often the smaller pictures are sent at lower resolution. This resolution is then increased when the participant becomes the speaker and is full-screened. Sending an intra frame at this point causes an unpleasant hiccup in the video stream. This effect can be quite noticeable and unpleasant if speakers alternate rapidly.
1.2.2.3. Proposed design goals
The following high-level design choices are proposed for VVC version 1:
1. It is proposed to include a reference picture resampling process in VVC version 1 for the following use cases:
● Usage of efficient prediction structures (e.g. so-called open groups of pictures) in adaptive streaming without compromising the fast and seamless representation switching capability between representations of different properties, such as different spatial resolutions.
● Adapting low-delay conversational video content to network conditions and application-originated resolution changes without significant delay or delay variation.
2. The VVC design is proposed to allow merging of a sub-picture originating from a random-access picture and another sub-picture originating from a non-random-access picture into the same coded picture conforming to VVC. This is asserted to enable efficient handling of  viewing orientation changes in mixed-quality and mixed-resolution viewport-adaptive 360° streaming.
3. It is proposed to include sub-picture-wise resampling process in VVC version 1. This is asserted to enable efficient prediction structure for more efficient handling of viewing orientation changes in mixed-resolution viewport-adaptive 360° streaming.
1.2.3. JVET-N0048
The use cases and design goals for adaptive resolution changing (ARC) were discussed in detail in JVET-M0259. A summary is provided below:
1. Real-time communication
The following use cases for adaptive resolution change were originally included in JCTVC-F158:
a. Seamless network adaption and error resilience (through dynamic adaptive resolution changes)
b. Fast start (gradual increase of resolution at session start or reset)
c. Conference “compose” (person speaking is given a higher resolution)
2. Adaptive streaming
Section 5.13 ( "Support for Adaptive Streaming" ) of MPEG N17074 includes the following requirement for VVC:
The standard shall support fast representation switching in the case of adaptive streaming services that offer multiple representations of the same content, each having different properties (e.g. spatial resolution or sample bit depth) . The standard shall enable the use of efficient prediction structures (e.g. so-called open groups of pictures) without compromising from the fast and seamless representation switching capability between representations of different properties, such as different spatial resolutions.
JVET-M0259 discusses how to meet this requirement by resampling of reference pictures of leading pictures.
3. 360-degree viewport-dependent streaming
JVET-M0259 discusses how to address this use case by resampling certain independently coded picture regions of reference pictures of leading pictures.
This contribution proposes an adaptive resolution coding approach, which is asserted to meet all the use cases and design goals above. The 360-degree viewport-dependent streaming and conference "compose" use cases are handled by this proposal together with JVET-N0045 (proposing independent sub-picture layers).
Proposed specification text
Signaling
sps_max_rpr
Figure PCTCN2020090799-appb-000003
sps_max_rpr specifies the maximum number of active reference pictures in reference picture list 0 or 1 for any tile group in the CVS that have pic_width_in_luma_samples and pic_height_in_luma_samples not equal to pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, of the current picture.
Picture width and height
Figure PCTCN2020090799-appb-000004
Figure PCTCN2020090799-appb-000005
max_width_in_luma_samples specifies that it is a requirement of bitstream conformance that pic_width_in_luma_samples in any active PPS for any picture of a CVS for which this SPS is active is less than or equal to max_width_in_luma_samples.
max_height_in_luma_samples specifies that it is a requirement of bitstream conformance that pic_height_in_luma_samples in any active PPS for any picture of a CVS for which this SPS is active is less than or equal to max_height_in_luma_samples.
High-level decoding process
The decoding process operates as follows for the current picture CurrPic:
1. The decoding of NAL units is specified in clause 8.2.
2. The processes in clause 8.3 specify the following decoding processes using syntax elements in the tile group header layer and above:
– Variables and functions relating to picture order count are derived as specified in clause 8.3.1. This needs to be invoked only for the first tile group of a picture.
– At the beginning of the decoding process for each tile group of a non-IDR picture, the decoding process for reference picture lists construction specified in clause 8.3.2 is invoked for derivation of reference picture list 0 (RefPicList [0] ) and reference picture list 1 (RefPicList [1] ) .
– The decoding process for reference picture marking in clause 8.3.3 is invoked, wherein reference pictures may be marked as "unused for reference" or "used for long-term reference" . This needs to be invoked only for the first tile group of a picture.
– For each active reference picture in RefPicList [0] and RefPicList [1] that has pic_width_in_luma_samples or pic_height_in_luma_samples not equal to pic_width_in_luma_samples or pic_height_in_luma_samples, respectively, of CurrPic, the following applies:
– The resampling process in clause X.Y.Z is invoked [Ed. (MH): details of invocation parameters to be added] with the output having the same reference picture marking and picture order count as the input.
– The reference picture used as input to the resampling process is marked as "unused for reference" .
3. [Ed. (YK) : Add herein the invocation of the decoding processes for coding tree units, scaling, transform, in-loop filtering, etc. ]
4. After all tile groups of the current picture have been decoded, the current decoded picture is marked as "used for short-term reference" .
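The flow above can be summarized in a short, non-normative Python sketch; Picture, resample() and decode_tile_groups() below are hypothetical stubs standing in for clause X.Y.Z and the editorial placeholder in step 3:

```python
# Illustrative sketch of the per-picture decoding flow above (non-normative).

from dataclasses import dataclass

@dataclass
class Picture:
    width: int
    height: int
    poc: int = 0
    marking: str = "unused for reference"

def resample(ref, width, height):
    return Picture(width, height)          # stub for clause X.Y.Z

def decode_tile_groups(pic):
    pass                                   # stub for step 3 above

def decode_picture(curr, ref_pic_lists):   # [RefPicList[0], RefPicList[1]]
    for rpl in ref_pic_lists:
        for i, ref in enumerate(rpl):
            if (ref.width, ref.height) != (curr.width, curr.height):
                res = resample(ref, curr.width, curr.height)
                res.marking = ref.marking  # output keeps marking and POC
                res.poc = ref.poc
                ref.marking = "unused for reference"
                rpl[i] = res
    decode_tile_groups(curr)
    curr.marking = "used for short-term reference"
```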
Resampling process
The SHVC resampling process (HEVC clause H.8.1.4.2) is proposed with the following additions:
If sps_ref_wraparound_enabled_flag is equal to 0, the sample value tempArray[ n ] with n = 0..7 is derived as follows:

tempArray[ n ] = ( fL[ xPhase, 0 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef - 3 ), yPosRL ] +
fL[ xPhase, 1 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef - 2 ), yPosRL ] +
fL[ xPhase, 2 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef - 1 ), yPosRL ] +
fL[ xPhase, 3 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef ), yPosRL ] +
fL[ xPhase, 4 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef + 1 ), yPosRL ] +      (H-38)
fL[ xPhase, 5 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef + 2 ), yPosRL ] +
fL[ xPhase, 6 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef + 3 ), yPosRL ] +
fL[ xPhase, 7 ] * rlPicSampleL[ Clip3( 0, refW - 1, xRef + 4 ), yPosRL ] ) >> shift1

Otherwise, the sample value tempArray[ n ] with n = 0..7 is derived as follows:

refOffset = ( sps_ref_wraparound_offset_minus1 + 1 ) * MinCbSizeY

tempArray[ n ] = ( fL[ xPhase, 0 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef - 3 ), yPosRL ] +
fL[ xPhase, 1 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef - 2 ), yPosRL ] +
fL[ xPhase, 2 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef - 1 ), yPosRL ] +
fL[ xPhase, 3 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef ), yPosRL ] +
fL[ xPhase, 4 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef + 1 ), yPosRL ] +
fL[ xPhase, 5 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef + 2 ), yPosRL ] +
fL[ xPhase, 6 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef + 3 ), yPosRL ] +
fL[ xPhase, 7 ] * rlPicSampleL[ ClipH( refOffset, refW, xRef + 4 ), yPosRL ] ) >> shift1

If sps_ref_wraparound_enabled_flag is equal to 0, the sample value tempArray[ n ] with n = 0..3 is derived as follows:

tempArray[ n ] = ( fC[ xPhase, 0 ] * rlPicSampleC[ Clip3( 0, refWC - 1, xRef - 1 ), yPosRL ] +
fC[ xPhase, 1 ] * rlPicSampleC[ Clip3( 0, refWC - 1, xRef ), yPosRL ] +
fC[ xPhase, 2 ] * rlPicSampleC[ Clip3( 0, refWC - 1, xRef + 1 ), yPosRL ] +      (H-50)
fC[ xPhase, 3 ] * rlPicSampleC[ Clip3( 0, refWC - 1, xRef + 2 ), yPosRL ] ) >> shift1

Otherwise, the sample value tempArray[ n ] with n = 0..3 is derived as follows:

refOffset = ( ( sps_ref_wraparound_offset_minus1 + 1 ) * MinCbSizeY ) / SubWidthC

tempArray[ n ] = ( fC[ xPhase, 0 ] * rlPicSampleC[ ClipH( refOffset, refWC, xRef - 1 ), yPosRL ] +
fC[ xPhase, 1 ] * rlPicSampleC[ ClipH( refOffset, refWC, xRef ), yPosRL ] +
fC[ xPhase, 2 ] * rlPicSampleC[ ClipH( refOffset, refWC, xRef + 1 ), yPosRL ] +
fC[ xPhase, 3 ] * rlPicSampleC[ ClipH( refOffset, refWC, xRef + 2 ), yPosRL ] ) >> shift1
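For illustration, the following minimal Python sketch computes one tempArray entry per the luma formulas above for a given yPosRL. fL stands for the 16-phase, 8-tap coefficient table (shown as an image in this document); the function wrapper and argument layout are our own convenience, not part of the proposed text:

```python
# One entry of tempArray per the luma formulas above (illustrative sketch).
# fL: 16x8 coefficient table, rl_pic: 2-D reference sample array.

def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def clip_h(ref_offset, ref_w, x):
    # horizontal wrap-around clipping used when
    # sps_ref_wraparound_enabled_flag is equal to 1
    if x < 0:
        return x + ref_offset
    if x > ref_w - 1:
        return x - ref_offset
    return x

def temp_array_entry(fL, rl_pic, x_phase, x_ref, y_pos_rl, ref_w, shift1,
                     wraparound=False, ref_offset=0):
    acc = 0
    for k, dx in enumerate(range(-3, 5)):       # taps at xRef - 3 .. xRef + 4
        x = (clip_h(ref_offset, ref_w, x_ref + dx) if wraparound
             else clip3(0, ref_w - 1, x_ref + dx))
        acc += fL[x_phase][k] * rl_pic[y_pos_rl][x]
    return acc >> shift1
```

The chroma derivation is analogous, with the 4-tap table fC and taps at xRef - 1 .. xRef + 2.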
1.2.4. JVET-N0052
Adaptive resolution change, as a concept in video compression standards, has been around since at least 1996, in particular in the H.263+ related proposals towards reference picture resampling (RPR, Annex P) and Reduced Resolution Update (Annex Q). It has recently gained a certain prominence, first with proposals by Cisco during the JCT-VC time, then in the context of VP9 (where it is moderately widely deployed nowadays), and more recently in the context of VVC. ARC allows reducing the number of samples required to be coded for a given picture, and upsampling the resulting reference picture to a higher resolution when that is desirable.
We consider ARC of particular interest in the following scenarios:
1) Intra coded pictures, such as IDR pictures, are often considerably larger than inter pictures. Downsampling pictures intended to be intra coded, regardless of the reason, may provide a better input for future prediction. It is also clearly advantageous from a rate control viewpoint, at least in low-delay applications.
2) When operating the codec near its breaking point, as at least some cable and satellite operators routinely seem to do, ARC may become handy even for non-intra coded pictures, such as in scene transitions without a hard transition point.
3) Looking perhaps a bit too far forward: is the concept of a fixed resolution generally defensible? With the departure of CRTs and the ubiquity of scaling engines in rendering devices, the hard bind between rendering and coding resolution is a thing of the past. Also, we note that there is certain research available suggesting that most people are not able to concentrate on fine details (associated perhaps with high resolution) when there is a lot of activity going on in the video sequence, even if that activity is elsewhere spatially. If that were true and generally accepted, fine-granularity resolution changes may be a better rate control mechanism than adaptive QP. We put this point out for discussion at this time, as we lack data; feedback from those in the know is appreciated. Of course, doing away with the concept of fixed-resolution bitstreams has a myriad of system layer and implementation implications, of which (at least at the level of their existence, if not their detailed nature) we are well aware.
Technically, ARC can be implemented as reference picture resampling. Implementing reference picture resampling has two major aspects: the resampling filters, and the signaling of the resampling information in the bitstream. This document focuses on the latter and touches the former only to the extent we have implementation experience. More study of suitable filter design is encouraged.
Overview of an existing ARC implementation
FIGS. 11 and 12 illustrate an existing ARC encoder and decoder implementation, respectively. In our implementation, it is possible to change the picture width and height on a per-picture granularity, irrespective of the picture type. At the encoder, the input image data is down-sampled to the selected picture size for the current picture encoding. After the first input picture is encoded as an intra picture, the decoded picture is stored in the decoded picture buffer (DPB). When a subsequent picture is down-sampled with a different sampling ratio and encoded as an inter picture, the reference picture(s) in the DPB is/are up-/down-scaled according to the spatial ratio between the picture size of the reference and the current picture size. At the decoder, the decoded picture is stored in the DPB without resampling. However, the reference picture in the DPB is up-/down-scaled in relation to the spatial ratio between the currently decoded picture and the reference when used for motion compensation. The decoded picture is up-sampled to the original picture size or the desired output picture size when bumped out for display. In the motion estimation/compensation process, motion vectors are scaled in relation to the picture size ratio as well as the picture order count difference.
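That last point can be sketched, non-normatively, as a candidate motion vector scaled first by the POC distance and then by the spatial ratio between the two picture sizes; the rounding below (truncating integer division) is an assumption, as the text does not give the exact arithmetic:

```python
# Rough sketch of combined temporal and spatial MV scaling (assumed rounding).

def scale_mv(mv, poc_diff_cand, poc_diff_curr, size_ref, size_curr):
    mvx, mvy = mv
    # temporal scaling by picture order count difference
    mvx = mvx * poc_diff_curr // poc_diff_cand
    mvy = mvy * poc_diff_curr // poc_diff_cand
    # spatial scaling by picture size ratio (width, height)
    mvx = mvx * size_curr[0] // size_ref[0]
    mvy = mvy * size_curr[1] // size_ref[1]
    return (mvx, mvy)

# e.g. candidate MV from a half-size reference, same POC distance
assert scale_mv((8, 4), 1, 1, (960, 540), (1920, 1080)) == (16, 8)
```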
Signaling of ARC parameters
The term ARC parameters is used herein as a combination of any parameters required to make ARC work. In the easiest case, that could be a zoom factor, or an index into a table with defined zoom factors. It could be a target resolution (for example in sample or max CU size granularity) , or an index into a table providing a target resolution, like what was proposed in JVET-M0135. Also included would be filter selectors or even filter parameters (all the way up to filter coefficients) of the up/down-sampling filters in use.
From the outset, we propose herein to allow, at least conceptually, different ARC parameters for different parts of a picture. We propose that the appropriate syntax structure as per the current VVC draft would be a rectangular tile group (TG). Those using scan-order TGs would be restricted to applying ARC only to a full picture, or to the extent scan-order TGs are included in a rectangular TG (we don’t recall that TG nesting has been discussed so far, and perhaps it’s a bad idea). That can easily be specified by a bitstream constraint.
As different TGs may have different ARC parameters, the appropriate place for ARC parameters would be either the TG header, or a parameter set with the scope of a TG referenced by the TG header (the Adaptation Parameter Set in the current VVC draft), or a more indirect reference (an index) into a table in a higher parameter set. Of these three choices, we propose, at this point, to use the TG header to code a reference to a table entry including the ARC parameters, with that table located in the SPS and maximum table values coded in the (forthcoming) DPS. We can live with coding a zoom factor directly into the TG header, without using any parameter set values. The use of the PPS for the reference, as proposed in JVET-M0135, is counter-indicated if, as we do, one takes per-tile-group signaling of ARC parameters as a design criterion.
As for the table entries themselves, we see many options:
● Coding down-sample factors, either one factor for both dimensions or independent factors in the X and Y dimensions? That’s mostly a (HW-) implementation discussion, and some would perhaps prefer an outcome where the zoom factor in the X dimension is fairly flexible, but in the Y dimension is fixed to 1 or has very few choices. We suggest that the syntax is the wrong place for expressing such constraints and, if they were desirable, we prefer the constraints expressed as requirements for conformance. In other words, keep the syntax flexible.
● Coding target resolutions. That’s what we propose below. There could be more or less complex constraints about those resolutions in relation to the current resolution, expressed perhaps in bitstream conformance requirements.
● Down-sampling per tile group is preferred to allow for picture composition/extraction. However, it is not critical from a signaling viewpoint. If the group were making the unwise decision of allowing ARC only at picture granularity, we can always include a requirement for bitstream conformance that all TGs use the same ARC parameters.
● Control information related to ARC. In our design below, that includes the reference picture size.
● Do we need to have flexibility in filter design? Anything bigger than a handful of codepoints? If yes, put those into APS? (No, please no APS update discussion again. If down-sample filter changes and ALF stays, we propose that bitstream has to eat the overhead. )
For now, in order to keep proposed technology aligned and simple (to the extent possible) , we propose
● Fixed filter design
● Target resolutions in a table in the SPS, with bitstream constraints TBD.
● Minimum/Maximum target resolution in DPS to facilitate cap exchange/negotiation.
The resulting syntax could look as follows:
Decoder parameter set RBSP syntax
Figure PCTCN2020090799-appb-000006
max_pic_width_in_luma_samples specifies the maximum width of decoded pictures in units of luma samples in the bitstream. max_pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY. The value of dec_pic_width_in_luma_samples [i] cannot be greater than the value of max_pic_width_in_luma_samples.
max_pic_height_in_luma_samples specifies the maximum height of decoded pictures in units of luma samples. max_pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY. The value of dec_pic_height_in_luma_samples [i] cannot be greater than the value of max_pic_height_in_luma_samples.
Sequence parameter set RBSP syntax
Figure PCTCN2020090799-appb-000007
Figure PCTCN2020090799-appb-000008
adaptive_pic_resolution_change_flag equal to 1 specifies that an output picture size (output_pic_width_in_luma_samples, output_pic_height_in_luma_samples) , an indication of the number of decoded picture sizes (num_dec_pic_size_in_luma_samples_minus1) and at least one decoded picture size (dec_pic_width_in_luma_samples [i] , dec_pic_height_in_luma_samples [i] ) are present in the SPS. A reference picture size (reference_pic_width_in_luma_samples, reference_pic_height_in_luma_samples) is present conditioned on the value of reference_pic_size_present_flag.
output_pic_width_in_luma_samples specifies the width of the output picture in units of luma samples. output_pic_width_in_luma_samples shall not be equal to 0.
output_pic_height_in_luma_samples specifies the height of the output picture in units of luma samples. output_pic_height_in_luma_samples shall not be equal to 0.
reference_pic_size_present_flag equal 1 specifies that reference_pic_width_in_luma_samples and reference_pic_height_in_luma_samples are present.
reference_pic_width_in_luma_samples specifies the width of the reference picture in units of luma samples. reference_pic_width_in_luma_samples shall not be equal to 0. If not present, the value of reference_pic_width_in_luma_samples is inferred to be equal to dec_pic_width_in_luma_samples [i] .
reference_pic_height_in_luma_samples specifies the height of the reference picture in units of luma samples. reference_pic_height_in_luma_samples shall not be equal to 0. If not present, the value of reference_pic_height_in_luma_samples is inferred to be equal to dec_pic_height_in_luma_samples [i] .
NOTE1 – The size of the output picture shall be equal to the values of output_pic_width_in_luma_samples and output_pic_height_in_luma_samples. The size of the reference picture shall be equal to the values of reference_pic_width_in_luma_samples and reference_pic_height_in_luma_samples, when the reference picture is used for motion compensation.
num_dec_pic_size_in_luma_samples_minus1 plus 1 specifies the number of the decoded picture size (dec_pic_width_in_luma_samples [i] , dec_pic_height_in_luma_samples [i] ) in units of luma samples in the coded video sequence.
dec_pic_width_in_luma_samples [i] specifies the i-th width of the decoded picture sizes in units of luma samples in the coded video sequence. dec_pic_width_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
dec_pic_height_in_luma_samples [i] specifies the i-th height of the decoded picture sizes in units of luma samples in the coded video sequence. dec_pic_height_in_luma_samples [i] shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
NOTE2 –The i-th decoded picture size (dec_pic_width_in_luma_samples [i] , dec_pic_height_in_luma_samples [i] ) may be equal to the decoded picture size of the decoded picture in the coded video sequence.
Tile group header syntax
Figure PCTCN2020090799-appb-000009
dec_pic_size_idx specifies that the width of the decoded picture shall be equal to dec_pic_width_in_luma_samples [dec_pic_size_idx] and the height of the decoded picture shall be equal to dec_pic_height_in_luma_samples [dec_pic_size_idx] .
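An illustrative sketch of how a decoder would resolve the picture size under this syntax is given below; the SPS carries the size table and the tile group header carries dec_pic_size_idx. The fallback when dec_pic_size_idx is absent is an assumption of ours (the excerpt does not state what is inferred), and the data structures are hypothetical:

```python
# Sketch: resolve the decoded picture size from the proposed SPS table and
# the tile group header index. The "index absent" fallback is an assumption.

def picture_size(sps, dec_pic_size_idx=None):
    if dec_pic_size_idx is None:
        # assumed fallback: the largest size in the table applies
        return max(zip(sps["dec_pic_width_in_luma_samples"],
                       sps["dec_pic_height_in_luma_samples"]),
                   key=lambda wh: wh[0] * wh[1])
    return (sps["dec_pic_width_in_luma_samples"][dec_pic_size_idx],
            sps["dec_pic_height_in_luma_samples"][dec_pic_size_idx])

sps = {"dec_pic_width_in_luma_samples":  [960, 1920],
       "dec_pic_height_in_luma_samples": [540, 1080]}
assert picture_size(sps, 0) == (960, 540)
assert picture_size(sps) == (1920, 1080)
```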
Filters
The proposed design conceptually includes four different filter sets: down-sampling filter from the original picture to the input picture, up-/down-sampling filters to rescale reference pictures for motion estimation/compensation, and up-sampling filter from the decoded picture to the output picture. The first and last ones can be left as non-normative matters. In the scope of specification, up-/down-sampling filters need to be explicitly signaled in an appropriate parameter set, or pre-defined.
Our implementation uses the down-sampling filters of SHVC (SHM ver. 12.4), a 12-tap, 2D-separable filter, to resize the reference picture to be used for motion compensation. In the current implementation, only dyadic sampling is supported; therefore, the phase of the down-sampling filter is set equal to zero by default. For up-sampling, 8-tap interpolation filters with 16 phases are used to shift the phase and align the luma and chroma pixel positions to the original positions.
Tables 1&2 provide the 8-tap filter coefficients fL [p, x] with p = 0.. 15 and x = 0.. 7 used for the luma up-sampling process, and the 4-tap filter coefficients fC [p, x] with p = 0.. 15 and x = 0.. 3 used for the chroma up-sampling process.
Table 3 provides the 12-tap filter coefficients for down-sampling process. The same filter coefficients are used for both luma and chroma for down-sampling.
Table 1. Luma up-sampling filter with 16 phases
Figure PCTCN2020090799-appb-000010
Table 2. Chroma up-sampling filter with 16 phases
Figure PCTCN2020090799-appb-000011
Table 3. Down-sampling filter coefficient for luma and chroma
Figure PCTCN2020090799-appb-000012
We have not experimented with other filter designs. We anticipate that (perhaps significant) subjective and objective gains can be expected when using filters adaptive to content and/or to scaling factors.
Tile Group boundary discussions
As it is perhaps true with a lot of tile group related work, our implementation is not quite finished with respect to tile group (TG) based ARC. Our preference is to revisit that implementation once the discussion of spatial composition and extraction, in the compressed domain, of multiple sub-pictures into a composed picture has yielded at least a working draft.  That, however, does not prevent us from extrapolating the outcome to some extent, and to adapt our signaling design accordingly.
For now, we believe that the tile group header is the correct place for something like dec_pic_size_idx as proposed above, for reasons already stated. We use a single ue(v) codepoint, dec_pic_size_idx, conditionally present in the tile group header, to indicate the employed ARC parameters. In order to match our implementation, which supports ARC per picture only, the one thing in spec-space we need to do right now is to code a single tile group only, or to make it a condition of bitstream compliance that all TG headers of a given coded picture have the same value of dec_pic_size_idx (when present).
The parameter dec_pic_size_idx can be moved into whatever header that starts a sub-picture. Our current feeling is that most likely that will continue to be a tile group header.
Beyond these syntactical considerations, some additional work is needed to enable tile group or sub-picture based ARC. The perhaps most difficult part is how to address the issue of unneeded samples in a picture where a sub-picture has been resampled to a lower size.
Consider the right portion of FIG. 13, which is made up of four sub-pictures (expressed perhaps as four rectangular tile groups in the bitstream syntax) . To the left, the bottom right TG is subsampled to half the size. What do we do with the samples outside the relevant area, marked as “Half” ?
Some existing video coding standards had in common that spatial extraction of parts of a picture in the compressed domain was not supported. That implied that each sample of a picture is represented by one or more syntax elements, and each syntax element impacts at least one sample. If we want to keep that up, we would need to somehow populate the area around the samples covered by the downsampled TG labelled “Half”. H.263+ Annex P solved that problem by padding; in fact, the sample values of the padded samples could be signaled in the bitstream (within certain strict limits).
An alternative that would perhaps constitute a significant departure from previous assumptions, but may be needed in any case if we want to support sub-bitstream extraction (and composition) based on rectangular parts of a picture, would be to relax the current understanding that each sample of a reconstructed picture must be represented by something in the coded picture (even if that something is only a skipped block) .
Implementation considerations, system implications and Profiles/Levels
We propose basic ARC to be included in the “baseline/main” profiles. Sub-profiling may be used to remove it if not needed for certain application scenarios. Certain restrictions may be acceptable. In that regard we note that certain H.263+ profiles and “recommended modes” (which pre-dated profiles) included a restriction for Annex P to be used only as an “implicit factor of 4”, i.e. dyadic downsampling in both dimensions. That was enough to support fast start (get the I frame over quickly) in video conferencing.
The design is such that we believe all filtering can be done “on the fly”, with no, or only negligible, increases in memory bandwidth. Insofar, we do not see a need to punt ARC into exotic profiles.
We do not believe that complex tables and such can be meaningfully used in capability exchange, as was argued in Marrakech in conjunction with JVET-M0135. The number of options is simply too big to allow for meaningful cross-vendor interop, assuming offer-answer and similar limited-depth handshakes. Insofar, realistically, to support ARC in a meaningful way in a capability exchange scenario, we have to fall back to a handful of interop points at most. For example: no ARC, ARC with implicit factor of 4, full ARC. As an alternative, we could spec the required support for all of ARC and leave the restrictions in bitstream complexity to higher-level SDOs. That’s a strategic discussion we should have at some point anyway (beyond what we had already in the sub-profiling and flags context).
As for levels: we believe the basic design principle needs to be that, as a condition of bitstream conformance, the sample count of an upsampled picture must fit into the level of the bitstream no matter how much upsampling is signalled in the bitstream, and that all samples must fit into the upsampled coded picture. We note that this was not the case in H.263+; there, it was possible that certain samples were not present.
1.2.5. JVET-N0118
The following aspects are proposed:
1) A list of picture resolutions is signalled in the SPS, and an index to the list is signalled in the PPS to specify the size of an individual picture.
2) For any picture that is to be outputted, the decoded picture before resampling is cropped (as necessary) and outputted, i.e., a resampled picture is not for output, only for inter prediction reference.
3) Support of the 1.5x and 2x resampling ratios. Arbitrary resampling ratios are not supported. The need for one or two additional resampling ratios is for further study.
4) Between picture-level resampling and block-level resampling, the proponents prefer block-level resampling.
a. However, if picture-level resampling is chosen, the following aspects are proposed:
i. When a reference picture is resampled, both the resampled version and the original, un-resampled version of the reference picture are stored in the DPB, and thus both would affect the DPB fullness.
ii. A resampled reference picture is marked as "unused for reference" when the corresponding un-resampled reference picture is marked as "unused for reference" .
iii. The RPL signalling syntax is kept unchanged, while the RPL construction process is modified as follows: When a reference picture needs to be included into a RPL entry, and a version of that reference picture with the same resolution as the current picture is not in the DPB, the picture resampling process is invoked and the resampled version of that reference picture is included into the RPL entry.
iv. The number of resampled reference pictures that may be present in the DPB should be limited, e.g., to be less than or equal to 2.
b. Otherwise (block-level resampling is chosen) , the following are suggested:
i. To limit the worst-case decoder complexity, it is proposed that bi-prediction of a block from a reference picture with a different resolution than the current picture is disallowed.
ii. Another option is that, when resampling and quarter-pel interpolation need to be done, the two filters are combined and the operation is applied at once.
5) Regardless of which of the picture-based and block-based resampling approaches is chosen, it is proposed that temporal motion vector scaling is applied as needed.
1.2.5.1. Implementation
The ARC software was implemented on top of VTM-4.0.1, with the following changes:
– A list of supported resolutions is signalled in SPS.
– The spatial resolution signalling was moved from SPS to PPS.
– A picture-based resampling scheme was implemented for resampling reference pictures. After a picture is decoded, the reconstructed picture may be resampled to a different spatial resolution. The original reconstructed picture and the resampled reconstructed picture are both stored in the DPB and are available for reference by future pictures in decoding order.
– The implemented resampling filters are based on the filters tested in JCTVC-H0234, as follows (a sketch of applying the down-sampling filter is given after this list):
○ The up-sampling filter: 4-tap +/-quarter-phase DCTIF with taps (-4, 54, 16, -2) /64
○ The down-sampling filter: the h11 filter with taps (1, 0, -3, 0, 10, 16, 10, 0, -3, 0, 1)/32
– When constructing the reference picture lists of the current picture (i.e., L0 and L1) , only the reference pictures with the same resolution as the current picture are used. Note that the reference pictures may be available in both their original sizes or the resampled sizes.
– TMVP and ATMVP may be enabled; however, when the original coding resolutions of the current picture and a reference picture are different, TMVP and ATMVP are disabled for that reference picture.
– For convenience and simplicity of the starting-point software implementation, when outputting a picture, the decoder outputs the highest available resolution.
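For illustration, the sketch below applies the listed down-sampling filter (h11, taps (1, 0, -3, 0, 10, 16, 10, 0, -3, 0, 1)/32) along one dimension with dyadic decimation. The edge-sample padding is an assumption, as the contribution text does not specify the boundary handling; applying the function to each row and then each column gives the 2-D separable result.

```python
# 1-D application of the JCTVC-H0234 down-sampling filter listed above,
# decimating by two. Boundary handling (edge-sample padding) is assumed.

H11 = (1, 0, -3, 0, 10, 16, 10, 0, -3, 0, 1)      # coefficient sum = 32

def downsample_1d_by_2(samples):
    half = len(H11) // 2                          # 5 taps on each side
    out = []
    for center in range(0, len(samples), 2):      # keep every second position
        acc = 0
        for k, tap in enumerate(H11):
            pos = min(max(center + k - half, 0), len(samples) - 1)
            acc += tap * samples[pos]
        out.append((acc + 16) >> 5)               # divide by 32 with rounding
    return out

assert downsample_1d_by_2([100] * 8) == [100, 100, 100, 100]  # flat stays flat
```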
1.2.5.2. On signaling of picture sizes and picture output
1. On the list of spatial resolutions of coded pictures in the bitstream
Currently all coded pictures in a CVS have the same resolution. Thus it is straightforward to signal just one resolution (i.e., picture width and height) in the SPS. With ARC support, instead of one resolution, a list of picture resolutions needs to be signalled, and we propose  that this list is signalled in the SPS, and an index to the list is signalled in the PPS to specify the size of an individual picture.
2. On picture output
We propose that, for any picture that is to be outputted, the decoded picture before resampling is cropped (as necessary) and outputted, i.e., a resampled picture is not for output, only for inter prediction reference. The ARC resampling filters should be designed to optimize the use of the resampled pictures for inter prediction, and such filters may not be optimal for picture outputting/displaying purpose, while video terminal devices usually have optimized output zooming/scaling functionalities already implemented.
1.2.5.3. On resampling
Resampling of a decoded picture can be either picture-based or block-based. For the final ARC design in VVC, we prefer block-based resampling over picture-based resampling. We recommend that these two approaches are discussed and the JVET makes a decision on which of these two should be specified for ARC support in VVC.
Picture-based resampling
In picture-based resampling for ARC, a picture is resampled only once for a particular resolution, which is then stored in the DPB, while the un-resampled version of the same picture is also kept in the DPB.
Employing picture-based resampling for ARC has two issues: 1) additional DPB buffer is required for storing resampled reference pictures, and 2) additional memory bandwidth is required due to increased operations of reading reference picture data from the DPB and writing reference picture data into the DPB.
Keeping only one version of a reference picture in the DPB would not be a good idea for picture-based resampling. If we keep only the un-resampled version, a reference picture may need to be resampled multiple times, since multiple pictures may refer to the same reference picture. On the other hand, if a reference picture is resampled and we only keep the resampled version, then we need to apply inverse resampling when the reference picture needs to be outputted, since it is better to output un-resampled pictures, as discussed above. This is a problem since the resampling process is not a lossless operation. Take a picture A, downsample it, then upsample it to get A′ with the same resolution as A; A and A′ would not be the same. A′ would contain less information than A, since some high-frequency information has been lost during the downsampling and upsampling processes.
To deal with the issues of additional DPB buffer and memory bandwidth, we proposed that, if the ARC design in VVC uses picture-based resampling, the following applies:
1. When a reference picture is resampled, both the resampled version and the original, un-resampled version of the reference picture are stored in the DPB, and thus both would affect the DPB fullness.
2. A resampled reference picture is marked as "unused for reference" when the corresponding un-resampled reference picture is marked as "unused for reference" .
3. The reference picture lists (RPLs) of each tile group contain reference pictures that have the same resolution as the current picture. While there is no need for a change to the RPL signalling syntax, the RPL construction process is modified to ensure what is said in the previous sentence, as follows: When a reference picture needs to be included into a RPL entry while a version of that reference picture with the same resolution as the current picture is not yet available, the picture resampling process is invoked and the resampled version of that reference picture is included.
4. The number of resampled reference pictures that may be present in the DPB should be limited, e.g., to be less than or equal to 2.
Furthermore, to enable temporal MV usage (e.g. merge mode and ATMVP) for the case where the temporal MV comes from a reference frame with a different resolution than the current one, we propose scaling the temporal MV to the current resolution as needed.
Block-based ARC resampling
In block-based resampling for ARC, a reference block is resampled whenever needed, and no resampled picture is stored in the DPB.
The main issue here is the additional decoder complexity. This is because a block in a reference picture may be referred to multiple times by multiple blocks in another picture and by blocks in multiple pictures.
When a block in a reference picture is referred to by a block in the current picture and the resolutions of the reference picture and the current picture are different, the reference block is resampled by invocation of the interpolation filter such that the reference block has the integer-pel resolution. When the motion vector is at quarter-pel precision, the interpolation process is invoked again to obtain the resampled reference block at the quarter-pel resolution. Therefore, for each motion compensation operation for the current block from a reference block involving different resolutions, up to two, instead of one, interpolation filtering operations are needed. Without ARC support, at most one interpolation filter operation (i.e., for generation of the reference block at the quarter-pel resolution) is needed.
To limit the worst-case complexity, we propose that, if the ARC design in VVC uses block-based resampling, the following apply:
– Bi-prediction of a block from a reference picture with a different resolution than the current picture is disallowed.
– To be more precise, the constraint is as follows: when the current block blkA in the current picture picA refers to a reference block blkB in a reference picture picB, and picA and picB have different resolutions, block blkA shall be a uni-predicted block.
With this constraint, the worst-case number of interpolation operations needed to decode a block is limited to two. If a block refers to a block from a different-resolution picture, the number of interpolation operations needed is two as discussed above. This is the same as in the case when the block refers to a reference block from a same-resolution picture and coded as a bi-predicted block since the number of interpolation operations is also two (i.e., one for getting the quarter-pel resolution for each reference block) .
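A minimal sketch of this constraint as a conformance check is given below; the data structures are illustrative, not from any draft text:

```python
# Sketch: a block referring to a different-resolution reference picture must
# be uni-predicted (one reference entry), per the proposed constraint.

from collections import namedtuple

Pic = namedtuple("Pic", "width height")

def block_allowed(ref_pics, current_pic):
    """ref_pics: one entry for uni-prediction, two for bi-prediction."""
    cross = any((r.width, r.height) != (current_pic.width, current_pic.height)
                for r in ref_pics)
    return not (cross and len(ref_pics) == 2)

cur = Pic(1920, 1080)
assert block_allowed([Pic(960, 540)], cur)            # uni-pred: allowed
assert not block_allowed([Pic(960, 540), cur], cur)   # bi-pred: disallowed
```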
To simplify the implementation, we propose another variation that if the ARC design in VVC uses block-based resampling, the following apply:
– If the reference frame and the current frame have different resolutions, the corresponding reference position of every predictor pixel is calculated first, and then the interpolation is applied only once. That is, two interpolation operations (i.e. one for resampling and one for quarter-pel interpolation) are combined into a single interpolation operation. The sub-pel interpolation filters in the current VVC can be reused, but, in this case, the granularity of interpolation should be enlarged, while the number of interpolation operations is reduced from two to one.
– To enable temporal MV usage (e.g. merge mode and ATMVP) for the case where the temporal MV comes from a reference frame with a different resolution than the current one, we propose scaling the temporal MV to the current resolution as needed.
Resampling ratios
In JVET-M0135, to start the discussion on ARC, it was proposed that for the starting point of ARC, consider only the resampling ratio of 2x (meaning 2 x 2 for upsampling and 1/2 x 1/2 for downsampling) . From further discussion on this topic after the Marrakech meeting, we learned that supporting only the resampling ratio of 2x is very limited, as in some cases a smaller difference between resampled and un-resampled resolutions would be more beneficial.
Although support of an arbitrary resampling ratio may be desirable, its support seemed difficult. This is because, to support an arbitrary resampling ratio, the number of resampling filters that would have to be defined and implemented seemed too large, imposing a significant burden on decoder implementations.
We propose that more than one, but still a small number of, resampling ratios be supported, including at least the 1.5x and 2x resampling ratios, and that arbitrary resampling ratios not be supported.
1.2.5.4. Max DPB buffer size and buffer fullness
With ARC, the DPB may contain decoded pictures of different spatial resolutions within the same CVS. For DPB management and related aspects, counting DPB size and fullness in units of decoded pictures no longer works.
Below are discussions of some specific aspects that need to be addressed and possible solutions in the final VVC specification if ARC is supported (we are not proposing to adopt the possible solutions at this meeting) :
1. Rather than using the value of PicSizeInSamplesY (i.e., PicSizeInSamplesY = pic_width_in_luma_samples * pic_height_in_luma_samples) for deriving MaxDpbSize (i.e., the maximum number of reference pictures that may be present in the DPB), the derivation of MaxDpbSize is based on the value of MinPicSizeInSamplesY, defined as follows:
MinPicSizeInSamplesY = ( width of the smallest picture resolution in the bitstream ) * ( height of the smallest picture resolution in the bitstream )
The derivation of MaxDpbSize is modified as follows (based on the HEVC equation) :
if( MinPicSizeInSamplesY <= ( MaxLumaPs >> 2 ) )
  MaxDpbSize = Min( 4 * maxDpbPicBuf, 16 )
else if( MinPicSizeInSamplesY <= ( MaxLumaPs >> 1 ) )
  MaxDpbSize = Min( 2 * maxDpbPicBuf, 16 )
else if( MinPicSizeInSamplesY <= ( ( 3 * MaxLumaPs ) >> 2 ) )
  MaxDpbSize = Min( ( 4 * maxDpbPicBuf ) / 3, 16 )
else
  MaxDpbSize = maxDpbPicBuf
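A worked example of this derivation is given below. The constants are taken from HEVC (Level 5.1: MaxLumaPs = 8 912 896; maxDpbPicBuf = 6), and the smallest resolution in the bitstream is assumed to be 960x540 for illustration:

```python
# Worked example of the modified MaxDpbSize derivation above.
# MaxLumaPs and maxDpbPicBuf values follow HEVC; 960x540 is an assumption.

def max_dpb_size(min_pic_size, max_luma_ps, max_dpb_pic_buf=6):
    if min_pic_size <= (max_luma_ps >> 2):
        return min(4 * max_dpb_pic_buf, 16)
    if min_pic_size <= (max_luma_ps >> 1):
        return min(2 * max_dpb_pic_buf, 16)
    if min_pic_size <= (3 * max_luma_ps) >> 2:
        return min((4 * max_dpb_pic_buf) // 3, 16)
    return max_dpb_pic_buf

# 960*540 = 518 400 <= MaxLumaPs/4, so MaxDpbSize = min(24, 16) = 16
assert max_dpb_size(960 * 540, 8912896) == 16
```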
2. Each decoded picture is associated with a value called PictureSizeUnit. PictureSizeUnit is an integer value that specifies how big a decoded picture size is relative to MinPicSizeInSamplesY. The definition of PictureSizeUnit depends on what resampling ratios are supported for ARC in VVC.
For example, if ARC supports only the resampling ratio of 2, the PictureSizeUnit is defined as follows:
– Decoded pictures having the smallest resolution in the bitstream are associated with a PictureSizeUnit of 1.
– Decoded pictures having a resolution that is 2 by 2 of the smallest resolution in the bitstream are associated with a PictureSizeUnit of 4 (i.e., 1 * 4).
For another example, if ARC supports both the resampling ratios of 1.5 and 2, the PictureSizeUnit is defined as follows:
– Decoded pictures having the smallest resolution in the bitstream are associated with a PictureSizeUnit of 4.
– Decoded pictures having a resolution that is 1.5 by 1.5 of the smallest resolution in the bitstream are associated with a PictureSizeUnit of 9 (i.e., 2.25 * 4).
– Decoded pictures having a resolution that is 2 by 2 of the smallest resolution in the bitstream are associated with a PictureSizeUnit of 16 (i.e., 4 * 4).
For other resampling ratios supported by ARC, the same principle as given by the examples above should be used to determine the value of PictureSizeUnit for each picture size.
3. Let the variable MinPictureSizeUnit be the smallest possible value of PictureSizeUnit. That is, if ARC supports only resampling ratio of 2, MinPictureSizeUnit is 1; if ARC supports resampling ratios of 1.5 and 2, MinPictureSizeUnit is 4; likewise, the same principle is used to determine the value of MinPictureSizeUnit.
4. The value range of sps_max_dec_pic_buffering_minus1 [i] is specified to range from 0 to (MinPictureSizeUnit * (MaxDpbSize –1) ) . The variable MinPictureSizeUnit is the smallest possible value of PictureSizeUnit.
5. The DPB fullness operation is specified based on PictureSizeUnit as follows:
– The HRD is initialized at decoding unit 0, with both the CPB and the DPB being set to be empty (the DPB fullness is set equal to 0) .
– When the DPB is flushed (i.e., all pictures are removed from the DPB) , the DPB fullness is set equal to 0.
– When a picture is removed from the DPB, the DPB fullness is decremented by the value of PictureSizeUnit associated with the removed picture.
– When a picture is inserted into the DPB, the DPB fullness is incremented by the value of PictureSizeUnit associated with the inserted picture.
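The bookkeeping above can be sketched as follows; the Dpb class is illustrative, and the PictureSizeUnit values follow the 1.5x/2x example (4, 9, 16):

```python
# Sketch of PictureSizeUnit-based DPB fullness bookkeeping described above.

class Dpb:
    def __init__(self):
        self.pictures = []
        self.fullness = 0                  # counted in PictureSizeUnit

    def insert(self, pic, size_unit):
        self.pictures.append((pic, size_unit))
        self.fullness += size_unit

    def remove(self, pic):
        for entry in self.pictures:
            if entry[0] is pic:
                self.pictures.remove(entry)
                self.fullness -= entry[1]
                return

    def flush(self):                       # all pictures removed
        self.pictures.clear()
        self.fullness = 0

dpb = Dpb()
dpb.insert("smallest-res picture", 4)      # PictureSizeUnit 4
dpb.insert("2x2 picture", 16)              # PictureSizeUnit 16
assert dpb.fullness == 20
```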
1.2.5.5. Resampling filters
In the software implementation, the implemented resampling filters were simply taken from previously available filters described in JCTVC-H0234. Other resampling filters should be tested and used if they provide better performance and/or lower complexity. We propose that various resampling filters be tested to strike a trade-off between complexity and performance. Such tests can be done in a CE.
1.2.5.6. Miscellaneous necessary modifications to existing tools
To support ARC, some modifications and/or additional operations may be needed for some of the existing coding tools. For example, in the ARC software implementation of picture-based resampling, for simplicity we disabled TMVP and ATMVP when the original coding resolutions of the current picture and the reference picture are different.
1.2.6. JVET-N0279
According to “Requirements for a Future Video Coding Standard” , “the standard shall support fast representation switching in the case of adaptive streaming services that offer multiple  representations of the same content, each having different properties (e.g. spatial resolution or sample bit depth) . ” In real-time video communication, allowing resolution change within a coded video sequence without inserting an I picture can not only adapt the video data to dynamic channel conditions or user preference seamlessly, but also remove the beating effect caused by I pictures. A hypothetical example of adaptive resolution change is shown in FIG. 14 where the current picture is predicted from reference pictures of different sizes.
This contribution proposes high level syntax to signal adaptive resolution change as well as modifications to the current motion compensated prediction process in the VTM. These modifications are limited to motion vector scaling and subpel location derivations with no changes in the existing motion compensation interpolators. This would allow the existing motion compensation interpolators to be reused and not require new processing blocks to support adaptive resolution change which would introduce additional cost.
1.2.6.1. Adaptive Resolution Change Signalling
1.2.6.1.1. SPS
Figure PCTCN2020090799-appb-000013
[[pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples. pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.]]
[[pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples. pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.]]
max_pic_width_in_luma_samples specifies the maximum width of decoded pictures referring to the SPS in units of luma samples. max_pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
max_pic_height_in_luma_samples specifies the maximum height of decoded pictures referring to the SPS in units of luma samples. max_pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY.
1.2.6.1.2. PPS
Figure PCTCN2020090799-appb-000014
pic_size_different_from_max_flag equal to 1 specifies that the PPS signals a picture width or picture height different from max_pic_width_in_luma_samples and max_pic_height_in_luma_samples in the referred SPS. pic_size_different_from_max_flag equal to 0 specifies that pic_width_in_luma_samples and pic_height_in_luma_samples are the same as max_pic_width_in_luma_samples and max_pic_height_in_luma_samples in the referred SPS.
pic_width_in_luma_samples specifies the width of each decoded picture in units of luma samples. pic_width_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY. When pic_width_in_luma_samples is not present, it is inferred to be equal to max_pic_width_in_luma_samples.
pic_height_in_luma_samples specifies the height of each decoded picture in units of luma samples. pic_height_in_luma_samples shall not be equal to 0 and shall be an integer multiple of MinCbSizeY. When pic_height_in_luma_samples is not present, it is inferred to be equal to max_pic_height_in_luma_samples.
It is a requirement of bitstream conformance that the horizontal and vertical scaling ratios shall be in the range of 1/8 to 2, inclusive, for every active reference picture. The scaling ratios are defined as follows:
– horizontal_scaling_ratio = ( ( reference_pic_width_in_luma_samples << 14 ) + ( pic_width_in_luma_samples / 2 ) ) / pic_width_in_luma_samples
– vertical_scaling_ratio = ( ( reference_pic_height_in_luma_samples << 14 ) + ( pic_height_in_luma_samples / 2 ) ) / pic_height_in_luma_samples
Figure PCTCN2020090799-appb-000015
Reference Picture Scaling Process
When there is a resolution change within a CVS, a picture may have a different size from one or more of its reference pictures. This proposal normalizes all motion vectors to the current picture grid instead of their corresponding reference picture grids. This is asserted to be beneficial to keep the design consistent and make resolution changes transparent to the motion vector prediction process. Otherwise, neighboring motion vectors pointing to reference pictures with different sizes cannot be used directly for spatial motion vector prediction due to the different scale.
When a resolution change happens, both the motion vectors and the reference blocks have to be scaled while doing motion compensated prediction. The scaling range is limited to [1/8, 2], i.e. the upscaling is limited to 1:8 and the downscaling is limited to 2:1. Note that upscaling refers to the case where the reference picture is smaller than the current picture, while downscaling refers to the case where the reference picture is larger than the current picture. In the following sections, the scaling process is described in more detail.
Luma Block
The scaling factors and their fixed-point representations are defined as
Figure PCTCN2020090799-appb-000016
Figure PCTCN2020090799-appb-000017
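The fixed-point factors referenced above are shown as images in this document; the sketch below derives hori_scale_fp and vert_scale_fp from the scaling-ratio definitions given with the PPS semantics earlier (14 fractional bits, rounded division), which we assume is what the images contain:

```python
# Assumed derivation of the fixed-point scaling factors, following the
# scaling-ratio definitions in the PPS semantics above (14 fractional bits).

def scale_factors(ref_w, ref_h, cur_w, cur_h):
    hori_scale_fp = ((ref_w << 14) + (cur_w // 2)) // cur_w
    vert_scale_fp = ((ref_h << 14) + (cur_h // 2)) // cur_h
    return hori_scale_fp, vert_scale_fp

# e.g. a 2:1 downscale (reference twice as large as the current picture)
# gives hori_scale_fp == 2 << 14 == 32768
assert scale_factors(3840, 2160, 1920, 1080)[0] == 2 << 14
```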
The scaling process includes two parts:
1. Map the upper left corner pixel of the current block to the reference picture;
2. Use the horizontal and vertical step sizes to address the reference locations of the current block’s other pixels.
If the coordinate of the upper left corner pixel of the current block is (x, y), the subpel location (x′, y′) in the reference picture pointed to by a motion vector (mvX, mvY) in units of 1/16th pel is specified as follows:
● The horizontal location in the reference picture is
x′= ( (x<<4) +mvX) ·hori_scale_fp,                     (3)
and x′ is further scaled down to only keep 10 fractional bits
x′= Sign (x′) · ( (Abs (x′) + (1<<7) ) >>8) .                 (4)
● Similarly, the vertical location in the reference picture is
y′= ( (y<<4) +mvY) ·vert_scale_fp,                         (5)
and y′ is further scaled down to
y′= Sign (y′) · ( (Abs (y′) + (1<<7) ) >>8) .                        (6)
At this point, the reference location of the upper left corner pixel of the current block is at (x′, y′) . The other reference subpel/pel locations are calculated relative to (x′, y′) with horizontal and vertical step sizes. Those step sizes are derived with 1/1024-pel accuracy from the above horizontal and vertical scaling factors as follows:
x_step = (hori_scale_fp + 8) >> 4,                        (7)
y_step = (vert_scale_fp + 8) >> 4.                        (8)
As an example, if a pixel in the current block is i columns and j rows away from the upper left corner pixel, its corresponding reference pixel’s horizontal and vertical coordinates are derived by
x′ i=x′+ i *x_step,                              (9)
y′ j=y′+ j *y_step.                              (10)
In subpel interpolation, x′ i and y′ j have to be broken up into full-pel parts and fractional-pel parts:
● The full-pel parts for addressing reference block are equal to
(x′ i+32) >>10,                                 (11)
(y′ j+32) >>10.                                 (12)
● The fractional-pel parts used to select interpolation filters are equal to
Δx= ( (x′ i+32) >>6) &15,                        (13)
Δy= ( (y′ j+32) >>6) &15.                        (14)
Once the full-pel and fractional-pel locations within a reference picture are determined, the existing motion compensation interpolators can be used without any additional changes. The full-pel location will be used to fetch the reference block patch from the reference picture and the fractional-pel location will be used to select the proper interpolation filter.
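The following sketch ties equations (1) - (14) together for a single luma sample; all names are illustrative, and the fixed-point widths (14 fractional bits for the scale factors, 10 for reference coordinates) follow the definitions above:

    #include <cstdint>

    struct RefLocation {
      int fullPelX, fullPelY;  // integer sample position in the reference picture
      int fracX, fracY;        // 1/16-pel phase selecting the interpolation filter
    };

    // Equations (4) and (6): scale down to 10 fractional bits, rounding the magnitude.
    static int64_t scaleDownLuma(int64_t v) {
      const int64_t a = v < 0 ? -v : v;
      const int64_t s = (a + (1 << 7)) >> 8;
      return v < 0 ? -s : s;
    }

    // (x, y): upper-left pixel of the current block; (i, j): offset within it;
    // (mvX, mvY): motion vector in 1/16-pel units.
    RefLocation lumaRefLocation(int x, int y, int i, int j, int mvX, int mvY,
                                int64_t horiScaleFp, int64_t vertScaleFp) {
      const int64_t xRef = scaleDownLuma(((int64_t)(x << 4) + mvX) * horiScaleFp); // (3)
      const int64_t yRef = scaleDownLuma(((int64_t)(y << 4) + mvY) * vertScaleFp); // (5)
      const int64_t xStep = (horiScaleFp + 8) >> 4;  // (7), 1/1024-pel accuracy
      const int64_t yStep = (vertScaleFp + 8) >> 4;  // (8)
      const int64_t xi = xRef + i * xStep;           // (9)
      const int64_t yj = yRef + j * yStep;           // (10)
      RefLocation loc;
      loc.fullPelX = (int)((xi + 32) >> 10);         // (11)
      loc.fullPelY = (int)((yj + 32) >> 10);         // (12)
      loc.fracX = (int)(((xi + 32) >> 6) & 15);      // (13)
      loc.fracY = (int)(((yj + 32) >> 6) & 15);      // (14)
      return loc;
    }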
Chroma Block
When the chroma format is 4: 2: 0, chroma motion vectors have 1/32-pel accuracy. The scaling process for chroma motion vectors and chroma reference blocks is almost the same as for luma blocks, except for a chroma-format-related adjustment.
When the coordinate of the upper left corner pixel of the current chroma block is (x c, y c) , the initial horizontal and vertical locations in the reference chroma picture are
x c′= ( (x c<<5) +mvX) ·hori_scale_fp,                       (1)
y c′=( (y c<<5) +mvY) ·vert_scale_fp,                        (2)
where mvX and mvY are the components of the original luma motion vector, which should now be interpreted with 1/32-pel accuracy.
x c′ and y c′ are further scaled down to keep 1/1024 pel accuracy
x c′= Sign (x c′) · ( (Abs (x c′) + (1<<8) ) >>9) ,         (3)
y c′= Sign (y c′) · ( (Abs (y c′) + (1<<8) ) >>9) .          (4)
Compared to the associated luma equations, the above right shift is increased by one extra bit.
The step sizes used are the same as for luma. For a chroma pixel at (i, j) relative to the upper left corner pixel, its reference pixel’s horizontal and vertical coordinates are derived by
x c′ i=x c′+ i *x_step,                              (5)
y c′ j=y c′+ j *y_step.                              (6)
In subpel interpolation, x c′ i and y c′ j are also broken up into full-pel parts and fractional-pel parts:
● The full-pel parts for addressing reference block are equal to
(x c′ i+16) >>10,                                 (7)
(y c′ j+16) >>10.                                 (8)
● The fractional-pel parts used to select interpolation filters are equal to
Δx c= ( (x c′ i+16) >>5) &31,                        (9)
Δy c= ( (y c′ j+16) >>5) &31.                        (10)
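For chroma, only the initial left-shift (5 instead of 4, for 1/32-pel motion vectors), the scale-down shift (9 instead of 8) and the fractional mask (5 bits instead of 4) change. A one-dimensional sketch under the same naming assumptions as the luma sketch, with rounding offsets matching the reconstructed equations (7) and (9) above:

    #include <cstdint>

    // Chroma scale-down: one extra right-shift bit relative to the luma case.
    static int64_t scaleDownChroma(int64_t v) {
      const int64_t a = v < 0 ? -v : v;
      const int64_t s = (a + (1 << 8)) >> 9;
      return v < 0 ? -s : s;
    }

    // One-dimensional mapping; the vertical direction is identical with
    // yc, mvY and vertScaleFp substituted.
    void chromaRefCoord(int xc, int mvX, int64_t horiScaleFp, int i,
                        int& fullPel, int& frac) {
      const int64_t xcRef = scaleDownChroma(((int64_t)(xc << 5) + mvX) * horiScaleFp);
      const int64_t step = (horiScaleFp + 8) >> 4;  // same step sizes as for luma
      const int64_t xci = xcRef + (int64_t)i * step;
      fullPel = (int)((xci + 16) >> 10);            // full-pel part, eq. (7)
      frac = (int)(((xci + 16) >> 5) & 31);         // 1/32-pel phase, eq. (9)
    }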
Interaction with Other Coding Tools
Because of the extra complexity and memory bandwidth associated with the interaction of some coding tools with reference picture scaling, it is recommended to add the following restrictions to the VVC specification:
– When tile_group_temporal_mvp_enabled_flag is equal to 1, the current picture and its collocated picture shall have the same size.
– When resolution change is allowed within a sequence, decoder motion vector refinement shall be turned off.
– When resolution change is allowed within a sequence, sps_bdof_enabled_flag shall be equal to 0.
1.3. Coding Tree Block (CTB) -based Adaptive Loop Filter (ALF) in JVET-N0415
Slice-level temporal filter
Adaptive parameter set (APS) was adopted in VTM4. Each APS contains one set of signalled ALF filters; up to 32 APSs are supported. In this proposal, a slice-level temporal filter is tested. A tile group can re-use the ALF information from an APS to reduce the overhead. The APSs are updated as a first-in-first-out (FIFO) buffer.
CTB based ALF
For the luma component, when ALF is applied to a luma CTB, the choice among 16 fixed, 5 temporal or 1 signaled filter sets is indicated. Only the filter set index is signalled. For one slice, only one new set of 25 filters can be signaled. If a new set is signalled for a slice, all the luma CTBs in the same slice share that set. Fixed filter sets can be used to predict the new slice-level filter set and can be used as candidate filter sets for a luma CTB as well. The number of filters is 64 in total.
For the chroma component, when ALF is applied to a chroma CTB, if a new filter is signalled for a slice, the CTB uses the new filter; otherwise, the most recent temporal chroma filter satisfying the temporal scalability constraint is applied.
As with the slice-level temporal filter, the APSs are updated as a first-in-first-out (FIFO) buffer.
1.4. Alternative temporal motion vector prediction (a.k.a. subblock-based temporal merging candidate in VVC)
In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in FIG. 15, the sub-CUs are square N×N blocks (N is set to 8 by default) .
ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps. The first step is to identify the corresponding block in a reference picture with a so-called temporal vector. The reference picture is called the motion source picture. The second step is to split the current CU  into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in FIG. 15.
In the first step, a reference picture and the corresponding block are determined by the motion information of the spatial neighbouring blocks of the current CU. To avoid the repetitive scanning process of neighbouring blocks, the merge candidate from block A0 (the left block) in the merge candidate list of the current CU is used. The first available motion vector from block A0 referring to the collocated reference picture is set to be the temporal vector. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to the current CU.
In the second step, a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinate of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply.
1.5. Affine motion prediction
In HEVC, only a translational motion model is applied for motion compensation prediction (MCP) . In the real world, however, there are many kinds of motion, e.g. zoom in/out, rotation, perspective motions and other irregular motions. In VVC, a simplified affine transform motion compensation prediction is applied with a 4-parameter affine model and a 6-parameter affine model. As shown in FIGS. 16A-16B, the affine motion field of the block is described by two control point motion vectors (CPMVs) for the 4-parameter affine model and 3 CPMVs for the 6-parameter affine model.
The motion vector field (MVF) of a block is described by the following equations, with the 4-parameter affine model (wherein the 4 parameters are defined as the variables a, b, e and f) in equation (1) and the 6-parameter affine model (wherein the 6 parameters are defined as the variables a, b, c, d, e and f) in equation (2) , respectively:
mv h (x, y) = ( (mv h 1 -mv h 0) /w) ·x- ( (mv v 1 -mv v 0) /w) ·y+mv h 0; mv v (x, y) = ( (mv v 1 -mv v 0) /w) ·x+ ( (mv h 1 -mv h 0) /w) ·y+mv v 0                    (1)
mv h (x, y) = ( (mv h 1 -mv h 0) /w) ·x+ ( (mv h 2 -mv h 0) /h) ·y+mv h 0; mv v (x, y) = ( (mv v 1 -mv v 0) /w) ·x+ ( (mv v 2 -mv v 0) /h) ·y+mv v 0                    (2)
where (mv h 0, mv v 0) is the motion vector of the top-left corner control point, (mv h 1, mv v 1) is the motion vector of the top-right corner control point and (mv h 2, mv v 2) is the motion vector of the bottom-left corner control point; all three motion vectors are called control point motion vectors (CPMVs) . (x, y) represents the coordinate of a representative point relative to the top-left sample within current block, and (mv h (x, y) , mv v (x, y) ) is the motion vector derived for a sample located at (x, y) . The CP motion vectors may be signaled (like in the affine AMVP mode) or derived on-the-fly (like in the affine merge mode) . w and h are the width and height of the current block. In practice, the division is implemented by right-shift with a rounding operation. In VTM, the representative point is defined to be the center position of a sub-block, e.g., when the coordinate of the left-top corner of a sub-block relative to the top-left sample within current block is (xs, ys) , the coordinate of the representative point is defined to be (xs+2, ys+2) . For each sub-block (i.e., 4x4 in VTM) , the representative point is utilized to derive the motion vector for the whole sub-block.
In order to further simplify the motion compensation prediction, sub-block based affine transform prediction is applied. To derive the motion vector of each M×N (both M and N are set to 4 in current VVC) sub-block, the motion vector of the center sample of each sub-block, as shown in FIG. 17, is calculated according to Equations (1) and (2) , and rounded to 1/16 fractional accuracy. Then the motion compensation interpolation filters for 1/16-pel are applied to generate the prediction of each sub-block with the derived motion vector. The interpolation filters for 1/16-pel are introduced by the affine mode.
After MCP, the high accuracy motion vector of each sub-block is rounded and saved as the same accuracy as the normal motion vector.
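A compact sketch of equation (1) evaluated at sub-block centers, as described above; names are illustrative, and the division is written as plain integer division where a real implementation uses a right shift with rounding:

    struct Mv { int hor, ver; };  // motion vector in 1/16-pel units

    // 4-parameter affine model of equation (1), evaluated at the representative
    // (center) point (xs + 2, ys + 2) of the 4x4 sub-block whose top-left corner
    // is at (xs, ys) relative to the top-left sample of the current block.
    Mv affineSubblockMv(Mv cpmv0, Mv cpmv1, int w, int xs, int ys) {
      const int x = xs + 2, y = ys + 2;
      const int a = cpmv1.hor - cpmv0.hor;  // mv1^h - mv0^h
      const int b = cpmv1.ver - cpmv0.ver;  // mv1^v - mv0^v
      Mv mv;
      mv.hor = a * x / w - b * y / w + cpmv0.hor;
      mv.ver = b * x / w + a * y / w + cpmv0.ver;
      return mv;
    }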
1.5.1. Signaling of affine prediction
Similar to the translational motion model, there are also two modes for signaling the side information for affine prediction. They are the AF_INTER and AF_MERGE modes.
1.5.2. AF_INTER mode
For CUs with both width and height larger than 8, AF_INTER mode can be applied. An affine flag in CU level is signalled in the bitstream to indicate whether AF_INTER mode is used.
In this mode, for each reference picture list (List 0 or List 1) , an affine AMVP candidate list is constructed with three types of affine motion predictors in the following order, wherein each candidate includes the estimated CPMVs of the current block. The differences between the best CPMVs found at the encoder side (such as mv 0, mv 1, mv 2 in FIG. 18) and the estimated CPMVs are signalled. In addition, the index of the affine AMVP candidate from which the estimated CPMVs are derived is further signalled.
1) Inherited affine motion predictors
The checking order is similar to that of spatial MVPs in HEVC AMVP list construction. First, a left inherited affine motion predictor is derived from the first block in {A1, A0} that is affine coded and has the same reference picture as in current block. Second, an above inherited affine motion predictor is derived from the first block in {B1, B0, B2} that is affine coded and has the same reference picture as in current block. The five blocks A1, A0, B1, B0, B2 are depicted in FIG. 19.
Once a neighboring block is found to be coded with affine mode, the CPMVs of the coding unit covering the neighboring block are used to derive predictors of the CPMVs of current block. For example, if A1 is coded with non-affine mode and A0 is coded with 4-parameter affine mode, the left inherited affine MV predictor will be derived from A0. In this case, the CPMVs of the CU covering A0, denoted by mv 0 N for the top-left CPMV and mv 1 N for the top-right CPMV in FIG. 21B, are utilized to derive the estimated CPMVs of current block, denoted by mv 0 C, mv 1 C and mv 2 C for the top-left (with coordinate (x0, y0) ) , top-right (with coordinate (x1, y1) ) and bottom-right (with coordinate (x2, y2) ) positions of current block.
2) Constructed affine motion predictors
A constructed affine motion predictor consists of control-point motion vectors (CPMVs) that are derived from neighboring inter coded blocks, as shown in FIG. 20, that have the same reference picture. If the current affine motion model is 4-parameter affine, the number of CPMVs is 2; otherwise, if the current affine motion model is 6-parameter affine, the number of CPMVs is 3. The top-left CPMV mv̄ 0 is derived by the MV at the first block in the group {A, B, C} that is inter coded and has the same reference picture as in current block. The top-right CPMV mv̄ 1 is derived by the MV at the first block in the group {D, E} that is inter coded and has the same reference picture as in current block. The bottom-left CPMV mv̄ 2 is derived by the MV at the first block in the group {F, G} that is inter coded and has the same reference picture as in current block.
– If the current affine motion model is 4-parameter affine, then a constructed affine motion predictor is inserted into the candidate list only if both mv̄ 0 and mv̄ 1 are found, that is, mv̄ 0 and mv̄ 1 are used as the estimated CPMVs for the top-left (with coordinate (x0, y0) ) and top-right (with coordinate (x1, y1) ) positions of current block.
– If the current affine motion model is 6-parameter affine, then a constructed affine motion predictor is inserted into the candidate list only if mv̄ 0, mv̄ 1 and mv̄ 2 are all found, that is, mv̄ 0, mv̄ 1 and mv̄ 2 are used as the estimated CPMVs for the top-left (with coordinate (x0, y0) ) , top-right (with coordinate (x1, y1) ) and bottom-right (with coordinate (x2, y2) ) positions of current block.
No pruning process is applied when inserting a constructed affine motion predictor into the candidate list.
3) Normal AMVP motion predictors
The following applies until the number of affine motion predictors reaches the maximum.
1) Derive an affine motion predictor by setting all CPMVs equal to mv̄ 2, if available.
2) Derive an affine motion predictor by setting all CPMVs equal to mv̄ 1, if available.
3) Derive an affine motion predictor by setting all CPMVs equal to mv̄ 0, if available.
4) Derive an affine motion predictor by setting all CPMVs equal to HEVC TMVP if available.
5) Derive an affine motion predictor by setting all CPMVs to zero MV.
Note that mv̄ i is already derived in the constructed affine motion predictor.
In AF_INTER mode, when the 4/6-parameter affine mode is used, 2/3 control points are required, and therefore 2/3 MVDs need to be coded for these control points, as shown in FIGS. 18A and 18B. In JVET-K0337, it is proposed to derive the MVs as follows, i.e., mvd 1 and mvd 2 are predicted from mvd 0:
mv 0 = mv̄ 0 + mvd 0,
mv 1 = mv̄ 1 + mvd 1 + mvd 0,
mv 2 = mv̄ 2 + mvd 2 + mvd 0,
wherein mv̄ i, mvd i and mv i are the predicted motion vector, motion vector difference and motion vector of the top-left pixel (i = 0) , top-right pixel (i = 1) or left-bottom pixel (i = 2) , respectively, as shown in FIG. 18B. Please note that the addition of two motion vectors (e.g., mvA (xA, yA) and mvB (xB, yB) ) is equal to the summation of the two components separately, that is, newMV = mvA + mvB, and the two components of newMV are set to (xA + xB) and (yA + yB) , respectively.
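A small decoder-side sketch of this MVD prediction, under hypothetical naming:

    struct CpMv { int hor, ver; };

    static CpMv addMv(CpMv a, CpMv b) { return { a.hor + b.hor, a.ver + b.ver }; }

    // i = 0: top-left, i = 1: top-right, i = 2: left-bottom control point.
    // mvd1 and mvd2 are coded as differences against mvd0, so mvd0 is added back.
    CpMv decodeCpmv(CpMv predictedCpmv, CpMv mvd, CpMv mvd0, int i) {
      const CpMv d = (i == 0) ? mvd : addMv(mvd, mvd0);
      return addMv(predictedCpmv, d);
    }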
1.5.3. AF_MERGE mode
When a CU is coded in AF_MERGE mode, it gets the first block coded with affine mode from the valid neighbouring reconstructed blocks. The selection order for the candidate block is from left, above, above right, left bottom to above left, as shown in FIG. 21A (denoted by A, B, C, D, E in order) . For example, if the neighbouring left bottom block is coded in affine mode, as denoted by A0 in FIG. 21B, the Control Point (CP) motion vectors mv 0 N, mv 1 N and mv 2 N of the top left corner, above right corner and left bottom corner of the neighbouring CU/PU which contains the block A0 are fetched. The motion vectors mv 0 C, mv 1 C and mv 2 C (the last of which is only used for the 6-parameter affine model) of the top left corner/top right/bottom left of the current CU/PU are calculated based on mv 0 N, mv 1 N and mv 2 N. It should be noted that in VTM-2.0, if the current block is affine coded, the sub-block (e.g. 4×4 block in VTM) located at the top-left corner stores mv0 and the sub-block located at the top-right corner stores mv1. If the current block is coded with the 6-parameter affine model, the sub-block located at the bottom-left corner (LB) stores mv2; otherwise (with the 4-parameter affine model) , LB stores mv2’. Other sub-blocks store the MVs used for MC.
After the CPMVs of the current CU mv 0 C, mv 1 C and mv 2 C are derived, according to the simplified affine motion model in Equations (1) and (2) , the MVF of the current CU is generated. In order to identify whether the current CU is coded with AF_MERGE mode, an affine flag is signalled in the bitstream when at least one neighbouring block is coded in affine mode.
In JVET-L0142 and JVET-L0632, an affine merge candidate list is constructed with following steps:
1) Insert inherited affine candidates
Inherited affine candidate means that the candidate is derived from the affine motion model of its valid neighbor affine coded block. At most two inherited affine candidates are derived from the affine motion model of the neighboring blocks and inserted into the candidate list. For the left predictor, the scan order is {A0, A1} ; for the above predictor, the scan order is {B0, B1, B2} .
2) Insert constructed affine candidates
If the number of candidates in affine merge candidate list is less than MaxNumAffineCand (e.g., 5) , constructed affine candidates are inserted into the candidate list. Constructed affine candidate means the candidate is constructed by combining the neighbor motion information of each control point.
a) The motion information for the control points is derived firstly from the specified spatial neighbors and temporal neighbor shown in FIG. 22. CPk (k=1, 2, 3, 4) represents the k-th control point. A0, A1, A2, B0, B1, B2 and B3 are spatial positions for predicting CPk (k=1, 2, 3) ; T is the temporal position for predicting CP4.
The coordinates of CP1, CP2, CP3 and CP4 are (0, 0) , (W, 0) , (0, H) and (W, H) , respectively, where W and H are the width and height of current block.
The motion information of each control point is obtained according to the following priority order:
– For CP1, the checking priority is B2->B3->A2. B2 is used if it is available. Otherwise, if B2 is unavailable, B3 is used. If both B2 and B3 are unavailable, A2 is used. If all the three candidates are unavailable, the motion information of CP1 cannot be obtained.
– For CP2, the checking priority is B1->B0.
– For CP3, the checking priority is A1->A0.
– For CP4, T is used.
b) Secondly, the combinations of control points are used to construct an affine merge candidate.
I. Motion information of three control points is needed to construct a 6-parameter affine candidate. The three control points can be selected from one of the following four combinations ( {CP1, CP2, CP4} , {CP1, CP2, CP3} , {CP2, CP3, CP4} , {CP1, CP3, CP4} ) . Combinations {CP1, CP2, CP4} , {CP2, CP3, CP4} , {CP1, CP3, CP4} will be converted to a 6-parameter motion model represented by top-left, top-right and bottom-left control points.
II. Motion information of two control points is needed to construct a 4-parameter affine candidate. The two control points can be selected from one of the two combinations ( {CP1, CP2} , {CP1, CP3} ) . The two combinations will be converted to a 4-parameter motion model represented by top-left and top-right control points.
III. The combinations of constructed affine candidates are inserted into the candidate list in the following order:
{CP1, CP2, CP3} , {CP1, CP2, CP4} , {CP1, CP3, CP4} , {CP2, CP3, CP4} , {CP1, CP2} , {CP1, CP3}
i. For each combination, the reference indices of list X for each CP are checked; if they are all the same, then this combination has valid CPMVs for list X. If the combination has valid CPMVs for neither list 0 nor list 1, this combination is marked as invalid. Otherwise, it is valid, and the CPMVs are put into the sub-block merge list.
3) Padding with zero motion vectors
If the number of candidates in the affine merge candidate list is less than 5, zero motion vectors with zero reference indices are inserted into the candidate list until the list is full.
More specifically, for the sub-block merge candidate list, a 4-parameter merge candidate is added with MVs set to (0, 0) and the prediction direction set to uni-prediction from list 0 (for P slices) or bi-prediction (for B slices) .
2. Drawbacks of existing implementations
When applied in VVC, ARC may have the following problems:
1. Besides resolution, other fundamental parameters such as bit-depth and color format (such as 4: 0: 0, 4: 2: 0 or 4: 4: 4) may also be changed from one picture to another picture in one sequence.
3. Example methods for bit-depth and colour format conversions
The detailed inventions below should be considered as examples to explain general concepts. These inventions should not be interpreted in a narrow way. Furthermore, these inventions can be combined in any manner.
In the following discussion, SatShift (x, n) is defined as
SatShift (x, n) = (x + offset0) >> n when x >= 0, and SatShift (x, n) = - ( (-x + offset1) >> n) when x < 0.
Shift (x, n) is defined as Shift (x, n) = (x+ offset0) >>n.
In one example, offset0 and/or offset1 are set to (1<<n) >>1 or (1<< (n-1) ) . In another example, offset0 and/or offset1 are set to 0.
In another example, offset0=offset1= ( (1<<n) >>1) -1 or ( (1<< (n-1) ) ) -1.
Clip3 (min, max, x) is defined as
Clip3 (min, max, x) = min when x < min; Clip3 (min, max, x) = max when x > max; otherwise, Clip3 (min, max, x) = x.
Floor (x) is defined as the largest integer less than or equal to x.
Ceil (x) is defined as the smallest integer greater than or equal to x.
Log2 (x) is defined as the base-2 logarithm of x.
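Direct C++ transcriptions of these operators, taking offset0 = offset1 = (1 << n) >> 1 (the first example above):

    int Shift(int x, int n) { return (x + ((1 << n) >> 1)) >> n; }

    int SatShift(int x, int n) {
      const int offset = (1 << n) >> 1;
      return x >= 0 ? (x + offset) >> n : -((-x + offset) >> n);
    }

    int Clip3(int minV, int maxV, int x) {
      return x < minV ? minV : (x > maxV ? maxV : x);
    }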
Adaptive resolution change (ARC)
1. It is proposed to disallow bi-prediction from two reference pictures with different resolutions.
2. Whether to enable DMVR/BIO or other kinds of motion derivation/refinement at the decoder side may depend on whether the two reference pictures are in the same resolution or not (see the sketch following this list) .
a. If the two references are in different resolutions, motion derivation/refinement such as DMVR/BIO is disabled.
b. Alternatively, how to apply motion derivation/refinement at the decoder side may depend on whether the two reference pictures are in the same resolution or not.
i. In one example, the allowed MVDs associated with each reference picture may be scaled, e.g., according to resolutions.
3. It is proposed that how to manipulate the reference samples may depend on the relationship between the resolution of the reference picture and that of the current picture.
a. In one example, all reference pictures stored in the decoded picture buffer are in the same resolution (denoted as a first resolution) , such as the maximally/minimally allowed resolution.
i. Alternatively, furthermore, the samples in a decoded picture may be firstly modified (e.g., via up-sampling or down-sampling) before being stored in the decoded picture buffer.
1) The modification may be according to the first resolution.
2) The modification may be according to the resolution for the reference picture, and that for the current picture.
b. In one example, all reference pictures stored in the decoded picture buffer may be in the resolution with which the picture has been coded.
i. In one example, if one block’s motion vector points to a reference picture in a different resolution from the current picture, conversion of reference samples may be firstly applied (e.g., via up-sampling or down-sampling) , before invoking the motion compensation process.
ii. Alternatively, the motion compensation (MC) may be done directly using reference samples in the reference pictures. Afterwards, the prediction block generated from the MC process may be further modified (e.g., via up-sampling or down-sampling) , and the final prediction block for current block may depend on the modified prediction block.
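As a hypothetical decoder-side sketch of items 2 and 3 above (all names illustrative; the resampling filter itself is codec-defined and left abstract):

    struct PicInfo { int width, height; };

    static bool sameResolution(const PicInfo& a, const PicInfo& b) {
      return a.width == b.width && a.height == b.height;
    }

    // Item 2a: DMVR/BIO-style refinement only when both references share one
    // resolution.
    bool refinementEnabled(const PicInfo& ref0, const PicInfo& ref1) {
      return sameResolution(ref0, ref1);
    }

    // Item 3b.i: convert reference samples before motion compensation when the
    // reference resolution differs from the current picture's resolution.
    static void resample(const PicInfo& from, const PicInfo& to) {
      (void)from; (void)to;  // codec-defined up-/down-sampling filter
    }

    void prepareReference(const PicInfo& ref, const PicInfo& cur) {
      if (!sameResolution(ref, cur))
        resample(ref, cur);
    }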
Adaptive Bit-depth Conversion (ABC)
Denote the maximum allowed bit-depth for the i-th color component by ABCMaxBD [i] and the minimum allowed bit-depth for the i-th color component by ABCMinBD [i] (e.g., i being 0.. 2) .
4. It is proposed that one or multiple sets of sample bit-depths for one or multiple components may be signaled in a video unit such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header.
a. Alternatively, one or multiple sets of sample bit-depths for one or multiple components may be signaled in a Supplemental Enhancement Information (SEI) message.
b. Alternatively, one or multiple sets of sample bit-depths for one or multiple components may be signaled in an individual video unit for Adaptive bit-depth conversion.
c. In one example, a set of sample bit-depths for one or multiple components may be coupled with the dimensions of the picture.
i. In one example, one or multiple combinations of sample bit-depths for one or multiple components and the corresponding dimensions/down sampling ratios/up sampling ratios of the picture, may be signaled in the same video unit.
d. In one example, indication of the ABCMaxBD and/or ABCMinBD may be signaled. Alternatively, the differences of other bit-depth values compared to ABCMaxBD and/or ABCMinBD may be signaled.
5. It is proposed that when more than one combination of sample bit-depths for one or multiple components and the corresponding dimensions/down-sampling ratios/up-sampling ratios of the picture is signaled in a single video unit, such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header etc., or in an individual video unit for ARC/ABC to be named, it is disallowed that a first combination is identical to a second combination.
6. It is proposed that how to manipulate the reference samples may depend on the relationship between the sample bit-depth of a component in the reference picture and that of the current picture (see the conversion sketch following this list) .
a. In one example, all reference pictures stored in the decoded picture buffer are in the same bit-depth (denoted as a first bit-depth) , such as ABCMaxBD [i] or ABCMinBD [i] (with i being 0.. 2 indicating the color component indices) .
i. Alternatively, furthermore, the samples in a decoded picture may be firstly modified, via left-shift or right-shift, before being stored in the decoded picture buffer.
1) The modification may be according to the first bit-depth.
2) The modification may be according to the defined bit-depth for reference picture, and that for current picture.
b. In one example, all reference pictures stored in the decoded picture buffer are in the bit-depth with which they have been coded.
i. In one example, if reference samples in a reference picture are with a different bit-depth from that of the current picture, conversion of reference samples may be firstly applied, before invoking the motion compensation process.
1) Alternatively, the motion compensation (MC) is done directly using reference samples. Afterwards, the prediction block generated from the MC process may be further modified (e.g., via  shifting) , and the final prediction block for current block may depend on the modified prediction block.
c. It is proposed that if the sample bit-depth of a component in the reference picture (denoted as BD1) is lower than that in the current picture (denoted as BD0) , the reference samples may be converted accordingly.
i. In one example, the reference sample S may be converted to S’ as S’=S<< (BD0-BD1) .
ii. Alternatively, S’=S<< (BD0-BD1) + (1<< (BD0-BD1-1) ) .
d. It is proposed that if the sample bit-depth of a component in the reference picture (denoted as BD1) is larger than that in the current picture (denoted as BD0) , the reference samples may be converted accordingly.
i. In one example, the reference sample S may be converted to S’ as S’=Shift (S, BD1-BD0) .
e. In one example, the reference picture with samples not converted may be removed after a reference picture is converted from it.
i. Alternatively, the reference picture with samples not converted may be kept but marked as unavailable after a reference picture is converted from it.
f. In one example, the reference picture with samples not converted may be put in the reference picture list. The reference samples are converted when they are used in the inter-prediction.
7. It is proposed that ARC and ABC may both be conducted.
a. In one example, ARC is conducted first, then ABC is conducted.
i. For example, samples are up-sampled/down-sampled according to different picture dimensions first, and then left-shifted/right-shifted according to different bit-depths in the ARC+ABC conversion.
b. In one example, ABC is conducted first, then ARC is conducted.
i. For example, samples are left-shifted/right-shifted according to different bit-depths first, and then up-sampled/down-sampled according to different picture dimensions in the ABC+ARC conversion.
8. It is proposed that a merge candidate referring to a reference picture with a higher sample bit-depth may have a higher priority than a merge candidate referring to a reference picture with a lower bit-depth.
a. For example, a merge candidate referring to a reference picture with a higher sample bit-depth may be put before a merge candidate referring to a reference picture with a lower sample bit-depth in the merge candidate list.
b. For example, a motion vector referring to a reference picture with a sample bit-depth lower than the sample bit-depth of the current picture cannot be in the merge candidate list.
9. It is proposed that a picture should be filtered with ALF parameters associated with corresponding sample bit-depth.
a. In one example, ALF parameters signaled in a video unit such as APS may be associated with one or multiple sample bit-depths.
b. In one example, a video unit such as APS signaling ALF parameters may be associated with one or multiple sample bit-depths.
c. For example, a picture may only apply ALF parameters signaled in a video unit such as APS, associated with the same sample bit-depth.
d. Alternatively, a picture may use ALF parameters associated with a different sample bit-depth.
10. It is proposed that ALF parameters associated with a first corresponding sample bit-depth may inherit or be predicted from ALF parameters associated with a second corresponding sample bit-depth.
a. In one example, the first corresponding sample bit-depth must be the same as the second corresponding sample bit-depth.
b. In one example, the first corresponding sample bit-depth may be different to the second corresponding sample bit-depth.
11. It is proposed that different default ALF parameters may be designed for different sample bit-depth.
12. It is proposed that samples in a picture should be reshaped with LMCS parameters associated with corresponding sample bit-depth.
a. In one example, LMCS parameters signaled in a video unit such as APS may be associated with one or multiple sample bit-depths.
b. In one example, a video unit such as APS signaling LMCS parameters may be associated with one or multiple sample bit-depths.
c. For example, a picture may only apply LMCS parameters signaled in a video unit such as APS, associated with the same sample bit-depth.
13. It is proposed that LMCS parameters associated with a first corresponding sample bit-depth may inherit or be predicted from LMCS parameters associated with a second corresponding sample bit-depth.
a. In one example, the first corresponding sample bit-depth must be the same as the second corresponding sample bit-depth.
b. In one example, the first corresponding sample bit-depth may be different to the second corresponding sample bit-depth.
14. It is proposed that coding tool X may be disabled for a block if the block refers to at least one reference picture with a different sample bit-depth to the current picture.
a. In one example, the information related to the coding tool X may not be signaled.
b. Alternatively, if coding tool X is applied in a block, the block cannot refer to a reference picture with a different sample bit-depth to the current picture.
i. In one example, a merge candidate referring to a reference picture with a different sample bit-depth to the current picture may be skipped or not put into the merge candidate list.
ii. In one example, the reference index corresponding to a reference picture with a different sample bit-depth to the current picture may be skipped or not allowed to be signaled.
c. The coding tool X may be anyone below.
i. ALF
ii. LMCS
iii. DMVR
iv. BDOF
v. Affine prediction
vi. TPM
vii. SMVD
viii. MMVD
ix. Inter-intra prediction in VVC
x. LIC
xi. HMVP
xii. Multiple Transform Set (MTS)
xiii. Sub-Block Transform (SBT)
15. It is proposed that bi-prediction from two reference pictures with different bit-depth may be disallowed.
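A hypothetical sketch of the conversions in items 6c and 6d above (referenced in item 6); shiftRound follows the Shift (x, n) operator defined earlier, and all names are illustrative:

    // Align a reference sample coded at bit-depth bd1 with the current
    // picture's bit-depth bd0 (items 6c.i and 6d.i).
    static int shiftRound(int x, int n) {  // the Shift(x, n) operator above
      return (x + ((1 << n) >> 1)) >> n;
    }

    int convertRefSample(int s, int bd1 /*reference*/, int bd0 /*current*/) {
      if (bd1 < bd0) return s << (bd0 - bd1);         // item 6c.i (variant without offset)
      if (bd1 > bd0) return shiftRound(s, bd1 - bd0); // item 6d.i: Shift(S, BD1-BD0)
      return s;                                        // equal bit-depths: unchanged
    }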
Adaptive Color-format Conversion (ACC)
16. It is proposed that one or multiple color formats (in one example, color format may refer to 4: 4: 4, 4: 2: 2, 4: 2: 0 or 4: 0: 0; in another example, color format may refer to YCbCr or RGB) may be signaled in a video unit such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header.
a. Alternatively, one or multiple color formats may be signaled in a Supplemental Enhancement Information (SEI) message.
b. Alternatively, one or multiple color formats may be signaled in an individual video unit for ACC.
c. In one example, a color format may be coupled with the dimensions and/or sample bit-depth of the picture.
i. In one example, one or multiple combinations of color formats, and/or sample bit-depth for one or multiple components and/or the corresponding dimensions of the picture, may be signaled in the same video unit.
17. It is proposed that when more than one combination of color formats, and/or sample bit-depths for one or multiple components, and/or the corresponding dimensions of the picture is signaled in a single video unit, such as DPS, VPS, SPS, PPS, APS, picture header, slice header, tile group header etc., or in an individual video unit for ARC/ABC/ACC to be named, it is disallowed that a first combination is identical to a second combination.
18. It is proposed to disable prediction from a reference picture in a different color format from the current picture.
a. Alternatively, furthermore, pictures with different color format are disallowed to be put in a reference picture list for a block in current picture.
19. It is proposed that how to manipulate the reference samples may depend on the color format of the reference picture and that of the current picture (see the resampling sketch following this list) .
a. It is proposed that if a first color format of the reference picture is not identical to a second color format of the current picture, the reference samples may be converted accordingly.
i. In one example, if the first format is 4: 2: 0 and the second format is 4: 2: 2, then samples of chroma components in the reference picture may be up-sampled vertically by a ratio of 1: 2.
ii. In one example, if the first format is 4: 2: 2 and the second format is 4: 4: 4, then samples of chroma components in the reference picture may be up-sampled horizontally by a ratio of 1: 2.
iii. In one example, if the first format is 4: 2: 0 and the second format is 4: 4: 4, then samples of chroma components in the reference picture may be up-sampled vertically by a ratio of 1: 2 and up-sampled horizontally by a ratio of 1: 2.
iv. In one example, if the first format is 4: 2: 2 and the second format is 4: 2: 0, then samples of chroma components in the reference picture may be down-sampled vertically by a ratio of 1: 2.
v. In one example, if the first format is 4: 4: 4 and the second format is 4: 2: 2, then samples of chroma components in the reference picture may be down-sampled horizontally by a ratio of 1: 2.
vi. In one example, if the first format is 4: 4: 4 and the second format is 4: 2: 0, then samples of chroma components in the reference picture may be down-sampled vertically by a ratio of 1: 2 and down-sampled horizontally by a ratio of 1: 2.
vii. In one example, if the first format is not 4: 0: 0 and the second format is 4: 0: 0, then samples of the luma component in the reference picture may be used to perform inter-prediction to the current picture.
viii. In one example, if the first format is 4: 0: 0 and the second format is not 4: 0: 0, then samples of the luma component in the reference picture may be used to perform inter-prediction to the current picture.
1) Alternatively, if the first format is 4: 0: 0 and the second format is not 4: 0: 0, then samples in the reference picture cannot be used to perform inter-prediction to the current picture.
b. In one example, the reference picture with samples not converted may be removed after a reference picture is converted from it.
i. Alternatively, the reference picture with samples not converted may be kept but marked as unavailable after a reference picture is converted from it.
c. In one example, the reference picture with samples not converted may be put in the reference picture list. The reference samples are converted when they are used in the inter-prediction.
20. It is proposed that ARC and ACC may both be conducted.
a. In one example, ARC is conducted first, then ACC is conducted.
i. In one example, the samples are first down-sampled/up-sampled according to the different picture dimensions, and then down-sampled/up-sampled according to the different color formats in the ARC+ACC conversion.
b. In one example, ACC is conducted first, then ARC is conducted.
i. In one example, the samples are first down-sampled/up-sampled according to the different color formats, and then down-sampled/up-sampled according to the different picture dimensions in the ACC+ARC conversion.
c. In one example, ACC and ARC may be conducted together.
i. For example, the samples are down-sampled/up-sampled according to the scaling ratio derived from different color formats and different picture dimensions in the ARC+ACC or ACC+ARC conversion.
21. It is proposed that ACC and ABC may both be conducted.
a. In one example, ACC is conducted first, then ABC is conducted.
i. For example, samples are up-sampled/down-sampled according to the different color formats first, and then left-shifted/right-shifted according to different bit-depths in the ACC+ABC conversion.
b. In one example, ABC is conducted first, then ACC is conducted.
i. For example, samples are left-shifted/right-shifted according to different bit-depths first, and then up-sampled/down-sampled according to the different color formats in the ABC+ACC conversion.
22. It is proposed that a picture should be filtered with ALF parameters associated with corresponding color format.
a. In one example, ALF parameters signaled in a video unit such as APS may be associated with one or multiple color formats.
b. In one example, a video unit such as APS signaling ALF parameters may be associated with one or multiple color formats.
c. For example, a picture may only apply ALF parameters signaled in a video unit such as APS, associated with the same color format.
23. It is proposed that ALF parameters associated with a first corresponding color format may inherit or be predicted from ALF parameters associated with a second corresponding color format.
a. In one example, the first corresponding color format must be the same as the second corresponding color format.
b. In one example, the first corresponding color format may be different to the second corresponding color format.
24. It is proposed that different default ALF parameters may be designed for different color formats.
a. In one example, different default ALF parameters may be designed for YCbCr and RGB format.
b. In one example, different default ALF parameters may be designed for 4: 4: 4, 4: 2: 2, 4: 2: 0, 4: 0: 0 format.
25. It is proposed that samples in a picture should be reshaped with LMCS parameters associated with corresponding color format.
a. In one example, LMCS parameters signaled in a video unit such as APS may be associated with one or multiple color formats.
b. In one example, a video unit such as APS signaling LMCS parameters may be associated with one or multiple color formats.
c. For example, a picture may only apply LMCS parameters signaled in a video unit such as APS, associated with the same color format.
26. It is proposed that LMCS parameters associated with a first corresponding color format may inherit or be predicted from LMCS parameters associated with a second corresponding color format.
a. In one example, the first corresponding color format must be the same as the second corresponding color format.
b. In one example, the first corresponding color format may be different to the second corresponding color format.
27. It is proposed that the chroma residue scaling process in LMCS is not applied for a picture with the color format 4: 0: 0.
a. In one example, the indication of whether the chroma residue scaling is applied may not be signaled and is inferred to be “not used” if the color format is 4: 0: 0.
b. In one example, the indication of whether the chroma residue scaling is applied must be “not used” in a conformance bit-stream if the color format is 4: 0: 0.
c. In one example, the signaled indication of whether the chroma residue scaling is applied is ignored and set to be “not used” by the decoder if the color format is 4: 0: 0.
28. It is proposed that coding tool X may be disabled for a block if the block refers to at least one reference picture with a different color format to the current picture.
a. In one example, the information related to the coding tool X may not be signaled.
b. Alternatively, if coding tool X is applied in a block, the block cannot refer to a reference picture with a different color format to the current picture.
i. In one example, a merge candidate referring to a reference picture with a different color format to the current picture may be skipped or not put into the merge candidate list.
ii. In one example, the reference index corresponding to a reference picture with a different color format to the current picture may be skipped or not allowed to be signaled.
c. The coding tool X may be anyone below.
i. ALF
ii. LMCS
iii. DMVR
iv. BDOF
v. Affine prediction
vi. TPM
vii. SMVD
viii. MMVD
ix. Inter-intra prediction in VVC
x. LIC
xi. HMVP
xii. Multiple Transform Set (MTS)
xiii. Sub-Block Transform (SBT)
29. It is proposed that bi-prediction from two reference pictures with different color format may be disallowed.
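As a hypothetical illustration of item 19a above (referenced in item 19), the per-direction chroma resampling factor between two of the named formats can be derived from their subsampling shifts; all names are illustrative:

    enum class ChromaFormat { CF400, CF420, CF422, CF444 };

    // log2 horizontal/vertical chroma subsampling of each format.
    static void subsampling(ChromaFormat f, int& sx, int& sy) {
      switch (f) {
        case ChromaFormat::CF420: sx = 1; sy = 1; break;
        case ChromaFormat::CF422: sx = 1; sy = 0; break;
        default:                  sx = 0; sy = 0; break;  // 4:4:4 (4:0:0 has no chroma)
      }
    }

    // Positive result: up-sample reference chroma by 1:2 per unit in that
    // direction; negative: down-sample by 2:1 per unit (items 19a.i-19a.vi).
    void chromaResampleLog2(ChromaFormat ref, ChromaFormat cur,
                            int& horLog2, int& verLog2) {
      int rx, ry, cx, cy;
      subsampling(ref, rx, ry);
      subsampling(cur, cx, cy);
      horLog2 = rx - cx;  // e.g. 4:2:0 reference, 4:4:4 current -> +1 (up-sample 1:2)
      verLog2 = ry - cy;
    }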
The examples described above may be incorporated in the context of the methods described below, e.g.,  method  2400 or 2500, which may be implemented at a video decoder or a video encoder.
FIG. 24 shows a flowchart of an exemplary method for video processing. The method 2400 includes, at step 2402, determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; at step 2404, determining, in response to the relationship, whether and/or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and at step 2406, performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
FIG. 25 shows a flowchart of another exemplary method for video processing. The method 2500 includes, at step 2502, determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; at step 2504, performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and at step 2506, performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
In one aspect, there is disclosed a method for video processing, comprising: determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers; determining, in response to the relationship, whether and/or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
In one example, the specific operation comprises a bi-prediction from the two reference pictures for the current video block.
In one example, the specific operation comprises at least one of a motion derivation and a motion refinement for the current video block.
In one example, the motion refinement comprises a decoder-side motion vector refinement (DMVR) .
In one example, the motion derivation comprises a bi-directional optical flow (BIO) process.
In one example, if it is determined that the two reference pictures have different resolutions, the specific operation is disabled.
In one example, if it is determined that the two reference pictures have different resolutions, motion vector differences (MVD) associated with at least one of the two reference pictures are scaled in the specific operation based on the relationship between resolutions of two reference pictures.
In another aspect, there is disclosed a method for video processing, comprising: determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers; performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
In one example, if it is determined that the current picture and the reference picture have different resolutions, the specific operation comprises: performing a modification on samples within the reference picture before invoking motion compensation process of the current video block.
In one example, if it is determined that the current picture and the reference picture have different resolutions, the specific operation comprises: performing a motion compensation for the video block by using the samples within the reference picture to generate a predictive block for the video block, and performing a modification on samples within the predictive block.
In one example, the reference picture is stored in a decoded picture buffer in a resolution that the reference picture is coded with.
In one example, the reference picture is stored in a decoded picture buffer in a resolution which is a maximally allowed resolution, a minimally allowed resolution, or a predefined resolution.
In one example, the reference picture is stored in a decoded picture buffer in a resolution which is based on a resolution of the current picture and that of the reference picture.
In one example, the method further comprises: storing a decoded picture comprising the video block in a decoded picture buffer in a resolution that the decoded picture is coded with.
In one example, the method further comprises: storing a decoded picture comprising the video block in a decoded picture buffer in a resolution which is a maximally allowed resolution, minimally allowed resolution, or a predefined resolution.
In one example, pictures in the decoded picture buffer are in a same resolution.
In one example, the modification comprises at least one of an up-sampling or a down-sampling.
In one example, the conversion includes encoding the current video block into the bitstream representation of the current video block and decoding the current video block from the bitstream representation of the current video block.
In still another aspect, there is disclosed an apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method as described above.
In still another aspect, there is disclosed a non-transitory computer readable media, having program code stored thereupon, the program code, when executed, causing a processor to implement the method as described above.
4. Example implementations of the disclosed technology
FIG. 23 is a block diagram of a video processing apparatus 2300. The apparatus 2300 may be used to implement one or more of the methods described herein. The apparatus 2300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 2300 may include one or more processors 2302, one or more memories 2304 and video processing hardware 2306. The processor (s) 2302 may be configured to implement one or more methods (including, but not limited to, methods 2400 and 2500) described in the present document. The memory (memories) 2304 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 2306 may be used to implement, in hardware circuitry, some techniques described in the present document.
In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 23.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the  subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of  digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the use of “or” is intended to include “and/or” , unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (20)

  1. A method for video processing, comprising:
    determining, for a current video block, a relationship between resolutions of two reference pictures to which the current video block refers;
    determining, in response to the relationship, whether and/or how to perform a specific operation for the current video block during an adaptive resolution change (ARC) process; and
    performing a conversion between a bitstream representation of the current video block and the current video block based on the specific operation.
  2. The method of claim 1, wherein the specific operation comprises a bi-prediction from the two reference pictures for the current video block.
  3. The method of claim 1, wherein the specific operation comprises at least one of a motion derivation and a motion refinement for the current video block.
  4. The method of claim 3, wherein the motion refinement comprises a decoder-side motion vector refinement (DMVR).
  5. The method of claim 3, wherein the motion derivation comprises a bi-directional optical flow (BIO) process.
  6. The method of any one of claims 2-4, wherein if it is determined that the two reference pictures have different resolutions, the specific operation is disabled.
  7. The method of any one of claims 3-5, wherein if it is determined that the two reference pictures have different resolutions, motion vector differences (MVD) associated with at least one of the two reference pictures are scaled in the specific operation based on the relationship between the resolutions of the two reference pictures.
  8. A method for video processing, comprising:
    determining, for a video block within a current picture, a relationship between a resolution of the current picture and that of a reference picture to which the video block refers;
    performing, in response to the relationship, a specific operation for samples within the reference picture or for a predictive block of the video block during an adaptive resolution change (ARC) process; and
    performing a conversion between a bitstream representation of the video block and the video block based on the specific operation.
  9. The method of claim 8, wherein if it is determined that the current picture and the reference picture have different resolutions, the specific operation comprises:
    performing a modification on samples within the reference picture before invoking a motion compensation process of the video block.
  10. The method of claim 8, wherein if it is determined that the current picture and the reference picture have different resolutions, the specific operation comprises:
    performing a motion compensation for the video block by using the samples within the reference picture to generate a predictive block for the video block, and
    performing a modification on samples within the predictive block.
  11. The method of any one of claims 8-10, wherein the reference picture is stored in a decoded picture buffer in a resolution that the reference picture is coded with.
  12. The method of any one of claims 8-10, wherein the reference picture is stored in a decoded picture buffer in a resolution which is a maximally allowed resolution, a minimally allowed resolution, or a predefined resolution.
  13. The method of any one of claims 8-10, wherein the reference picture is stored in a decoded picture buffer in a resolution which is based on a resolution of the current picture and that of the reference picture.
  14. The method of any one of claims 8-10, wherein the method further comprises:
    storing a decoded picture comprising the video block in a decoded picture buffer in a resolution that the decoded picture is coded with.
  15. The method of any one of claims 8-10, wherein the method further comprises:
    storing a decoded picture comprising the video block in a decoded picture buffer in a resolution which is a maximally allowed resolution, a minimally allowed resolution, or a predefined resolution.
  16. The method of claim 12 or 15, wherein pictures in the decoded picture buffer are in a same resolution.
  17. The method of any one of claims 9-10, wherein the modification comprises at least one of an up-sampling or a down-sampling.
  18. The method of any of claims 1 to 17, wherein the conversion includes encoding the current video block into the bitstream representation of the current video block or decoding the current video block from the bitstream representation of the current video block.
  19. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 18.
  20. A non-transitory computer readable media, having program code stored thereupon, the program code, when executed, causing a processor to implement the method in any one of claims 1 to 18.
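The gating logic recited in claims 1-7 can be illustrated with a short, purely editorial sketch. Nothing below is part of the claimed subject matter or of any codec's actual API: the Picture type, the helper names, and the returned flags are all invented for exposition, and only the decision structure mirrors the claims.

```python
# Editorial sketch of the resolution-dependent gating in claims 1-7.
# All names are hypothetical; fractional-pel precision and rounding are ignored.
from dataclasses import dataclass


@dataclass
class Picture:
    width: int
    height: int


def same_resolution(a: Picture, b: Picture) -> bool:
    return (a.width, a.height) == (b.width, b.height)


def gate_bi_prediction_tools(ref0: Picture, ref1: Picture) -> dict:
    """Decide whether/how the 'specific operation' runs (claims 1, 6, 7)."""
    if same_resolution(ref0, ref1):
        # Equal resolutions: bi-prediction, DMVR and BIO may run as usual.
        return {"bi_pred": True, "dmvr": True, "bio": True, "scale_mvd": False}
    # Different resolutions: the operation is disabled (claim 6), and/or the
    # motion vector differences are scaled by the resolution ratio (claim 7).
    return {"bi_pred": False, "dmvr": False, "bio": False, "scale_mvd": True}


def scale_mvd(mvd, src: Picture, dst: Picture):
    """Scale an MVD according to the resolution relationship (claim 7)."""
    return (mvd[0] * dst.width / src.width, mvd[1] * dst.height / src.height)
```

For example, gate_bi_prediction_tools(Picture(1920, 1080), Picture(960, 540)) disables DMVR and BIO and requests MVD scaling by a factor of two in each dimension.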
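Claims 8-10 and 17 place the up-/down-sampling step at two alternative points. The contrast is sketched below; the nearest-neighbour resampler is only a stand-in (the claims do not mandate any particular filter), and RefPicture, Block, and motion_compensate are assumed, hypothetical helpers.

```python
# Editorial sketch of the two modification points in claims 9 and 10.
from dataclasses import dataclass
from typing import Callable, List

Samples = List[List[int]]


@dataclass
class RefPicture:
    width: int
    height: int
    samples: Samples


@dataclass
class Block:
    width: int
    height: int


def resample(s: Samples, src_w: int, src_h: int, dst_w: int, dst_h: int) -> Samples:
    """Nearest-neighbour up-/down-sampling (a placeholder for claim 17)."""
    return [[s[y * src_h // dst_h][x * src_w // dst_w] for x in range(dst_w)]
            for y in range(dst_h)]


def predict_claim9(cur_w: int, cur_h: int, ref: RefPicture, blk: Block,
                   motion_compensate: Callable[[Samples, Block], Samples]) -> Samples:
    # Claim 9: modify the reference samples first, then motion-compensate.
    src = ref.samples
    if (cur_w, cur_h) != (ref.width, ref.height):
        src = resample(src, ref.width, ref.height, cur_w, cur_h)
    return motion_compensate(src, blk)


def predict_claim10(cur_w: int, cur_h: int, ref: RefPicture, blk: Block,
                    motion_compensate: Callable[[Samples, Block], Samples]) -> Samples:
    # Claim 10: motion-compensate at the reference resolution, then modify
    # (resample) the predictive block itself to the current resolution.
    pred = motion_compensate(ref.samples, blk)
    if (cur_w, cur_h) != (ref.width, ref.height):
        pred = resample(pred, len(pred[0]), len(pred), blk.width, blk.height)
    return pred
```

Both variants yield a predictive block on the current picture's sampling grid; they differ in whether the whole reference (claim 9) or only the predicted samples (claim 10) pass through the resampler.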
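Claims 11-16 enumerate alternative policies for the resolution at which pictures are held in the decoded picture buffer (DPB). A compact, editorial way to express the alternatives (the enum and parameter names are invented for exposition):

```python
# Editorial sketch of the DPB storage alternatives in claims 11-16.
from enum import Enum, auto


class DpbResolutionPolicy(Enum):
    CODED = auto()        # claims 11, 14: the resolution the picture is coded with
    MAX_ALLOWED = auto()  # claims 12, 15: the maximally allowed resolution
    MIN_ALLOWED = auto()  # claims 12, 15: the minimally allowed resolution
    PREDEFINED = auto()   # claims 12, 15: a predefined resolution
    DERIVED = auto()      # claim 13: derived from the current and reference resolutions


def dpb_storage_resolution(policy, coded_wh, max_wh, min_wh, predef_wh, derive=None):
    """Return the (width, height) at which a picture is stored in the DPB.

    Under MAX_ALLOWED, MIN_ALLOWED, or PREDEFINED, every picture in the DPB
    shares one resolution, which is the situation claim 16 refers to.
    """
    if policy is DpbResolutionPolicy.CODED:
        return coded_wh
    if policy is DpbResolutionPolicy.MAX_ALLOWED:
        return max_wh
    if policy is DpbResolutionPolicy.MIN_ALLOWED:
        return min_wh
    if policy is DpbResolutionPolicy.PREDEFINED:
        return predef_wh
    assert derive is not None, "claim 13 leaves the derivation rule open"
    return derive(coded_wh)
```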
PCT/CN2020/090799 2019-05-16 2020-05-18 Adaptive resolution change in video coding WO2020228833A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202080036230.7A CN113841395B (en) 2019-05-16 2020-05-18 Adaptive resolution change in video coding and decoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2019/087209 2019-05-16
CN2019087209 2019-05-16

Publications (1)

Publication Number Publication Date
WO2020228833A1 true WO2020228833A1 (en) 2020-11-19

Family

ID=73289350

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2020/090799 WO2020228833A1 (en) 2019-05-16 2020-05-18 Adaptive resolution change in video coding
PCT/CN2020/090801 WO2020228835A1 (en) 2019-05-16 2020-05-18 Adaptive color-format conversion in video coding
PCT/CN2020/090800 WO2020228834A1 (en) 2019-05-16 2020-05-18 Adaptive bit-depth conversion in video coding

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/090801 WO2020228835A1 (en) 2019-05-16 2020-05-18 Adaptive color-format conversion in video coding
PCT/CN2020/090800 WO2020228834A1 (en) 2019-05-16 2020-05-18 Adaptive bit-depth conversion in video coding

Country Status (2)

Country Link
CN (3) CN113841395B (en)
WO (3) WO2020228833A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022180261A1 (en) * 2021-02-26 2022-09-01 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Video coding concept allowing for limitation of drift
EP4087238A1 (en) * 2021-05-07 2022-11-09 Panasonic Intellectual Property Corporation of America Encoder, decoder, encoding method, and decoding method

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220182643A1 (en) * 2020-12-04 2022-06-09 Ofinno, Llc No Reference Image Quality Assessment Based Decoder Side Inter Prediction
CA3142044A1 (en) * 2020-12-14 2022-06-14 Comcast Cable Communications, Llc Methods and systems for improved content encoding
WO2022268627A1 (en) * 2021-06-24 2022-12-29 Interdigital Vc Holdings France, Sas Methods and apparatuses for encoding/decoding a video
US20230281848A1 (en) * 2022-03-03 2023-09-07 Qualcomm Incorporated Bandwidth efficient image processing
WO2023171940A1 (en) * 2022-03-08 2023-09-14 현대자동차주식회사 Method and apparatus for video coding, using adaptive chroma conversion

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2294827A4 (en) * 2008-06-12 2017-04-19 Thomson Licensing Methods and apparatus for video coding and decoding with reduced bit-depth update mode and reduced chroma sampling update mode
KR101441874B1 (en) * 2009-08-21 2014-09-25 에스케이텔레콤 주식회사 Video Coding Method and Apparatus by Using Adaptive Motion Vector Resolution
PL2882190T3 (en) * 2011-04-21 2019-04-30 Hfi Innovation Inc Method and apparatus for improved in-loop filtering
JP6190103B2 (en) * 2012-10-29 2017-08-30 キヤノン株式会社 Moving picture coding apparatus, moving picture coding method, and program
JP6290924B2 (en) * 2013-01-07 2018-03-07 ノキア テクノロジーズ オサケユイチア Method and apparatus for video coding and decoding
JP6093009B2 (en) * 2013-04-19 2017-03-08 日立マクセル株式会社 Encoding method and encoding apparatus
US9225991B2 (en) * 2013-05-30 2015-12-29 Apple Inc. Adaptive color space transform coding
US10462464B2 (en) * 2013-11-24 2019-10-29 Lg Electronics Inc. Method and apparatus for encoding and decoding video signal using adaptive sampling
KR101789954B1 (en) * 2013-12-27 2017-10-25 인텔 코포레이션 Content adaptive gain compensated prediction for next generation video coding
EP3114835B1 (en) * 2014-03-04 2020-04-22 Microsoft Technology Licensing, LLC Encoding strategies for adaptive switching of color spaces
EP3565251B1 (en) * 2014-03-04 2020-09-16 Microsoft Technology Licensing, LLC Adaptive switching of color spaces
US9948933B2 (en) * 2014-03-14 2018-04-17 Qualcomm Incorporated Block adaptive color-space conversion coding
KR102073930B1 (en) * 2014-03-14 2020-02-06 브이아이디 스케일, 인크. Systems and methods for rgb video coding enhancement
US10448029B2 (en) * 2014-04-17 2019-10-15 Qualcomm Incorporated Signaling bit depth values for 3D color prediction for color gamut scalability
US10154286B2 (en) * 2014-06-19 2018-12-11 Vid Scale, Inc. Systems and methods for model parameter optimization in three dimensional based color mapping
US10687069B2 (en) * 2014-10-08 2020-06-16 Microsoft Technology Licensing, Llc Adjustments to encoding and decoding when switching color spaces
KR101770300B1 (en) * 2015-06-09 2017-08-22 삼성전자주식회사 Method and apparatus for video encoding, method and apparatus for video decoding
EP3355581A4 (en) * 2015-09-23 2019-04-17 LG Electronics Inc. Image encoding/decoding method and device for same
US20170105014A1 (en) * 2015-10-08 2017-04-13 Qualcomm Incorporated Luma-driven chroma scaling for high dynamic range and wide color gamut contents

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107431819A (en) * 2014-12-31 2017-12-01 诺基亚技术有限公司 For scalable video and the inter-layer prediction of decoding
CN108370444A (en) * 2015-12-21 2018-08-03 汤姆逊许可公司 It combines adaptive resolution and internal bit depth increases the method and apparatus of coding
US20190132606A1 (en) * 2017-11-02 2019-05-02 Mediatek Inc. Method and apparatus for video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HENDRY ET AL.: "AHG19: Adaptive resolution change (ARC) support in VVC", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, 19-27 March 2019, document JVET-N0118-v1, 27 March 2019 (2019-03-27) *

Also Published As

Publication number Publication date
CN113826382A (en) 2021-12-21
CN113841395B (en) 2022-10-25
WO2020228835A1 (en) 2020-11-19
CN113875232A (en) 2021-12-31
WO2020228834A1 (en) 2020-11-19
CN113826382B (en) 2023-06-20
CN113841395A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
US11575887B2 (en) Selective use of coding tools in video processing
US11671602B2 (en) Signaling for reference picture resampling
WO2020228833A1 (en) Adaptive resolution change in video coding
US11689747B2 (en) High level syntax for video coding tools
KR102637881B1 Usage and signaling of refined video coding tools
CN114641992B (en) Signaling of reference picture resampling
KR20220070437A (en) Level-based signaling in video coding tools
KR102653570B1 Signaling for reference picture resampling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20806789

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20806789

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22-03-2022)
