CN114930837A - Restriction of inter prediction for sub-pictures - Google Patents

Restriction of inter prediction for sub-pictures

Info

Publication number
CN114930837A
CN114930837A · Application CN202180008179.3A
Authority
CN
China
Prior art keywords
picture
sub
video
current
slice
Prior art date
Legal status
Pending
Application number
CN202180008179.3A
Other languages
Chinese (zh)
Inventor
王业奎 (Ye-Kui Wang)
张莉 (Li Zhang)
张凯 (Kai Zhang)
Current Assignee
ByteDance Inc
Original Assignee
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by ByteDance Inc filed Critical ByteDance Inc
Publication of CN114930837A

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, in particular:
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/109 Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/174 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/187 Adaptive coding characterised by the coding unit, the unit being a scalable video layer
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Methods, devices, and systems for video encoding and decoding are described that include restrictions on inter prediction for sub-pictures. One example method of video processing includes performing a conversion between a video comprising a plurality of pictures, which comprise one or more sub-pictures, and a bitstream of the video, wherein the bitstream conforms to a format rule specifying that a current sub-picture in a current picture is not allowed to reference a previous sub-picture for inter prediction in case the current sub-picture has a current identifier different from the identifier of the previous sub-picture at the same position in a previous picture.

Description

Restriction of inter prediction for sub-pictures
Cross Reference to Related Applications
This application claims the priority of and the benefit of U.S. Provisional Patent Application No. 62/957,123, filed on January 4, 2020, in a timely manner under the applicable patent law and/or rules pursuant to the Paris Convention. The entire disclosure of the aforementioned application is incorporated by reference as part of the disclosure of this application for all purposes under the law.
Technical Field
This patent document relates to image and video coding and decoding.
Background
Digital video accounts for the largest bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Disclosure of Invention
This document discloses techniques that can be used by video encoders and decoders for video encoding or decoding, including restrictions on inter prediction for sub-pictures.
In one example aspect, a video processing method is disclosed. The method includes performing a conversion between a video comprising a plurality of pictures, which comprise one or more sub-pictures, and a bitstream of the video, wherein the bitstream conforms to a format rule specifying that a current sub-picture in a current picture is not allowed to reference a previous sub-picture for inter prediction in case the current sub-picture has a current identifier different from the identifier of the previous sub-picture at the same position in a previous picture.
In another example aspect, a video processing method is disclosed. The method includes performing a conversion between a video including a video region and a bitstream of the video including a plurality of codec layers, wherein the bitstream conforms to a format rule, and wherein the format rule specifies whether inter-layer prediction (ILP) between video regions in different ones of the plurality of codec layers is allowed based on a condition.
In yet another example aspect, a video processing method is disclosed. The method comprises performing a conversion between a video comprising a current video region and a bitstream of the video comprising a plurality of codec layers, wherein the bitstream complies with a format rule, and wherein the format rule specifies that the bitstream comprises an indication of whether inter-layer prediction (ILP) between the current video region and a video region in a reference layer is allowed.
In yet another example aspect, a video encoder apparatus is disclosed. The video encoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a video decoder apparatus is disclosed. The video decoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a computer-readable medium having code stored thereon is disclosed. The code embodies one of the methods described herein in the form of processor executable code.
These and other features will be described throughout this document.
Drawings
Fig. 1 shows an example of partitioning a picture into luma Coding Tree Units (CTUs).
Fig. 2 shows another example of partitioning a picture into luma CTUs.
Fig. 3 shows an example segmentation of a picture.
Fig. 4 shows another example segmentation of a picture.
FIG. 5 is a block diagram of an example video processing system in which the disclosed technology may be implemented.
FIG. 6 is a block diagram of an example hardware platform for video processing.
Fig. 7 is a block diagram illustrating a video codec system according to some embodiments of the present disclosure.
Fig. 8 is a block diagram illustrating an encoder in accordance with some embodiments of the present disclosure.
Fig. 9 is a block diagram illustrating a decoder in accordance with some embodiments of the present disclosure.
Figs. 10-12 show flowcharts of example methods for video processing.
Detailed Description
Section headings are used in this document for ease of understanding and do not limit the applicability of the techniques and embodiments disclosed in each section to that section only. Furthermore, H.266 terminology is used in some of the description only for ease of understanding and not to limit the scope of the disclosed techniques. As such, the techniques described herein are also applicable to other video codec protocols and designs.
1. Initial discussion
This document is related to video coding technologies. Specifically, it is about the signaling of sub-pictures, tiles, and slices. The ideas may be applied, individually or in various combinations, to any video coding standard or non-standard video codec that supports multi-layer video coding, e.g., the being-developed Versatile Video Coding (VVC).
2. Abbreviations
APS Adaptation Parameter Set
AU Access Unit
AUD Access Unit Delimiter
AVC Advanced Video Coding
CLVS Coded Layer Video Sequence
CPB Coded Picture Buffer
CRA Clean Random Access
CTU Coding Tree Unit
CVS Coded Video Sequence
DPB Decoded Picture Buffer
DPS Decoding Parameter Set
EOB End Of Bitstream
EOS End Of Sequence
GDR Gradual Decoding Refresh
HEVC High Efficiency Video Coding
HRD Hypothetical Reference Decoder
IDR Instantaneous Decoding Refresh
JEM Joint Exploration Model
MCTS Motion-Constrained Tile Sets
NAL Network Abstraction Layer
OLS Output Layer Set
PH Picture Header
PPS Picture Parameter Set
PTL Profile, Tier and Level
PU Picture Unit
RBSP Raw Byte Sequence Payload
SEI Supplemental Enhancement Information
SPS Sequence Parameter Set
SVC Scalable Video Coding
VCL Video Coding Layer
VPS Video Parameter Set
VTM VVC Test Model
VUI Video Usability Information
VVC Versatile Video Coding
3. Brief introduction to video codec
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, the video coding standards have been based on the hybrid video coding structure, wherein temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). The JVET meeting is held once every quarter, and the new coding standard targets a 50% bitrate reduction compared to HEVC. The new video coding standard was officially named Versatile Video Coding (VVC) at the April 2018 JVET meeting, and the first version of the VVC Test Model (VTM) was released at that time. With continuous effort contributing to VVC standardization, new coding techniques are adopted into the VVC standard at every JVET meeting. The VVC working draft and the test model VTM are then updated after every meeting. The VVC project is targeting technical completion (FDIS) at the July 2020 meeting.
Picture partitioning schemes in HEVC
HEVC includes four different picture partitioning schemes, namely regular slices, dependent slices, tiles, and Wavefront Parallel Processing (WPP), which may be applied for Maximum Transfer Unit (MTU) size matching, parallel processing, and reduced end-to-end delay.
Regular slices are similar to those in H.264/AVC. Each regular slice is encapsulated in its own NAL unit, and in-picture prediction (intra sample prediction, motion information prediction, coding mode prediction) and entropy coding dependency across slice boundaries are disabled. Thus, a regular slice can be reconstructed independently from other regular slices within the same picture (though there may still be interdependencies due to loop filtering operations).
Regular slices are the only tool that can be used for parallelization that is also available, in virtually identical form, in H.264/AVC. Regular-slice-based parallelization does not require much inter-processor or inter-core communication (except for inter-processor or inter-core data sharing for motion compensation when decoding a predictively coded picture, which is typically much heavier than inter-processor or inter-core data sharing due to in-picture prediction). However, for the same reason, the use of regular slices can incur substantial coding overhead due to the bit cost of the slice headers and due to the lack of prediction across the slice boundaries. Furthermore, regular slices (in contrast to the other tools mentioned below) also serve as the key mechanism for bitstream partitioning to match MTU size requirements, due to the in-picture independence of regular slices and the fact that each regular slice is encapsulated in its own NAL unit. In many cases, the goals of parallelization and of MTU size matching place contradicting demands on the slice layout in a picture. The realization of this situation led to the development of the parallelization tools mentioned below.
Dependent slices have shortened slice headers and allow partitioning of the bitstream at treeblock boundaries without breaking any in-picture prediction. Basically, dependent slices provide fragmentation of regular slices into multiple NAL units, so as to provide reduced end-to-end delay by allowing a part of a regular slice to be sent out before the encoding of the entire regular slice is finished.
In WPP, the picture is partitioned into single rows of coding tree blocks (CTBs). Entropy decoding and prediction are allowed to use data from CTBs in other partitions. Parallel processing is possible through parallel decoding of CTB rows, where the start of the decoding of a CTB row is delayed by two CTBs, so as to ensure that data related to a CTB above and to the right of the subject CTB is available before the subject CTB is being decoded. Using this staggered start (which appears like a wavefront when represented graphically), parallelization is possible with up to as many processors/cores as the picture contains CTB rows. Because in-picture prediction between neighboring treeblock rows within a picture is permitted, the required inter-processor/inter-core communication to enable in-picture prediction can be substantial. The WPP partitioning does not result in the production of additional NAL units compared to when it is not applied, thus WPP is not a tool for MTU size matching. However, if MTU size matching is required, regular slices can be used with WPP, with certain coding overhead.
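The two-CTB staggering can be made concrete with a small sketch. The following is our illustration (not from the patent) of the earliest wavefront time step at which each CTB can be decoded, assuming each CTB takes one time step:

```python
# A minimal sketch (ours, not from the patent) of the WPP wavefront:
# each CTB row starts two CTBs after the row above, because CTB (r, c)
# needs its left neighbour (r, c-1) and the above-right CTB (r-1, c+1).
def wpp_schedule(ctb_rows: int, ctb_cols: int) -> list[list[int]]:
    step = [[0] * ctb_cols for _ in range(ctb_rows)]
    for r in range(ctb_rows):
        for c in range(ctb_cols):
            left = step[r][c - 1] if c > 0 else -1
            # Last column has no above-right neighbour; fall back to above.
            above_right = step[r - 1][min(c + 1, ctb_cols - 1)] if r > 0 else -1
            step[r][c] = max(left, above_right) + 1
    return step

if __name__ == "__main__":
    for row in wpp_schedule(4, 8):
        print(row)
    # Row r starts at step 2*r, so once the wavefront is established up to
    # min(ctb_rows, ctb_cols // 2 + 1) rows are decoded in parallel.
```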
Tiles define horizontal and vertical boundaries that partition a picture into tile columns and rows. A tile column runs from the top of a picture to the bottom of the picture. Likewise, a tile row runs from the left of the picture to the right of the picture. The number of tiles in a picture can be derived simply by multiplying the number of tile columns by the number of tile rows.
The scan order of CTBs is changed to be local within a tile (in the order of a CTB raster scan of the tile), before decoding the top-left CTB of the next tile in the order of the tile raster scan of the picture. Similar to regular slices, tiles break in-picture prediction dependencies as well as entropy decoding dependencies. However, they do not need to be included into individual NAL units (same as WPP in this regard); hence tiles cannot be used for MTU size matching. Each tile can be processed by one processor/core, and the inter-processor/inter-core communication required for in-picture prediction between processing units decoding neighboring tiles is limited to conveying the shared slice header, in cases where a slice spans more than one tile, and to loop-filtering-related sharing of reconstructed samples and metadata. When more than one tile or WPP segment is included in a slice, the entry point byte offset for each tile or WPP segment other than the first one in the slice is signaled in the slice header.
For simplicity, restrictions on the application of the four different picture partitioning schemes have been specified in HEVC. A given coded video sequence cannot include both tiles and wavefronts for most of the profiles specified in HEVC. For each slice and tile, either or both of the following conditions must be fulfilled: 1) all coded treeblocks in a slice belong to the same tile; 2) all coded treeblocks in a tile belong to the same slice. Finally, a wavefront segment contains exactly one CTB row, and when WPP is in use, if a slice starts within a CTB row, it must end in the same CTB row.
The latest amendment to HEVC is specified in the JCT-VC output document JCTVC-AC1005, J. Boyce, A. Ramasubramonian, R. Skupin, G. J. Sullivan, A. Tourapis, Y.-K. Wang (editors), "HEVC Additional Supplemental Enhancement Information (Draft 4)," Oct. 24, 2017, available at http://phenix.int-evry.fr/jct/doc_end_user/documents/29_Macau/wg11/JCTVC-AC1005-v2.zip. With this amendment included, HEVC specifies three MCTS-related SEI messages, namely the temporal MCTSs SEI message, the MCTSs extraction information set SEI message, and the MCTSs extraction information nesting SEI message.
The temporal MCTSs SEI message indicates the existence of MCTSs in the bitstream and signals the MCTSs. For each MCTS, motion vectors are restricted to point to full-sample locations inside the MCTS and to fractional-sample locations that require only full-sample locations inside the MCTS for interpolation, and the usage of motion vector candidates for temporal motion vector prediction derived from blocks outside the MCTS is disallowed. This way, each MCTS may be independently decoded without the existence of tiles not included in the MCTS.
The MCTSs extraction information set SEI message provides supplemental information (specified as part of the semantics of the SEI message) that can be used in the MCTS sub-bitstream extraction to generate a conforming bitstream for an MCTS set. The information consists of a number of extraction information sets, each defining a number of MCTS sets and containing RBSP bytes of the replacement VPSs, SPSs, and PPSs to be used during the MCTS sub-bitstream extraction process. When extracting a sub-bitstream according to the MCTS sub-bitstream extraction process, parameter sets (VPSs, SPSs, and PPSs) need to be rewritten or replaced, and slice headers need to be slightly updated, because one or all of the slice-address-related syntax elements (including first_slice_segment_in_pic_flag and slice_segment_address) typically would need to have different values.
Picture partitioning in VVC
In VVC, a picture is divided into one or more tile rows and one or more tile columns. A tile is a sequence of CTUs that covers a rectangular region of a picture. The CTUs in a tile are scanned in raster scan order within that tile.
A slice consists of an integer number of complete tiles or an integer number of consecutive complete CTU rows within a tile of a picture.
Two modes of slices are supported, namely the raster-scan slice mode and the rectangular slice mode. In the raster-scan slice mode, a slice contains a sequence of complete tiles in a tile raster scan of a picture. In the rectangular slice mode, a slice contains either a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive complete CTU rows of one tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice are scanned in tile raster scan order within the rectangular region corresponding to that slice.
A sub-picture contains one or more slices that collectively cover a rectangular region of a picture.
Fig. 1 shows an example of raster-scan slice partitioning of a picture, where the picture is divided into 12 tiles and 3 raster-scan slices.
Fig. 2 shows an example of rectangular slice partitioning of a picture, where the picture is divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
Fig. 3 shows an example of a picture partitioned into tiles and rectangular slices, where the picture is divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.
Fig. 4 shows an example of sub-picture partitioning of a picture, where the picture is partitioned into 18 tiles: the 12 tiles on the left-hand side each cover one slice of 4×4 CTUs, and the 6 tiles on the right-hand side each cover 2 vertically stacked slices of 2×2 CTUs, altogether resulting in 24 slices and 24 sub-pictures of varying dimensions (each slice is a sub-picture).
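The counts in the Fig. 4 example can be verified with a short sketch (ours, not part of the patent text):

```python
# Fig. 4 arithmetic: 12 left-hand tiles hold one slice each, 6 right-hand
# tiles hold two vertically stacked slices each, and every slice is its
# own sub-picture.
from dataclasses import dataclass

@dataclass
class Tile:
    slices: int  # number of rectangular slices carved out of this tile

tiles = [Tile(slices=1)] * 12 + [Tile(slices=2)] * 6
num_tiles = len(tiles)
num_slices = sum(t.slices for t in tiles)
num_subpics = num_slices  # one sub-picture per slice in this example

assert (num_tiles, num_slices, num_subpics) == (18, 24, 24)
print(num_tiles, num_slices, num_subpics)
```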
Signaling of sub-pictures, tiles, and slices in VVC
In the latest VVC draft text, the information on sub-pictures includes the sub-picture layout (i.e., the number of sub-pictures per picture and the position and size of each sub-picture) and other sequence-level sub-picture information, which is signaled in the SPS. The order of the sub-pictures signaled in the SPS defines the sub-picture index. A list of sub-picture IDs, one per sub-picture, may be explicitly signaled, e.g., in the SPS or in the PPS.
Tiles in VVC are conceptually the same as in HEVC, i.e., each picture is partitioned into tile columns and tile rows, but with different syntax in the PPS for the signaling of tiles.
In VVC, the slice mode is also signaled in the PPS. When the slice mode is the rectangular slice mode, the slice layout for each picture (i.e., the number of slices per picture and the position and size of each slice) is signaled in the PPS. The order of the rectangular slices within a picture signaled in the PPS defines the picture-level slice index. The sub-picture-level slice index is defined as the order of the slices within a sub-picture in increasing order of their picture-level slice indices. The positions and sizes of the rectangular slices are signaled/derived based on either the sub-picture positions and sizes signaled in the SPS (when each sub-picture contains only one slice) or the tile positions and sizes signaled in the PPS (when a sub-picture may contain more than one slice). When the slice mode is the raster-scan slice mode, similarly as in HEVC, the layout of slices within a picture is signaled in the slices themselves, with different details.
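The relationship between the picture-level and sub-picture-level slice indices described above can be illustrated with a hypothetical sketch (all names are ours): within each sub-picture, slices are ranked by increasing picture-level slice index:

```python
# 'slice_to_subpic' maps picture-level slice index -> sub-picture index.
# The sub-picture-level slice index is the rank of a slice among the
# slices of its sub-picture, taken in picture-level slice order.
def subpic_level_slice_indices(slice_to_subpic: list[int]) -> list[int]:
    counters: dict[int, int] = {}
    result = []
    for subpic in slice_to_subpic:  # iterate in picture-level slice order
        result.append(counters.get(subpic, 0))
        counters[subpic] = counters.get(subpic, 0) + 1
    return result

# Example: 5 rectangular slices spread over sub-pictures 0 and 1.
print(subpic_level_slice_indices([0, 0, 1, 0, 1]))  # -> [0, 1, 0, 2, 1]
```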
The SPS, PPS, and slice header syntaxes and semantics in the latest VVC draft text that are most relevant to the techniques described herein are as follows.
7.3.2.3 sequence parameter set RBSP syntax
[SPS RBSP syntax table, reproduced as images in the original publication.]
7.4.3.3 sequence parameter set RBSP semantics
...
subpics_present_flag equal to 1 specifies that sub-picture parameters are present in the SPS RBSP syntax.
subpics_present_flag equal to 0 specifies that sub-picture parameters are not present in the SPS RBSP syntax.
NOTE 2 - When a bitstream is the result of the sub-bitstream extraction process and contains only a subset of the sub-pictures of the input bitstream to the sub-bitstream extraction process, it might be required to set the value of subpics_present_flag equal to 1 in the RBSP of the SPSs.
sps_num_subpics_minus1 plus 1 specifies the number of sub-pictures. sps_num_subpics_minus1 shall be in the range of 0 to 254. When not present, the value of sps_num_subpics_minus1 is inferred to be equal to 0.
subpic_ctu_top_left_x[i] specifies the horizontal position of the top-left CTU of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_width_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_ctu_top_left_x[i] is inferred to be equal to 0.
subpic_ctu_top_left_y[i] specifies the vertical position of the top-left CTU of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_height_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_ctu_top_left_y[i] is inferred to be equal to 0.
subpic_width_minus1[i] plus 1 specifies the width of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_width_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_width_minus1[i] is inferred to be equal to Ceil(pic_width_max_in_luma_samples / CtbSizeY) − 1.
subpic_height_minus1[i] plus 1 specifies the height of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_height_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_height_minus1[i] is inferred to be equal to Ceil(pic_height_max_in_luma_samples / CtbSizeY) − 1.
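To make the bit-length and inference rules above concrete, here is a minimal sketch (ours, not specification text) computing the coded lengths of the four position/size syntax elements and the defaults inferred when they are absent:

```python
import math

def ceil_log2(x: int) -> int:
    return max(0, math.ceil(math.log2(x)))

def subpic_syntax_lengths(pic_w_luma: int, pic_h_luma: int, ctb_size_y: int):
    # Positions and sizes are expressed in CTU units, so the lengths are
    # Ceil(Log2(picture width/height in CTBs)) bits, and an absent
    # width/height defaults to "the rest of the picture".
    w_ctbs = math.ceil(pic_w_luma / ctb_size_y)
    h_ctbs = math.ceil(pic_h_luma / ctb_size_y)
    return {
        "len(subpic_ctu_top_left_x) = len(subpic_width_minus1)": ceil_log2(w_ctbs),
        "len(subpic_ctu_top_left_y) = len(subpic_height_minus1)": ceil_log2(h_ctbs),
        "inferred subpic_width_minus1": w_ctbs - 1,
        "inferred subpic_height_minus1": h_ctbs - 1,
    }

# A 1920x1080 picture with 128-sample CTUs: 15x9 CTBs, so 4-bit elements,
# and an absent width/height is inferred as 14 and 8, respectively.
print(subpic_syntax_lengths(1920, 1080, 128))
```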
subpic_treated_as_pic_flag[i] equal to 1 specifies that the i-th sub-picture of each coded picture in the CLVS is treated as a picture in the decoding process excluding in-loop filtering operations.
subpic_treated_as_pic_flag[i] equal to 0 specifies that the i-th sub-picture of each coded picture in the CLVS is not treated as a picture in the decoding process excluding in-loop filtering operations. When not present, the value of subpic_treated_as_pic_flag[i] is inferred to be equal to 0.
loop_filter_across_subpic_enabled_flag[i] equal to 1 specifies that in-loop filtering operations may be performed across the boundaries of the i-th sub-picture in each coded picture in the CLVS.
loop_filter_across_subpic_enabled_flag[i] equal to 0 specifies that in-loop filtering operations are not performed across the boundaries of the i-th sub-picture in each coded picture in the CLVS. When not present, the value of loop_filter_across_subpic_enabled_flag[i] is inferred to be equal to 1.
It is a requirement of bitstream conformance that the following constraints apply:
- For any two sub-pictures subpicA and subpicB, when the sub-picture index of subpicA is less than the sub-picture index of subpicB, any coded slice NAL unit of subpicA shall precede any coded slice NAL unit of subpicB in decoding order.
- The shapes of the sub-pictures shall be such that each sub-picture, when decoded, shall have its entire left boundary and entire top boundary consisting of picture boundaries or consisting of boundaries of previously decoded sub-pictures.
sps_subpic_id_present_flag equal to 1 specifies that sub-picture ID mapping is present in the SPS.
sps_subpic_id_present_flag equal to 0 specifies that sub-picture ID mapping is not present in the SPS.
sps_subpic_id_signalling_present_flag equal to 1 specifies that sub-picture ID mapping is signaled in the SPS. sps_subpic_id_signalling_present_flag equal to 0 specifies that sub-picture ID mapping is not signaled in the SPS. When not present, the value of sps_subpic_id_signalling_present_flag is inferred to be equal to 0.
sps_subpic_id_len_minus1 plus 1 specifies the number of bits used to represent the syntax element sps_subpic_id[i]. The value of sps_subpic_id_len_minus1 shall be in the range of 0 to 15, inclusive.
sps_subpic_id[i] specifies the sub-picture ID of the i-th sub-picture. The length of the sps_subpic_id[i] syntax element is sps_subpic_id_len_minus1 + 1 bits. When not present, and when sps_subpic_id_present_flag is equal to 0, the value of sps_subpic_id[i] is inferred to be equal to i, for each i in the range of 0 to sps_num_subpics_minus1, inclusive.
...
7.3.2.4 Picture parameter set RBSP syntax
[PPS RBSP syntax table, reproduced as images in the original publication.]
7.4.3.4 picture parameter set RBSP semantics
...
pps_subpic_id_signalling_present_flag equal to 1 specifies that sub-picture ID mapping is signaled in the PPS. pps_subpic_id_signalling_present_flag equal to 0 specifies that sub-picture ID mapping is not signaled in the PPS. When sps_subpic_id_present_flag is equal to 0 or sps_subpic_id_signalling_present_flag is equal to 1, pps_subpic_id_signalling_present_flag shall be equal to 0.
pps_num_subpics_minus1 plus 1 specifies the number of sub-pictures in the coded pictures referring to the PPS. It is a requirement of bitstream conformance that the value of pps_num_subpics_minus1 shall be equal to sps_num_subpics_minus1.
pps_subpic_id_len_minus1 plus 1 specifies the number of bits used to represent the syntax element pps_subpic_id[i]. The value of pps_subpic_id_len_minus1 shall be in the range of 0 to 15, inclusive. It is a requirement of bitstream conformance that the value of pps_subpic_id_len_minus1 shall be the same for all PPSs that are referred to by coded pictures in a CLVS.
pps_subpic_id[i] specifies the sub-picture ID of the i-th sub-picture. The length of the pps_subpic_id[i] syntax element is pps_subpic_id_len_minus1 + 1 bits.
no_pic_partition_flag equal to 1 specifies that no picture partitioning is applied to each picture referring to the PPS.
no_pic_partition_flag equal to 0 specifies that each picture referring to the PPS may be partitioned into more than one tile or slice.
It is a requirement of bitstream conformance that the value of no_pic_partition_flag shall be the same for all PPSs that are referred to by coded pictures within a CLVS.
It is a requirement of bitstream conformance that the value of no_pic_partition_flag shall not be equal to 1 when the value of sps_num_subpics_minus1 + 1 is greater than 1.
pps_log2_ctu_size_minus5 plus 5 specifies the luma coding tree block size of each CTU. pps_log2_ctu_size_minus5 shall be equal to sps_log2_ctu_size_minus5.
num_exp_tile_columns_minus1 plus 1 specifies the number of explicitly provided tile column widths. The value of num_exp_tile_columns_minus1 shall be in the range of 0 to PicWidthInCtbsY − 1, inclusive. When no_pic_partition_flag is equal to 1, the value of num_exp_tile_columns_minus1 is inferred to be equal to 0.
num_exp_tile_rows_minus1 plus 1 specifies the number of explicitly provided tile row heights. The value of num_exp_tile_rows_minus1 shall be in the range of 0 to PicHeightInCtbsY − 1, inclusive. When no_pic_partition_flag is equal to 1, the value of num_exp_tile_rows_minus1 is inferred to be equal to 0.
tile_column_width_minus1[i] plus 1 specifies the width of the i-th tile column in units of CTBs for i in the range of 0 to num_exp_tile_columns_minus1 − 1, inclusive. tile_column_width_minus1[num_exp_tile_columns_minus1] is used to derive the widths of the tile columns with index greater than or equal to num_exp_tile_columns_minus1, as specified in clause 6.5.1. When not present, the value of tile_column_width_minus1[0] is inferred to be equal to PicWidthInCtbsY − 1.
tile_row_height_minus1[i] plus 1 specifies the height of the i-th tile row in units of CTBs for i in the range of 0 to num_exp_tile_rows_minus1 − 1, inclusive. tile_row_height_minus1[num_exp_tile_rows_minus1] is used to derive the heights of the tile rows with index greater than or equal to num_exp_tile_rows_minus1, as specified in clause 6.5.1. When not present, the value of tile_row_height_minus1[0] is inferred to be equal to PicHeightInCtbsY − 1.
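The clause 6.5.1 derivation is not reproduced in this document. The following is a hedged sketch of our reading of it: tile columns (and, analogously, tile rows) beyond the explicitly signaled ones reuse the last explicitly provided size until the picture is filled, with a final, possibly smaller, column taking the remainder:

```python
# Hedged sketch (our paraphrase, not the normative clause 6.5.1 text).
def derive_tile_col_widths(explicit_widths_ctbs: list[int],
                           pic_width_in_ctbs: int) -> list[int]:
    widths = list(explicit_widths_ctbs)
    remaining = pic_width_in_ctbs - sum(widths)
    uniform = explicit_widths_ctbs[-1]  # last explicitly provided width
    while remaining >= uniform:
        widths.append(uniform)
        remaining -= uniform
    if remaining > 0:
        widths.append(remaining)  # final, narrower, column
    return widths

# A 30-CTB-wide picture with explicit widths 10 and 8 CTBs yields tile
# columns of 10, 8, 8 and 4 CTBs.
print(derive_tile_col_widths([10, 8], 30))  # -> [10, 8, 8, 4]
```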
rect_slice_flag equal to 0 specifies that the tiles within each slice are in raster scan order and the slice information is not signaled in the PPS. rect_slice_flag equal to 1 specifies that the tiles within each slice cover a rectangular region of the picture and the slice information is signaled in the PPS. When not present, rect_slice_flag is inferred to be equal to 1. When subpics_present_flag is equal to 1, the value of rect_slice_flag shall be equal to 1.
single_slice_per_subpic_flag equal to 1 specifies that each sub-picture consists of one and only one rectangular slice. single_slice_per_subpic_flag equal to 0 specifies that each sub-picture may consist of one or more rectangular slices. When subpics_present_flag is equal to 0, single_slice_per_subpic_flag shall be equal to 0. When single_slice_per_subpic_flag is equal to 1, num_slices_in_pic_minus1 is inferred to be equal to sps_num_subpics_minus1.
num_slices_in_pic_minus1 plus 1 specifies the number of rectangular slices in each picture referring to the PPS. The value of num_slices_in_pic_minus1 shall be in the range of 0 to MaxSlicesPerPicture − 1, inclusive, where MaxSlicesPerPicture is specified in Annex A. When no_pic_partition_flag is equal to 1, the value of num_slices_in_pic_minus1 is inferred to be equal to 0.
tile_idx_delta_present_flag equal to 0 specifies that tile_idx_delta values are not present in the PPS and all rectangular slices in pictures referring to the PPS are specified in raster order according to the process defined in clause 6.5.1. tile_idx_delta_present_flag equal to 1 specifies that tile_idx_delta values may be present in the PPS and all rectangular slices in pictures referring to the PPS are specified in the order indicated by the values of tile_idx_delta.
slice_width_in_tiles_minus1[i] plus 1 specifies the width of the i-th rectangular slice in units of tile columns. The value of slice_width_in_tiles_minus1[i] shall be in the range of 0 to NumTileColumns − 1, inclusive. When not present, the value of slice_width_in_tiles_minus1[i] is inferred as specified in clause 6.5.1.
slice_height_in_tiles_minus1[i] plus 1 specifies the height of the i-th rectangular slice in units of tile rows. The value of slice_height_in_tiles_minus1[i] shall be in the range of 0 to NumTileRows − 1, inclusive. When not present, the value of slice_height_in_tiles_minus1[i] is inferred as specified in clause 6.5.1.
num_slices_in_tile_minus1[i] plus 1 specifies the number of slices in the current tile for the case where the i-th slice contains a subset of CTU rows from a single tile. The value of num_slices_in_tile_minus1[i] shall be in the range of 0 to RowHeight[tileY] − 1, inclusive, where tileY is the tile row index containing the i-th slice. When not present, the value of num_slices_in_tile_minus1[i] is inferred to be equal to 0.
slice_height_in_ctu_minus1[i] plus 1 specifies the height of the i-th rectangular slice in units of CTU rows for the case where the i-th slice contains a subset of CTU rows from a single tile. The value of slice_height_in_ctu_minus1[i] shall be in the range of 0 to RowHeight[tileY] − 1, inclusive, where tileY is the tile row index containing the i-th slice.
tile_idx_delta[i] specifies the difference in tile index between the i-th rectangular slice and the (i + 1)-th rectangular slice. The value of tile_idx_delta[i] shall be in the range of −NumTilesInPic + 1 to NumTilesInPic − 1, inclusive. When not present, the value of tile_idx_delta[i] is inferred to be equal to 0. When present, the value of tile_idx_delta[i] shall not be equal to 0.
loop_filter_across_tiles_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across tile boundaries in pictures referring to the PPS. loop_filter_across_tiles_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across tile boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter, and adaptive loop filter operations. When not present, the value of loop_filter_across_tiles_enabled_flag is inferred to be equal to 1.
loop_filter_across_slices_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across slice boundaries in pictures referring to the PPS. loop_filter_across_slices_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across slice boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter, and adaptive loop filter operations. When not present, the value of loop_filter_across_slices_enabled_flag is inferred to be equal to 0.
...
7.3.7.1 General slice header syntax
[General slice header syntax table, reproduced as images in the original publication.]
7.4.8.1 General slice header semantics
...
slice_subpic_id specifies the sub-picture identifier of the sub-picture that contains the slice. If slice_subpic_id is present, the value of the variable SubPicIdx is derived to be such that SubpicIdList[SubPicIdx] is equal to slice_subpic_id. Otherwise (slice_subpic_id is not present), the variable SubPicIdx is derived to be equal to 0.
The length of slice_subpic_id, in bits, is derived as follows:
- If sps_subpic_id_signalling_present_flag is equal to 1, the length of slice_subpic_id is equal to sps_subpic_id_len_minus1 + 1.
- Otherwise, if ph_subpic_id_signalling_present_flag is equal to 1, the length of slice_subpic_id is equal to ph_subpic_id_len_minus1 + 1.
- Otherwise, if pps_subpic_id_signalling_present_flag is equal to 1, the length of slice_subpic_id is equal to pps_subpic_id_len_minus1 + 1.
- Otherwise, the length of slice_subpic_id is equal to Ceil(Log2(sps_num_subpics_minus1 + 1)).
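As an illustration (ours, not specification text), the if/else chain above can be written as a small function; note the zero-length fallback flagged as problem 5 in section 4 below:

```python
import math

def slice_subpic_id_len(sps_flag: bool, ph_flag: bool, pps_flag: bool,
                        sps_len_minus1: int, ph_len_minus1: int,
                        pps_len_minus1: int,
                        sps_num_subpics_minus1: int) -> int:
    # The first explicit-signalling flag (SPS, then PH, then PPS) that is
    # set determines the length; otherwise fall back to
    # Ceil(Log2(sps_num_subpics_minus1 + 1)).
    if sps_flag:
        return sps_len_minus1 + 1
    if ph_flag:
        return ph_len_minus1 + 1
    if pps_flag:
        return pps_len_minus1 + 1
    return math.ceil(math.log2(sps_num_subpics_minus1 + 1))

# Corner case: a single sub-picture and no explicit signalling gives a
# 0-bit length, which is not a valid length for a present syntax element.
print(slice_subpic_id_len(False, False, False, 0, 0, 0, 0))  # -> 0
```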
slice_address specifies the slice address of the slice. When not present, the value of slice_address is inferred to be equal to 0.
If rect_slice_flag is equal to 0, the following applies:
- The slice address is the raster scan tile index.
- The length of slice_address is Ceil(Log2(NumTilesInPic)) bits.
- The value of slice_address shall be in the range of 0 to NumTilesInPic − 1, inclusive.
Otherwise (rect_slice_flag is equal to 1), the following applies:
- The slice address is the slice index of the slice within the SubPicIdx-th sub-picture.
- The length of slice_address is Ceil(Log2(NumSlicesInSubpic[SubPicIdx])) bits.
- The value of slice_address shall be in the range of 0 to NumSlicesInSubpic[SubPicIdx] − 1, inclusive.
It is a requirement of bitstream conformance that the following constraints apply:
- If rect_slice_flag is equal to 0 or subpics_present_flag is equal to 0, the value of slice_address shall not be equal to the value of slice_address of any other coded slice NAL unit of the same coded picture.
- Otherwise, the pair of slice_subpic_id and slice_address values shall not be equal to the pair of slice_subpic_id and slice_address values of any other coded slice NAL unit of the same coded picture.
- When rect_slice_flag is equal to 0, the slices of a picture shall be in increasing order of their slice_address values.
- The shapes of the slices of a picture shall be such that each CTU, when decoded, shall have its entire left boundary and entire top boundary consisting of a picture boundary or consisting of boundaries of previously decoded CTU(s).
num_tiles_in_slice_minus1 plus 1, when present, specifies the number of tiles in the slice. The value of num_tiles_in_slice_minus1 shall be in the range of 0 to NumTilesInPic − 1, inclusive.
The variable NumCtusInCurrSlice, which specifies the number of CTUs in the current slice, and the list CtbAddrInCurrSlice[i] for i ranging from 0 to NumCtusInCurrSlice − 1, inclusive, which specifies the picture raster scan address of the i-th CTB within the slice, are derived as follows:
[Derivation of NumCtusInCurrSlice and CtbAddrInCurrSlice, reproduced as images in the original publication.]
The variables SubPicLeftBoundaryPos, SubPicTopBoundaryPos, SubPicRightBoundaryPos, and SubPicBotBoundaryPos are derived as follows:
[Derivation reproduced as an image in the original publication.]
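Since that derivation is only available as an image here, the following is a hedged reconstruction of its well-known form in the VVC draft (a sketch, not the publication's exact text): when subpic_treated_as_pic_flag is 1, the sub-picture edges act as picture boundaries; otherwise, the actual picture edges are used:

```python
# Hedged reconstruction (ours) of the sub-picture boundary derivation.
def subpic_boundaries(treated_as_pic: bool, ctu_x: int, ctu_y: int,
                      w_minus1: int, h_minus1: int, ctb_size_y: int,
                      pic_w: int, pic_h: int):
    if treated_as_pic:
        left = ctu_x * ctb_size_y
        top = ctu_y * ctb_size_y
        right = min(pic_w - 1, (ctu_x + w_minus1 + 1) * ctb_size_y - 1)
        bottom = min(pic_h - 1, (ctu_y + h_minus1 + 1) * ctb_size_y - 1)
    else:
        left, top, right, bottom = 0, 0, pic_w - 1, pic_h - 1
    # (SubPicLeft-, SubPicTop-, SubPicRight-, SubPicBotBoundaryPos)
    return left, top, right, bottom

# A 4x4-CTU sub-picture at the top-left of a 1920x1080 picture
# (128-sample CTUs) is clipped to samples 0..511 in both dimensions.
print(subpic_boundaries(True, 0, 0, 3, 3, 128, 1920, 1080))
# -> (0, 0, 511, 511)
```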
4. example of technical problem solved by the solution herein
In VVC, existing designs for signaling sub-pictures, slices, and stripes have the following problems:
1) The coding of sps_num_subpics_minus1 is u(8), which does not allow more than 256 sub-pictures per picture. However, in some applications, the maximum number of sub-pictures per picture may need to be greater than 256.
2) subpics_present_flag equal to 0 is allowed together with sps_subpic_id_present_flag equal to 1.
However, this makes no sense, because subpics_present_flag equal to 0 means that the CLVS has no information about sub-pictures at all.
3) The list of sub-picture IDs, one for each sub-picture, may be signaled in the picture header (PH). However, when the list of sub-picture IDs is signaled in PHs and a subset of the sub-pictures is extracted from the bitstream, all the PHs need to be changed. This is undesirable.
4) Currently, when sub-picture IDs are indicated to be explicitly signaled by sps_subpic_id_present_flag equal to 1 (or with the name of the syntax element changed to subpic_ids_explicitly_signalled_flag), sub-picture IDs may still not be signaled anywhere. This is problematic, because, when sub-picture IDs are indicated to be explicitly signaled, they need to be explicitly signaled either in the SPS or in the PPS.
5) When sub-picture IDs are not explicitly signaled, the slice header syntax element slice_subpic_id still needs to be signaled whenever subpics_present_flag is equal to 1, including when sps_num_subpics_minus1 is equal to 0. However, the length of slice_subpic_id is currently specified to be Ceil(Log2(sps_num_subpics_minus1 + 1)) bits, which would be 0 bits when sps_num_subpics_minus1 is equal to 0. This is problematic, as the length of any present syntax element cannot be 0 bits.
6) The sub-picture layout, including the number of sub-pictures and their sizes and positions, stays unchanged for the entire CLVS. Even when sub-picture IDs are not explicitly signaled in the SPS or the PPS, the sub-picture ID length still needs to be signaled for the sub-picture ID syntax element in the slice headers.
7) Whenever rect_slice_flag is equal to 1, the syntax element slice_address is signaled in the slice header and specifies the slice index within the sub-picture containing the slice, including when the number of slices within the sub-picture (i.e., NumSlicesInSubpic[SubPicIdx]) is equal to 1. However, when rect_slice_flag is equal to 1, the length of slice_address is currently specified to be Ceil(Log2(NumSlicesInSubpic[SubPicIdx])) bits, which is 0 bits when NumSlicesInSubpic[SubPicIdx] is equal to 1. This is problematic, as the length of any present syntax element cannot be 0 bits.
8) There is redundancy between the syntax elements no_pic_partition_flag and pps_num_subpics_minus1, even though the latest VVC text has the following constraint: when sps_num_subpics_minus1 is greater than 0, the value of no_pic_partition_flag shall be equal to 1.
9) Within a CLVS, the sub-picture ID value for a particular sub-picture position or index may change from picture to picture. When that happens, in principle the sub-picture cannot use inter prediction by referring to a reference picture in the same layer. However, a constraint prohibiting this is currently missing from the VVC specification.
10) In current VVC designs, a reference picture may be a picture in a different layer, to support multiple applications such as scalable video coding and multi-view video coding. If sub-pictures are present in different layers, whether inter-layer prediction is allowed needs to be studied.
5. Example techniques and embodiments
To solve the above problems and others, methods as summarized below are disclosed. The items should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these items can be applied individually or combined in any manner.
1) To solve the first problem, the coding of sps_num_subpics_minus1 is changed from u(8) to ue(v) to enable more than 256 sub-pictures per picture (a sketch of the u(8) vs. ue(v) difference follows the sub-items below).
a. In addition, the value of sps_num_subpics_minus1 is restricted to the range of 0 to Ceil(pic_width_max_in_luma_samples ÷ CtbSizeY) * Ceil(pic_height_max_in_luma_samples ÷ CtbSizeY) − 1, inclusive.
b. Furthermore, the number of sub-pictures per picture is further restricted in the definitions of levels.
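A sketch (ours) of why the u(8)-to-ue(v) change lifts the 256-sub-picture cap: u(8) is a fixed 8-bit field with a maximum value of 255, whereas 0th-order Exp-Golomb coding has no upper bound:

```python
def ue_v(value: int) -> str:
    """Encode a non-negative integer with 0th-order Exp-Golomb (ue(v))."""
    code = value + 1
    num_bits = code.bit_length()
    return "0" * (num_bits - 1) + format(code, "b")

for v in (0, 255, 600):
    print(v, len(ue_v(v)), ue_v(v))
# 0 is coded as '1' (1 bit); 255 needs 17 bits; 600 needs 19 bits and
# would simply be unrepresentable in a u(8) field.
```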
2) To solve the second problem, the signaling of the syntax element sps_subpic_id_present_flag is conditioned on "if( subpics_present_flag )", i.e., when subpics_present_flag is equal to 0, the syntax element sps_subpic_id_present_flag is not signaled, and, when it is not present, the value of sps_subpic_id_present_flag is inferred to be equal to 0.
a. Alternatively, the syntax element sps_subpic_id_present_flag is still signaled when subpics_present_flag is equal to 0, but its value is then required to be equal to 0.
b. In addition, the syntax elements subpics_present_flag and sps_subpic_id_present_flag are renamed to subpic_info_present_flag and subpic_ids_explicitly_signalled_flag, respectively.
3) To solve the third problem, the signaling of sub-picture IDs in the PH syntax is removed. Consequently, for i in the range of 0 to sps_num_subpics_minus1, inclusive, the list SubpicIdList[i] is derived as follows (a hedged reconstruction follows the figure placeholder below):
[Derivation of SubpicIdList, reproduced as images in the original publication.]
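A hedged reconstruction of that derivation, consistent with items 3 and 4 (function and variable names are ours): with PH signaling removed, the ID of sub-picture i comes from the SPS if explicitly signaled there, otherwise from the PPS, otherwise it defaults to the index i:

```python
def derive_subpic_id_list(ids_explicitly_signalled: bool,
                          ids_in_sps: bool,
                          sps_subpic_id: list[int],
                          pps_subpic_id: list[int],
                          num_subpics: int) -> list[int]:
    if ids_explicitly_signalled:
        src = sps_subpic_id if ids_in_sps else pps_subpic_id
        return [src[i] for i in range(num_subpics)]
    return list(range(num_subpics))  # default: SubpicIdList[i] = i

print(derive_subpic_id_list(True, False, [], [7, 3, 11], 3))  # -> [7, 3, 11]
print(derive_subpic_id_list(False, False, [], [], 3))         # -> [0, 1, 2]
```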
4) To solve the fourth problem, when sub-picture IDs are indicated to be explicitly signaled, they are signaled either in the SPS or in the PPS.
a. This is achieved by adding the following constraint: if subpic_ids_explicitly_signalled_flag is 0 or subpic_ids_in_sps_flag is equal to 1, subpic_ids_in_pps_flag shall be equal to 0. Otherwise (subpic_ids_explicitly_signalled_flag is 1 and subpic_ids_in_sps_flag is equal to 0), subpic_ids_in_pps_flag shall be equal to 1.
5) To solve the fifth and sixth problems, the length of the sub-picture IDs is signaled in the SPS regardless of the value of sps_subpic_id_present_flag (or its renamed version subpic_ids_explicitly_signalled_flag), although the length may also be signaled in the PPS when sub-picture IDs are explicitly signaled in the PPS, to avoid a parsing dependency of the PPS on the SPS. In this case, the length also specifies the length of the sub-picture IDs in the slice headers, even when sub-picture IDs are not explicitly signaled in the SPS or the PPS. Consequently, the length of slice_subpic_id, when present, is also specified by the sub-picture ID length signaled in the SPS.
6) Alternatively, to solve the fifth and sixth problems, a flag is added to the SPS syntax whose value equal to 1 specifies the presence of the sub-picture ID length in the SPS syntax. The presence of this flag is independent of the value of the flag indicating whether sub-picture IDs are explicitly signaled in the SPS or the PPS. The value of this flag may be either 1 or 0 when subpic_ids_explicitly_signalled_flag is equal to 0, but shall be equal to 1 when subpic_ids_explicitly_signalled_flag is equal to 1. When this flag is equal to 0 (i.e., when the sub-picture ID length is not present), the length of slice_subpic_id is specified to be Max(Ceil(Log2(sps_num_subpics_minus1 + 1)), 1) bits, as opposed to Ceil(Log2(sps_num_subpics_minus1 + 1)) bits in the latest VVC draft text.
a. Alternatively, the flag is present only when subpic_ids_explicitly_signalled_flag is equal to 0, and, when subpic_ids_explicitly_signalled_flag is equal to 1, the value of the flag is inferred to be equal to 1.
7) To solve the seventh problem, when rect_slice_flag is equal to 1, the length of slice_address is specified to be Max(Ceil(Log2(NumSlicesInSubpic[SubPicIdx])), 1) bits (see the sketch after this item's alternatives).
a. Alternatively, when rect_slice_flag is equal to 0, the length of slice_address is specified to be Max(Ceil(Log2(NumTilesInPic)), 1) bits, as opposed to Ceil(Log2(NumTilesInPic)) bits.
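A sketch (ours) of the length fix in items 6 and 7: clamping the computed length to at least one bit removes the zero-length corner cases flagged as problems 5 and 7 in section 4:

```python
import math

def old_len(n: int) -> int:
    return math.ceil(math.log2(n))  # Ceil(Log2(n)): 0 bits when n == 1

def new_len(n: int) -> int:
    return max(math.ceil(math.log2(n)), 1)  # Max(Ceil(Log2(n)), 1)

for n in (1, 2, 5):
    print(n, old_len(n), new_len(n))
# n == 1: the old length is 0 bits (not a valid length for a present
# syntax element); the new length is 1 bit.
```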
8) To solve the eighth problem, the signaling of no_pic_partition_flag is conditioned on "if( subpic_ids_in_pps_flag && pps_num_subpics_minus1 > 0 )", and the following inference is added: when not present, the value of no_pic_partition_flag is inferred to be equal to 1.
a. Alternatively, the sub-picture ID syntax (all four syntax elements) is moved to after the tile and slice syntax in the PPS, e.g., immediately before the syntax element entropy_coding_sync_enabled_flag, and the signaling of pps_num_subpics_minus1 is then conditioned on "if( no_pic_partition_flag )".
9) To solve the ninth problem, the following constraint is specified: for each particular sub-picture index (or equivalently, sub-picture position), when the sub-picture ID value changes at a picture picA compared with the sub-picture ID value of the previous picture, in decoding order, in the same layer as picA, unless picA is the first picture of the CLVS, the sub-picture at picA shall contain only coded slice NAL units with nal_unit_type equal to IDR_W_RADL, IDR_N_LP, or CRA_NUT (a hypothetical checker is sketched after the sub-items below).
a. Alternatively, the above constraint applies only to the sub-picture indices for which the value of subpic_treated_as_pic_flag[i] is equal to 1.
b. Alternatively, for items 9 and 9a, "IDR_W_RADL, IDR_N_LP, or CRA_NUT" is changed to "IDR_W_RADL, IDR_N_LP, CRA_NUT, RSV_IRAP_11, or RSV_IRAP_12".
c. Alternatively, the sub-picture at picA may contain coded slice NAL units of other types; however, those coded slice NAL units only use one or more of intra prediction, intra block copy (IBC) prediction, and palette mode prediction.
d. Alternatively, a first video unit (such as a slice, tile, or block) in the sub-picture of picA may refer to a second video unit in a previous picture. It is constrained that the second video unit and the first video unit, though possibly having different sub-picture IDs, shall be in sub-pictures with the same sub-picture index. A sub-picture index is a unique number assigned to a sub-picture, which cannot change within a CLVS.
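A hypothetical conformance checker for the main constraint of item 9 (all names are ours, for illustration only):

```python
# If the sub-picture ID at a given sub-picture index differs from the one
# in the previous picture of the same layer in decoding order, every coded
# slice of that sub-picture must be IDR_W_RADL, IDR_N_LP or CRA_NUT.
ALLOWED = {"IDR_W_RADL", "IDR_N_LP", "CRA_NUT"}

def check_item9(prev_ids: list[int], curr_ids: list[int],
                curr_slice_nal_types: list[list[str]],
                is_first_pic_of_clvs: bool) -> bool:
    if is_first_pic_of_clvs:
        return True  # the constraint does not apply to the first picture
    for idx, (prev_id, curr_id) in enumerate(zip(prev_ids, curr_ids)):
        if prev_id != curr_id and any(
                t not in ALLOWED for t in curr_slice_nal_types[idx]):
            return False
    return True

# Sub-picture index 1 changes ID (5 -> 9) but carries a TRAIL_NUT slice:
print(check_item9([4, 5], [4, 9],
                  [["TRAIL_NUT"], ["TRAIL_NUT"]], False))  # -> False
```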
10) For a particular sub-picture index (or equivalently, sub-picture position), an indication of which sub-pictures, identified by layer ID values together with sub-picture index or sub-picture ID values, are allowed to be used as reference pictures may be signaled in the bitstream.
11) For the multi-layer case, inter-layer prediction (ILP) from sub-pictures in different layers is allowed when certain conditions are satisfied (e.g., possibly depending on the number of sub-pictures or the positions of the sub-pictures), while ILP is disallowed when the conditions are not satisfied.
a. In one example, even when two sub-pictures in two layers have the same sub-picture index value but different sub-picture ID values, inter-layer prediction may still be allowed when certain conditions are satisfied.
i. In one example, the certain conditions are "when the two layers are associated with different view order indices/view order ID values".
b. If a first sub-picture in a first layer and a second sub-picture in a second layer have the same sub-picture index, it may be constrained that the two sub-pictures must be at collocated positions and/or of reasonable width/height.
c. If a first sub-picture may refer to a second reference sub-picture, it may be constrained that the first sub-picture in the first layer and the second sub-picture in the second layer must be at collocated positions and/or of reasonable width/height.
12) Whether the current sub-picture may use inter-layer prediction (ILP) from sample values and/or other values (e.g., motion information and/or coding mode information) associated with regions or sub-pictures of reference layers is signaled in the bitstream, such as in the VPS/DPS/SPS/PPS/APS/sequence header/picture header.
a. In one example, the reference regions or sub-pictures of a reference layer are those containing at least one collocated sample of a sample within the current sub-picture.
b. In one example, the reference regions or sub-pictures of a reference layer are outside the collocated region of the current sub-picture.
c. In one example, such an indication is signaled in one or more SEI messages.
d. In one example, such an indication is signaled regardless of whether the reference layers have multiple sub-pictures, and, when multiple sub-pictures are present in one or more reference layers, regardless of whether the sub-picture partitioning is aligned with that of the current picture, such that each sub-picture in the current picture has a corresponding sub-picture in a reference picture covering the collocated region, and further regardless of whether the corresponding/collocated sub-picture has the same sub-picture ID value as the current sub-picture.
6. Examples of the embodiments
The following are some example embodiments of all the inventive aspects summarized above in section 5, except item 8, which are applicable to the VVC specification. The changed text is based on the latest VVC text in JVET-P2001-v14. The most relevant parts that have been added or modified are shown in underlined, bold, italicized text, and the most relevant removed parts are highlighted in bold double square brackets, e.g., [[a]] denotes that "a" is removed. There are some other changes that are editorial in nature and thus not highlighted.
6.1. First embodiment
7.3.2.3 sequence parameter set RBSP syntax
[SPS RBSP syntax table shown as an image in the original]
7.4.3.3 sequence parameter set RBSP semantics
...
[semantics text shown as an image in the original]
NOTE 2 - When the bitstream is the result of the sub-bitstream extraction process and contains only a subset of the sub-pictures of the input bitstream of the sub-bitstream extraction process, it may be necessary to set the value of [syntax element shown as an image in the original] in the SPS equal to 1.
sps_num_subpics_minus1 plus 1 specifies the number of sub-pictures.
[added text shown as images in the original]
When not present, the value of sps_num_subpics_minus1 is inferred to be equal to 0.
subpic_ctu_top_left_x[i] specifies the horizontal position of the top-left CTU of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_width_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_ctu_top_left_x[i] is inferred to be equal to 0.
subpic_ctu_top_left_y[i] specifies the vertical position of the top-left CTU of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_height_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_ctu_top_left_y[i] is inferred to be equal to 0.
subpic_width_minus1[i] plus 1 specifies the width of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_width_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_width_minus1[i] is inferred to be equal to Ceil(pic_width_max_in_luma_samples / CtbSizeY) - 1.
subpic_height_minus1[i] plus 1 specifies the height of the i-th sub-picture in units of CtbSizeY. The length of the syntax element is Ceil(Log2(pic_height_max_in_luma_samples / CtbSizeY)) bits. When not present, the value of subpic_height_minus1[i] is inferred to be equal to Ceil(pic_height_max_in_luma_samples / CtbSizeY) - 1.
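The fixed bit lengths and the inferred defaults described above can be mirrored in a short sketch (the picture and CTU dimensions are assumed example values):

```python
import math

def ceil_log2(x: int) -> int:
    # Equals Ceil(Log2(x)) for x >= 1.
    return (x - 1).bit_length()

pic_w, pic_h, ctb = 1920, 1080, 128          # assumed example dimensions
w_ctbs = math.ceil(pic_w / ctb)              # 15 CTUs across
h_ctbs = math.ceil(pic_h / ctb)              # 9 CTUs down

# Bit lengths of subpic_ctu_top_left_x[i] / subpic_width_minus1[i] etc.:
assert ceil_log2(w_ctbs) == 4                # Ceil(Log2(15)) = 4 bits
assert ceil_log2(h_ctbs) == 4                # Ceil(Log2(9))  = 4 bits

# Inferred defaults when absent: one sub-picture covering the whole picture.
subpic_ctu_top_left_x = 0
subpic_width_minus1 = w_ctbs - 1             # Ceil(pic_w / CtbSizeY) - 1
```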
subpic_treated_as_pic_flag[i] equal to 1 specifies that the i-th sub-picture of each coded picture in the CLVS is treated as a picture in the decoding process excluding in-loop filtering operations. subpic_treated_as_pic_flag[i] equal to 0 specifies that the i-th sub-picture of each coded picture in the CLVS is not treated as a picture in the decoding process excluding in-loop filtering operations. When not present, the value of subpic_treated_as_pic_flag[i] is inferred to be equal to 0.
loop_filter_across_subpic_enabled_flag[i] equal to 1 specifies that in-loop filtering operations may be performed across the boundaries of the i-th sub-picture in each coded picture in the CLVS.
loop_filter_across_subpic_enabled_flag[i] equal to 0 specifies that in-loop filtering operations are not performed across the boundaries of the i-th sub-picture in each coded picture in the CLVS. When not present, the value of loop_filter_across_subpic_enabled_flag[i] is inferred to be equal to 1.
It is a requirement of bitstream conformance that the following constraints apply:
For any two sub-pictures subpicA and subpicB, when the sub-picture index of subpicA is less than the sub-picture index of subpicB, any codec slice NAL unit of subpicA should precede any codec slice NAL unit of subpicB in decoding order.
The shapes of the sub-pictures should be such that each sub-picture, when decoded, has its entire left boundary and entire top boundary consist of picture boundaries or of boundaries of previously decoded sub-pictures.
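The decoding-order constraint above can be checked with a minimal sketch; the helper below is illustrative and takes, in bitstream order, the sub-picture index of each codec slice NAL unit:

```python
def check_subpic_nal_order(slice_subpic_indices):
    # Codec slice NAL units must appear in non-decreasing order of the
    # sub-picture index of the sub-picture they belong to.
    return all(a <= b for a, b in
               zip(slice_subpic_indices, slice_subpic_indices[1:]))

assert check_subpic_nal_order([0, 0, 1, 2, 2])      # conforming order
assert not check_subpic_nal_order([0, 2, 1])        # subpicB before subpicA
```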
[added text shown as an image in the original]
sps_subpic_id[i] specifies the sub-picture ID of the i-th sub-picture. The length of the sps_subpic_id[i] syntax element is sps_subpic_id_len_minus1 + 1 bits.
...
7.3.2.4 Picture parameter set RBSP syntax
[PPS RBSP syntax table shown as images in the original]
7.4.3.4 picture parameter set RBSP semantics
...
[semantics text shown as an image in the original]
pps_num_subpics_minus1 should be equal to sps_num_subpics_minus1.
pps_subpic_id_len_minus1 should be equal to sps_subpic_id_len_minus1.
pps_subpic_id[i] specifies the sub-picture ID of the i-th sub-picture. The length of the pps_subpic_id[i] syntax element is pps_subpic_id_len_minus1 + 1 bits.
[added text shown as images in the original]
It is a requirement of bitstream conformance that, for any i and j in the range of 0 to sps_num_subpics_minus1, inclusive, when i is less than j, SubpicIdList[i] should be less than SubpicIdList[j].
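A minimal conformance-check sketch for this constraint (the helper name is illustrative):

```python
def check_subpic_id_list(subpic_id_list):
    # For i < j, SubpicIdList[i] must be less than SubpicIdList[j]; a
    # strictly increasing list also guarantees that all IDs are unique.
    return all(a < b for a, b in zip(subpic_id_list, subpic_id_list[1:]))

assert check_subpic_id_list([0, 1, 5, 9])
assert not check_subpic_id_list([0, 3, 3])
```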
...
rect_slice_flag equal to 0 specifies that tiles within each slice are in raster scan order and the slice information is not signaled in the PPS. rect_slice_flag equal to 1 specifies that tiles within each slice cover a rectangular region of the picture and the slice information is signaled in the PPS. When not present, rect_slice_flag is inferred to be equal to 1. When [syntax element shown as an image in the original] is equal to 1, the value of rect_slice_flag should be equal to 1.
single_slice_per_subpic_flag equal to 1 specifies that each sub-picture consists of one and only one rectangular slice. single_slice_per_subpic_flag equal to 0 specifies that each sub-picture may consist of one or more rectangular slices. When [syntax element shown as an image in the original] is equal to 0, single_slice_per_subpic_flag should be equal to 0. When single_slice_per_subpic_flag is equal to 1, num_slices_in_pic_minus1 is inferred to be equal to sps_num_subpics_minus1.
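The inference rules above can be mirrored in a short sketch (function and argument names are illustrative assumptions):

```python
def infer_slice_layout(single_slice_per_subpic_flag, sps_num_subpics_minus1,
                       rect_slice_flag=None):
    # rect_slice_flag is inferred to be 1 when not present.
    if rect_slice_flag is None:
        rect_slice_flag = 1
    if single_slice_per_subpic_flag == 1:
        # One rectangular slice per sub-picture: the number of slices in
        # the picture follows directly from the number of sub-pictures.
        num_slices_in_pic_minus1 = sps_num_subpics_minus1
    else:
        num_slices_in_pic_minus1 = None   # signaled in the PPS instead
    return rect_slice_flag, num_slices_in_pic_minus1

assert infer_slice_layout(1, 3) == (1, 3)    # 4 sub-pictures -> 4 slices
```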
...
7.3.7.1 General slice header syntax
[general slice header syntax table shown as an image in the original]
7.4.8.1 General slice header semantics
...
slice_subpic_id specifies the sub-picture identifier of the sub-picture that contains the slice.
[added text shown as images in the original]
When not present, the value of slice_subpic_id is inferred to be equal to 0.
The variable SubPicIdx is derived such that SubpicIdList[SubPicIdx] is equal to the value of slice_subpic_id.
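A minimal sketch of this derivation, assuming SubpicIdList is available as a Python list:

```python
def derive_subpic_idx(subpic_id_list, slice_subpic_id):
    # SubPicIdx is the index such that
    # SubpicIdList[SubPicIdx] == slice_subpic_id.
    return subpic_id_list.index(slice_subpic_id)

assert derive_subpic_idx([0, 4, 7], 7) == 2
```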
slice_address specifies the slice address of the slice. When not present, the value of slice_address is inferred to be equal to 0.
If rect_slice_flag is equal to 0, the following applies:
The slice address is the raster scan tile index.
The length of slice_address is Ceil(Log2(NumTilesInPic)) bits.
The value of slice_address should be in the range of 0 to NumTilesInPic - 1, inclusive.
Otherwise (rect_slice_flag is equal to 1), the following applies:
The slice address is the sub-picture-level slice index of the slice.
The length of slice_address is [expression shown as an image in the original] bits.
The value of slice_address should be in the range of 0 to NumSlicesInSubpic[SubPicIdx] - 1, inclusive.
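The length and range rules for slice_address can be mirrored as follows; in the rectangular case, the length expression hidden by the image above is assumed to be Ceil(Log2(NumSlicesInSubpic[SubPicIdx])), consistent with the stated value range:

```python
def slice_address_bits(rect_slice_flag, num_tiles_in_pic=0,
                       num_slices_in_subpic=0):
    n = num_tiles_in_pic if rect_slice_flag == 0 else num_slices_in_subpic
    return (n - 1).bit_length()              # equals Ceil(Log2(n)) for n >= 1

# Raster-scan mode: the address ranges over the tiles of the picture.
assert slice_address_bits(0, num_tiles_in_pic=12) == 4      # Ceil(Log2(12))
# Rectangular mode: it ranges over the slices of sub-picture SubPicIdx.
assert slice_address_bits(1, num_slices_in_subpic=3) == 2   # Ceil(Log2(3))
```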
It is a requirement of bitstream conformance that the following constraints apply:
- If rect_slice_flag is equal to 0 or [syntax element shown as an image in the original] is equal to 0, the value of slice_address should not be equal to the value of slice_address of any other codec slice NAL unit of the same codec picture.
- Otherwise, the pair of slice_subpic_id and slice_address values should not be equal to the pair of slice_subpic_id and slice_address values of any other codec slice NAL unit of the same codec picture.
When rect_slice_flag is equal to 0, the slices of the picture should be arranged in ascending order of their slice_address values.
The shape of the slice of the picture should be such that each CTU, when decoded, should have its entire left boundary and entire upper boundary composed of the picture boundaries or of the boundaries of the previously decoded CTU(s).
...
Fig. 5 is a block diagram illustrating an example video processing system 500 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of system 500. The system 500 may include an input 502 for receiving video content. The video content may be received in a raw or uncompressed format (e.g., 8 or 10 bit multi-component pixel values) or may be in a compressed or encoded format. Input 502 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as ethernet, Passive Optical Network (PON), etc., and wireless interfaces such as Wi-Fi or cellular interfaces.
System 500 may include a codec component 504 that may implement various codecs or encoding methods described in this document. Codec component 504 may reduce the average bit rate of the video from input 502 to the output of codec component 504 to produce a codec representation of the video. Codec techniques are therefore sometimes referred to as video compression or video transcoding techniques. The output of codec component 504 may be stored or transmitted via connected communication, as represented by component 506. Component 508 may use the stored or communicated bitstream (or codec) representation of the video received at input 502 to generate pixel values or displayable video that is sent to display interface 510. The process of generating user-viewable video from a bitstream representation is sometimes referred to as video decompression. Further, while certain video processing operations are referred to as "codec" operations or tools, it should be understood that codec tools or operations are used at the encoder and that corresponding decoding tools or operations that reverse the results of the codec will be performed by the decoder.
Examples of a peripheral bus interface or display interface may include a Universal Serial Bus (USB), a High Definition Multimedia Interface (HDMI), a DisplayPort, and so on. Examples of storage interfaces include SATA (Serial Advanced Technology Attachment), PCI, IDE interfaces, and the like. The techniques described in this document may be implemented in various electronic devices, such as mobile phones, laptops, smartphones, or other devices capable of performing digital data processing and/or video display.
Fig. 6 is a block diagram of a video processing apparatus 600. The apparatus 600 may be used to implement one or more of the methods described herein. The apparatus 600 may be embodied in a smartphone, tablet, computer, internet of things (IoT) receiver, and/or the like. The apparatus 600 may include one or more processors 602, one or more memories 604, and video processing hardware 606. The processor(s) 602 may be configured to implement one or more of the methods described in this document. Memory(s) 604 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 606 may be used to implement some of the techniques described in this document in hardware circuits.
Fig. 7 is a block diagram illustrating an example video codec system 100 that may utilize techniques of this disclosure.
As shown in fig. 7, the video codec system 100 may include a source device 110 and a target device 120. Source device 110 generates encoded video data, and thus may be referred to as a video encoding device. Target device 120 may decode the encoded video data generated by source device 110, and thus may be referred to as a video decoding device.
The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
The video source 112 may include a source such as a video capture device, an interface for receiving video data from a video content provider and/or a computer graphics system for generating video data, or a combination of such sources. The video data may include one or more pictures. The video encoder 114 encodes video data from the video source 112 to generate a bitstream. The bitstream may comprise a sequence of bits forming a codec representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to the target device 120 over the network 130a via the I/O interface 116. The encoded video data may also be stored on a storage medium/server 130b for access by the target device 120.
Target device 120 may include I/O interface 126, video decoder 124, and display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may retrieve encoded video data from source device 110 or storage medium/server 130b. The video decoder 124 may decode the encoded video data. The display device 122 may display the decoded video data to a user. The display device 122 may be integrated with the target device 120, or may be external to the target device 120, in which case the target device 120 is configured to interface with an external display device.
The video encoder 114 and the video decoder 124 may operate in accordance with video compression standards, such as the High Efficiency Video Coding (HEVC) standard, the Versatile Video Coding (VVC) standard, and other current and/or future standards.
Fig. 8 is a block diagram illustrating an example of a video encoder 200, which video encoder 200 may be the video encoder 114 in the system 100 shown in fig. 7.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 8, video encoder 200 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video encoder 200. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of the video encoder 200 may include a partition unit 201, a prediction unit 202, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy encoding unit 214; the prediction unit 202 may include a mode selection unit 203, a motion estimation unit 204, a motion compensation unit 205, and an intra prediction unit 206.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, the prediction unit 202 may include an Intra Block Copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture in which the current video block is located.
Furthermore, some components (such as the motion estimation unit 204 and the motion compensation unit 205) may be highly integrated, but are separately represented in the example of fig. 8 for explanatory purposes.
The partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support multiple video block sizes.
Mode selection unit 203 may select one of the coding modes (intra or inter), e.g., based on the error results, and provide the resulting intra or inter coded blocks to residual generation unit 207 to produce residual block data and to reconstruction unit 212 to reconstruct the coded blocks for use as reference pictures. In some examples, mode selection unit 203 may select a Combined Intra and Inter Prediction (CIIP) mode in which prediction is based on inter prediction signals and intra prediction signals. In the case of inter prediction, mode selection unit 203 may also select the resolution of the motion vector for the block (e.g., sub-pixel or integer-pixel precision).
To perform inter prediction on the current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205 may determine a predictive video block for the current video block based on decoded samples and motion information for pictures from buffer 213 other than the picture associated with the current video block.
The motion estimation unit 204 and the motion compensation unit 205 may perform different operations on the current video block, e.g., depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a list 0 or list 1 reference picture. Motion estimation unit 204 may then generate a reference index indicating a reference picture in list 0 or list 1 that includes the reference video block and a motion vector indicating spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, the prediction direction indicator, and the motion vector as motion information of the current video block. The motion compensation unit 205 may generate a prediction video block of the current block based on a reference video block indicated by motion information of the current video block.
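As a rough illustration of the uni-directional case, the following toy full-search sketch returns a list 0 reference index and a motion vector minimizing the sum of absolute differences (SAD); it is not the encoder's actual search algorithm:

```python
import numpy as np

def motion_estimate(cur_block, ref_pic, x, y, search_range=4):
    # Toy full search over a +/- search_range window around (x, y).
    h, w = cur_block.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > ref_pic.shape[0] or xx + w > ref_pic.shape[1]:
                continue
            sad = np.abs(ref_pic[yy:yy + h, xx:xx + w].astype(int)
                         - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return {"ref_idx": 0, "mv": best_mv}   # reference index into list 0

ref = np.arange(64, dtype=np.uint8).reshape(8, 8)
cur = ref[2:6, 3:7]                        # block displaced by one sample
assert motion_estimate(cur, ref, x=2, y=2, search_range=2)["mv"] == (1, 0)
```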
In other examples, motion estimation unit 204 may perform bi-prediction on the current video block, and motion estimation unit 204 may search for a reference video block of the current video block in a reference picture in list 0 and may also search for another reference video block of the current video block in a reference picture in list 1. Motion estimation unit 204 may then generate a reference index indicating the reference picture in list 0 and list 1 that includes the reference video block and a motion vector indicating the spatial displacement between the reference video block and the current video block. Motion estimation unit 204 may output the reference index and the motion vector of the current video block as motion information for the current video block. Motion compensation unit 205 may generate a prediction video block for the current video block based on the reference video block indicated by the motion information for the current video block.
In some examples, motion estimation unit 204 may output the complete set of motion information for the decoding process of the decoder.
In some examples, motion estimation unit 204 may not output the full set of motion information for the current video. Instead, motion estimation unit 204 may signal motion information for the current video block with reference to motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of the neighboring video block.
In one example, motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value indicating to video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify another video block and a Motion Vector Difference (MVD) in a syntax structure associated with the current video block. The motion vector difference indicates a difference between a motion vector of the current video block and a motion vector of the indicated video block. The video decoder 300 may use the indicated motion vector and motion vector difference for the video block to determine the motion vector for the current video block.
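A minimal decoder-side sketch of this reconstruction (motion vectors represented as illustrative (x, y) tuples):

```python
def reconstruct_mv(predictor_mv, mvd):
    # The decoder adds the signalled motion vector difference to the
    # motion vector of the indicated video block.
    return (predictor_mv[0] + mvd[0], predictor_mv[1] + mvd[1])

assert reconstruct_mv((3, -1), (1, 2)) == (4, 1)
```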
As described above, the video encoder 200 may predictively signal the motion vectors. Two examples of prediction signaling techniques that may be implemented by video encoder 200 include Advanced Motion Vector Prediction (AMVP) and Merge mode signaling.
The intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on the current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and a variety of syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by a negative sign) the predictive video block(s) of the current video block from the current video block. The residual data for the current video block may include residual video blocks corresponding to different sample components of samples in the current video block.
In other examples, there may be no residual data for the current video block, e.g., in skip mode, and residual generation unit 207 may not perform the subtraction operation.
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more Quantization Parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transform, respectively, to the transform coefficient video blocks to reconstruct residual video blocks from the transform coefficient video blocks. Reconstruction unit 212 may add the reconstructed residual video block to corresponding sample points from one or more prediction video blocks generated by prediction unit 202 to generate a reconstructed video block associated with the current block for storage in buffer 213.
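A simplified sketch of the quantization and inverse quantization steps; the step-size law used here is the approximate HEVC/VVC relation (step size doubling every 6 QP), assumed for illustration, whereas the real process uses integer scaling tables:

```python
def quant_step(qp):
    # Approximate step-size law: doubles every 6 QP, with step(4) = 1.
    return 2 ** ((qp - 4) / 6)

def quantize(coeff, qp):
    return round(coeff / quant_step(qp))

def dequantize(level, qp):
    return level * quant_step(qp)

residual = 9.0
level = quantize(residual, qp=22)          # step = 8 -> level = 1
assert dequantize(level, qp=22) == 8.0     # reconstruction error of 1.0
```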
After reconstruction unit 212 reconstructs the video blocks, a loop filtering operation may be performed to reduce video block artifacts in the video blocks.
Entropy encoding unit 214 may receive data from other functional components of video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
Fig. 9 is a block diagram illustrating an example of a video decoder 300, which may be the video decoder 124 in the system 100 shown in fig. 7.
Video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 9, video decoder 300 includes various functional components. The techniques described in this disclosure may be shared among various components of video decoder 300. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of fig. 9, the video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transform unit 305, and a reconstruction unit 306 and a buffer 307. In some examples, video decoder 300 may perform a decoding pass that is generally the inverse of the encoding pass described with respect to video encoder 200 (fig. 8).
The entropy decoding unit 301 may retrieve the encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). The entropy decoding unit 301 may decode the entropy-coded video data, and from the entropy-decoded video data, the motion compensation unit 302 may determine motion information including a motion vector, a motion vector precision, a reference picture list index, and other motion information. For example, the motion compensation unit 302 may determine such information by performing AMVP and Merge modes.
The motion compensation unit 302 may generate motion-compensated blocks, possibly performing interpolation based on an interpolation filter. The syntax element may include an identifier of the interpolation filter to be used with sub-pixel precision.
The motion compensation unit 302 may calculate interpolated values for sub-integer pixels of the reference block using interpolation filters such as those used by the video encoder 200 during encoding of the video block. The motion compensation unit 302 may determine an interpolation filter used by the video encoder 200 according to the received syntax information and generate a prediction block using the interpolation filter.
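A toy sketch of sub-integer interpolation with a 4-tap filter whose coefficients are illustrative only (the actual VVC luma interpolation filters are 8-tap):

```python
def interpolate_half_pel(samples, pos):
    # 4-tap half-sample filter normalised by 8; clips to the 8-bit range.
    taps = (-1, 5, 5, -1)
    window = samples[pos - 1:pos + 3]
    val = sum(c * s for c, s in zip(taps, window))
    return min(255, max(0, (val + 4) >> 3))

# Half-sample position between samples[2] = 30 and samples[3] = 40.
assert interpolate_half_pel([10, 20, 30, 40, 50], 2) == 35
```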
Motion compensation unit 302 may use some syntax information to determine the size of blocks used to encode frame(s) and/or slice(s) of an encoded video sequence, partition information describing how each macroblock of a picture of the encoded video sequence is partitioned, a mode indicating how each partition is encoded, one or more reference frames (and reference frame lists) of each inter-coded block, and other information used to decode the encoded video sequence.
The intra prediction unit 303 may form a prediction block from spatially neighboring blocks using, for example, an intra prediction mode received in a bitstream. The inverse quantization unit 304 inversely quantizes (i.e., dequantizes) the quantized video block coefficients provided in the bitstream and decoded by the entropy decoding unit 301. The inverse transform unit 305 applies an inverse transform.
The reconstruction unit 306 may add the residual block to the corresponding prediction block generated by the motion compensation unit 302 or the intra prediction unit 303 to form a decoded block. A deblocking filter may also be applied to filter the decoded blocks to remove blocking artifacts, if desired. The decoded video blocks are then stored in a buffer 307 that provides reference blocks for subsequent motion compensation/intra prediction and also generates decoded video for presentation on a display device.
Figs. 10-12 illustrate example methods by which the above-described aspects may be implemented in embodiments such as those shown in figs. 5-9.
Fig. 10 shows a flow diagram of an example method 1000 of video processing. The method 1000 includes, at operation 1010, performing a conversion between a video including a plurality of pictures including one or more sub-pictures and a bitstream of the video, the bitstream conforming to a format rule that specifies that the current sub-picture is not allowed to reference a previous sub-picture for inter prediction in a case where the current sub-picture in the current picture has a current identifier different from an identifier of the previous sub-picture at the same position in the previous picture.
Fig. 11 shows a flow diagram of an example method 1100 of video processing. The method 1100 includes, at operation 1110, performing a conversion between video including a video region and a bitstream of video including a plurality of codec layers, the bitstream conforming to a format rule that specifies whether inter-layer prediction (ILP) between video regions in different ones of the plurality of codec layers is allowed based on a condition.
Fig. 12 shows a flow diagram of an example method 1200 of video processing. The method 1200 includes, at operation 1210, performing a conversion between a video including a current video region and a bitstream of the video including a plurality of codec layers, the bitstream conforming to a format rule that specifies whether the bitstream includes an indication of whether inter-layer prediction (ILP) between the current video region and a video region in a reference layer is allowed.
A list of solutions preferred by some embodiments is provided next.
1. A method of video processing, comprising performing a conversion between a video comprising a plurality of pictures including one or more sub-pictures and a bitstream of the video, wherein the bitstream conforms to a format rule that specifies that a current sub-picture in the current picture is not allowed to reference a previous sub-picture for inter-prediction if the current sub-picture has a current identifier that is different from an identifier of the previous sub-picture at the same location in a previous picture.
2. The method of solution 1, wherein the current identifier is a current sub-picture identifier or a current sub-picture position.
3. The method of solution 2, wherein the current sub-picture is not allowed to reference the previous sub-picture for inter prediction because the current identifier is different from an identifier of the previous sub-picture.
4. The method of solution 3, wherein the current sub-picture is coded using intra coding since the current identifier is different from the identifier of the previous sub-picture.
5. The method of solution 3 or 4, wherein a current sub-picture comprises only one or more coded slices of an Instantaneous Decoding Refresh (IDR) sub-picture or a Clean Random Access (CRA) sub-picture.
6. The method of solution 3 or 4, wherein the current sub-picture comprises only one or more coded slices of an Intra Random Access Point (IRAP) sub-picture.
7. The method of solution 3 or 4, wherein the current sub-picture includes only codec slice Network Abstraction Layer (NAL) units having one or more of a predetermined set of NAL unit types.
8. The method of solution 7, wherein the set of predetermined NAL unit types includes IDR_W_RADL, IDR_N_LP, and CRA_NUT.
9. The method of solution 7, wherein the set of predetermined NAL unit types includes IDR_W_RADL, IDR_N_LP, CRA_NUT, RSV_IRAP_11, and RSV_IRAP_12.
10. The method of solution 3 or 4, wherein the bitstream comprises a syntax element indicating that the sub-picture is considered as a picture.
11. The method of solution 3 or 4, wherein the current sub-picture comprises a coding slice Network Abstraction Layer (NAL) unit using one or more of intra prediction, Intra Block Copy (IBC) prediction, and palette mode prediction.
12. The method of solution 2, wherein a first video unit in the current sub-picture references a second video unit in a previous sub-picture, wherein a sub-picture index of the current sub-picture is the same as a sub-picture index of the previous sub-picture, and wherein the sub-picture index is a number assigned to a sub-picture that cannot change in a Coded Layer Video Sequence (CLVS).
13. The method of solution 12, wherein the sub-picture identifier of the current sub-picture is the same as the sub-picture identifier of the previous sub-picture.
14. The method of solution 12, wherein the sub-picture identifier of the current sub-picture is different from the sub-picture identifier of the previous sub-picture.
15. The method according to solution 2, wherein the current sub-picture refers to a previous sub-picture and the indication of the current sub-picture is signaled in the bitstream since the current sub-picture is identified by the layer identifier value and the sub-picture index or sub-picture identifier.
16. A method of video processing, comprising performing a conversion between a video comprising a video region and a bitstream of the video comprising a plurality of codec layers, wherein the bitstream conforms to a format rule, and wherein the format rule specifies whether inter-layer prediction (ILP) between video regions in different ones of the plurality of codec layers is allowed based on a condition.
17. The method according to solution 16, wherein the video region is a sub-picture, and wherein inter-layer prediction is allowed.
18. The method of solution 17, wherein two sub-pictures in different coding layers comprise the same sub-picture index value and different sub-picture identifier values.
19. The method of solution 18, wherein the condition specifies that two layers are associated with different view order indices or different view order identifier values.
20. The method of solution 17, wherein the first sub-picture and the second sub-picture in different coding layers are in co-located positions or have reasonable height or width due to the first sub-picture and the second sub-picture having the same sub-picture index.
21. The method of solution 17, wherein the first sub-picture and the second sub-picture in different coding layers are in co-located positions or have a reasonable height or width due to the first sub-picture referencing the second sub-picture.
22. A method of video processing comprising performing a conversion between a video comprising a current video region and a bitstream of the video comprising a plurality of codec layers, wherein the bitstream conforms to a format rule, and wherein the format rule specifies that the bitstream includes an indication of whether inter-layer prediction (ILP) between the current video region and a video region in a reference layer is allowed.
23. The method according to solution 22, wherein the video region is a sub-picture.
24. The method of solution 23, wherein the indication is signaled in a Video Parameter Set (VPS), a Decoding Parameter Set (DPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), an Adaptive Parameter Set (APS), a sequence header, or a picture header.
25. The method according to solution 23, wherein the video region in the reference layer comprises at least one sample that is co-located with a sample of the current video region.
26. The method according to solution 23, wherein the video region in the reference layer is outside the co-located region of the current video region.
27. The method of solution 23, wherein the indication is signaled in one or more Supplemental Enhancement Information (SEI) messages.
28. The method of solution 23, wherein the indication is signaled regardless of whether the reference layer comprises multiple sub-pictures.
29. The method of solution 23, wherein the reference layer comprises a plurality of sub-pictures, and wherein the indication is signaled regardless of whether partitioning the picture into the plurality of sub-pictures is aligned with the current picture such that each sub-picture in the reference layer is co-located with a corresponding sub-picture in the current picture.
30. The method of any of solutions 1 to 29, wherein converting comprises decoding video from a bitstream.
31. The method of any of solutions 1 to 29, wherein converting comprises encoding the video into a bitstream.
32. A method of storing a bitstream representing a video to a computer readable recording medium, comprising generating a bitstream from a video according to the method of any one or more of solutions 1 to 29; and writing the bitstream to a computer-readable recording medium.
33. A video processing apparatus comprising a processor configured to implement the method of any one or more of solutions 1 to 32.
34. A computer readable medium having stored thereon instructions which, when executed, cause a processor to implement the method of one or more of solutions 1 to 32.
35. A computer readable medium storing a bitstream generated according to any one or more of solutions 1 to 32.
36. A video processing apparatus for storing a bitstream, wherein the video processing apparatus is configured to implement the method of any one or more of solutions 1 to 32.
Another list of solutions preferred by some embodiments is provided next.
P1. a method of video processing, comprising performing a conversion between a picture of a video and a codec representation of the video, wherein a plurality of sub-pictures in the picture are included in the codec representation as fields whose bit widths depend on a value of the number of sub-pictures.
P2. the method according to solution P1, wherein the field represents the number of sub-pictures using a codeword.
P3. the method according to solution P2, wherein the codeword comprises a Golomb codeword.
P4. the method according to any of solutions P1 to P3, wherein the value of the number of sub-pictures is limited to be less than or equal to an integer number of coding tree blocks that fit a picture.
P5. the method according to any of solutions P1 to P4, wherein the field depends on the codec level associated with the codec representation.
P6. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, wherein the format rule specifies that a syntax element indicating a sub-picture identifier is omitted since the video region does not include any sub-pictures.
P7. the method according to solution P6, wherein the codec representation comprises a field with a value of 0 indicating that the video region does not comprise any sub-pictures.
P8. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, wherein the format rule specifies omitting identifiers of sub-pictures in the video region at a video region header level in the codec representation.
P9. the method according to solution P8, wherein the codec representation identifies the sub-pictures by number according to the order in which they are listed in the video region header.
P10. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, wherein the format rule specifies that an identifier of a sub-picture and/or a length of an identifier of a sub-picture in the video region is included at a sequence parameter set level or a picture parameter set level.
P11. the method according to solution P10, wherein the length is included in the picture parameter set level.
P12. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, wherein the format rule specifies a field included in the codec representation at a video sequence level to indicate whether a sub-picture identifier length field is included in the codec representation at the video sequence level.
P13. the method according to solution P12, wherein the format rule specifies that a further field in the codec representation is set to "1" if said field indicates that the length identifier of the video region is included in the codec representation.
P14. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, and wherein the format rule specifies that an indication is included in the codec representation indicating whether the video region can be used as a reference picture.
P15. the method according to solution P14, wherein the indication comprises a layer ID and an index or ID value associated with the video area.
P16. a method of video processing, comprising performing a conversion between a video region of a video and a codec representation of the video, wherein the codec representation complies with a format rule, and wherein the format rule specifies that an indication indicating whether the video region can use inter-layer prediction (ILP) from a plurality of sample values associated with a video region of a reference layer is included in the codec representation.
P17. the method according to solution P16, wherein the indication is included at sequence level, picture level or video level.
P18. the method according to solution P16, wherein the video region of the reference layer comprises at least one sample co-located with a sample within the video region.
P19. the method according to solution P16, wherein the indication is included in one or more Supplemental Enhancement Information (SEI) messages.
P20. the method according to any of the preceding solutions, wherein a video region comprises a sub-picture of a video.
P21. the method according to any of the preceding solutions, wherein converting comprises parsing and decoding the codec representation to generate video.
P22. the method according to any of the preceding solutions, wherein converting comprises encoding video to generate a codec representation.
P23. a video decoding apparatus comprising a processor configured to implement the method of one or more of solutions P1 to P22.
P24. a video coding apparatus comprising a processor configured to implement the method of one or more of solutions P1-P22.
P25. a computer program product having stored thereon computer code which, when executed by a processor, causes the processor to implement the method of any of solutions P1 to P22.
In some embodiments, the bitstream generated according to the above-described method may be stored on a computer-readable medium.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language file), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be run on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such a device. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or claimed content, but rather as descriptions of features specific to particular embodiments of particular technologies. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples have been described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (36)

1. A method of video processing, comprising:
performing a conversion between a video comprising a plurality of pictures including one or more sub-pictures and a bitstream of the video,
wherein the bitstream conforms to a format rule that specifies that a current sub-picture in a current picture is not allowed to refer, for inter prediction, to a previous sub-picture in a previous picture at the same position as the current sub-picture if the current sub-picture has a current identifier that is different from an identifier of the previous sub-picture.
2. The method of claim 1, wherein the current identifier is a current sub-picture identifier or a current sub-picture position.
3. The method of claim 2, wherein the current sub-picture is not allowed to reference the previous sub-picture for inter-prediction because the current identifier is different from an identifier of the previous sub-picture.
4. The method of claim 3, wherein the current sub-picture is coded using intra coding because the current identifier is different from the identifier of the previous sub-picture.
5. The method of claim 3 or 4, wherein the current sub-picture comprises only one or more coded slices of an Instantaneous Decoding Refresh (IDR) sub-picture or a Clean Random Access (CRA) sub-picture.
6. The method of claim 3 or 4, wherein the current sub-picture comprises only one or more coded slices of an Intra Random Access Point (IRAP) sub-picture.
7. The method of claim 3 or 4, wherein the current sub-picture comprises only codec slice Network Abstraction Layer (NAL) units having one or more of a set of predetermined NAL unit types.
8. The method of claim 7, wherein the set of predetermined NAL unit types includes IDR_W_RADL, IDR_N_LP, and CRA_NUT.
9. The method of claim 7, wherein the set of predetermined NAL unit types includes IDR_W_RADL, IDR_N_LP, CRA_NUT, RSV_IRAP_11, and RSV_IRAP_12.
10. The method of claim 3 or 4, wherein the bitstream comprises a syntax element indicating that the sub-picture is considered a picture.
11. The method of claim 3 or 4, wherein the current sub-picture comprises a codec slice Network Abstraction Layer (NAL) unit using one or more of intra prediction, Intra Block Copy (IBC) prediction, and palette mode prediction.
12. The method of claim 2, wherein a first video unit in the current sub-picture references a second video unit in the previous sub-picture, wherein a sub-picture index of the current sub-picture is the same as a sub-picture index of the previous sub-picture, and wherein the sub-picture index is a number assigned to a sub-picture that cannot change in a Coded Layer Video Sequence (CLVS).
13. The method of claim 12, wherein the sub-picture identifier of the current sub-picture is the same as the sub-picture identifier of the previous sub-picture.
14. The method of claim 12, wherein the sub-picture identifier of the current sub-picture is different from the sub-picture identifier of the previous sub-picture.
15. The method of claim 2, wherein the current sub-picture references the previous sub-picture, and an indication of the current sub-picture is signaled in a bitstream as a result of the current sub-picture being identified by a layer identifier value and a sub-picture index or a sub-picture identifier.
16. A method of video processing, comprising:
performing a conversion between a video comprising a video region and a bitstream of said video comprising a plurality of codec layers,
wherein the bitstream complies with a format rule, and
wherein the format rule specifies whether inter-layer prediction (ILP) between video regions in different ones of the plurality of codec layers is allowed based on a condition.
17. The method of claim 16, wherein the video region is a sub-picture, and wherein the inter-layer prediction is allowed.
18. The method of claim 17, wherein two sub-pictures in different coding layers comprise the same sub-picture index value and different sub-picture identifier values.
19. The method of claim 18, wherein the condition specifies that the two layers are associated with different view order indices or different view order identifier values.
20. The method of claim 17, wherein a first sub-picture and a second sub-picture in the different coding layers are in a co-located position or have a reasonable height or width due to the first sub-picture and the second sub-picture having the same sub-picture index.
21. The method of claim 17, wherein a first sub-picture and a second sub-picture in the different coding layers are in a co-located position or have a reasonable height or width due to the first sub-picture referencing the second sub-picture.
22. A method of video processing, comprising:
performing a conversion between a video including a current video region and a bitstream of the video including a plurality of codec layers,
wherein the bitstream complies with a format rule, and
wherein the format rule specifies that the bitstream includes an indication of whether inter-layer prediction (ILP) between the current video region and a video region in a reference layer is allowed.
23. The method according to claim 22, wherein the video region is a sub-picture.
24. The method of claim 23, wherein the indication is signaled in a Video Parameter Set (VPS), a Decoding Parameter Set (DPS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), an Adaptive Parameter Set (APS), a sequence header, or a picture header.
25. The method according to claim 23, wherein the video region in the reference layer includes at least one sample co-located with a sample of the current video region.
26. The method of claim 23, wherein the video region in the reference layer is outside of a co-located region of the current video region.
27. The method of claim 23, wherein the indication is signaled in one or more Supplemental Enhancement Information (SEI) messages.
28. The method of claim 23, wherein the indication is signaled regardless of whether the reference layer includes multiple sub-pictures.
29. The method of claim 23, wherein the reference layer comprises a plurality of sub-pictures, and wherein the indication is signaled regardless of whether partitioning a picture into the plurality of sub-pictures is aligned with a current picture such that each sub-picture in the reference layer is co-located with a corresponding sub-picture in the current picture.
30. The method of any of claims 1-29, wherein the converting comprises decoding the video from the bitstream.
31. The method of any of claims 1-29, wherein the converting comprises encoding the video into the bitstream.
32. A method of storing a bitstream representing a video to a computer-readable recording medium, comprising:
generating a bitstream from a video according to the method of any one or more of claims 1 to 29; and
writing the bitstream to a computer-readable recording medium.
33. A video processing apparatus comprising a processor configured to implement the method of any one or more of claims 1 to 32.
34. A computer-readable medium having instructions stored thereon that, when executed, cause a processor to implement the method of one or more of claims 1-32.
35. A computer readable medium storing a bitstream generated according to any one or more of claims 1 to 32.
36. A video processing apparatus for storing a bitstream, wherein the video processing apparatus is configured to implement the method of any one or more of claims 1 to 32.

Applications Claiming Priority (3)

US202062957123P, priority date 2020-01-04, filed 2020-01-04
US 62/957,123, priority date 2020-01-04
PCT/US2021/012035 (WO2021138652A1), priority date 2020-01-04, filed 2021-01-04, "Restrictions on inter prediction for subpicture"
