CN115836524A - Adaptive loop filtering - Google Patents

Adaptive loop filtering

Info

Publication number
CN115836524A
CN115836524A (application number CN202180029337.3A)
Authority
CN
China
Prior art keywords
video
picture
bitstream
parameter set
slice
Prior art date
Legal status
Pending
Application number
CN202180029337.3A
Other languages
Chinese (zh)
Inventor
王业奎
张莉
张凯
邓智玭
Current Assignee
Douyin Vision Co Ltd
ByteDance Inc
Original Assignee
Douyin Vision Co Ltd
ByteDance Inc
Priority date
Filing date
Publication date
Application filed by Douyin Vision Co Ltd, ByteDance Inc filed Critical Douyin Vision Co Ltd
Publication of CN115836524A publication Critical patent/CN115836524A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Several techniques for video encoding and video decoding are described. An exemplary method includes performing a conversion between a video and a bitstream of the video according to a rule. The rule specifies that the type of an adaptation parameter set is indicated before the identifier of the adaptation parameter set.

Description

Adaptive loop filtering
Cross Reference to Related Applications
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application claims the benefit of priority to International Patent Application No. PCT/CN2020/085483, filed on April 18, 2020. The entire disclosure of the foregoing application is incorporated by reference as part of the disclosure of this application for all purposes under the law.
Technical Field
This patent document relates to image and video encoding and decoding.
Background
Digital video accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Disclosure of Invention
This document discloses techniques that can be used by video encoders and decoders to process a codec representation of video using control information useful for decoding of the codec representation.
In one exemplary aspect, a video processing method is disclosed. The method includes performing a conversion between a video and a bitstream of the video according to a rule. The rule specifies that the type of an adaptation parameter set is indicated before the identifier of the adaptation parameter set.
In another exemplary aspect, a video processing method is disclosed. The method includes performing a conversion between a video and a bitstream of the video according to a rule. The rule specifies that the total number of allowed filters indicated in an adaptation parameter set is determined based on codec information in the bitstream.
In another exemplary aspect, a video processing method is disclosed. The method includes performing a conversion between a video comprising one or more video pictures and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies that two or more syntax fields in a sequence parameter set control reference picture resampling (RPR) changes in the video.
In another exemplary aspect, another video processing method is disclosed. The method includes performing a conversion between a video comprising one or more video pictures and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies that a single syntax field in a sequence parameter set controls reference picture resampling (RPR) changes in the video; and wherein the format rule specifies that inter-layer reference pictures are allowed to be resampled for the conversion regardless of the value of the single syntax field.
In another exemplary aspect, another video processing method is disclosed. The method includes performing a conversion between a video comprising one or more layers, the one or more layers comprising one or more video pictures, the one or more video pictures comprising one or more sub-pictures, and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies a first constraint on cross-layer alignment or a second constraint on the combination of sub-pictures and inter-layer picture scalability.
In another exemplary aspect, another video processing method is disclosed. The method includes performing a conversion between a video comprising one or more layers, the one or more layers comprising one or more video pictures, the one or more video pictures comprising one or more sub-pictures, and a codec representation of the video, wherein the conversion complies with a format rule that specifies that an inter-layer reference picture or a long-term reference picture is disallowed as a collocated picture for a current picture of the conversion.
In another exemplary aspect, another video processing method is disclosed. The method includes performing a conversion between a video comprising a plurality of pictures and a codec representation of the video, wherein the conversion conforms to a rule that specifies that the values of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset are the same for any two pictures in the same codec layer video sequence or codec video sequence that have the same values of pic_width_in_luma_samples and pic_height_in_luma_samples.
In another exemplary aspect, another video processing method is disclosed. The method includes performing a conversion between a video comprising a plurality of pictures and a codec representation of the video, wherein the conversion complies with a rule specifying that, in case the picture resolution or the scaling window differs between the current picture and other pictures in the same access unit, inter-layer prediction is only allowed if the current picture is an intra random access point picture.
In yet another exemplary aspect, a video encoder apparatus is disclosed. The video encoder comprises a processor configured to implement the above method.
In yet another exemplary aspect, a video decoder apparatus is disclosed. The video decoder comprises a processor configured to implement the above-described method.
In yet another example aspect, a computer-readable medium having code stored thereon is disclosed. The code embodies one of the methods described herein in the form of processor executable code.
These and other features are recited throughout this document.
Drawings
Fig. 1 shows an example of raster-scan slice partitioning of a picture, where the picture is divided into 12 tiles and 3 raster-scan slices.
Fig. 2 shows an example of rectangular slice partitioning of a picture, where the picture is divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.
Fig. 3 shows an example of a picture partitioned into tiles and rectangular slices, where the picture is divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.
Fig. 4 shows a picture partitioned into 15 tiles, 24 slices, and 24 sub-pictures.
Fig. 5 is a block diagram of an exemplary video processing system.
Fig. 6 is a block diagram of a video processing apparatus.
Fig. 7 is a flow diagram of an exemplary method for video processing.
Fig. 8 is a block diagram illustrating a video codec system according to some embodiments of the present disclosure.
Fig. 9 is a block diagram illustrating an encoder in accordance with some embodiments of the present disclosure.
Fig. 10 is a block diagram illustrating a decoder according to some embodiments of the present disclosure.
Fig. 11 shows an example of a typical sub-picture based viewport-dependent 360 ° video codec scheme.
Fig. 12 shows a sub-picture and spatial scalability based viewport-dependent 360 ° video coding scheme.
FIG. 13 is a flowchart representation of a method for video processing in accordance with the present technology.
FIG. 14 is a flow diagram representation of another method for video processing in accordance with the present technology.
Detailed Description
The section headings used in this document are for ease of understanding and do not limit the applicability of the techniques and embodiments disclosed in each section to that section only. Furthermore, the use of H.266 terminology in some of the descriptions is only for ease of understanding and is not meant to limit the scope of the disclosed techniques; as such, the techniques described herein are applicable to other video codec protocols and designs as well. In this document, editing changes to text are shown, with respect to the current draft of the VVC specification, by strikethrough indicating cancelled text and by highlighting (including boldface italics) indicating added text.
1. Overview
This document relates to video coding technologies. Specifically, it relates to 1) the combination of two or more of reference picture resampling (RPR), sub-pictures, and scalability in video coding, 2) the use of RPR between a current picture and a reference picture with the same spatial resolution, and 3) the combination of long-term reference pictures and collocated pictures. The ideas may be applied individually or in various combinations to any video codec standard or non-standard video codec that supports multi-layer video coding, e.g., the versatile video coding (VVC) standard being developed.
2. Abbreviations
APS adaptation parameter set
AU access unit
AUD access unit delimiter
AVC advanced video coding
CLVS codec layer video sequence
CCALF cross-component adaptive loop filter
CPB coded picture buffer
CRA clean random access
CTU coding tree unit
CVS codec video sequence
DCI decoding capability information
DPB decoded picture buffer
EOB end of bitstream
EOS end of sequence
GDR gradual decoding refresh
HEVC high efficiency video coding
HRD hypothetical reference decoder
IDR instantaneous decoding refresh
ILP inter-layer prediction
ILRP inter-layer reference picture
IRAP intra random access point picture
JEM joint exploration model
LTRP long-term reference picture
MCTS motion-constrained tile set
NAL network abstraction layer
OLS output layer set
PH picture header
PPS picture parameter set
PTL profile, tier and level
PU picture unit
RAP random access point
RBSP raw byte sequence payload
SEI supplemental enhancement information
SPS sequence parameter set
STRP short-term reference picture
SVC scalable video coding
VCL video coding layer
VPS video parameter set
VTM VVC test model
VUI video usability information
VVC versatile video coding
3. Preliminary discussion
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named the Joint Exploration Model (JEM). JVET meets once every quarter, and the new coding standard targets a 50% bitrate reduction compared to HEVC. The new video coding standard was officially named Versatile Video Coding (VVC) at the JVET meeting in April 2018, and the first version of the VVC Test Model (VTM) was released at that time. As there are continuing efforts contributing to VVC standardization, new coding techniques are being adopted into the VVC standard at every JVET meeting, and the VVC working draft and test model VTM are updated after every meeting. The VVC project is now aiming for technical completion (FDIS) at the July 2020 meeting.
3.1. Picture partitioning schemes in HEVC
HEVC includes four different picture partitioning schemes, namely regular slices, dependent slices, tiles, and wavefront parallel processing (WPP), which may be applied for maximum transfer unit (MTU) size matching, parallel processing, and reduced end-to-end delay.
Regular slices are similar to slices in H.264/AVC. Each regular slice is encapsulated in its own NAL unit, and in-picture prediction (intra sample prediction, motion information prediction, codec mode prediction) and entropy coding dependency across slice boundaries are disabled. Thus, a regular slice can be reconstructed independently of other regular slices within the same picture (though there may still be interdependencies due to loop filtering operations).

Regular slices are the only tool for parallelization that is also available, in virtually identical form, in H.264/AVC. Regular-slice-based parallelization does not require much inter-processor or inter-core communication (except for inter-processor or inter-core data sharing for motion compensation when decoding a predictively coded picture, which is typically much heavier than inter-processor or inter-core data sharing due to in-picture prediction). However, for the same reason, the use of regular slices can incur substantial coding overhead due to the bit cost of the slice header and due to the lack of prediction across slice boundaries. Further, regular slices (in contrast to the other tools mentioned below) also serve as the key mechanism for bitstream partitioning to match MTU size requirements, due to the in-picture independence of regular slices and the fact that each regular slice is encapsulated in its own NAL unit. In many cases, the goal of parallelization and the goal of MTU size matching place contradicting demands on the slice layout in a picture. The realization of this situation led to the development of the parallelization tools mentioned below.

Dependent slices have short slice headers and allow partitioning of the bitstream at treeblock boundaries without breaking any in-picture prediction. Basically, dependent slices provide fragmentation of regular slices into multiple NAL units, to reduce end-to-end delay by allowing a part of a regular slice to be sent out before the encoding of the entire regular slice is finished.

In WPP, the picture is partitioned into single rows of coding tree blocks (CTBs). Entropy decoding and prediction are allowed to use data from CTBs in other partitions. Parallel processing is possible through parallel decoding of CTB rows, where the start of the decoding of a CTB row is delayed by two CTBs, so as to ensure that data related to the CTBs above and to the right of the subject CTB are available before the subject CTB is decoded. Using this staggered start (which appears like a wavefront when represented graphically), parallelization is possible with up to as many processors/cores as the picture contains CTB rows; a scheduling sketch follows. Because in-picture prediction between neighboring treeblock rows within a picture is permitted, the required inter-processor/inter-core communication to enable in-picture prediction can be substantial. The WPP partitioning does not result in the production of additional NAL units compared to when it is not applied; thus, WPP is not a tool for MTU size matching. However, if MTU size matching is required, regular slices can be used with WPP, with certain coding overhead.
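To make the wavefront dependency concrete, the following is a minimal scheduling sketch of the two-CTB delay described above. It is illustrative only, not part of the HEVC specification, and all function and variable names are invented for this example:

```python
# Illustrative WPP scheduling check (not normative HEVC text).
# decoded[r] holds how many CTBs of row r have been decoded so far.
def ctb_ready(decoded, row, col):
    """True if CTB (row, col) may be decoded next under WPP."""
    in_order = decoded[row] == col          # rows decode left to right
    if row == 0:
        return in_order
    # The row above must be at least two CTBs ahead, so that the
    # above and above-right CTBs are available to the subject CTB.
    return in_order and decoded[row - 1] >= col + 2
```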
Tiles define horizontal and vertical boundaries that partition a picture into tile columns and rows. A tile column runs from the top of a picture to the bottom of the picture. Likewise, a tile row runs from the left of the picture to the right of the picture. The number of tiles in a picture can be derived simply as the number of tile columns multiplied by the number of tile rows.

The scan order of CTBs is changed to be local within a tile (in the order of a CTB raster scan of a tile), before decoding the top-left CTB of the next tile in the order of the tile raster scan of the picture. Similar to regular slices, tiles break in-picture prediction dependencies as well as entropy decoding dependencies. However, they do not need to be included into individual NAL units (same as WPP in this respect); hence, tiles cannot be used for MTU size matching. Each tile can be processed by one processor/core, and the inter-processor/inter-core communication required for in-picture prediction between processing units decoding neighboring tiles is limited to conveying the shared slice header (in case a slice spans more than one tile) and the loop-filtering-related sharing of reconstructed samples and metadata. When more than one tile or WPP segment is included in a slice, the entry point byte offset for each tile or WPP segment other than the first one in the slice is signaled in the slice header.

For simplicity, restrictions on the application of the four different picture partitioning schemes have been specified in HEVC. For most of the profiles specified in HEVC, a given codec video sequence cannot include both tiles and wavefronts. For each slice and tile, either or both of the following conditions must be fulfilled: 1) all coding tree blocks in a slice belong to the same tile; 2) all coding tree blocks in a tile belong to the same slice. Finally, a wavefront segment contains exactly one CTB row, and when WPP is in use, if a slice starts within a CTB row, it must end in the same CTB row.
With recent revisions to HEVC, HEVC specifies three MCTS-related SEI messages, namely, a temporal MCTS SEI message, an MCTS extraction information set SEI message, and an MCTS extraction information nesting SEI message.
The temporal MCTSs SEI message indicates the existence of MCTSs in the bitstream and signals the MCTSs. For each MCTS, motion vectors are restricted to point to full-sample locations inside the MCTS and to fractional-sample locations that require only full-sample locations inside the MCTS for interpolation, and the usage of motion vector candidates for temporal motion vector prediction derived from blocks outside the MCTS is disallowed. This way, each MCTS can be independently decoded without the existence of tiles not included in the MCTS.
The MCTS extraction information sets SEI message provides supplemental information that can be used in the MCTS sub-bitstream extraction (specified as part of the semantics of the SEI message) to generate a conformant bitstream for an MCTS set. The information consists of a number of extraction information sets, each defining a number of MCTS sets and containing the RBSP bytes of the replacement VPSs, SPSs, and PPSs to be used during the MCTS sub-bitstream extraction process. When extracting a sub-bitstream according to the MCTS sub-bitstream extraction process, parameter sets (VPSs, SPSs, and PPSs) need to be rewritten or replaced, and slice headers need to be slightly updated, because one or all of the slice-address-related syntax elements (including first_slice_segment_in_pic_flag and slice_segment_address) typically need to have different values.
3.2. Partitioning of pictures in VVC
In VVC, a picture is divided into one or more tile rows and one or more tile columns. A tile is a sequence of CTUs that covers a rectangular region of a picture. The CTUs in a tile are scanned in raster scan order within that tile.

A slice consists of an integer number of complete tiles or an integer number of consecutive complete CTU rows within a tile of a picture.

Two modes of slices are supported, namely the raster-scan slice mode and the rectangular slice mode. In the raster-scan slice mode, a slice contains a sequence of complete tiles in a tile raster scan of a picture. In the rectangular slice mode, a slice contains either a number of complete tiles that collectively form a rectangular region of the picture, or a number of consecutive complete CTU rows of one tile that collectively form a rectangular region of the picture. Tiles within a rectangular slice are scanned in tile raster scan order within the rectangular region corresponding to that slice.

A sub-picture contains one or more slices that collectively cover a rectangular region of a picture.
Fig. 1 shows an example of raster-scan slice partitioning of a picture, where the picture is divided into 12 tiles and 3 raster-scan slices.

Fig. 2 shows an example of rectangular slice partitioning of a picture, where the picture is divided into 24 tiles (6 tile columns and 4 tile rows) and 9 rectangular slices.

Fig. 3 shows an example of a picture partitioned into tiles and rectangular slices, where the picture is divided into 4 tiles (2 tile columns and 2 tile rows) and 4 rectangular slices.

Fig. 4 shows an example of sub-picture partitioning of a picture, where the picture is partitioned into 18 tiles: 12 tiles on the left-hand side, each covering one slice of 4×4 CTUs, and 6 tiles on the right-hand side, each covering 2 vertically stacked slices of 2×2 CTUs, altogether resulting in 24 slices and 24 sub-pictures of different dimensions (each slice is a sub-picture).
3.3. Picture resolution modification within a sequence
In AVC and HEVC, the spatial resolution of pictures cannot change unless a new sequence using a new SPS starts with an IRAP picture. VVC enables picture resolution change within a sequence at a position without the need of encoding an IRAP picture, which is always intra-coded. This feature is sometimes referred to as reference picture resampling (RPR), as it requires resampling of a reference picture used for inter prediction when that reference picture has a different resolution than the current picture being decoded.
The scaling ratio is restricted to be greater than or equal to 1/2 (2-fold downsampling from the reference picture to the current picture) and less than or equal to 8 (8-fold upsampling). Three sets of resampling filters with different frequency cutoffs are specified to handle various scaling ratios between a reference picture and the current picture. The three sets of resampling filters are applied for scaling ratios ranging from 1/2 to 1/1.75, from 1/1.75 to 1/1.25, and from 1/1.25 to 8, respectively. Each set of resampling filters has 16 phases for luma and 32 phases for chroma, the same as in the motion compensation interpolation filters. Actually, the normal MC interpolation process is a special case of the resampling process with a scaling ratio ranging from 1/1.25 to 8. The horizontal and vertical scaling ratios are derived based on the picture widths and heights, and the left, right, top, and bottom scaling offsets specified for the reference picture and the current picture (a derivation sketch follows).
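As a rough illustration of this derivation, the sketch below computes 14-bit fixed-point scaling factors from the picture sizes and scaling offsets. The fixed-point layout follows the VVC design, but the function and variable names are illustrative only, not the ones used in the specification:

```python
# Illustrative derivation of RPR scaling factors (names are not normative).
def scaling_window(width, height, left, right, top, bottom,
                   sub_width_c=1, sub_height_c=1):
    """Scaling-window size of a picture in luma samples."""
    return (width - sub_width_c * (left + right),
            height - sub_height_c * (top + bottom))

def rpr_scale_factors(ref_size, cur_size):
    """Horizontal/vertical scaling factors in 14-bit fixed point (ref/cur)."""
    ref_w, ref_h = ref_size
    cur_w, cur_h = cur_size
    hori_scale_fp = ((ref_w << 14) + (cur_w >> 1)) // cur_w
    vert_scale_fp = ((ref_h << 14) + (cur_h >> 1)) // cur_h
    # The cur/ref scaling ratio is restricted to [1/2, 8], i.e., the ref/cur
    # fixed-point factor lies in [2048, 32768].
    assert (1 << 11) <= hori_scale_fp <= (2 << 14)
    assert (1 << 11) <= vert_scale_fp <= (2 << 14)
    return hori_scale_fp, vert_scale_fp
```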
Other aspects of the VVC design, different from HEVC, that support this feature include: i) the picture resolution and the corresponding conformance window are signaled in the PPS instead of in the SPS, while in the SPS the maximum picture resolution is signaled; ii) for a single-layer bitstream, each picture store (a slot in the DPB for storing one decoded picture) occupies the buffer size needed for storing a decoded picture with the maximum picture resolution.
3.4. Scalable Video Coding (SVC) in general and in VVC
Scalable video coding (SVC, sometimes also referred to as scalability in video coding) refers to video coding in which a base layer (BL), sometimes referred to as a reference layer (RL), and one or more scalable enhancement layers (ELs) are used. In SVC, the base layer can carry video data with a base level of quality. The one or more enhancement layers can carry additional video data to support, for example, higher spatial, temporal, and/or signal-to-noise ratio (SNR) levels. An enhancement layer may be defined relative to a previously encoded layer. For example, a bottom layer may serve as a BL, while a top layer may serve as an EL. Middle layers may serve as either ELs or RLs, or both. For example, a middle layer (e.g., a layer that is neither the lowest layer nor the highest layer) may be an EL for layers below the middle layer, such as the base layer or any intervening enhancement layers, and at the same time serve as an RL for one or more enhancement layers above the middle layer. Similarly, in the Multiview or 3D extension of the HEVC standard, there may be multiple views, and information of one view may be utilized to codec (e.g., encode or decode) the information of another view (e.g., motion estimation, motion vector prediction, and/or other redundancies).
In SVC, the parameters used by the encoder or the decoder are grouped into parameter sets based on the coding level (e.g., video level, sequence level, picture level, slice level, etc.) at which they may be utilized. For example, parameters that may be utilized by one or more codec video sequences of different layers in a bitstream may be included in a video parameter set (VPS), and parameters that are utilized by one or more pictures in a codec video sequence may be included in a sequence parameter set (SPS). Similarly, parameters that are utilized by one or more slices in a picture may be included in a picture parameter set (PPS), and other parameters that are specific to a single slice may be included in a slice header. Similarly, an indication of which parameter set(s) a particular layer is using at a given time may be provided at various coding levels.
Thanks to the support of reference picture resampling (RPR) in VVC, support of a bitstream containing multiple layers (e.g., two layers with SD and HD resolutions in VVC) can be designed without the need of any additional signal-processing-level coding tool, as the upsampling needed for spatial scalability support can just use the RPR upsampling filter. Nevertheless, high-level syntax changes are needed for scalability support (compared to not supporting scalability). Scalability support is specified in VVC version 1. Different from the scalability support in any earlier video coding standards, including the extensions of AVC and HEVC, the design of VVC scalability has been made as friendly as possible to single-layer decoder designs. The decoding capability for multi-layer bitstreams is specified in a manner as if there were only a single layer in the bitstream. For example, the decoding capability, such as the DPB size, is specified in a manner that is independent of the number of layers in the bitstream to be decoded. Basically, a decoder designed for single-layer bitstreams does not need much change to be able to decode multi-layer bitstreams. Compared to the designs of the multi-layer extensions of AVC and HEVC, the HLS aspects have been significantly simplified at the sacrifice of some flexibility. For example, an IRAP AU is required to contain a picture for each of the layers present in the CVS.
3.5. Sub-picture based viewport-dependent 360° video streaming
In streaming of 360° video (also known as omnidirectional video), at any particular moment only a subset (e.g., the current viewport) of the entire omnidirectional video sphere is rendered to the user, while the user can turn his/her head at any time to change the viewing orientation and consequently the current viewport. While it is desirable to have at least some lower-quality representation of the area not covered by the current viewport available at the client and ready to be rendered to the user, in case the user suddenly changes his/her viewing orientation to anywhere on the sphere, a high-quality representation of the omnidirectional video is only needed for the current viewport that is currently being rendered. Splitting the high-quality representation of the entire omnidirectional video into sub-pictures at an appropriate granularity enables such an optimization. Using VVC, the two representations can be encoded as two layers that are independent of each other.
Fig. 11 illustrates a typical sub-picture-based viewport-dependent 360° video delivery scheme, in which a higher-resolution representation of the full video consists of sub-pictures, while a lower-resolution representation of the full video does not use sub-pictures and can be coded with less frequent random access points than the higher-resolution representation. The client receives the full video in the lower resolution, while for the higher-resolution video, it only receives and decodes the sub-pictures that cover the current viewport.
3.6. Parameter sets
AVC, HEVC, and VVC specify parameter sets. The types of parameter sets include SPS, PPS, APS, and VPS. SPS and PPS are supported in all of AVC, HEVC, and VVC. VPS was introduced in HEVC and is included in both HEVC and VVC. APS was not included in AVC or HEVC but is included in the latest VVC draft text.

SPS was designed to carry sequence-level header information, and PPS was designed to carry infrequently changing picture-level header information. With SPS and PPS, infrequently changing information need not be repeated for each sequence or picture; hence, redundant signaling of this information can be avoided. Furthermore, the use of SPS and PPS enables out-of-band transmission of the important header information, thus not only avoiding the need for redundant transmissions but also improving error resilience.
VPS was introduced to carry sequence level header information that is common to all layers in a multi-layer bitstream.
APS was introduced for carrying picture-level or slice-level information that needs quite some bits to code, can be shared by multiple pictures, and can have quite many different variations in a sequence.
The following are the semantics of the SPS/PPS/APS identifiers in some embodiments:
sps_seq_parameter_set_id provides an identifier for the SPS for reference by other syntax elements.
SPS NAL units, regardless of the nuh_layer_id value, share the same value space of sps_seq_parameter_set_id.
Let spsLayerId be the value of the nuh_layer_id of a particular SPS NAL unit, and vclLayerId be the value of the nuh_layer_id of a particular VCL NAL unit. The particular VCL NAL unit shall not refer to the particular SPS NAL unit unless spsLayerId is less than or equal to vclLayerId and the OLS being decoded contains both the layer with nuh_layer_id equal to spsLayerId and the layer with nuh_layer_id equal to vclLayerId.
pps_pic_parameter_set_id identifies the PPS for reference by other syntax elements. The value of pps_pic_parameter_set_id shall be in the range of 0 to 63, inclusive.
PPS NAL units, regardless of the nuh_layer_id value, share the same value space of pps_pic_parameter_set_id.
Let ppsLayerId be the value of the nuh_layer_id of a particular PPS NAL unit, and vclLayerId be the value of the nuh_layer_id of a particular VCL NAL unit. The particular VCL NAL unit shall not refer to the particular PPS NAL unit unless ppsLayerId is less than or equal to vclLayerId and the OLS being decoded contains both the layer with nuh_layer_id equal to ppsLayerId and the layer with nuh_layer_id equal to vclLayerId.
adaptation_parameter_set_id provides an identifier for the APS for reference by other syntax elements.
When aps_params_type is equal to ALF_APS or SCALING_APS, the value of adaptation_parameter_set_id shall be in the range of 0 to 7, inclusive.
When aps_params_type is equal to LMCS_APS, the value of adaptation_parameter_set_id shall be in the range of 0 to 3, inclusive.
Let apsLayerId be the value of the nuh_layer_id of a particular APS NAL unit, and vclLayerId be the value of the nuh_layer_id of a particular VCL NAL unit. The particular VCL NAL unit shall not refer to the particular APS NAL unit unless apsLayerId is less than or equal to vclLayerId and the OLS being decoded contains both the layer with nuh_layer_id equal to apsLayerId and the layer with nuh_layer_id equal to vclLayerId.
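The layer-id condition shared by the three paragraphs above can be summarized by the following sketch. It is a simplified model for illustration, not the normative text, and the function name is invented for this example:

```python
# Simplified model of the SPS/PPS/APS reference rule (illustrative only).
def may_refer_to_parameter_set(vcl_layer_id, ps_layer_id, ols_layer_ids):
    """True if a VCL NAL unit in layer vcl_layer_id may refer to a parameter
    set NAL unit in layer ps_layer_id, where ols_layer_ids is the set of
    nuh_layer_id values of the layers in the OLS being decoded."""
    return (ps_layer_id <= vcl_layer_id
            and ps_layer_id in ols_layer_ids
            and vcl_layer_id in ols_layer_ids)
```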
3.7. Sub-bitstream extraction process
The inputs to this process are a bitstream inBitstream, a target OLS index targetOlsIdx, and a target highest TemporalId value tIdTarget.
The output of this process is a sub-bitstream outBitstream.
It is a requirement of bitstream conformance for the input bitstream that any output sub-bitstream that satisfies all of the following conditions shall be a conforming bitstream:
- The output sub-bitstream is the output of the process specified in this clause, with the bitstream, targetOlsIdx equal to an index to the list of OLSs specified by the VPS, and tIdTarget equal to any value in the range of 0 to 6, inclusive, as inputs.
- The output sub-bitstream contains at least one VCL NAL unit with nuh_layer_id equal to each of the nuh_layer_id values in LayerIdInOls[targetOlsIdx].
- The output sub-bitstream contains at least one VCL NAL unit with TemporalId equal to tIdTarget.
NOTE - A conforming bitstream contains one or more coded slice NAL units with TemporalId equal to 0, but does not have to contain coded slice NAL units with nuh_layer_id equal to 0.
The output sub-bitstream outBitstream is derived as follows (see also the sketch after this list):
- The bitstream outBitstream is set to be identical to the bitstream inBitstream.
- Remove from outBitstream all NAL units with TemporalId greater than tIdTarget.
- Remove from outBitstream all NAL units with nal_unit_type not equal to any of VPS_NUT, DCI_NUT, and EOB_NUT and with nuh_layer_id not included in the list LayerIdInOls[targetOlsIdx].
- Remove from outBitstream all NAL units for which all of the following conditions are true:
- nal_unit_type is not equal to IDR_W_RADL, IDR_N_LP, or CRA_NUT.
- nuh_layer_id is equal to LayerIdInOls[targetOlsIdx][j] for a value of j in the range of 0 to NumLayersInOls[targetOlsIdx] - 1, inclusive.
- TemporalId is greater than or equal to NumSubLayersInLayerInOLS[targetOlsIdx][j].
- Remove from outBitstream all SEI NAL units that contain a scalable nesting SEI message with nesting_ols_flag equal to 1 for which there is no value of i in the range of 0 to nesting_num_olss_minus1, inclusive, such that NestingOlsIdx[i] is equal to targetOlsIdx.
- When LayerIdInOls[targetOlsIdx] does not include all values of nuh_layer_id in all NAL units in the bitstream, the following applies:
- Remove from outBitstream all SEI NAL units that contain a non-scalable-nested SEI message with payloadType equal to 0 (buffering period) or 130 (decoding unit information).
- When general_same_pic_timing_in_all_ols_flag is equal to 0, remove from outBitstream all SEI NAL units that contain a non-scalable-nested SEI message with payloadType equal to 1 (picture timing).
- When outBitstream contains SEI NAL units that contain a scalable nesting SEI message with nesting_ols_flag equal to 1 and that are applicable to outBitstream (NestingOlsIdx[i] equal to targetOlsIdx), the following applies:
- If general_same_pic_timing_in_all_ols_flag is equal to 0, extract from the scalable nesting SEI message the appropriate non-scalable-nested SEI messages with payloadType equal to 0 (buffering period), 1 (picture timing), or 130 (decoding unit information), and include those SEI messages in outBitstream.
- Otherwise (general_same_pic_timing_in_all_ols_flag is equal to 1), extract from the scalable nesting SEI message the appropriate non-scalable-nested SEI messages with payloadType equal to 0 (buffering period) or 130 (decoding unit information), and include those SEI messages in outBitstream.
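The NAL-unit filtering core of the above derivation can be sketched as follows. This is an illustrative model only (SEI rewriting is omitted, and the NalUnit type and function names are invented for this example), not the normative process:

```python
# Illustrative filtering core of the sub-bitstream extraction process.
from dataclasses import dataclass

@dataclass
class NalUnit:
    nal_unit_type: str
    nuh_layer_id: int
    temporal_id: int

KEEP_ANY_LAYER = {"VPS_NUT", "DCI_NUT", "EOB_NUT"}
IRAP_SLICE_TYPES = {"IDR_W_RADL", "IDR_N_LP", "CRA_NUT"}

def extract(in_bitstream, layer_id_in_ols, num_sub_layers_in_layer, t_id_target):
    out = []
    for nal in in_bitstream:
        if nal.temporal_id > t_id_target:
            continue  # step 2: TemporalId filtering
        in_ols = nal.nuh_layer_id in layer_id_in_ols
        if nal.nal_unit_type not in KEEP_ANY_LAYER and not in_ols:
            continue  # step 3: layer filtering
        if (nal.nal_unit_type not in IRAP_SLICE_TYPES and in_ols
                and nal.temporal_id >=
                num_sub_layers_in_layer[layer_id_in_ols.index(nal.nuh_layer_id)]):
            continue  # step 4: drop unneeded sub-layers of included layers
        out.append(nal)
    return out
```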
4. Technical problem solved by the disclosed technical solution
The existing designs in the latest VVC draft text have the following problems:
1) The current VVC design supports the typical coding scheme for 360° video shown in Fig. 11. However, while scalability is supported in the current VVC design, the improved 360° video coding scheme shown in Fig. 12 is not supported. The only difference compared to the approach shown in Fig. 11 is that inter-layer prediction (ILP) is applied in the approach shown in Fig. 12.
The combined use of sub-pictures and spatial scalability is disallowed by the following two places in the VVC draft:
a. The spatial scalability design in VVC relies on the RPR feature. However, the following semantics constraint currently disallows the combination of RPR and sub-pictures:
When res_change_in_clvs_allowed_flag is equal to 1, the value of subpic_info_present_flag shall be equal to 0.
Consequently, the use of the improved coding scheme is disallowed, because for the SPS referred to by the higher layer, the above constraint disallows setting subpic_info_present_flag equal to 1 (to use multiple sub-pictures per picture) while at the same time setting res_change_in_clvs_allowed_flag equal to 1 (to enable RPR, which is needed for spatial scalability with ILP).
b. The current VVC draft has the following constraint on the combination of sub-pictures and scalability:
When subpic_treated_as_pic_flag[i] is equal to 1, it is a requirement of bitstream conformance that all of the following conditions are true for each output layer of an OLS that includes the layer containing the i-th sub-picture as an output layer, and for its reference layers:
- All pictures in the output layer and its reference layers shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples.
- All the SPSs referred to by the output layer and its reference layers shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[j], subpic_ctu_top_left_y[j], subpic_width_minus1[j], subpic_height_minus1[j], and loop_filter_across_subpic_enabled_flag[j], respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- All pictures in each access unit in the output layer and its reference layers shall have the same value of SubpicIdVal[j] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
The above constraint basically disallows any combined use of sub-pictures and scalability with ILP other than the restricted combination with SNR scalability, where the layers within each dependency tree must have the same spatial resolution and the same sub-picture layout.
2) When subpic_treated_as_pic_flag[i] is equal to 1, the sub-picture boundaries of the i-th sub-picture are treated as picture boundaries in motion compensation. This treatment is realized in the VVC draft text by applying certain clipping operations in the decoding processes related to motion compensation. However, for the improved coding scheme shown in Fig. 12, since the lower layer is fully available at the decoder, not just the region corresponding to the i-th sub-picture, applying such clipping is unnecessary in this case and would only cause unnecessary coding efficiency loss.
3) Setting aside the support of the improved coding scheme shown in Fig. 12, the above-described constraint on the combination of sub-pictures and scalability with ILP (included in the description of problem 1b) has the following problems:
a. The constraint should also apply when the layer containing the i-th sub-picture is not an output layer of the OLS. The entire constraint should be specified in a manner that does not depend on whether a layer is an output layer of an OLS.
b. The requirement of cross-layer alignment of the value of subpic_treated_as_pic_flag[i] should be included; otherwise, the sub-picture sequences with the same index could not be extracted across layers.
c. The requirement of cross-layer alignment of the value of loop_filter_across_subpic_enabled_flag[i] should be excluded, because a sub-picture sequence is extractable as long as subpic_treated_as_pic_flag[i] is equal to 1, regardless of the value of this flag. The setting of the value of loop_filter_across_subpic_enabled_flag[i] should be left for the encoder to decide the trade-off between the quality of individual extractable sub-picture sequences and the quality of the entire set of extractable sub-picture sequences, which is exactly why the two flags are signaled independently of each other.
d. The entire constraint should only apply when sps_num_subpics_minus1 is greater than 0, to avoid inadvertently covering all the cases with one sub-picture per picture.
e. The temporal scope within which the constraint applies, e.g., the set of AUs, needs to be specified explicitly.
f. A requirement of cross-layer alignment of the value of each of the scaling window parameters scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset should be included, to ensure that RPR of an ILRP is not needed when there are multiple sub-pictures per picture.
4) Currently, the collocated picture of a current picture can be a long-term reference picture (LTRP) in the same layer as the current picture, and it can also be an inter-layer reference picture (ILRP), e.g., a reference picture in a different layer than the current picture. In either case, however, POC-based motion vector scaling is not applied, so the coding performance gain from allowing this is expected to be very low. Therefore, it is preferable to disallow the collocated picture of a current picture to be an LTRP or an ILRP.
5) Currently, pictures with the same spatial resolution within a CLVS are allowed to have different scaling windows. However, this should be disallowed, because otherwise the SPS flag for RPR and the general constraint flag for RPR could not be used to completely disable the RPR tools.
6) Currently, when performing sub-bitstream extraction, parameter sets (SPS/PPS/VPS) of layers not included in the current/target OLS may still be included in the extracted bitstream. However, the design intent is that a slice in layerA shall not refer to a parameter set in layerB when layerA and layerB are not both included in the OLS currently being decoded, even if an OLS containing both layerA and layerB is specified in the VPS.
7) The number of allowed APSs depends on the APS type. However, the signaling of the APS ID is fixed to u(5) regardless of the APS type, which may waste bits.
5. List of technical solutions and embodiments
To solve the above problems and others, the methods summarized below are disclosed. These items should be considered as examples to explain the general concepts and should not be interpreted in a narrow way. Furthermore, these items can be applied individually or combined in any manner.
1) To solve problem 1a, multiple (such as two) SPS flags may be specified and/or signaled for controlling RPR, rather than just one flag (e.g., res_change_in_clvs_allowed_flag as in the current VVC draft).
a. For example, a first flag (e.g., ref_pic_resampling_enabled_flag) specifies whether RPR may need to be used for decoding one or more pictures, while a second flag (e.g., res_change_in_clvs_allowed_flag) specifies whether picture resolution is allowed to change within a CLVS.
b. Alternatively, furthermore, the second flag may be signaled only when the first flag specifies that RPR may need to be used for decoding one or more pictures. Furthermore, when not signaled, the value of the second flag is inferred to be the value specifying that picture resolution is disallowed to change within a CLVS (see the parsing sketch after this item list).
i. Alternatively, the two flags are signaled independently of each other.
c. Alternatively, furthermore, general constraint flags are added, such that each of the first flag and the second flag has a corresponding general constraint flag.
d. Furthermore, the combination of multiple sub-pictures per picture with res_change_in_clvs_allowed_flag equal to 1 is disallowed, while the combination of multiple sub-pictures per picture with ref_pic_resampling_enabled_flag equal to 1 is allowed.
e. Furthermore, the constraint on the value of scaling_window_explicit_signalling_flag that is based on the value of res_change_in_clvs_allowed_flag is changed to be based on the value of ref_pic_resampling_enabled_flag, as follows: When ref_pic_resampling_enabled_flag is equal to 0, the value of scaling_window_explicit_signalling_flag shall be equal to 0.
f. Alternatively, one or all of the multiple (such as two) flags may be signaled in the VPS instead of in the SPS.
i. In one example, one or all of the multiple (such as two) flags in the VPS apply to all layers specified by the VPS.
ii. In another example, one or all of the multiple (such as two) flags in the VPS may each be signaled in multiple instances in the VPS, with each instance applying to all layers in one dependency tree.
g. In one example, each of the multiple flags is coded as an unsigned integer using 1 bit, u(1).
h. Alternatively, a syntax element with a non-binary value may be signaled, e.g., in the SPS/VPS, to specify the use of RPR in the decoding process and whether picture resolution is allowed to change within a CLVS.
i. In one example, when the value of the syntax element is equal to 0, it specifies that RPR is not needed for decoding any picture.
ii. In one example, when the value of the syntax element is equal to 1, it specifies that RPR may need to be used for decoding one or more pictures, while picture resolution is disallowed to change within a CLVS.
iii. In one example, when the value of the syntax element is equal to 2, it specifies that RPR may need to be used for decoding one or more pictures, and picture resolution is allowed to change within a CLVS.
iv. Alternatively, furthermore, how the syntax element is signaled may depend on whether inter-layer prediction is allowed.
v. In one example, the syntax element is coded with ue(v), i.e., as an unsigned integer 0-th order Exp-Golomb-coded syntax element with the left bit first.
vi. In another example, the syntax element is coded as an unsigned integer using N bits, u(N), e.g., with N equal to 2.
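As an illustration of item 1b, the following sketch shows how an SPS parser might read the two flags, with the second flag conditionally present and inferred to be 0 when absent. This is a sketch of the proposed signaling, not normative VVC syntax, and read_u1() is a hypothetical one-bit reader:

```python
# Sketch of the conditional two-flag RPR signaling proposed in item 1b.
def parse_rpr_flags(read_u1):
    ref_pic_resampling_enabled_flag = read_u1()
    if ref_pic_resampling_enabled_flag:
        res_change_in_clvs_allowed_flag = read_u1()
    else:
        # Not signaled: inferred to disallow resolution change within a CLVS.
        res_change_in_clvs_allowed_flag = 0
    return ref_pic_resampling_enabled_flag, res_change_in_clvs_allowed_flag
```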
2) Alternatively, or in addition, to solve problem 1a, there is still only one flag, e.g., res_change_in_clvs_allowed_flag, but the semantics are changed such that an inter-layer reference picture is allowed to be resampled regardless of the value of the flag.
a. In one example, the semantics may be changed as follows: res_change_in_clvs_allowed_flag equal to 1 specifies that the picture spatial resolution may change within a CLVS referring to the SPS, and that decoding of a current picture in the CLVS may need resampling of a reference picture in the same layer as the current picture. res_change_in_clvs_allowed_flag equal to 0 specifies that the picture spatial resolution does not change within any CLVS referring to the SPS, and that decoding of any current picture in a CLVS does not need resampling of a reference picture in the same layer as the current picture.
b. With this change, even when res_change_in_clvs_allowed_flag is equal to 0, the decoding of a sub-picture/picture can use RPR for inter-layer reference pictures (ILRPs).
3) To solve problem 1b, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it only imposes cross-layer alignment restrictions on the current layer and all the higher layers that depend on the current layer, while imposing no restrictions on the higher layers that do not depend on the current layer or on the lower layers.
a. Instead, the constraint is updated to impose cross-layer alignment constraints only on the current layer and all layers above the current layer.
b. Instead, the constraints are updated to impose cross-layer alignment constraints only on the current layer and all higher layers in each OLS that contains the current layer.
c. Instead, the constraint is updated to impose cross-layer alignment restrictions only on the current layer and all lower layers that are reference layers to the current layer.
d. Instead, the constraint is updated to impose cross-layer alignment constraints only on the current layer and all layers below the current layer.
e. Instead, the constraints are updated to impose cross-layer alignment restrictions only on the current layer and all lower layers in each OLS that contains the current layer.
f. Alternatively, the constraint is updated to impose cross-layer alignment constraints only on all layers below the highest layer.
g. Alternatively, the constraint is updated to impose cross-layer alignment constraints only on all layers above the lowest layer.
4) To solve problem 2, the following changes are applied in one or more decoding processes that involve clipping operations in inter-prediction-related processes for treating sub-picture boundaries as picture boundaries in motion compensation/motion vector prediction (e.g., in clause 8.5.2.11 Derivation process for temporal luma motion vector prediction, 8.5.3.2.2 Luma sample bilinear interpolation process, 8.5.5.3 Derivation process for subblock-based temporal merging candidates, 8.5.5.4 Derivation process for subblock-based temporal merging base motion data, 8.5.5.6 Derivation process for constructed affine control point motion vector merging candidates, 8.5.6.3.2 Luma sample interpolation filtering process, 8.5.6.3.3 Luma integer sample fetching process, and 8.5.6.3.4 Chroma sample interpolation process):
a. In one example, these processes are changed such that the clipping operations are applied if subpic_treated_as_pic_flag[CurrSubpicIdx] is equal to 1 and sps_num_subpics_minus1 for the reference picture refPicLX is greater than 0, and are not applied otherwise.
i. Alternatively, furthermore, when the collocated picture of a picture is disallowed to be an ILRP, only the processes for which the reference picture refPicLX is not the collocated picture are changed as above, while the processes for which the reference picture refPicLX is the collocated picture are left unchanged.
b. In one example, these processes are changed such that the clipping operations are applied if subpic_treated_as_pic_flag[CurrSubpicIdx] is equal to 1 and the nal_unit_type value of the current slice is not equal to IDR_W_RADL, IDR_N_LP, or CRA_NUT, and are not applied otherwise. Meanwhile, ILP is only allowed for coding of IRAP pictures. (Condition sketches for items 4a and 4b follow this list.)
c. In one example, no changes are made to these decoding processes, i.e., as in the current VVC text, the clipping operations are applied if subpic_treated_as_pic_flag[CurrSubpicIdx] is equal to 1, and are not applied otherwise.
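The following Boolean sketches restate the conditions in items 4a and 4b. The variable names mirror the spec variables, but the functions themselves are illustrative only:

```python
# Illustrative conditions for applying the sub-picture-boundary clipping
# in motion compensation, per items 4a and 4b above.
IRAP_SLICE_TYPES = ("IDR_W_RADL", "IDR_N_LP", "CRA_NUT")

def apply_clipping_4a(subpic_treated_as_pic_flag, curr_subpic_idx,
                      ref_sps_num_subpics_minus1):
    # Item 4a: clip only if the reference picture itself has multiple sub-pictures.
    return (subpic_treated_as_pic_flag[curr_subpic_idx] == 1
            and ref_sps_num_subpics_minus1 > 0)

def apply_clipping_4b(subpic_treated_as_pic_flag, curr_subpic_idx, nal_unit_type):
    # Item 4b: do not clip for IRAP slices, for which ILP is allowed.
    return (subpic_treated_as_pic_flag[curr_subpic_idx] == 1
            and nal_unit_type not in IRAP_SLICE_TYPES)
```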
5) To solve problem 3a, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it imposes cross-layer alignment restrictions on all layers in each dependency tree. A dependency tree contains a particular layer, all the layers that have the particular layer as a reference layer, and all the reference layers of the particular layer, regardless of whether any of the layers is an output layer of an OLS.
6) To solve problem 3b, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it imposes a cross-layer alignment restriction on the value of subpic_treated_as_pic_flag[i].
7) To solve problem 3c, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it does not impose a cross-layer alignment restriction on the value of loop_filter_across_subpic_enabled_flag[i].
8) To solve problem 3d, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it does not apply when sps_num_subpics_minus1 is equal to 0.
a. Alternatively, the constraint is updated such that it does not apply when subpic_info_present_flag is equal to 0.
9) To solve problem 3e, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it imposes cross-layer alignment restrictions on the pictures within a certain target set of AUs.
a. In one example, for each CLVS of a current layer referring to the SPS, let the target set of AUs, targetAuSet, be all the AUs starting from the AU containing the first picture of the CLVS in decoding order to the AU containing the last picture of the CLVS in decoding order, inclusive.
10) To solve problem 3f, the constraint on the combination of sub-pictures and scalability with ILP is updated such that it imposes a cross-layer alignment restriction on the value of each of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset.
11) To solve problem 4, it is constrained that the collocated picture of a current picture shall not be a long-term reference picture (LTRP).
a. Alternatively, it is constrained that the collocated picture of a current picture shall not be an inter-layer reference picture (ILRP).
b. Alternatively, it is constrained that the collocated picture of a current picture shall be neither an LTRP nor an ILRP (see the checker sketch after this item).
c. Alternatively, when the collocated picture of the current picture is an LTRP or an ILRP, no scaling is applied for obtaining the motion vectors from the collocated picture.
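The following is a minimal checker sketch of the constraint in item 11b, under a simplified picture model. The function and argument names are illustrative, not spec variables:

```python
# Illustrative check of item 11b: the collocated picture shall be neither
# an LTRP nor an ILRP (i.e., a short-term picture in the current layer).
def collocated_picture_allowed(is_long_term, col_pic_layer_id, curr_pic_layer_id):
    is_ilrp = col_pic_layer_id != curr_pic_layer_id
    return (not is_long_term) and (not is_ilrp)
```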
12) To solve problem 5, it is constrained that, for any two pictures within the same CLVS that have the same values of pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, the value of each of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset shall be the same (see the checker sketch after this item).
a. Alternatively, the above "within the same CLVS" is replaced with "within the same CVS".
b. Alternatively, the constraint is specified as follows:
Let ppsA and ppsB be any two PPSs referring to the same SPS. It is a requirement of bitstream conformance that, when ppsA and ppsB have the same values of pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, ppsA and ppsB shall have the same values of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset, respectively.
c. Alternatively, the constraint is specified as follows:
For any two pictures within the same CVS that satisfy all of the following conditions, the value of each of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset shall be the same:
i. The two pictures have the same values of pic_width_in_luma_samples and pic_height_in_luma_samples, respectively.
ii. The two pictures belong to the same layer, or to two layers such that one of the two layers is a reference layer of the other layer.
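A conformance-checker sketch for the constraint in item 12 is given below. The Picture tuple is a stand-in for parsed PPS values, and all names are illustrative:

```python
# Illustrative checker for item 12: same-sized pictures in a CLVS must
# carry identical scaling window offsets.
from collections import namedtuple

Picture = namedtuple("Picture", "width height win_left win_right win_top win_bottom")

def check_scaling_window_consistency(pictures):
    seen = {}  # (width, height) -> scaling window offsets
    for pic in pictures:
        key = (pic.width, pic.height)
        win = (pic.win_left, pic.win_right, pic.win_top, pic.win_bottom)
        if key in seen and seen[key] != win:
            return False  # same-sized pictures with different scaling windows
        seen.setdefault(key, win)
    return True
```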
13) It is proposed that, when the picture resolution/scaling window of the current picture differs from that of the other pictures in the same access unit, the use of ILP is allowed only if the current picture is an IRAP picture.
14) In this document, picture resolution may refer to the width and/or height of a picture, or it may refer to the width and/or height and/or top-left corner position of the scaling window and/or conformance window of a picture.
15) In this document, not using RPR may mean that the resolution of any reference picture of the current picture is the same as the resolution of the current picture.
16) Regarding bitstream extraction, to solve problem 6, one or more of the following solutions are proposed (a filtering sketch follows this item):
a. In one example, to derive the output sub-bitstream, parameter set (e.g., SPS/PPS/APS) NAL units whose nuh_layer_id is not included in the list LayerIdInOls[ targetOlsIdx ] are removed.
b. For example, the derivation of the output sub-bitstream outBitstream may depend on one or more of the following:
i. Remove from outBitstream all NAL units with nal_unit_type equal to SPS_NUT and nuh_layer_id not included in the list LayerIdInOls[ targetOlsIdx ].
ii. Remove from outBitstream all NAL units with nal_unit_type equal to PPS_NUT and nuh_layer_id not included in the list LayerIdInOls[ targetOlsIdx ].
iii. Remove from outBitstream all NAL units with nal_unit_type equal to APS_NUT and nuh_layer_id not included in the list LayerIdInOls[ targetOlsIdx ].
iv. Remove from outBitstream all NAL units with nal_unit_type equal to SPS_NUT, PPS_NUT, or APS_NUT that satisfy any of the following conditions:
- nuh_layer_id is greater than LayerIdInOls[ targetOlsIdx ][ j ] for at least one value of j in the range of 0 to NumLayersInOls[ targetOlsIdx ] - 1, inclusive.
- nuh_layer_id is not included in the list LayerIdInOls[ targetOlsIdx ].
v. When a first NAL unit with nal_unit_type equal to any of SPS_NUT, PPS_NUT, and APS_NUT is removed during extraction, a second NAL unit that refers to the first NAL unit should also be removed.
c. For example, the derivation of the output sub-bitstream outBitstream may depend on one or more of the following:
i. Remove from outBitstream all NAL units with nal_unit_type equal to any of VPS_NUT, DCI_NUT, and EOB_NUT and nuh_layer_id not included in the list LayerIdInOls[ targetOlsIdx ].
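A sketch of the parameter-set filtering of item 16.b, modelling NAL units as (nal_unit_type, nuh_layer_id) tuples; the function name and the string type tags are illustrative, not the normative sub-bitstream extraction process:

def extract_parameter_sets(out_bitstream, layer_ids_in_ols):
    """Drop SPS/PPS/APS NAL units whose nuh_layer_id is not in
    LayerIdInOls[targetOlsIdx]; all other NAL units pass through."""
    param_set_types = {'SPS_NUT', 'PPS_NUT', 'APS_NUT'}
    return [
        (nal_type, layer_id)
        for nal_type, layer_id in out_bitstream
        if nal_type not in param_set_types or layer_id in layer_ids_in_ols
    ]

nals = [('SPS_NUT', 0), ('SPS_NUT', 2), ('PPS_NUT', 0), ('VCL', 0)]
assert extract_parameter_sets(nals, layer_ids_in_ols={0, 1}) == [
    ('SPS_NUT', 0), ('PPS_NUT', 0), ('VCL', 0)]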
17) The number of bits required to signal the APS ID (e.g., adaptation_parameter_set_id) depends on the APS type.
a. The coding of the APS ID (e.g., adaptation_parameter_set_id) is changed from u(3) to u(v).
i. In one example, for an adaptive loop filter (ALF) APS, the APS ID may be coded using u(a).
ii. In one example, for a luma mapping with chroma scaling (LMCS) APS, the APS ID may be coded using u(b).
iii. In one example, for a scaling list APS, the APS ID may be coded using u(c).
iv. In one example, a/b/c depend on the maximum number of APSs allowed for the corresponding type.
1. In one example, a > b and a > c.
2. In one example, a >= b and a > c.
3. In one example, c > b.
4. In one example, b = 2.
5. In one example, c = 3.
6. In one example, a = 3 or greater than 3 (e.g., 4, 5, 6, 7, 8, 9).
18) The coding order of the APS ID (e.g., adaptation_parameter_set_id) and the APS type (e.g., aps_params_type in the VVC text) is switched, such that the APS type precedes the APS ID in the bitstream (see the parsing sketch after this list).
19) The total number of filters allowed in APSs may be limited according to codec information such as the picture/slice type, the coding tree structure (dual tree or single tree), or layer information.
a. The total number of filters allowed in APSs may include the total number of luma/chroma ALF filters and CC-ALF filters in the ALF APSs in all APS NAL units within a PU.
b. The total number of filters allowed in APSs may include the total number of adaptive loop filter classes for the luma component (or luma ALF filters), the total number of alternative filters for the chroma components (chroma ALF filters), and/or the total number of cross-component filters in all APS NAL units within a PU.
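A sketch of the signalling order proposed in items 17 and 18: the APS type is written first with a fixed-length code, and the length of the ID then depends on the type. The concrete widths 9/2/3 match the third embodiment below; the type codes and function names are assumptions for illustration:

APS_ID_BITS = {'ALF_APS': 9, 'LMCS_APS': 2, 'SCALING_APS': 3}
APS_TYPE_CODE = {'ALF_APS': 0, 'LMCS_APS': 1, 'SCALING_APS': 2}

def write_aps_header(aps_type: str, aps_id: int) -> str:
    """Emit aps_params_type (u(3)) *before* adaptation_parameter_set_id
    (u(v)), so a parser knows how many ID bits to read."""
    type_bits = format(APS_TYPE_CODE[aps_type], '03b')
    id_bits = format(aps_id, f'0{APS_ID_BITS[aps_type]}b')
    return type_bits + id_bits

def read_aps_header(bits: str):
    aps_type = {v: k for k, v in APS_TYPE_CODE.items()}[int(bits[:3], 2)]
    n = APS_ID_BITS[aps_type]
    return aps_type, int(bits[3:3 + n], 2)

assert read_aps_header(write_aps_header('ALF_APS', 300)) == ('ALF_APS', 300)
assert read_aps_header(write_aps_header('LMCS_APS', 3)) == ('LMCS_APS', 3)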
6. Examples of the embodiments
The following are some example embodiments of some of the inventive aspects summarized above in Section 5, which can be applied to the VVC specification. Most of the relevant parts that have been added or modified are highlighted; in this text the highlighting is rendered as image placeholders, marked below as [addition shown as image], and some of the deleted parts are indicated by [[ ]].
6.1. First embodiment
This embodiment is for items 1, 1.a, 1.b, 1.c, 1.d, 3, 4.a.i, 5, 6, 7, 8, 9, 9.a, 10, 11, and 12.b.
7.3.2.3 sequence parameter set syntax
[sequence parameter set syntax table, shown as an image in the original]
7.4.3.3 sequence parameter set RBSP semantics
...
[addition shown as image]
res_change_in_clvs_allowed_flag equal to 1 specifies that the picture spatial resolution may change within a CLVS referring to the SPS. res_change_in_clvs_allowed_flag equal to 0 specifies that the picture spatial resolution does not change within any CLVS referring to the SPS.
[addition shown as image]
...
subpic_treated_as_pic_flag[ i ] equal to 1 specifies that the i-th subpicture of each coded picture in the CLVS is treated as a picture in the decoding process excluding in-loop filtering operations. subpic_treated_as_pic_flag[ i ] equal to 0 specifies that the i-th subpicture of each coded picture in the CLVS is not treated as a picture in the decoding process excluding in-loop filtering operations. When not present, the value of subpic_treated_as_pic_flag[ i ] is inferred to be equal to sps_independent_subpics_flag.
When [addition shown as image] subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- All pictures of the AUs in targetAuSet and of the layers in targetLayerSet shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], subpic_height_minus1[ j ], and [addition shown as image] [[loop_filter_across_subpic_enabled_flag[ j ],]] respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- [addition shown as image] All pictures shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
...
7.4.3.4 Picture parameter set RBSP semantics
...
scaling_window_explicit_signaling_flag equal to 1 specifies that the scaling window offset parameters are present in the PPS. scaling_window_explicit_signaling_flag equal to 0 specifies that the scaling window offset parameters are not present in the PPS. When [addition shown as image] [[res_change_in_clvs_allowed_flag]] is equal to 0, the value of scaling_window_explicit_signaling_flag shall be equal to 0.
scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset specify the offsets that are applied to the picture size for scaling ratio calculation. When not present, the values of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset are inferred to be equal to pps_conf_win_left_offset, pps_conf_win_right_offset, pps_conf_win_top_offset, and pps_conf_win_bottom_offset, respectively.
The value of SubWidthC * ( scaling_win_left_offset + scaling_win_right_offset ) shall be less than pic_width_in_luma_samples, and the value of SubHeightC * ( scaling_win_top_offset + scaling_win_bottom_offset ) shall be less than pic_height_in_luma_samples.
[addition shown as image]
The variables PicOutputWidthL and PicOutputHeightL are derived as follows:
PicOutputWidthL = pic_width_in_luma_samples - SubWidthC * ( scaling_win_right_offset + scaling_win_left_offset ) (78)
PicOutputHeightL = pic_height_in_luma_samples - SubHeightC * ( scaling_win_bottom_offset + scaling_win_top_offset ) (79)
Let refPicOutputWidthL and refPicOutputHeightL be the PicOutputWidthL and PicOutputHeightL, respectively, of a reference picture of the current picture referring to this PPS. It is a requirement of bitstream conformance that all of the following conditions are satisfied (a checker sketch follows this list):
- PicOutputWidthL * 2 shall be greater than or equal to refPicWidthInLumaSamples.
- PicOutputHeightL * 2 shall be greater than or equal to refPicHeightInLumaSamples.
- PicOutputWidthL shall be less than or equal to refPicWidthInLumaSamples * 8.
- PicOutputHeightL shall be less than or equal to refPicHeightInLumaSamples * 8.
- PicOutputWidthL * pic_width_max_in_luma_samples shall be greater than or equal to refPicOutputWidthL * ( pic_width_in_luma_samples - Max( 8, MinCbSizeY ) ).
- PicOutputHeightL * pic_height_max_in_luma_samples shall be greater than or equal to refPicOutputHeightL * ( pic_height_in_luma_samples - Max( 8, MinCbSizeY ) ).
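The derivation (78)-(79) and the conditions above can be sketched as follows; the dictionary keys are hypothetical shorthand for the spec variables named in the comments, and the default chroma subsampling factors of 1 are a simplifying assumption:

def pic_output_size(w, h, offsets, sub_width_c=1, sub_height_c=1):
    """Equations (78)-(79): derive PicOutputWidthL/PicOutputHeightL by
    subtracting the scaling window offsets (left, right, top, bottom)."""
    left, right, top, bottom = offsets
    return (w - sub_width_c * (right + left),
            h - sub_height_c * (bottom + top))

def rpr_conforming(cur, ref, min_cb_size_y=8):
    """Check the six conditions above. 'cur' and 'ref' are dicts with
    keys out_w/out_h (PicOutputWidthL/HeightL) and w/h
    (pic_width/height_in_luma_samples); 'cur' also carries max_w/max_h
    (pic_width/height_max_in_luma_samples)."""
    m = max(8, min_cb_size_y)
    return (cur['out_w'] * 2 >= ref['w'] and cur['out_h'] * 2 >= ref['h']
            and cur['out_w'] <= ref['w'] * 8 and cur['out_h'] <= ref['h'] * 8
            and cur['out_w'] * cur['max_w'] >= ref['out_w'] * (cur['w'] - m)
            and cur['out_h'] * cur['max_h'] >= ref['out_h'] * (cur['h'] - m))

cur = {'w': 1920, 'h': 1080, 'max_w': 1920, 'max_h': 1080}
cur['out_w'], cur['out_h'] = pic_output_size(1920, 1080, (0, 0, 0, 0))
ref = dict(cur)  # a same-resolution reference picture trivially conforms
assert rpr_conforming(cur, ref)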
...
7.3.3.2 general constraint information syntax
[general constraint information syntax table, shown as images in the original]
7.4.4.2 general constraint information semantics
...
[addition shown as image]
no_res_change_in_clvs_constraint_flag equal to 1 specifies that res_change_in_clvs_allowed_flag shall be equal to 0. no_res_change_in_clvs_constraint_flag equal to 0 does not impose such a constraint.
...
7.4.8.1 General slice header semantics
...
slice_collocated_from_l0_flag equal to 1 specifies that the collocated picture used for temporal motion vector prediction is derived from reference picture list 0. slice_collocated_from_l0_flag equal to 0 specifies that the collocated picture used for temporal motion vector prediction is derived from reference picture list 1.
When slice_type is equal to B or P, ph_temporal_mvp_enabled_flag is equal to 1, and slice_collocated_from_l0_flag is not present, the following applies:
- If rpl_info_in_ph_flag is equal to 1, the value of slice_collocated_from_l0_flag is inferred to be equal to ph_collocated_from_l0_flag.
- Otherwise (rpl_info_in_ph_flag is equal to 0 and slice_type is equal to P), the value of slice_collocated_from_l0_flag is inferred to be equal to 1.
slice_collocated_ref_idx specifies the reference index of the collocated picture used for temporal motion vector prediction.
When slice_type is equal to P, or when slice_type is equal to B and slice_collocated_from_l0_flag is equal to 1, slice_collocated_ref_idx refers to an entry in reference picture list 0, and the value of slice_collocated_ref_idx shall be in the range of 0 to NumRefIdxActive[ 0 ] - 1, inclusive.
When slice_type is equal to B and slice_collocated_from_l0_flag is equal to 0, slice_collocated_ref_idx refers to an entry in reference picture list 1, and the value of slice_collocated_ref_idx shall be in the range of 0 to NumRefIdxActive[ 1 ] - 1, inclusive.
When slice_collocated_ref_idx is not present, the following applies:
- If rpl_info_in_ph_flag is equal to 1, the value of slice_collocated_ref_idx is inferred to be equal to ph_collocated_ref_idx.
- Otherwise (rpl_info_in_ph_flag is equal to 0), the value of slice_collocated_ref_idx is inferred to be equal to 0.
It is a requirement of bitstream conformance that the picture referred to by slice_collocated_ref_idx shall be the same for all slices of a coded picture, [addition shown as image].
It is a requirement of bitstream conformance that the values of pic_width_in_luma_samples and pic_height_in_luma_samples of the reference picture referred to by slice_collocated_ref_idx shall be equal to the values of pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, of the current picture, and RprConstraintsActive[ slice_collocated_from_l0_flag ? 0 : 1 ][ slice_collocated_ref_idx ] shall be equal to 0. An inference sketch for these collocated-picture syntax elements follows.
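A sketch of the inference rules above; the arguments mirror the syntax element names, and None stands for "not present in the slice header":

def infer_collocated(slice_type, rpl_info_in_ph_flag,
                     ph_collocated_from_l0_flag, ph_collocated_ref_idx,
                     slice_collocated_from_l0_flag=None,
                     slice_collocated_ref_idx=None):
    """Apply the inference rules for slice_collocated_from_l0_flag and
    slice_collocated_ref_idx when they are absent."""
    if slice_collocated_from_l0_flag is None:
        if rpl_info_in_ph_flag == 1:
            slice_collocated_from_l0_flag = ph_collocated_from_l0_flag
        else:  # rpl_info_in_ph_flag == 0 and slice_type == 'P'
            slice_collocated_from_l0_flag = 1
    if slice_collocated_ref_idx is None:
        slice_collocated_ref_idx = (ph_collocated_ref_idx
                                    if rpl_info_in_ph_flag == 1 else 0)
    return slice_collocated_from_l0_flag, slice_collocated_ref_idx

# P slice, RPL signalled in the slice header: list 0 and index 0 are inferred.
assert infer_collocated('P', 0, 0, 5) == (1, 0)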
...
8.5.3.2.2 luma sample bilinear interpolation process
...
The luma locations in full-sample units ( xInt_i, yInt_i ) are derived as follows for i = 0..1:
- If subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 1 [addition shown as image], the following applies:
xInt_i = Clip3( SubpicLeftBoundaryPos, SubpicRightBoundaryPos, xInt_L + i ) (640)
yInt_i = Clip3( SubpicTopBoundaryPos, SubpicBotBoundaryPos, yInt_L + i ) (641)
- Otherwise ( subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 0 [addition shown as image] ), the following applies:
xInt_i = Clip3( 0, picW - 1, refWraparoundEnabledFlag ? ClipH( ( PpsRefWraparoundOffset ) * MinCbSizeY, picW, ( xInt_L + i ) ) : xInt_L + i ) (642)
yInt_i = Clip3( 0, picH - 1, yInt_L + i ) (643)
...
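For illustration, the two clipping functions used throughout these interpolation clauses can be sketched as below; Clip3 clamps a value to a range, and ClipH models the horizontal wrap-around case. The function shapes follow their usage above and should be read as a sketch rather than normative text:

def clip3(lo, hi, x):
    """Clip3( lo, hi, x ): clamp x into [lo, hi]."""
    return lo if x < lo else hi if x > hi else x

def clip_h(offset, pic_w, x):
    """ClipH( o, W, x ): wrap a horizontal sample position by the
    wrap-around offset when it falls outside the picture."""
    if x < 0:
        return x + offset
    if x > pic_w - 1:
        return x - offset
    return x

# Sample positions outside a subpicture are clamped to its boundary:
assert clip3(64, 191, 200) == 191
# With wrap-around offset 256 and width 256, x = -3 maps to 253:
assert clip_h(256, 256, -3) == 253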
8.5.6.3.2 Luma sample interpolation filtering process
...
- If subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 1 [addition shown as image], the following applies:
xInt_i = Clip3( SubpicLeftBoundaryPos, SubpicRightBoundaryPos, xInt_i ) (959)
yInt_i = Clip3( SubpicTopBoundaryPos, SubpicBotBoundaryPos, yInt_i ) (960)
- Otherwise ( subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 0 [addition shown as image] ), the following applies:
xInt_i = Clip3( 0, picW - 1, refWraparoundEnabledFlag ? ClipH( ( PpsRefWraparoundOffset ) * MinCbSizeY, picW, xInt_i ) : xInt_i ) (961)
yInt_i = Clip3( 0, picH - 1, yInt_i ) (962)
...
8.5.6.3.3 Luma integer sample fetching process
...
The luma location in full-sample units ( xInt, yInt ) is derived as follows:
- If subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 1 [addition shown as image], the following applies:
xInt = Clip3( SubpicLeftBoundaryPos, SubpicRightBoundaryPos, xInt_L ) (968)
yInt = Clip3( SubpicTopBoundaryPos, SubpicBotBoundaryPos, yInt_L ) (969)
- Otherwise ( [addition shown as image] ), the following applies:
xInt = Clip3( 0, picW - 1, refWraparoundEnabledFlag ? ClipH( ( PpsRefWraparoundOffset ) * MinCbSizeY, picW, xInt_L ) : xInt_L ) (970)
yInt = Clip3( 0, picH - 1, yInt_L ) (971)
...
8.5.6.3.4 chroma sample interpolation process
...
- If subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 1 [addition shown as image], the following applies:
xInt_i = Clip3( SubpicLeftBoundaryPos / SubWidthC, SubpicRightBoundaryPos / SubWidthC, xInt_i ) (977)
yInt_i = Clip3( SubpicTopBoundaryPos / SubHeightC, SubpicBotBoundaryPos / SubHeightC, yInt_i ) (978)
- Otherwise ( subpic_treated_as_pic_flag[ CurrSubpicIdx ] is equal to 0 [addition shown as image] ), the following applies:
xInt_i = Clip3( 0, picW_C - 1, refWraparoundEnabledFlag ? ClipH( xOffset, picW_C, xInt_i ) : xInt_C + i - 1 ) (979)
yInt_i = Clip3( 0, picH_C - 1, yInt_i ) (980)
...
Alternatively, the highlighted part "and sps_num_subpics_minus1 of the reference picture refPicLX is greater than 0" may be replaced with "and the reference picture refPicLX is an ILRP having the same spatial resolution as the current picture".
Alternatively, the highlighted part "or sps_num_subpics_minus1 of the reference picture refPicLX is equal to 0" may be replaced with "or the reference picture refPicLX is an ILRP having a different spatial resolution than the current picture".
Alternatively, for collocated picture requirements, e.g., "bitstream dependency requirement, the picture referenced by slice _ collocated _ ref _ idx should be the same for all slices of the coded picture,
Figure BDA0003895594000000295
"can be replaced by" bitstream dependency requirement, the picture referenced by slice _ collocated _ ref _ idx should be the same for all slices of the coded picture,
Figure BDA0003895594000000296
”。
Alternatively, the same collocated picture requirement, "It is a requirement of bitstream conformance that the picture referred to by slice_collocated_ref_idx shall be the same for all slices of a coded picture, [addition shown as image]", may be replaced with a further variant, "It is a requirement of bitstream conformance that the picture referred to by slice_collocated_ref_idx shall be the same for all slices of a coded picture, [addition shown as image]", whose added condition differs from the previous alternative but is shown only as an image in the original.
6.2. Second embodiment
In some alternative embodiments, the following constraint of the first embodiment:
When [addition shown as image] subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- [addition shown as image] All pictures shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], subpic_height_minus1[ j ], and [addition shown as image] [[loop_filter_across_subpic_enabled_flag[ j ],]] respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- All pictures of the AUs in targetAuSet and of the layers in targetLayerSet shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
is replaced with one of the following:
1) When [addition shown as image] subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- [addition shown as image] All pictures shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], and subpic_height_minus1[ j ] [[, and loop_filter_across_subpic_enabled_flag[ j ],]] respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- All pictures of the AUs in targetAuSet and of the layers in targetLayerSet shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
2) When [addition shown as image] subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- [addition shown as image] All pictures shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], subpic_height_minus1[ j ], and loop_filter_across_subpic_enabled_flag[ j ], respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- [addition shown as image] All pictures shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
3) When subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- [addition shown as image] All pictures shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], subpic_height_minus1[ j ], and [addition shown as image] [[loop_filter_across_subpic_enabled_flag[ j ],]] respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- All pictures of the AUs in targetAuSet and of the layers in targetLayerSet shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
4) When [addition shown as image] subpic_treated_as_pic_flag[ i ] is equal to 1, [addition shown as image] it is a requirement of bitstream conformance that [addition shown as image] all of the following conditions are true:
- [addition shown as image] All pictures shall have the same value of pic_width_in_luma_samples and the same value of pic_height_in_luma_samples. [addition shown as image]
- All the SPSs referred to by [addition shown as image] shall have the same value of sps_num_subpics_minus1 and shall have the same values of subpic_ctu_top_left_x[ j ], subpic_ctu_top_left_y[ j ], subpic_width_minus1[ j ], subpic_height_minus1[ j ], [addition shown as image] and loop_filter_across_subpic_enabled_flag[ j ], respectively, for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
- [addition shown as image] All pictures shall have the same value of SubpicIdVal[ j ] for each value of j in the range of 0 to sps_num_subpics_minus1, inclusive.
6.3. Third embodiment
This embodiment proposes the following aspects regarding the limit on the maximum numbers of ALF and CC-ALF filters:
1) Replace the constraint on the number of ALF APSs with a constraint on the number of filters; more specifically, it is proposed to add the following constraint (a budget-check sketch follows this list):
The total number of adaptive loop filter classes for the luma component, the total number of alternative filters for the chroma components, and the total number of cross-component filters in all APS NAL units of a PU shall be less than or equal to 200, 64, and 64, respectively.
2) On top of item 1), the coding of the APS ID in the APS syntax is further changed from u(5) to u(v), with lengths of 9, 2, and 3 bits for the ALF, LMCS, and scaling list APSs, respectively.
3) On top of item 1), the coding of the ALF APS indices and of the number of ALF APSs in the PH and SH is further changed from u(v) to ue(v).
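A sketch of a budget check for constraint 1); each APS is summarized by hypothetical per-APS filter counts, and the limits 200/64/64 are the ones proposed above:

def alf_filter_budget_ok(aps_list,
                         max_luma_classes=200,
                         max_chroma_alts=64,
                         max_cc_filters=64):
    """Sum the filters signalled across all ALF APS NAL units of a PU
    and compare against the proposed limits."""
    luma = sum(a['num_luma_classes'] for a in aps_list)
    chroma = sum(a['num_chroma_alt_filters'] for a in aps_list)
    cc = sum(a['num_cc_filters'] for a in aps_list)
    return (luma <= max_luma_classes and chroma <= max_chroma_alts
            and cc <= max_cc_filters)

pu_aps = [{'num_luma_classes': 25, 'num_chroma_alt_filters': 8,
           'num_cc_filters': 4}] * 8
assert alf_filter_budget_ok(pu_aps)  # totals 200 / 64 / 32: within budget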
7.3.2.5 adaptive parameter set RBSP syntax
[adaptation parameter set RBSP syntax table, shown as an image in the original]
7.3.2.7 Picture header structure syntax
[picture header structure syntax table, shown as images in the original]
7.3.7.1 General slice header syntax
[general slice header syntax table, shown as images in the original]
7.4.3.5 adaptive parameter set semantics
Each APS RBSP shall be available to the decoding process prior to being referenced, included in at least one AU with TemporalId less than or equal to the TemporalId of the coded slice NAL unit that refers to it, or provided through external means.
All APS NAL units within a PU with a particular value of adaptation_parameter_set_id and a particular value of aps_params_type, regardless of whether they are prefix or suffix APS NAL units, shall have the same content.
adaptation_parameter_set_id provides an identifier for the APS for reference by other syntax elements. The length, in bits, of the syntax element adaptation_parameter_set_id is aps_params_type = = ALF_APS ? 9 : ( aps_params_type = = LMCS_APS ? 2 : 3 ).
When aps_params_type is equal to ALF_APS [[or SCALING_APS]], the value of adaptation_parameter_set_id shall be in the range of 0 to [[7]] [addition shown as image], inclusive.
[[When aps_params_type is equal to LMCS_APS, the value of adaptation_parameter_set_id shall be in the range of 0 to 3, inclusive.]]
Let apsLayerId be the value of nuh_layer_id of a particular APS NAL unit, and vclLayerId be the value of nuh_layer_id of a particular VCL NAL unit. The particular VCL NAL unit shall not refer to the particular APS NAL unit unless apsLayerId is less than or equal to vclLayerId and the layer with nuh_layer_id equal to apsLayerId is included in at least one OLS that includes the layer with nuh_layer_id equal to vclLayerId.
aps_params_type specifies the type of the APS parameters carried in the APS, as specified in Table 6.
[Table 6, shown as an image in the original]
All APS NAL units with a particular value of aps_params_type, regardless of the nuh_layer_id values, share the same value space for adaptation_parameter_set_id. APS NAL units with different values of aps_params_type use separate value spaces for adaptation_parameter_set_id.
7.4.3.7 Picture header Structure semantics
ph_num_alf_aps_ids_luma specifies the number of ALF APSs that the slices associated with the PH refer to. [addition shown as image]
ph_alf_aps_id_luma[ i ] specifies the adaptation_parameter_set_id of the i-th ALF APS that the luma component of the slices associated with the PH refers to. [addition shown as image]
ph_alf_aps_id_chroma specifies the adaptation_parameter_set_id of the ALF APS that the chroma component of the slices associated with the PH refers to. [addition shown as image]
ph_cc_alf_cb_aps_id specifies the adaptation_parameter_set_id of the ALF APS that the Cb color component of the slices associated with the PH refers to. [addition shown as image]
ph_cc_alf_cr_aps_id specifies the adaptation_parameter_set_id of the ALF APS that the Cr color component of the slices associated with the PH refers to. [addition shown as image]
7.4.8.1 General slice header semantics
slice_num_alf_aps_ids_luma specifies the number of ALF APSs that the slice refers to. When slice_alf_enabled_flag is equal to 1 and slice_num_alf_aps_ids_luma is not present, the value of slice_num_alf_aps_ids_luma is inferred to be equal to the value of ph_num_alf_aps_ids_luma. [addition shown as image]
slice_alf_aps_id_luma[ i ] specifies the adaptation_parameter_set_id of the i-th ALF APS that the luma component of the slice refers to. The TemporalId of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_alf_aps_id_luma[ i ] shall be less than or equal to the TemporalId of the coded slice NAL unit. When slice_alf_enabled_flag is equal to 1 and slice_alf_aps_id_luma[ i ] is not present, the value of slice_alf_aps_id_luma[ i ] is inferred to be equal to the value of ph_alf_aps_id_luma[ i ]. [addition shown as image]
slice_alf_aps_id_chroma specifies the adaptation_parameter_set_id of the ALF APS that the chroma component of the slice refers to. The TemporalId of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_alf_aps_id_chroma shall be less than or equal to the TemporalId of the coded slice NAL unit. When slice_alf_enabled_flag is equal to 1 and slice_alf_aps_id_chroma is not present, the value of slice_alf_aps_id_chroma is inferred to be equal to the value of ph_alf_aps_id_chroma. [addition shown as image]
slice_cc_alf_cb_aps_id specifies the adaptation_parameter_set_id that the Cb color component of the slice refers to. The TemporalId of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_cc_alf_cb_aps_id shall be less than or equal to the TemporalId of the coded slice NAL unit. When slice_cc_alf_cb_enabled_flag is equal to 1 and slice_cc_alf_cb_aps_id is not present, the value of slice_cc_alf_cb_aps_id is inferred to be equal to the value of ph_cc_alf_cb_aps_id. [addition shown as image]
The value of alf_cc_cb_filter_signal_flag of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_cc_alf_cb_aps_id shall be equal to 1.
slice_cc_alf_cr_aps_id specifies the adaptation_parameter_set_id that the Cr color component of the slice refers to. The TemporalId of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_cc_alf_cr_aps_id shall be less than or equal to the TemporalId of the coded slice NAL unit. When slice_cc_alf_cr_enabled_flag is equal to 1 and slice_cc_alf_cr_aps_id is not present, the value of slice_cc_alf_cr_aps_id is inferred to be equal to the value of ph_cc_alf_cr_aps_id. [addition shown as image]
The value of alf_cc_cr_filter_signal_flag of the APS NAL unit having aps_params_type equal to ALF_APS and adaptation_parameter_set_id equal to slice_cc_alf_cr_aps_id shall be equal to 1.
In the above examples, the following may be used instead:
adaptation_parameter_set_id provides an identifier for the APS for reference by other syntax elements. The length, in bits, of the syntax element adaptation_parameter_set_id is aps_params_type = = ALF_APS ? M : ( aps_params_type = = LMCS_APS ? 2 : 3 ), where M is equal to a value not less than 3 (e.g., 4, 5, 6, 7, 8, 9).
The values of '200, 64, and 64' may be replaced with other non-zero integer values.
The value of '327' may be replaced with other non-zero integer values.
Fig. 5 is a block diagram illustrating an exemplary video processing system 1900 in which various techniques disclosed herein may be implemented. Various embodiments may include some or all of the components of system 1900. The system 1900 may include an input 1902 for receiving video content. The video content may be received in a raw or uncompressed format (e.g., 8-bit or 10-bit multi-component pixel values), or may be received in a compressed or encoded format. Input 1902 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces, such as ethernet, passive Optical Network (PON), etc., and wireless interfaces, such as Wi-Fi or cellular interfaces.
The system 1900 may include a codec component 1904 that may implement the various codecs or encoding methods described in this document. The codec component 1904 may reduce the average bit rate of the video from the input 1902 to the output of the codec component 1904 to produce a codec representation of the video. Codec techniques are therefore sometimes referred to as video compression or video transcoding techniques. The output of the codec component 1904 may be either stored, or transmitted via a communication connection, as represented by the component 1906. The stored or transmitted bitstream (or codec) representation of the video received at the input 1902 may be used by the component 1908 for generating pixel values or displayable video that is sent to a display interface 1910. The process of generating user-viewable video from the bitstream representation is sometimes referred to as video decompression. Furthermore, while certain video processing operations are referred to as "codec" operations or tools, it should be understood that codec tools or operations are used at an encoder, and corresponding decoding tools or operations that reverse the results of the codec will be performed by a decoder.
Examples of a peripheral bus interface or display interface may include a Universal Serial Bus (USB) or a high-definition multimedia interface (HDMI) or displayport, among others. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interfaces, and the like. The techniques described in this document may be embodied in various electronic devices, such as mobile phones, laptop computers, smart phones, or other devices capable of performing digital data processing and/or video display.
Fig. 6 is a block diagram of a video processing apparatus 3600. The apparatus 3600 may be used to implement one or more of the methods described herein. The apparatus 3600 may be embodied in a smartphone, tablet computer, Internet of Things (IoT) receiver, and/or the like. The apparatus 3600 may include one or more processors 3602, one or more memories 3604, and video processing hardware 3606. The processor(s) 3602 may be configured to implement one or more methods described in this document. The memories 3604 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 3606 may be used to implement, in hardware circuitry, some of the techniques described in this document.
Fig. 8 is a block diagram illustrating an exemplary video codec system 100 that may utilize the techniques of this disclosure.
As shown in fig. 8, the video codec system 100 may include a source device 110 and a target device 120. The source device 110 generates encoded video data, which source device 110 may be referred to as a video encoding device. Target device 120, which may be referred to as a video decoding device, may decode the encoded video data generated by source device 110.
The source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
The video source 112 may include a source such as a video capture device, an interface for receiving video data from a video content provider and/or a computer graphics system for generating video data, or a combination of such sources. The video data may include one or more pictures. The video encoder 114 encodes video data from the video source 112 to generate a bitstream. The bitstream may comprise a series of bits that form a codec representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. The I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to the target device 120 via the I/O interface 116 through the network 130 a. The encoded video data may also be stored on storage medium/server 130b for access by target device 120.
Target device 120 may include I/O interface 126, video decoder 124, and display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may retrieve encoded video data from source device 110 or storage medium/server 130 b. The video decoder 124 may decode the encoded video data. The display device 122 may display the decoded video data to a user. The display device 122 may be integrated with the target device 120 or may be external to the target device 120, which is configured to interface with an external display device.
The video encoder 114 and the video decoder 124 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard, the Versatile Video Coding (VVC) standard, and other current and/or further standards.
Fig. 9 is a block diagram illustrating an example of a video encoder 200, which may be the video encoder 114 in the system 100 shown in fig. 8.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 9, video encoder 200 includes a number of functional components. The techniques described in this disclosure may be shared among various components of the video encoder 200. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
The functional components of the video encoder 200 may include a partition unit 201, a prediction unit 202 (which may include a mode selection unit 203, a motion estimation unit 204, a motion compensation unit 205, and an intra prediction unit 206), a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy coding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, the prediction unit 202 may include an Intra Block Copy (IBC) unit. The IBC unit may perform prediction in IBC mode, where the at least one reference picture is a picture in which the current video block is located.
Furthermore, some components (such as the motion estimation unit 204 and the motion compensation unit 205) may be highly integrated, but are represented separately for explanatory purposes in the example of fig. 9.
Partition unit 201 may partition a picture into one or more video blocks. The video encoder 200 and the video decoder 300 may support various video block sizes.
The mode selection unit 203 may, for example, select one of the codec modes, i.e., intra prediction or inter prediction, based on the error result, and supply the resulting intra-codec or inter-codec block to the residual generation unit 207 to generate residual block data and to the reconstruction unit 212 to reconstruct the encoded block to serve as a reference picture. In some examples, mode selection unit 203 may select a Combination of Intra and Inter Prediction (CIIP) modes, where the prediction is based on an inter prediction signal and an intra prediction signal. Mode selection unit 203 may also select the resolution of the motion vectors (e.g., sub-pixel or integer-pixel precision) for the blocks in the case of inter prediction.
To perform inter prediction on the current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205 may determine a prediction video block for the current video block based on the motion information and decoded samples of the picture from buffer 213 (rather than the picture associated with the current video block).
The motion estimation unit 204 and the motion compensation unit 205 may perform different operations on the current video block, for example, depending on whether the current video block is in an I-slice, a P-slice, or a B-slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction on the current video block, and motion estimation unit 204 may search for a reference video block for the current video block in a list 0 or list 1 reference picture. Motion estimation unit 204 may then generate a reference index that indicates a reference picture in list 0 or list 1 that includes the reference video block and a motion vector that indicates the spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, the prediction direction indicator, and the motion vector as motion information of the current video block. The motion compensation unit 205 may generate a prediction video block of the current block based on a reference video block indicated by motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-prediction on the current video block, and motion estimation unit 204 may search a reference video block of the current video block in a reference picture in list 0 and may also search another reference video block of the current video block in a reference picture in list 1. Motion estimation unit 204 may then generate reference indices that indicate reference pictures in list 0 and list 1 that include a reference video block and a motion vector that indicates a spatial displacement between the reference video block and the current video block. Motion estimation unit 204 may output the reference index and the motion vector of the current video block as motion information for the current video block. Motion compensation unit 205 may generate a prediction video block for the current video block based on the reference video block indicated by the motion information for the current video block.
In some examples, the motion estimation unit 204 may output a complete set of motion information for decoding processing by a decoder.
In some examples, motion estimation unit 204 may not output a complete set of motion information for the current video. More specifically, motion estimation unit 204 may signal motion information for the current video block with reference to motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of the neighboring video block.
In one example, motion estimation unit 204 may indicate a value in a syntax structure associated with the current video block that indicates to video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify another video block and a Motion Vector Difference (MVD) in a syntax structure associated with the current video block. The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. Video decoder 300 may use the indicated motion vector and motion vector difference for the video block to determine the motion vector for the current video block.
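For illustration, a minimal sketch of how a decoder combines the predictor taken from the indicated block with the signalled MVD; plain integer pairs stand in for motion vectors:

def reconstruct_mv(mvp, mvd):
    """Rebuild the motion vector of the current block from the indicated
    block's motion vector (the predictor) plus the signalled MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Predictor (12, -4) from the indicated block, signalled MVD (3, 1):
assert reconstruct_mv((12, -4), (3, 1)) == (15, -3)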
As discussed above, the video encoder 200 may predictively signal the motion vectors. Two examples of prediction signaling techniques that may be implemented by video encoder 200 include Advanced Motion Vector Prediction (AMVP) and merge mode signaling.
The intra prediction unit 206 may perform intra prediction on the current video block. When intra-prediction unit 206 performs intra-prediction on the current video block, intra-prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., as indicated by a minus sign) the prediction video block of the current video block from the current video block. The residual data for the current video block may include residual video blocks corresponding to different sample components of samples in the current video block.
In other examples, for example in skip mode, there may be no residual data for the current video block of the current video block, and the residual generation unit 207 may not perform the subtraction operation.
Transform processing unit 208 may generate a transform coefficient video block for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more Quantization Parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transform, respectively, to the transform coefficient video blocks to reconstruct residual video blocks from the transform coefficient video blocks. Reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more prediction video blocks generated by prediction unit 202 to produce a reconstructed video block associated with the current block for storage in buffer 213.
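A deliberately simplified scalar quantizer/dequantizer illustrating the roles of units 209 and 210; the real QP-to-step-size mapping and transform-coefficient scaling in VVC are considerably more involved:

def quantize(coeff, step):
    """Forward quantization (unit 209): map a transform coefficient to a
    level using a step size derived from the QP."""
    return int(round(coeff / step))

def dequantize(level, step):
    """Inverse quantization (unit 210 / the decoder): reconstruct an
    approximate coefficient from the level."""
    return level * step

level = quantize(-137.0, step=10.0)
assert level == -14 and dequantize(level, 10.0) == -140.0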
After reconstruction unit 212 reconstructs the video blocks, a loop filtering operation may be performed to reduce video blocking artifacts in the video blocks.
Entropy encoding unit 214 may receive data from other functional components of video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
Fig. 10 is a block diagram illustrating an example of a video decoder 300, which may be the video decoder 124 in the system 100 shown in fig. 8.
Video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of fig. 10, the video decoder 300 includes a number of functional components. The techniques described in this disclosure may be shared among various components of video decoder 300. In some examples, the processor may be configured to perform any or all of the techniques described in this disclosure.
In the example of fig. 10, the video decoder 300 includes an entropy decoding unit 301, a motion compensation unit 302, an intra prediction unit 303, an inverse quantization unit 304, an inverse transform unit 305, a reconstruction unit 306, and a buffer 307. In some examples, the video decoder 300 may perform a decoding pass that is generally reciprocal to the encoding pass described with respect to the video encoder 200 (fig. 9).
The entropy decoding unit 301 may retrieve the encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). The entropy decoding unit 301 may decode the entropy-coded video data, and the motion compensation unit 302 may determine motion information from the entropy-decoded video data, the motion information including a motion vector, a motion vector precision, a reference picture list index, and other motion information. The motion compensation unit 302 may determine such information, for example, by performing AMVP and merge mode.
The motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for the interpolation filters to be used with sub-pixel precision may be included in the syntax elements.
The motion compensation unit 302 may use interpolation filters used by the video encoder 200 during encoding of the video block to calculate an interpolation of sub-integer pixels of the reference block. The motion compensation unit 302 may determine an interpolation filter used by the video encoder 200 according to the received syntax information and generate a prediction block using the interpolation filter.
The motion compensation unit 302 may use some syntax information to determine the size of blocks used to encode frames and/or slices of an encoded video sequence, partition information describing how each macroblock of a picture of the encoded video sequence is partitioned, a mode indicating how each partition is encoded, one or more reference frames (and reference frame lists) of each inter-coded block, and other information used to decode the encoded video sequence.
The intra prediction unit 303 may form a prediction block from spatially adjacent blocks using, for example, an intra prediction mode received in the bitstream. The inverse quantization unit 304 inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by the entropy decoding unit 301. The inverse transform unit 305 applies the inverse transform.
The reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by the motion compensation unit 302 or the intra prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in the buffer 307, which provides reference blocks for subsequent motion compensation/intra prediction and also produces decoded video for presentation on a display device.
A list of solutions preferred by some embodiments is provided next.
The following solution illustrates an exemplary embodiment of the techniques discussed in the previous section (e.g., item 1).
1. A video processing method (e.g., method 700 shown in fig. 7), comprising: performing (702) a conversion between a video comprising one or more video pictures and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies that two or more syntax fields in a sequence parameter set control reference picture resolution (RPR) changes in the video.
2. The method of solution 1, wherein a first syntax field of the two or more syntax fields indicates whether RPR is used for one or more pictures and a second syntax field of the two or more syntax fields indicates whether picture resolution is allowed to be altered at a sequence level in the coded representation.
The following solution illustrates an exemplary embodiment of the techniques discussed in the previous section (e.g., item 2).
3. A video processing method, comprising: performing a conversion between a video comprising one or more video pictures and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies that a single syntax field in a sequence parameter set controls reference picture resolution (RPR) changes in the video; and wherein the format rule specifies that, regardless of the value of the single syntax field, resampling of inter-layer reference pictures is permitted for the conversion.
The following solutions illustrate exemplary embodiments of the techniques discussed in the previous section (e.g., items 3, 5, 6, 7, 9, 10).
4. A video processing method, comprising: performing a conversion between a video comprising one or more layers comprising one or more video pictures comprising one or more sub-pictures and a codec representation of the video, wherein the codec representation complies with a format rule; wherein the format rule specifies a first constraint on a cross-layer alignment or a second constraint on a combination of sub-pictures and scalability of inter-layer pictures.
5. The method of solution 4, wherein the first constraint defines a cross-layer alignment restriction on a current layer and all higher layers that depend on the current layer, without imposing the alignment restriction on lower layers and on higher layers that do not depend on the current layer.
6. The method of solution 4, wherein the second constraint imposes a cross-layer alignment constraint on all layers in each dependency tree for a particular layer.
7. The method of solution 4, wherein the second constraint restricts a value of subpic_treated_as_pic_flag[ i ] according to a cross-layer alignment restriction.
8. The method of solution 4, wherein the second constraint restricts a value of loop_filter_across_subpic_enabled_flag[ i ] according to a cross-layer alignment restriction.
9. The method according to any of the solutions 4 to 8, wherein the first constraint and/or the second constraint is specified for a set of target access units.
10. The method of solution 4, wherein the second constraint restricts a value of each of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset according to a cross-layer alignment restriction.
The following solution illustrates an exemplary embodiment of the techniques discussed in the previous section (e.g., item 11).
11. A video processing method, comprising: performing a conversion between video comprising one or more layers, the one or more layers comprising one or more video pictures, the one or more video pictures comprising one or more sub-pictures, wherein the conversion complies with a format rule that specifies that inter-layer reference pictures or long-term reference pictures are not allowed as collocated pictures for a current picture of the conversion.
The following solution illustrates an exemplary embodiment of the techniques discussed in the previous section (e.g., item 12).
12. A video processing method, comprising: performing a conversion between a video comprising a plurality of pictures and a codec representation of the video, wherein the conversion conforms to a rule specifying that a value of each of scaling_win_left_offset, scaling_win_right_offset, scaling_win_top_offset, and scaling_win_bottom_offset is the same for any two pictures in a same codec layer video sequence or codec video sequence that have the same values of pic_width_in_luma_samples and pic_height_in_luma_samples.
The following solution illustrates an exemplary embodiment of the techniques discussed in the previous section (e.g., item 13).
13. A video processing method, comprising: performing a conversion between a video comprising a plurality of pictures and a codec representation of the video, wherein the conversion conforms to a rule specifying that, in case a picture resolution or a scaling window is different for a current picture and other pictures in a same access unit, inter-layer prediction is allowed only when the current picture is an intra random access point picture.
14. The method according to any of solutions 1 to 13, wherein the converting comprises encoding the video into a codec representation.
15. The method according to any of solutions 1 to 13, wherein the converting comprises decoding the codec representation to generate pixel values of the video.
16. A video decoding apparatus comprising a processor configured to implement a method according to one or more of solutions 1 to 15.
17. A video encoding apparatus comprising a processor configured to implement a method according to one or more of solutions 1 to 15.
18. A computer program product having stored thereon computer code which, when executed by a processor, causes the processor to carry out a method according to any one of solutions 1 to 15.
19. A method, apparatus or system described in this patent document.
Fig. 13 is a flowchart representation of a method 1300 for video processing according to the present technology. The method 1300 includes performing a conversion between the video and a bitstream of the video according to rules at operation 1310. The rule specifies that the type of the adaptive parameter set is indicated before the identifier of the adaptive parameter set.
In some embodiments, a first syntax element indicating the type of the adaptive parameter set is present, followed by a second syntax element in the adaptive parameter set indicating the identifier of the adaptive parameter set. In some embodiments, the number of bits required to represent the value of the identifier of the adaptive parameter set varies based on the type of the adaptive parameter set. In some embodiments, A bits are required to represent an identifier of an adaptive parameter set of an adaptive loop filter (ALF) type, B bits are required to represent an identifier of an adaptive parameter set of a luma mapping with chroma scaling (LMCS) type, and C bits are required to represent an identifier of an adaptive parameter set of a scaling list type. In some embodiments, A = B > C. In some embodiments, A = 3.
Fig. 14 is a flowchart representation of a method 1400 for video processing in accordance with the present technology. The method 1400 includes, at operation 1410, performing a conversion between the video and a bitstream of the video according to a rule. The rule specifies that the total number of allowed filters indicated in an adaptation parameter set is determined based on codec information in the bitstream.
In some embodiments, the codec information comprises a picture type, a slice type, a coding tree structure, or layer information. In some embodiments, the total number of allowed filters includes the total number of adaptive loop filters and/or the total number of cross-component adaptive loop filters in the adaptive parameter sets of the adaptive loop filter type within a picture unit. In some embodiments, the total number of allowed filters includes a total number of adaptive loop filter classes for the luma component, a total number of alternative filters for the chroma components, and/or a total number of cross-component adaptive loop filters in an adaptive parameter set unit within a picture unit.
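The dependence of the filter budget on codec information can be sketched as a small derivation function. All numeric limits below are invented placeholders; only the idea that the totals may vary with picture/slice type, coding tree structure, and layer information comes from the embodiments above.

```cpp
// Illustrative sketch of deriving filter budgets from codec information;
// the limits are placeholders, not values taken from this document.
#include <cstdint>

enum class SliceType { I, P, B };

struct FilterBudget {
    unsigned maxAlfLumaClasses;  // ALF classes for the luma component
    unsigned maxAlfChromaAlts;   // alternative filters for chroma components
    unsigned maxCcAlfFilters;    // cross-component ALF filters
};

// The rule lets the budget depend on picture/slice type, coding tree
// structure (e.g., dual tree for I-slices), and layer information.
FilterBudget allowedFilters(SliceType st, bool dualTree, unsigned layerId) {
    FilterBudget b{25, 8, 4};        // baseline (placeholder numbers)
    if (st == SliceType::I && dualTree)
        b.maxAlfChromaAlts = 4;      // tighter chroma budget (assumed)
    if (layerId > 0)
        b.maxCcAlfFilters = 2;       // fewer CC-ALF filters in enhancement
                                     // layers (assumed)
    return b;
}
```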
In some embodiments, the converting includes encoding the video into a bitstream. In some embodiments, the converting includes decoding the video from a bitstream.
In the solutions described herein, the encoder may comply with the format rules by generating a codec representation according to the format rules. In the solutions described herein, the decoder may parse syntax elements in the codec representation using the format rules, knowing the presence or absence of syntax elements according to the format rules, to produce the decoded video.
In this document, the term "video processing" may refer to video encoding, video decoding, video compression, or video decompression. For example, video compression algorithms may be applied during conversion from a pixel representation of a video to a corresponding bitstream representation, or vice versa. The bitstream representation of a current video block may, for example, correspond to bits that are either co-located or spread out at different places within the bitstream, as defined by the syntax. For example, a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream. Furthermore, during the conversion, a decoder may parse the bitstream with the knowledge that some fields may be present or absent, based on the determination described in the above solutions. Similarly, an encoder may determine that certain syntax fields are or are not to be included and generate the codec representation accordingly by including or excluding the syntax fields from the codec representation.
The disclosed and other solutions, examples, embodiments, modules, and functional operations described in this document may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their equivalents, or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, e.g., one or more modules of computer program instructions, encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or any other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes or logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Although this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular technologies. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are shown in the figures in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few embodiments and examples are described and other implementations, enhancements and modifications can be made based on what is described and illustrated in this patent document.

Claims (20)

1. A video processing method, comprising:
performing a conversion between a video and a bitstream of the video according to a rule,
wherein the rule specifies that a type of an adaptive parameter set is indicated before an identifier of the adaptive parameter set.
2. The method of claim 1, wherein a first syntax element indicating the type of the adaptive parameter set is present in the adaptive parameter set before a second syntax element indicating the identifier of the adaptive parameter set.
3. The method according to claim 1 or 2, wherein the number of bits required to represent the value of the identifier of the adaptive parameter set varies based on the type of the adaptive parameter set.
4. The method of claim 3, wherein A bits are required to represent the identifier of the adaptive parameter set of an Adaptive Loop Filter (ALF) type, B bits are required to represent the identifier of the adaptive parameter set of a luma mapping with chroma scaling (LMCS) type, and C bits are required to represent the identifier of the adaptive parameter set of a scaling list type.
5. The method of claim 4, wherein A = B > C.
6. The method of claim 4, wherein A = 3.
7. A method of video processing, comprising:
performing a conversion between a video and a bitstream of the video according to a rule,
wherein the rule specifies that the total number of allowed filters indicated in the adaptation parameter set is determined from codec information in the bitstream.
8. The method of claim 7, wherein the codec information comprises a picture type, a slice type, a coding tree structure, or layer information.
9. The method of claim 7 or 8, wherein the total number of allowed filters comprises a total number of adaptive loop filters and/or a total number of cross-component adaptive loop filters in adaptive parameter sets of an adaptive loop filter type within a picture unit.
10. The method of claim 7 or 8, wherein the total number of allowed filters comprises a total number of adaptive loop filter classes for a luma component, a total number of alternative filters for chroma components, and/or a total number of cross-component adaptive loop filters in an adaptive parameter set unit within a picture unit.
11. The method of any of claims 1-10, wherein the converting comprises encoding the video into the bitstream.
12. The method of any of claims 1-10, wherein the converting comprises decoding the video from the bitstream.
13. A method for storing a bitstream of video, comprising:
generating a bitstream of the video from the video according to rules,
storing the bitstream in a non-transitory computer-readable recording medium,
wherein the rule specifies that a type of the adaptive parameter set is indicated before an identifier of the adaptive parameter set.
14. A method for storing a bitstream of video, comprising:
generating a bitstream of the video from the video according to rules,
storing the bitstream in a non-transitory computer-readable recording medium,
wherein the rule specifies determining a total number of allowed filters in an adaptation parameter set based on codec information in the bitstream.
15. A video decoding apparatus comprising a processor configured to implement the method of any of claims 1 to 14.
16. A video encoding apparatus comprising a processor configured to implement the method of any one of claims 1 to 14.
17. A computer program product having computer code stored thereon, which when executed by a processor causes the processor to implement the method of any one of claims 1 to 14.
18. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
generating a bitstream of the video from the video according to rules,
wherein the rule specifies that a type of an adaptive parameter set is indicated before an identifier of the adaptive parameter set.
19. A non-transitory computer-readable recording medium storing a bitstream of a video generated by a method performed by a video processing apparatus, wherein the method comprises:
generating a bitstream of the video from the video according to rules,
wherein the rule specifies determining a total number of allowed filters in an adaptation parameter set based on codec information in the bitstream.
20. A method, apparatus or system described in this patent document.