WO2020094154A1 - Improvements for region based adaptive loop filter - Google Patents

Improvements for region based adaptive loop filter

Info

Publication number
WO2020094154A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
video
picture
filter coefficients
current
Prior art date
Application number
PCT/CN2019/117149
Other languages
French (fr)
Inventor
Li Zhang
Kai Zhang
Hongbin Liu
Yue Wang
Original Assignee
Beijing Bytedance Network Technology Co., Ltd.
Bytedance Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc.
Priority to CN201980072485.6A (patent CN112997500B)
Publication of WO2020094154A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H04N 19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This patent document relates to video coding techniques, devices and systems.
  • Devices, systems and methods related to digital video coding, and specifically, to adaptive loop filtering for video coding are described.
  • the described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC) ) and future video coding standards (e.g., Versatile Video Coding (VVC) ) or codecs.
  • HEVC High Efficiency Video Coding
  • VVC Versatile Video Coding
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining a first set of filter coefficients for a current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video, and reconstructing, based on performing a filtering operation using the first set of filter coefficients, the current region of video from a corresponding bitstream representation.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining, for a first chroma component of a current region of video, a value of one or more flags in a bitstream representation of the current region of video based on a value corresponding to another color component, configuring a filtering operation based on the value of the one or more flags, and reconstructing, using the filtering operation, the current region of video from the bitstream representation.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining, based on a color format of a current region of video, a set of filter coefficients for a filtering operation, and reconstructing, using the filtering operation, the current region of video from a corresponding bitstream representation.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining, for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and performing the conversion by performing a filtering operation using the first set of filter coefficients.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining, for a conversion between a current processing unit of video and a bitstream representation of the current processing unit of video, a first flag indicating on or off condition of an adaptive loop filter for the current processing unit of video based on a second processing unit of video that is collocated with the current processing unit of video; and performing the conversion by performing a filtering operation using the first flag.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes signaling, for a conversion between a picture of video and a bitstream representation of the video, information on region numbers and/or size for the picture of video; splitting the picture into regions based on the information; and performing the conversion based on the split regions.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes parsing, for a conversion between a picture of video and a bitstream representation of the video, the bitstream representation of the video to obtain information on region numbers and/or size for the picture of video; and performing the conversion based on the information.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes determining, for a conversion between a first region of video and a bitstream representation of the first region of video, a first set of filter coefficients for the first region of video based on a second set of filter coefficients for a second region of video and a set of differences between the first and second sets of filter coefficients; and performing the conversion by performing a filtering operation using the first set of filter coefficients.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes merging at least two different regions of video to obtain a merged region; and performing a conversion between the merged region of video and a bitstream representation of the merged region by performing a filtering operation using the same selected filter coefficients, wherein an index of a first region in the at least two different regions of video is non-consecutive to an index of a second region in the at least two different regions of video.
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes making a decision, for a current coding tree unit (CTU) of video, regarding values of first flags associated with adaptive loop filter for a first component; and signaling second flags associated with adaptive loop filter for a second component based on the decision.
  • CTU current coding tree unit
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes parsing a bitstream representation of a current coding tree unit (CTU) of video to determine values of first flags for a first component of the CTU of video based on values of second flags corresponding to a second component of the CTU; configuring a filtering operation based on the values of the first flags; and performing, using the filtering operation, a conversion between the current CTU of video and the bitstream representation of the video including the current CTU.
  • CTU current coding tree unit
  • the disclosed technology may be used to provide a method for video processing.
  • This method includes making a determination regarding a color format of a current region of video; and determining adaptive loop filters for one or more chroma components based on the determination.
  • the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
  • a device that is configured or operable to perform the above-described method.
  • the device may include a processor that is programmed to implement this method.
  • a video decoder apparatus may implement a method as described herein.
  • FIG. 1 shows an example of an encoder block diagram for video coding.
  • FIGS. 2A, 2B and 2C show examples of geometry transformation-based adaptive loop filter (GALF) filter shapes.
  • GALF geometry transformation-based adaptive loop filter
  • FIG. 3 shows an example of a flow graph for a GALF encoder decision.
  • FIGS. 4A-4D show example subsampled Laplacian calculations for adaptive loop filter (ALF) classification.
  • ALF adaptive loop filter
  • FIG. 5 shows an example of a luma filter shape.
  • FIG. 6 shows an example of region division of a Wide Video Graphic Array (WVGA) sequence.
  • WVGA Wide Video Graphic Array
  • FIG. 7 shows a flowchart of an example method for video processing in accordance with the disclosed technology.
  • FIG. 8 shows a flowchart of another example method for video processing in accordance with the disclosed technology.
  • FIG. 9 shows a flowchart of yet another example method for video processing in accordance with the disclosed technology.
  • Fig. 10 is a flowchart of an example method of video processing.
  • Fig. 11 is a flowchart of an example method of video processing.
  • Fig. 12 is a flowchart of an example method of video processing.
  • Fig. 13 is a flowchart of an example method of video processing.
  • Fig. 14 is a flowchart of an example method of video processing.
  • Fig. 15 is a flowchart of an example method of video processing.
  • Fig. 16 is a flowchart of an example method of video processing.
  • Fig. 17 is a flowchart of an example method of video processing.
  • Fig. 18 is a flowchart of an example method of video processing.
  • FIG. 19 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
  • Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency.
  • a video codec converts uncompressed video to a compressed format or vice versa.
  • the compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
  • HEVC High Efficiency Video Coding
  • VVC Versatile Video Coding
  • JEM Joint Exploration Model
  • ATMVP alternative temporal motion vector prediction
  • STMVP spatial-temporal motion vector prediction
  • BIO bi-directional optical flow
  • FRUC Frame-Rate Up Conversion
  • LAMVR Locally Adaptive Motion Vector Resolution
  • OBMC Overlapped Block Motion Compensation
  • LIC Local Illumination Compensation
  • DMVR Decoder-side Motion Vector Refinement
  • Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance.
  • Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
  • Color space, also known as the color model (or color system), is an abstract mathematical model which simply describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g. RGB).
  • A color space is an elaboration of the coordinate system and sub-space.
  • YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr also written as YCBCR or Y'CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems.
  • Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components.
  • Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
  • Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
  • In 4:4:4, each of the three Y'CbCr components has the same sample rate, so there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.
  • In 4:2:2, the two chroma components are sampled at half the sample rate of luma, i.e. the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
  • In 4:2:0 as used in MPEG-2, Cb and Cr are cosited horizontally, and sited between pixels in the vertical direction (sited interstitially).
  • In 4:2:0 as used in JPEG/JFIF, H.261 and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
  • In 4:2:0 as used in DV, Cb and Cr are co-sited in the horizontal direction; in the vertical direction, they are co-sited on alternating lines. A sketch of the plane sizes implied by these formats follows.
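As a concrete illustration of the sampling schemes above, the short sketch below derives the chroma plane dimensions implied by each format; the subsampling factors are standard, while the helper name is ours.

```python
# Illustrative sketch of the chroma subsampling schemes described above.
# The (horizontal, vertical) factors are standard; the helper is ours.
SUBSAMPLING = {
    "4:4:4": (1, 1),  # no chroma subsampling
    "4:2:2": (2, 1),  # horizontal chroma resolution halved
    "4:2:0": (2, 2),  # chroma resolution halved in both directions
}

def chroma_plane_size(luma_w: int, luma_h: int, fmt: str) -> tuple:
    """Return the (width, height) of each chroma plane for a color format."""
    sx, sy = SUBSAMPLING[fmt]
    return luma_w // sx, luma_h // sy

# A 1920x1080 picture in 4:2:0 carries 960x540 Cb and Cr planes.
assert chroma_plane_size(1920, 1080, "4:2:0") == (960, 540)
```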
  • FIG. 1 shows an example of the encoder block diagram of VVC, which contains three in-loop filtering blocks: deblocking filter (DF), sample adaptive offset (SAO) and ALF.
  • DF deblocking filter
  • SAO sample adaptive offset
  • Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture to reduce the mean square errors between the original samples and the reconstructed samples, by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients.
  • FIR finite impulse response
  • ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.
  • GALF geometry transformation-based adaptive loop filter
  • For the luma component, one among 25 filters is selected for each 2×2 block, based on the direction and activity of local gradients.
  • Up to three diamond filter shapes (as shown in FIGS. 2A, 2B and 2C for the 5×5 diamond, 7×7 diamond and 9×9 diamond, respectively) can be selected for the luma component.
  • An index is signalled at the picture level to indicate the filter shape used for the luma component.
  • For the chroma components in a picture, the 5×5 diamond shape is always used.
  • Each 2×2 block is categorized into one out of 25 classes.
  • the classification index C is derived based on its directionality D and a quantized value of activity $\hat{A}$ as follows: $C = 5D + \hat{A}$.
  • To calculate D and $\hat{A}$, gradients of the horizontal, vertical and two diagonal directions are first calculated using 1-D Laplacians: $g_v = \sum_{k=i-2}^{i+3}\sum_{l=j-2}^{j+3} V_{k,l}$ with $V_{k,l} = |2R(k,l) - R(k,l-1) - R(k,l+1)|$, and $g_h = \sum_{k=i-2}^{i+3}\sum_{l=j-2}^{j+3} H_{k,l}$ with $H_{k,l} = |2R(k,l) - R(k-1,l) - R(k+1,l)|$; the diagonal gradients $g_{d0}$ and $g_{d1}$ are computed analogously. Indices i and j refer to the coordinates of the upper left sample in the 2×2 block and R(i, j) indicates a reconstructed sample at coordinate (i, j).
  • To derive the value of the directionality D, maximum and minimum values of the gradients of horizontal and vertical directions are set as $g^{max}_{h,v} = \max(g_h, g_v)$ and $g^{min}_{h,v} = \min(g_h, g_v)$, and those of the two diagonal directions as $g^{max}_{d0,d1} = \max(g_{d0}, g_{d1})$ and $g^{min}_{d0,d1} = \min(g_{d0}, g_{d1})$. D is then derived in four steps using thresholds $t_1$ and $t_2$:
  • Step 1. If both $g^{max}_{h,v} \le t_1 \cdot g^{min}_{h,v}$ and $g^{max}_{d0,d1} \le t_1 \cdot g^{min}_{d0,d1}$ are true, D is set to 0.
  • Step 2. If $g^{max}_{h,v} / g^{min}_{h,v} > g^{max}_{d0,d1} / g^{min}_{d0,d1}$, continue from Step 3; otherwise continue from Step 4.
  • Step 3. If $g^{max}_{h,v} > t_2 \cdot g^{min}_{h,v}$, D is set to 2; otherwise D is set to 1.
  • Step 4. If $g^{max}_{d0,d1} > t_2 \cdot g^{min}_{d0,d1}$, D is set to 4; otherwise D is set to 3.
  • the activity value A is calculated as $A = \sum_{k=i-2}^{i+3}\sum_{l=j-2}^{j+3} (V_{k,l} + H_{k,l})$.
  • A is further quantized to the range of 0 to 4, inclusively, and the quantized value is denoted as $\hat{A}$. For both chroma components in a picture, no classification method is applied, i.e. a single set of ALF coefficients is applied for each chroma component.
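The classification above is small enough to sketch directly. The code follows the 1-D Laplacian sums and the four-step derivation of D literally; the thresholds t1 and t2 and the uniform activity quantizer are illustrative placeholders (the normative design maps A to 0..4 through a lookup table), and the picture array is assumed padded so the 6×6 window never leaves it.

```python
import numpy as np

def galf_block_class(R: np.ndarray, i: int, j: int,
                     t1: float = 2.0, t2: float = 4.5) -> int:
    """Classify the 2x2 block whose upper-left sample is R[i, j].

    Returns C = 5*D + A_hat as derived above; t1, t2 and the activity
    quantizer are placeholders, not normative values.
    """
    gv = gh = gd0 = gd1 = activity = 0.0
    for k in range(i - 2, i + 4):          # the 6x6 surrounding window
        for l in range(j - 2, j + 4):
            V = abs(2 * R[k, l] - R[k, l - 1] - R[k, l + 1])
            H = abs(2 * R[k, l] - R[k - 1, l] - R[k + 1, l])
            gv, gh = gv + V, gh + H
            gd0 += abs(2 * R[k, l] - R[k - 1, l - 1] - R[k + 1, l + 1])
            gd1 += abs(2 * R[k, l] - R[k - 1, l + 1] - R[k + 1, l - 1])
            activity += V + H
    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd0, gd1), min(gd0, gd1)
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:   # Step 1
        D = 0
    elif hv_max * d_min > d_max * hv_min:               # Step 2, cross-multiplied
        D = 2 if hv_max > t2 * hv_min else 1            # Step 3
    else:
        D = 4 if d_max > t2 * d_min else 3              # Step 4
    A_hat = min(4, int(activity) >> 13)   # placeholder uniform quantizer
    return 5 * D + A_hat
```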
  • K is the size of the filter and $0 \le k, l \le K-1$ are coefficient coordinates, such that location (0, 0) is at the upper left corner and location (K-1, K-1) is at the lower right corner.
  • the transformations, namely diagonal flip, vertical flip and rotation, are applied to the filter coefficients f(k, l) depending on gradient values calculated for that block, as sketched below.
  • The relationship between the transformation and the four gradients of the four directions is summarized in Table 1.
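Since Table 1 is not reproduced here, the sketch below shows only the three transformations themselves; the gradient conditions that select among them are illustrative stand-ins for the table's entries.

```python
import numpy as np

def transform_coeffs(f: np.ndarray, gh: float, gv: float,
                     gd0: float, gd1: float) -> np.ndarray:
    """Apply one GALF geometric transformation to a KxK coefficient grid:
    diagonal f_D(k,l)=f(l,k), vertical flip f_V(k,l)=f(k,K-1-l),
    rotation f_R(k,l)=f(K-1-l,k). The selection logic here is illustrative."""
    if gd1 < gd0 and gh < gv:
        return f                     # no transformation
    if gd1 < gd0:
        return f.T                   # diagonal flip
    if gh < gv:
        return f[:, ::-1].copy()     # vertical flip
    return np.rot90(f, k=-1)         # rotation: out[k, l] = f[K-1-l, k]

# One stored filter thus serves blocks whose gradients differ only in
# orientation, which is part of why 25 luma classes suffice.
```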
  • GALF filter parameters are signaled for the first CTU, i.e., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients could be signaled. To reduce bit overhead, filter coefficients of different classifications can be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as GALF coefficients of a current picture. The current picture may choose to use GALF coefficients stored for the reference pictures, and bypass the GALF coefficients signaling. In this case, only an index to one of the reference pictures is signaled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.
  • a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding set of filters may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (i.e., 6 in current JEM), a new set of filters overwrites the oldest set in decoding order; that is, a first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplications, a set could only be added to the list when the corresponding picture doesn't use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer.
  • each array assigned by temporal layer index may compose filter sets of previously decoded pictures with equal or lower TempIdx.
  • the k-th array is assigned to be associated with TempIdx equal to k, and it only contains filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture will be used to update those arrays associated with equal or higher TempIdx.
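The maintenance rules above amount to one FIFO per temporal layer. A sketch under exactly those rules (maximum size 6, no insertion for pictures that themselves used temporal prediction, updates propagated to equal or higher TempIdx) follows; the class and method names are ours.

```python
from collections import deque

MAX_CANDIDATES = 6  # maximum allowed list size in current JEM

class GalfCandidateLists:
    """One FIFO candidate list of decoded filter sets per temporal layer."""

    def __init__(self, num_layers: int):
        # deque(maxlen=...) drops the oldest entry on overflow: the FIFO rule.
        self.lists = [deque(maxlen=MAX_CANDIDATES) for _ in range(num_layers)]

    def on_picture_decoded(self, temp_idx, filter_set, used_temporal_pred):
        if used_temporal_pred:          # avoid duplications in the lists
            return
        for k in range(temp_idx, len(self.lists)):  # equal or higher TempIdx
            self.lists[k].append(filter_set)

    def candidates(self, temp_idx):
        # Array k only holds sets from pictures with TempIdx <= k.
        return list(self.lists[temp_idx])
```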
  • Temporal prediction of GALF coefficients is used for inter coded frames to minimize signaling overhead.
  • when temporal prediction is not available (e.g., for intra frames), a set of 16 fixed filters is assigned to each class.
  • a flag for each class is signaled and if required, the index of the chosen fixed filter.
  • the coefficients of the adaptive filter f(k, l) can still be sent for this class, in which case the coefficients of the filter which will be applied to the reconstructed image are the sum of both sets of coefficients.
  • the filtering process of the luma component can be controlled at the CU level.
  • a flag is signaled to indicate whether GALF is applied to the luma component of a CU.
  • for the chroma component, whether GALF is applied or not is indicated at the picture level only.
  • each sample R(i, j) within the block is filtered, resulting in sample value R′(i, j) as shown below, where L denotes the filter length and f(k, l) denotes the decoded filter coefficients: $R'(i,j) = \sum_{k=-L/2}^{L/2}\sum_{l=-L/2}^{L/2} f(k,l) \cdot R(i+k, j+l)$.
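In sketch form, with the diamond support represented as a mapping from offsets to decoded coefficients and border samples assumed padded by the caller:

```python
def alf_filter_sample(R, i, j, coeffs):
    """R'(i,j) = sum over (k,l) of f(k,l) * R(i+k, j+l), as above.

    `coeffs` maps each offset (k, l) inside the diamond support to the
    decoded coefficient f(k, l); offsets outside the support are absent.
    """
    return sum(c * R[i + k, j + l] for (k, l), c in coeffs.items())
```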
  • ALF is the last stage of in-loop filtering. There are two stages in this process.
  • the first stage is filter coefficient derivation. To train the filter coefficients, the encoder classifies reconstructed pixels of the luminance component into 16 regions, and one set of filter coefficients is trained for each category using Wiener-Hopf equations to minimize the mean squared error between the original frame and the reconstructed frame. To reduce the redundancy between these 16 sets of filter coefficients, the encoder will adaptively merge them based on the rate-distortion performance. At its maximum, 16 different filter sets can be assigned for the luminance component and only one for the chrominance components.
  • the second stage is a filter decision, which includes both the frame level and LCU level. First, the encoder decides whether frame-level adaptive loop filtering is performed. If frame-level ALF is on, the encoder further decides whether LCU-level ALF is performed. A sketch of the training step follows.
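The training step in the first stage can be viewed as an ordinary least-squares problem per region; a minimal sketch is below, assuming the autocorrelation matrix is nonsingular (a real encoder would also quantize the solved coefficients and typically regularize).

```python
import numpy as np

def train_wiener_filter(rec_patches: np.ndarray, orig: np.ndarray) -> np.ndarray:
    """Derive one region's filter by minimizing the mean squared error.

    rec_patches: (N, T) array whose row n holds the T reconstructed samples
    in the filter support around sample n; orig: (N,) original samples.
    Solves the Wiener-Hopf normal equations Rxx w = rxy.
    """
    Rxx = rec_patches.T @ rec_patches  # autocorrelation of the reconstruction
    rxy = rec_patches.T @ orig         # cross-correlation with the original
    return np.linalg.solve(Rxx, rxy)
```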
  • the filter shape adopted in AVS-2 is a 7×7 cross shape superposing a 3×3 square shape, as illustrated in FIG. 5, for both luminance and chroma components.
  • Each square in FIG. 5 corresponds to a sample. Therefore, a total of 17 samples are used to derive a filtered value for the sample of position C8.
  • a point-symmetrical filter is utilized with only nine coefficients left, ⁇ C0, C1, ..., C8 ⁇ , which reduces the number of filter coefficients to half as well as the number of multiplications in filtering.
  • the point-symmetrical filter can also halve the computation for one filtered sample, e.g., only 9 multiplications and 14 add operations for one filtered sample, as sketched below.
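A sketch of that point-symmetric filtering follows. The eight half-support offsets trace a 7×7 cross plus a 3×3 square (17 taps in total), but the exact assignment of C0..C7 to positions is illustrative; the normative layout is the one shown in FIG. 5.

```python
# Half of the 17-tap support; the other half follows by point symmetry
# about the centre tap C8. The C0..C7 ordering below is ours.
HALF_SUPPORT = [(-3, 0), (-2, 0), (-1, -1), (-1, 0), (-1, 1),
                (0, -3), (0, -2), (0, -1)]

def avs2_filter_sample(R, y, x, c):
    """Filter one sample with coefficients c[0..8]: each symmetric pair of
    samples is summed before multiplying, so only 9 multiplications occur."""
    out = c[8] * R[y, x]
    for i, (dy, dx) in enumerate(HALF_SUPPORT):
        out += c[i] * (R[y + dy, x + dx] + R[y - dy, x - dx])
    return out
```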
  • AVS-2 adopts region-based multiple adaptive loop filters for luminance component.
  • the luminance component is divided into 16 roughly-equal-size basic regions where each basic region is aligned with largest coding unit (LCU) boundaries as shown in FIG. 6, and one Wiener filter is derived for each region.
  • LCU largest coding unit
  • these regions can be merged into fewer larger regions, which share the same filter coefficients.
  • each region is assigned with an index according to a modified Hilbert order based on the image prior correlations. Two regions with successive indices can be merged based on rate-distortion cost.
  • mapping information between regions should be signaled to the decoder.
  • in AVS-2, the number of basic regions is used to represent the merge results, and the filter coefficients are compressed sequentially according to the region order. For example, when {0, 1}, {2, 3, 4}, {5, 6, 7, 8, 9} and the remaining basic regions are merged into one region respectively, only three integers are coded to represent this merge map, i.e., 2, 3, 5 (see the sketch below).
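The merge-map coding can be sketched as below, with the size of the last merged region implied by the total of 16 basic regions; the function name is ours.

```python
NUM_BASIC_REGIONS = 16

def decode_merge_map(coded_counts):
    """Expand the coded basic-region counts into per-region filter indices.

    [2, 3, 5] -> basic regions {0,1} use filter 0, {2,3,4} use filter 1,
    {5..9} use filter 2, and the remaining regions use filter 3.
    """
    sizes = list(coded_counts) + [NUM_BASIC_REGIONS - sum(coded_counts)]
    mapping = []
    for filter_idx, n in enumerate(sizes):
        mapping += [filter_idx] * n
    return mapping

assert decode_merge_map([2, 3, 5]) == [0, 0, 1, 1, 1, 2, 2, 2, 2, 2,
                                       3, 3, 3, 3, 3, 3]
```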
  • the sequence switch flag, adaptive_loop_filter_enable, is used to control whether the adaptive loop filter is applied for the whole sequence.
  • the image switch flags, picture_alf_enble[i], control whether ALF is applied for the corresponding i-th image component. Only if picture_alf_enble[i] is enabled will the corresponding LCU-level flags and filter coefficients for that color component be transmitted.
  • the LCU-level flags, lcu_alf_enable[k], control whether ALF is enabled for the corresponding k-th LCU, and are interleaved into the slice data.
  • the decision of the flags at all these levels is based on the rate-distortion cost. This high flexibility further enables ALF to improve the coding efficiency much more significantly. A sketch of the flag hierarchy follows.
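That hierarchy can be sketched as conditional parsing. `bs.read_flag()` is a hypothetical bitstream-reader call, and the interleaving of LCU flags into slice data is ignored here for brevity.

```python
def parse_alf_switches(bs, num_components=3, num_lcus=0):
    """Sequence flag gates picture flags; picture flags gate LCU flags."""
    if not bs.read_flag():           # adaptive_loop_filter_enable
        return None                  # ALF off for the whole sequence
    picture_on = [bs.read_flag() for _ in range(num_components)]
    lcu_on = {}
    for i, on in enumerate(picture_on):
        if on:  # LCU flags (and coefficients) sent only for enabled components
            lcu_on[i] = [bs.read_flag() for _ in range(num_lcus)]
    return picture_on, lcu_on
```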
  • for each (merged) region, one set of filter coefficients may be transmitted.
  • the region size is fixed for all kinds of videos regardless of the video resolution. For a video with high resolution, e.g., 4096×2048, splitting into 16 regions may result in regions that are too big.
  • the GALF design in VVC has the following problems:
  • Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies.
  • the improvement of adaptive loop filtering, based on the disclosed technology, which may enhance both existing and future video coding standards, is elucidated in the following examples described for various implementations.
  • the examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
  • Example 1 It is proposed that filter coefficients of one region within the current slice/picture/tile group may be predicted/derived from those used in a (e.g., collocated) region in different pictures.
  • one flag for a region may be firstly signaled to indicate whether the filter coefficients are predicted/derived from those used in a collocated region.
  • the collocated region should be located in a reference picture of current picture.
  • an index may be signaled to indicate from which picture the filter coefficients may be predicted/derived.
  • another flag for a region may be signaled to indicate whether its filter coefficients are predicted/derived from the same picture as another region (e.g., its neighboring region) .
  • additional information is signaled to indicate which region the filter coefficients are predicted/derived from. A decoder-side sketch of this example follows.
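A decoder-side sketch of Example 1 is given below. The reader methods (`read_flag`, `read_index`) and the decoded-picture-buffer lookup are hypothetical stand-ins for whatever syntax an actual codec would define.

```python
def derive_region_filter(bs, region_id, dpb, decode_explicit_coeffs):
    """Derive one region's ALF coefficients per Example 1 (sketch only)."""
    if bs.read_flag():        # predicted/derived from a collocated region?
        ref_idx = bs.read_index()         # which reference picture to follow
        return dpb[ref_idx].region_coeffs[region_id]
    return decode_explicit_coeffs(bs)     # otherwise coded explicitly
```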
  • Example 2 One flag may be signaled in a higher level (i.e., a larger set of video data, such as picture/slice/tile group/tile) to indicate whether all regions’ filter coefficients are predicted/derived from their corresponding collocated regions in different pictures.
  • the different pictures should be reference pictures of current picture.
  • an index may be signaled to indicate from which picture the filter coefficients may be predicted/derived.
  • Example 3 The ALF on/off flags of a region or CTU may be inherited from (e.g., collocated) region/ (e.g., collocated) CTU in different pictures.
  • the collocated region should be located in a reference picture of current picture.
  • One flag is signaled in a higher level (i.e., a larger set of video data, such as picture/slice/tile group/tile) to indicate whether all regions’ on/off flags are inherited from their corresponding collocated regions in different pictures.
  • An index may be signaled in picture/slice header/tile group header to indicate from which picture the on/off flags may be inherited.
  • Example 4 Region size or the number of regions may be signaled in an SPS, a VPS, a PPS, a picture header or a slice header.
  • the number of regions or region sizes may be dependent on the width and/or height of the picture, and/or picture/slice types.
  • Example 5 Predictive coding of filter coefficients associated with two regions may be utilized.
  • the second region may be the one whose index is consecutive to that of the first region.
  • the second region may be the one with the largest index value of the previously coded regions with ALF enabled.
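Example 5 amounts to delta coding between regions. A sketch follows, with a hypothetical signed-value reader `bs.read_signed()` and the second of the two predictor rules above; both names are ours.

```python
def choose_predictor(coded_regions):
    """One candidate rule: the previously coded, ALF-enabled region with
    the largest index (the successive-index rule is the other option)."""
    enabled = [r for r in coded_regions if r.alf_enabled]
    return max(enabled, key=lambda r: r.index) if enabled else None

def decode_coeffs_with_prediction(bs, predictor_coeffs):
    """Reconstruct coefficients as predictor plus signaled differences."""
    return [c + bs.read_signed() for c in predictor_coeffs]
```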
  • Example 6 It is proposed that different regions even with non-successive indices could be merged.
  • Merged regions may share the same set of selected filters.
  • an index of a set of selected filter coefficients may be transmitted.
  • Example 7 For a given CTU, the signaling of ALF on/off flags for chroma component may be dependent on the on/off values for the luma component.
  • the signaling of ALF on/off flags for a chroma component may be dependent on the on/off values for another chroma component, e.g., Cb depending on Cr, or Cr depending on Cb.
  • the ALF on/off values of one color component may be used as context for coding the ALF on/off values of another color component.
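A sketch of the dependent signaling in Example 7: one component's on/off flag is either inferred or coded with a context derived from another component's value. `bs.encode_flag` is a hypothetical context-coded writer.

```python
def signal_alf_flag(bs, other_component_on: bool, this_component_on: bool,
                    infer_off_when_other_off: bool = True):
    """Code one component's ALF on/off flag conditioned on another's."""
    if infer_off_when_other_off and not other_component_on:
        return               # one possible design: inferred off, not coded
    # The other component's value selects the context model, so correlated
    # on/off decisions cost fewer bits to code.
    bs.encode_flag(this_component_on, context=int(other_component_on))
```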
  • Example 8 How to handle ALF for chroma color components may depend on the color format.
  • classification for chroma components is dependent on the color format. For example, for 4:4:4, block-based classification for chroma component may be applied while for 4:2:0, it is disallowed.
  • two chroma components may use different filters, or different sets of filters, or the selection of filters may be based on classification results of each color component.
  • FIG. 7 shows a flowchart of an exemplary method for video processing.
  • the method 700 includes, at step 710, determining a first set of filter coefficients for a current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video.
  • the method 700 includes, at step 720, reconstructing, based on performing a filtering operation using the first set of filter coefficients, the current region of video from a corresponding bitstream representation.
  • the filtering operations include loop filtering (or adaptive loop filtering) .
  • the second region of video is from a different picture than a current picture of the current region of video.
  • the different picture is a reference picture of the current picture.
  • the first set of filter coefficients is predicted from the second set of filter coefficients using a prediction operation.
  • the prediction operation is controlled based on a flag in the bitstream representation.
  • the first set of filter coefficients is based on the second set of filter coefficients and a set of differences between the first and second sets of filter coefficients.
  • an index of the second region of video is consecutive to an index of the current region of video.
  • an index of the second region of video corresponds to a largest index value of previously coded regions for which the filtering operation was enabled.
  • an index of the second region of video is non-consecutive to an index of the current region of video.
  • FIG. 8 shows a flowchart of an exemplary method for video processing.
  • the method 800 includes, at step 810, determining, for a first chroma component of a current region of video, a value of one or more flags in a bitstream representation of the current region of video based on a value corresponding to another color component.
  • the color component may be a luma component or another chroma component, e.g., Y, Cb and Cr for YUV files.
  • the method 800 includes, at step 820, configuring a filtering operation based on the value of the one or more flags.
  • the filtering operations include loop filtering (or adaptive loop filtering) .
  • the method 800 includes, at step 830, reconstructing, using the filtering operation, the current region of video from the bitstream representation.
  • the value of the one or more flags corresponding to the first chroma component is based on a value of one or more flags corresponding to a luma component of the current region of video.
  • the value of the one or more flags corresponding to the first chroma component is based on a value of one or more flags corresponding to a second chroma component of the current region of video.
  • the first chroma component is a blue-difference chroma component and the second chroma component is a red-difference chroma component.
  • the first chroma component is a red-difference chroma component and the second chroma component is a blue-difference chroma component.
  • the value of the one or more flags corresponding to the first chroma component is based on a color format of the current region of video.
  • FIG. 9 shows a flowchart of an exemplary method for video processing.
  • the method 900 includes, at step 910, determining, based on a color format of a current region of video, a set of filter coefficients for a filtering operation.
  • the filtering operations include loop filtering (or adaptive loop filtering) .
  • the method 900 includes, at step 920, reconstructing, using the filtering operation, the current region of video from a corresponding bitstream representation.
  • different sets of filter coefficients are used for the filtering operation for different chroma components of the current region of video.
  • multiple sets of filter coefficients are used for the filtering operation for at least one chroma component of the current region of video.
  • the color format is 4:4:4.
  • FIG. 10 shows a flowchart of an exemplary method for video processing.
  • the method 1000 includes, determining (1002) , for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and performing (1004) the conversion by performing a filtering operation using the first set of filter coefficients.
  • the first set of filter coefficients is predicted or derived from the second set of filter coefficients.
  • the filtering operation comprises loop filtering
  • the first set of filter coefficients is the filter coefficients for adaptive loop filters of the loop filtering.
  • the current region of video is from a first set of video data
  • the second region of video is from a second set of video data different from the first set of video data
  • the set of video data including one of slice, tile, tile group, picture.
  • the second region of video is from a different picture than a current picture of the current region of video.
  • the different picture is a reference picture of the current picture.
  • the method 1000 further comprises: for at least one region of the video, signaling a first flag for the region to indicate whether a set of filter coefficients for the region is predicted/derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
  • the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain a first flag for the region to indicate whether a set of filter coefficients for the region is predicted or derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
  • the method 1000 further comprises: for at least one region of the video, signaling an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
  • the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
  • the method 1000 further comprises: for at least one region of the video, signaling a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
  • the another region is a neighboring region of the region.
  • the method 1000 further comprises: signaling an additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
  • the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
  • the another region is a neighboring region of the region.
  • the method 1000 further comprises: parsing the bitstream representation of the region to obtain an additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
  • the method 1000 further comprises: signaling a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
  • the different pictures are reference pictures of current picture.
  • the method 1000 further comprises: signaling an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
  • the method 1000 further comprises: parsing the bitstream representation of the region to obtain a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
  • the different pictures are reference pictures of current picture.
  • the method 1000 further comprises: parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
  • FIG. 11 shows a flowchart of an exemplary method for video processing.
  • the method 1100 includes, determining (1102) , for a conversion between a current processing unit of video and a bitstream representation of the current processing unit of video, a first flag indicating on or off condition of an adaptive loop filter for the current processing unit of video based on a second processing unit of video that is collocated with the current processing unit of video; and performing (1104) the conversion by performing a filtering operation based on the first flag.
  • the first flag for the current processing unit of video is inherited from the second processing unit of video.
  • the filtering operation comprises loop filtering.
  • the processing unit includes one of a region and a coding tree unit (CTU).
  • CTU coding tree unit
  • the current processing unit of video is from a first set of video data
  • the second processing unit of video is from a second set of video data different from the first set of video data, the set of video data including one of slice, tile, tile group, picture.
  • the second processing unit of video is from a different picture than a current picture of the current processing unit of video.
  • the different picture is a reference picture of the current picture.
  • the method 1100 further comprises: signaling a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
  • the method 1100 further comprises: parsing the bitstream representation of the region to obtain a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
  • the method 1100 further comprises: signaling an index of picture in picture header, slice header, tile group header to indicate from which picture the first flags of the first processing unit are inherited.
  • the method 1100 further comprises: parsing the bitstream representation of the region to obtain an index of picture in picture header, slice header, tile group header to indicate from which picture the first flags of the first processing unit are inherited.
  • FIG. 12 shows a flowchart of an exemplary method for video processing.
  • the method 1200 includes, signaling (1202) , for a conversion between a picture of video and a bitstream representation of the video, information on region numbers and/or size for the picture of video; splitting (1204) the picture into regions based on the information; and performing (1206) the conversion based on the split regions.
  • SPS Sequence Parameter Set
  • VPS Video Parameter Set
  • PPS Picture Parameter Set
  • the method 1200 further comprises: signaling an index to at least one of a plurality of sets of region numbers and/or size, wherein the plurality of sets of region numbers and/or size are pre-defined.
  • the region numbers and/or size is dependent on width and/or height of the picture and/or slice types.
  • FIG. 13 shows a flowchart of an exemplary method for video processing.
  • the method 1300 includes, parsing (1302) , for a conversion between a picture of video and a bitstream representation of the video, the bitstream representation of the video to obtain information on region numbers and/or size for the picture of video; and performing (1304) the conversion based on the information.
  • SPS Sequence Parameter Set
  • VPS Video Parameter Set
  • PPS Picture Parameter Set
  • the method 1300 further comprises: parsing the bitstream representation of the video to obtain an index to at least one of a plurality of sets of region numbers and/or size, wherein the plurality of sets of region numbers and/or size are pre-defined.
  • the region numbers and/or size is dependent on width and/or height of the picture and/or slice types.
  • FIG. 14 shows a flowchart of an exemplary method for video processing.
  • the method 1400 includes, determining (1402) , for a conversion between a first region of video and a bitstream representation of the first region of video, a first set of filter coefficients for the first region of video based on a second set of filter coefficients for a second region of video and a set of differences between the first and second sets of filter coefficients; and performing (1404) the conversion by performing a filtering operation using the first set of filter coefficients.
  • the set of differences is signaled.
  • an index of the second region of video is consecutive to an index of the first region of video.
  • an index of the second region of video corresponds to a largest index value of previously coded regions for which the filtering operation was enabled.
  • the filtering operation includes adaptive loop filtering.
  • FIG. 15 shows a flowchart of an exemplary method for video processing.
  • the method 1500 includes, merging (1502) at least two different regions of video to obtain merged regions; and performing (1504) a conversion between the merged regions of video and a bitstream representation of the merged regions by performing a filtering operation using the same selected filter coefficients, wherein an index of a first region in the at least two different regions of video is non-consecutive to an index of a second region in the at least two different regions of video.
  • the merged regions share one and the same set of selected filter coefficients.
  • the method 1500 further comprises: signaling, in a picture header, which regions of video are merged.
  • the method 1500 further comprises: for each region, an index of a set of selected filter coefficients is transmitted.
  • FIG. 16 shows a flowchart of an exemplary method for video processing.
  • the method 1600 includes making a decision (1602) , for a current coding tree unit (CTU) of video, regarding values of first flags associated with adaptive loop filter for a first component; and signaling (1604) second flags associated with adaptive loop filter for a second component based on the decision.
  • CTU current coding tree unit
  • the first component comprises luma component and the second component comprises one or more chroma components.
  • in response to the decision indicating that the adaptive loop filter for the luma component is disabled, the adaptive loop filter for the one or more chroma components is automatically disabled for the CTU without any signaling.
  • the first component is a blue-difference (Cb) chroma component and the second component is a red-difference (Cr) chroma component.
  • Cb blue-difference
  • Cr red-difference
  • the first component is a red-difference (Cr) chroma component and the second component is a blue-difference (Cb) chroma component.
  • Cr red-difference
  • Cb blue-difference
  • the values of the first flags associated with the adaptive loop filter for one color component are used as context for coding the values of the second flags associated with the adaptive loop filter for another color component.
  • the method 1600 further comprises: determining an enabling/disabling of a filtering operation using the second flags, performing, based on the determination, a conversion between the current CTU of video and a bitstream representation of the video including the current CTU.
  • FIG. 17 shows a flowchart of an exemplary method for video processing.
  • the method 1700 includes, parsing (1702) a bitstream representation of a current coding tree unit (CTU) of video to determine values of first flags for a first component of the CTU of video based on values of second flags corresponding to a second component of the CTU; configuring (1704) a filtering operation based on the values of the first flags; and performing (1706) , using the filtering operation, a conversion between the current CTU of video and the bitstream representation of the video including the current CTU.
  • CTU current coding tree unit
  • the second component comprises luma component and the first component comprises one or more chroma components.
  • the second component is a blue-difference (Cb) chroma component and the first chroma component is a red-difference (Cr) chroma component.
  • the second component is a red-difference (Cr) chroma component and the first chroma component is a blue-difference (Cb) chroma component.
  • the values of the second flags associated with the adaptive loop filter for one color component are used as context for decoding the values of the first flags associated with the adaptive loop filter for another color component.
  • FIG. 18 shows a flowchart of an exemplary method for video processing.
  • the method 1800 includes, making a determination (1802) regarding a color format of a current region of video; and determining (1804) adaptive loop filters for one or more chroma components based on the determination.
  • whether to apply classification for the one or more chroma components is based on the determination.
  • whether to use multiple sets of filters for the one or more chroma components is based on the determination.
  • whether to use different sets of filters for the two chroma components is based on the determination.
  • the two chroma components may use different filters, or different sets of filters, or the selection of filters may be based on classification results of each color component.
  • the method 1800 further comprises: performing a conversion between the current region of video and a bitstream representation of the current region by performing a filtering operation using the adaptive loop filters for one or more chroma components.
  • the filtering operation comprises loop filtering.
  • the conversion generates the region of video from the bitstream representation.
  • the conversion generates the bitstream representation from the region of video.
  • FIG. 19 is a block diagram of a video processing apparatus 1900.
  • the apparatus 1900 may be used to implement one or more of the methods described herein.
  • the apparatus 1900 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
  • the apparatus 1900 may include one or more processors 1902, one or more memories 1904 and video processing hardware 1906.
  • the processor (s) 1902 may be configured to implement one or more methods (including, but not limited to, methods 700, 800 and 900) described in the present document.
  • the memory (memories) 1904 may be used for storing data and code used for implementing the methods and techniques described herein.
  • the video processing hardware 1906 may be used to implement, in hardware circuitry, some techniques described in the present document.
  • the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 19.
  • Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) .
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

Abstract

Improvements for region based adaptive loop filter are described. In an exemplary aspect, a method for video processing includes determining, for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and performing the conversion by performing a filtering operation using the first set of filter coefficients.

Description

IMPROVEMENTS FOR REGION BASED ADAPTIVE LOOP FILTER
Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2018/114834, filed on November 9, 2018. The entire disclosure of International Patent Application No. PCT/CN2018/114834 is incorporated by reference as part of the disclosure of this application.
TECHNICAL FIELD
This patent document relates to video coding techniques, devices and systems.
BACKGROUND
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
SUMMARY
Devices, systems and methods related to digital video coding, and specifically, to adaptive loop filtering for video coding are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC) ) and future video coding standards (e.g., Versatile Video Coding (VVC) ) or codecs.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining a first set of filter coefficients for a current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video, and reconstructing, based on performing a filtering operation using the first set of filter coefficients, the current region of video from a corresponding bitstream representation.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining, for a first chroma component of a current region of video, a value of one or more flags in a bitstream representation of the current region of video based on a value corresponding to another color component, configuring a filtering operation based on the value of the one or more flags, and reconstructing, using the filtering operation, the current region of video from the bitstream representation.
In yet another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining, based on a color format of a current region of video, a set of filter coefficients for a filtering operation, and reconstructing, using the filtering operation, the current region of video from a corresponding bitstream representation.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining, for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and performing the conversion by performing a filtering operation using the first set of filter coefficients.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining, for a conversion between a current processing unit of video and a bitstream representation of the current processing unit of video, a first flag indicating on or off condition of an adaptive loop filter for the current processing unit of video based on a second processing unit of video that is collocated with the current processing unit of video; and performing the conversion by performing a filtering operation using the first flag.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes signaling, for a conversion between a picture of video and a bitstream representation of the video, information on region numbers and/or size for the picture of video; splitting the picture into regions based on the information; and performing the conversion based on the split regions.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes parsing, for a conversion between a picture of video and a bitstream representation of the video, the bitstream representation of the video to obtain information on region numbers and/or size for the picture of video; and performing the conversion based on the information.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes determining, for a conversion between a first region of video and a bitstream representation of the first region of video, a first set of filter coefficients for the first region of video based on a second set of filter coefficients for a second region of video and a set of differences between the first and second sets of filter coefficients; and performing the conversion by performing a filtering operation using the first set of filter coefficients.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes merging at least two different regions of video to obtain a merged region; and performing a conversion between the merged region of video and a bitstream representation of the merged region by performing a filtering operation using the same selected filter coefficients, wherein an index of a first region in the at least two different regions of video is non-consecutive to an index of a second region in the at least two different regions of video.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes making a decision, for a current coding tree unit (CTU) of video, regarding values of first flags associated with adaptive loop filter for a first component; and signaling second flags associated with adaptive loop filter for a second component based on the decision.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes parsing a bitstream representation of a current coding tree unit (CTU) of video to determine values of first flags for a first component of the CTU of video based on values of second flags corresponding to a second component of the CTU; configuring a filtering operation based on the values of the first flags; and performing, using the filtering operation, a conversion between the current CTU of video and the bitstream representation of the video including the current CTU.
In another representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes making a determination regarding a color format of a current region of video; and determining adaptive loop filters for one or more chroma components based on the determination.
In yet another representative aspect, the above-described method is embodied in the  form of processor-executable code and stored in a computer-readable program medium.
In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.
In yet another representative aspect, a video decoder apparatus may implement a method as described herein.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example of an encoder block diagram for video coding.
FIGS. 2A, 2B and 2C show examples of geometry transformation-based adaptive loop filter (GALF) filter shapes.
FIG. 3 shows an example of a flow graph for a GALF encoder decision.
FIGS. 4A-4D show example subsampled Laplacian calculations for adaptive loop filter (ALF) classification.
FIG. 5 shows an example of a luma filter shape.
FIG. 6 shows an example of region division of a Wide Video Graphics Array (WVGA) sequence.
FIG. 7 shows a flowchart of an example method for video processing in accordance with the disclosed technology.
FIG. 8 shows a flowchart of another example method for video processing in accordance with the disclosed technology.
FIG. 9 shows a flowchart of yet another example method for video processing in accordance with the disclosed technology.
FIG. 10 is a flowchart of an example method of video processing.
FIG. 11 is a flowchart of an example method of video processing.
FIG. 12 is a flowchart of an example method of video processing.
FIG. 13 is a flowchart of an example method of video processing.
FIG. 14 is a flowchart of an example method of video processing.
FIG. 15 is a flowchart of an example method of video processing.
FIG. 16 is a flowchart of an example method of video processing.
FIG. 17 is a flowchart of an example method of video processing.
FIG. 18 is a flowchart of an example method of video processing.
FIG. 19 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.
DETAILED DESCRIPTION
Due to the increasing demand for higher resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video to a compressed format or vice versa. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
In some embodiments, future video coding technologies are explored using a reference software known as the Joint Exploration Model (JEM) . In JEM, sub-block based prediction is adopted in several coding tools, such as affine prediction, alternative temporal motion vector prediction (ATMVP) , spatial-temporal motion vector prediction (STMVP) , bi-directional optical flow (BIO) , Frame-Rate Up Conversion (FRUC) , Locally Adaptive Motion Vector Resolution (LAMVR) , Overlapped Block Motion Compensation (OBMC) , Local Illumination Compensation (LIC) , and Decoder-side Motion Vector Refinement (DMVR) .
Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H. 265) and future standards to improve runtime performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
1 Examples of color space and chroma subsampling
Color space, also known as the color model (or color system), is an abstract mathematical model which describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g., RGB). In essence, a color space is an elaboration of a coordinate system and its sub-space.
For video compression, the most frequently used color spaces are YCbCr and RGB.
YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y'CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
1.1 The 4:4:4 color format
Each of the three Y'CbCr components has the same sample rate, so there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.
1.2 The 4:2:2 color format
The two chroma components are sampled at half the sample rate of luma, i.e., the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
1.3 The 4:2:0 color format
In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved. The data rate is thus the same. Cb and Cr are each subsampled at a factor of 2 both horizontally and vertically. There are three variants of 4:2:0 schemes, having different horizontal and vertical siting.
○ In MPEG-2, Cb and Cr are cosited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially) .
○ In JPEG/JFIF, H. 261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
○ In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.
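To make the three formats concrete, the following minimal sketch (ours, not from the patent text; the function name is an illustrative assumption) derives the dimensions of one chroma plane from the luma plane:

```python
def chroma_dimensions(luma_width, luma_height, color_format):
    """Return (width, height) of one chroma plane."""
    if color_format == "4:4:4":    # no chroma subsampling
        return luma_width, luma_height
    if color_format == "4:2:2":    # horizontal chroma resolution halved
        return luma_width // 2, luma_height
    if color_format == "4:2:0":    # halved both horizontally and vertically
        return luma_width // 2, luma_height // 2
    raise ValueError("unsupported color format: " + color_format)

assert chroma_dimensions(1920, 1080, "4:2:0") == (960, 540)
```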
2 Examples of the coding flow of a typical video codec
FIG. 1 shows an example of the encoder block diagram of VVC, which contains three in-loop filtering blocks: deblocking filter (DF), sample adaptive offset (SAO) and ALF. Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture to reduce the mean square errors between the original samples and the reconstructed samples by adding an offset and by applying a finite impulse response (FIR) filter, respectively, with coded side information signaling the offsets and filter coefficients. ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.
3 Examples of a geometry transformation-based adaptive loop filter in JEM
In the JEM, a geometry transformation-based adaptive loop filter (GALF) with block-based filter adaptation is applied. For the luma component, one among 25 filters is selected for each 2×2 block, based on the direction and activity of local gradients.
3.1 Examples of filter shape
In the JEM, up to three diamond filter shapes (as shown in FIGS. 2A, 2B and 2C for the 5×5 diamond, 7×7 diamond and 9×9 diamond, respectively) can be selected for the luma component. An index is signalled at the picture level to indicate the filter shape used for the luma component. For chroma components in a picture, the 5×5 diamond shape is always used.
3.1.1 Block classification
Each 2×2 block is categorized into one out of 25 classes. The classification index C is derived based on its directionality D and a quantized value of activity $\hat{A}$ as follows:

$C = 5D + \hat{A}$     (1)
To calculate D and $\hat{A}$, gradients of the horizontal, vertical and two diagonal directions are first calculated using the 1-D Laplacian:

$g_v = \sum_{k=i-2}^{i+3} \sum_{l=j-2}^{j+3} V_{k,l}, \quad V_{k,l} = \left| 2R(k,l) - R(k,l-1) - R(k,l+1) \right|$     (2)

$g_h = \sum_{k=i-2}^{i+3} \sum_{l=j-2}^{j+3} H_{k,l}, \quad H_{k,l} = \left| 2R(k,l) - R(k-1,l) - R(k+1,l) \right|$     (3)

$g_{d1} = \sum_{k=i-2}^{i+3} \sum_{l=j-2}^{j+3} D1_{k,l}, \quad D1_{k,l} = \left| 2R(k,l) - R(k-1,l-1) - R(k+1,l+1) \right|$     (4)

$g_{d2} = \sum_{k=i-2}^{i+3} \sum_{l=j-2}^{j+3} D2_{k,l}, \quad D2_{k,l} = \left| 2R(k,l) - R(k-1,l+1) - R(k+1,l-1) \right|$     (5)
Indices i and j refer to the coordinates of the upper left sample in the 2×2 block and R(i, j) indicates a reconstructed sample at coordinate (i, j).
Then the maximum and minimum values of the gradients of the horizontal and vertical directions are set as:

$g^{max}_{h,v} = \max(g_h, g_v), \quad g^{min}_{h,v} = \min(g_h, g_v)$     (6)

and the maximum and minimum values of the gradients of the two diagonal directions are set as:

$g^{max}_{d1,d2} = \max(g_{d1}, g_{d2}), \quad g^{min}_{d1,d2} = \min(g_{d1}, g_{d2})$     (7)
To derive the value of the directionality D, these values are compared against each other and with two thresholds $t_1$ and $t_2$:

Step 1. If both $g^{max}_{h,v} \le t_1 \cdot g^{min}_{h,v}$ and $g^{max}_{d1,d2} \le t_1 \cdot g^{min}_{d1,d2}$ are true, D is set to 0.

Step 2. If $g^{max}_{h,v} / g^{min}_{h,v} > g^{max}_{d1,d2} / g^{min}_{d1,d2}$, continue from Step 3; otherwise continue from Step 4.

Step 3. If $g^{max}_{h,v} > t_2 \cdot g^{min}_{h,v}$, D is set to 2; otherwise D is set to 1.

Step 4. If $g^{max}_{d1,d2} > t_2 \cdot g^{min}_{d1,d2}$, D is set to 4; otherwise D is set to 3.
The activity value A is calculated as:

$A = \sum_{k=i-2}^{i+3} \sum_{l=j-2}^{j+3} \left( V_{k,l} + H_{k,l} \right)$     (8)

A is further quantized to the range of 0 to 4, inclusive, and the quantized value is denoted as $\hat{A}$.
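The classification above can be summarized in code. The following Python sketch is illustrative only: the gradient sums and the derivation of D and C follow equations (1)-(8), while the thresholds t1, t2 and the activity quantizer are placeholder assumptions (the actual codec uses fixed lookup tables and integer arithmetic):

```python
def classify_2x2_block(R, i, j, t1=2.0, t2=4.5):
    """Return C = 5*D + A_hat for the 2x2 block whose upper-left sample
    is R[i, j]; R is a padded 2-D numpy array of reconstructed samples."""
    gv = gh = gd1 = gd2 = act = 0
    for k in range(i - 2, i + 4):            # the 6x6 surrounding window
        for l in range(j - 2, j + 4):
            c = 2 * int(R[k, l])
            V = abs(c - int(R[k, l - 1]) - int(R[k, l + 1]))
            H = abs(c - int(R[k - 1, l]) - int(R[k + 1, l]))
            gv += V
            gh += H
            act += V + H                      # activity A, equation (8)
            gd1 += abs(c - int(R[k - 1, l - 1]) - int(R[k + 1, l + 1]))
            gd2 += abs(c - int(R[k - 1, l + 1]) - int(R[k + 1, l - 1]))

    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd1, gd2), min(gd1, gd2)

    # Directionality D, Steps 1-4 above (Step 2 is cross-multiplied here
    # to avoid dividing by a possibly zero minimum gradient).
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        D = 0
    elif hv_max * d_min > d_max * hv_min:
        D = 2 if hv_max > t2 * hv_min else 1
    else:
        D = 4 if d_max > t2 * d_min else 3

    # Placeholder activity quantizer: the codec maps A to 0..4 through a
    # fixed, bit-depth-dependent lookup; a simple clip stands in here.
    ACTIVITY_STEP = 1 << 13
    A_hat = min(4, act // ACTIVITY_STEP)
    return 5 * D + A_hat
```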
For both chroma components in a picture, no classification method is applied, i.e. a single set of ALF coefficients is applied for each chroma component.
3.1.2 Geometric transformations of filter coefficients
Before filtering each 2×2 block, geometric transformations such as rotation or diagonal and vertical flipping are applied to the filter coefficients f(k, l) depending on gradient values calculated for that block. This is equivalent to applying these transformations to the samples in the filter support region. The idea is to make different blocks to which ALF is applied more similar by aligning their directionality.
Three geometric transformations, including diagonal, vertical flip and rotation, are introduced:

Diagonal: $f_D(k, l) = f(l, k)$,
Vertical flip: $f_V(k, l) = f(k, K-l-1)$,     (9)
Rotation: $f_R(k, l) = f(K-l-1, k)$.
Herein, K is the size of the filter and 0 ≤ k, l ≤ K-1 are coefficient coordinates, such that location (0, 0) is at the upper left corner and location (K-1, K-1) is at the lower right corner. The transformations are applied to the filter coefficients f(k, l) depending on gradient values calculated for that block. The relationship between the transformation and the four gradients of the four directions is summarized in Table 1.
Table 1: Mapping of the gradient calculated for one block and the transformations
Gradient values                          Transformation
$g_{d2} < g_{d1}$ and $g_h < g_v$        No transformation
$g_{d2} < g_{d1}$ and $g_v < g_h$        Diagonal
$g_{d1} < g_{d2}$ and $g_h < g_v$        Vertical flip
$g_{d1} < g_{d2}$ and $g_v < g_h$        Rotation
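As a rough illustration of equation (9) and Table 1, the sketch below applies the selected transformation to a K×K coefficient array; the helper name and the numpy representation are our assumptions, not part of the design:

```python
import numpy as np

def transform_coefficients(f, gh, gv, gd1, gd2):
    """Apply Table 1 to a K x K coefficient array f and return the result."""
    if gd2 < gd1 and gh < gv:
        return f.copy()               # no transformation
    if gd2 < gd1 and gv < gh:
        return f.T.copy()             # diagonal: f_D(k, l) = f(l, k)
    if gd1 < gd2 and gh < gv:
        return f[:, ::-1].copy()      # vertical flip: f_V(k, l) = f(k, K-l-1)
    return np.rot90(f, 3).copy()      # rotation: f_R(k, l) = f(K-l-1, k)
```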
3.1.3 Signaling of filter parameters
In the JEM, GALF filter parameters are signaled for the first CTU, i.e., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients could be signaled. To reduce bit overhead, filter coefficients of different classifications can be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as GALF coefficients of a current picture. The current picture may choose to use GALF coefficients stored for the reference pictures, and bypass the GALF coefficients signaling. In this case, only an index to one of the reference pictures is signaled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.
To support GALF temporal prediction, a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding set of filters may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (i.e., 6 in the current JEM), a new set of filters overwrites the oldest set in decoding order; that is, a first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplications, a set can only be added to the list when the corresponding picture does not use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer. More specifically, each array assigned by temporal layer index (TempIdx) may comprise filter sets of previously decoded pictures with TempIdx equal to or lower than it. For example, the k-th array is assigned to be associated with TempIdx equal to k, and it only contains filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture will be used to update those arrays associated with equal or higher TempIdx.
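The candidate list maintenance described above can be sketched as follows. The class and method names are illustrative assumptions, but the FIFO rule, the duplication check and the per-temporal-layer update follow the text:

```python
MAX_GALF_SETS = 6    # maximum candidate list size in the current JEM

class GalfCandidateLists:
    """One FIFO list of filter sets per temporal layer."""

    def __init__(self, num_temporal_layers):
        self.lists = [[] for _ in range(num_temporal_layers)]

    def update_after_picture(self, filter_set, temp_idx, used_temporal_pred):
        # A set is only added when its picture did not itself use GALF
        # temporal prediction, which avoids duplicated entries.
        if used_temporal_pred:
            return
        # The picture's filters update every list with equal or higher TempIdx.
        for t in range(temp_idx, len(self.lists)):
            if len(self.lists[t]) == MAX_GALF_SETS:
                self.lists[t].pop(0)          # FIFO: drop the oldest set
            self.lists[t].append(filter_set)

    def candidates(self, temp_idx):
        # By construction, holds only sets from pictures with TempIdx <= temp_idx.
        return self.lists[temp_idx]
```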
Temporal prediction of GALF coefficients is used for inter coded frames to minimize signaling overhead. For intra frames, temporal prediction is not available, and a set of 16 fixed filters is assigned to each class. To indicate the usage of the fixed filter, a flag for each class is signaled and, if required, the index of the chosen fixed filter. Even when the fixed filter is selected for a given class, the coefficients of the adaptive filter f(k, l) can still be sent for this class, in which case the coefficients of the filter which will be applied to the reconstructed image are the sum of both sets of coefficients.
The filtering process of the luma component can be controlled at the CU level. A flag is signaled to indicate whether GALF is applied to the luma component of a CU. For the chroma components, whether GALF is applied or not is indicated at the picture level only.
3.1.4 Filtering process
At the decoder side, when GALF is enabled for a block, each sample R(i, j) within the block is filtered, resulting in sample value R′(i, j) as shown below, where L denotes the filter length and f(k, l) denotes the decoded filter coefficients:

$R'(i, j) = \sum_{k=-L/2}^{L/2} \sum_{l=-L/2}^{L/2} f(k, l) \times R(i+k, j+l)$
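A direct, non-normative transcription of this equation for one sample follows; the integer rounding, fixed-point shift and clipping of a real codec are omitted, and the function name is ours:

```python
def galf_filter_sample(R, f, i, j):
    """Filter sample (i, j) of padded reconstruction R with the L x L
    coefficient array f (L odd), following the equation above."""
    L = len(f)
    half = L // 2
    acc = 0.0
    for k in range(-half, half + 1):
        for l in range(-half, half + 1):
            acc += f[k + half][l + half] * R[i + k][j + l]
    return acc
```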
3.1.5 Determination process for encoder side filter parameters
The overall encoder decision process for GALF is illustrated in FIG. 3. For luma samples of each CU, the encoder makes a decision on whether or not the GALF is applied and the appropriate signalling flag is included in the slice header. For chroma samples, the decision to apply the filter is made at the picture level rather than the CU level. Furthermore, chroma GALF for a picture is checked only when luma GALF is enabled for the picture.
4 Examples of a geometry transformation-based adaptive loop filter in VVC
The current design of GALF in VVC has the following major changes compared to that in JEM:
1) The adaptive filter shape is removed. Only the 7x7 filter shape is allowed for the luma component and the 5x5 filter shape for the chroma components.
2) Temporal prediction of ALF parameters and prediction from fixed filters are both removed.
3) For each CTU, a one-bit flag is signaled to indicate whether ALF is enabled or disabled.
4) Calculation of the class index is performed at the 4x4 level instead of 2x2. In addition, as proposed in JVET-L0147, a sub-sampled Laplacian calculation method for ALF classification is utilized. More specifically, there is no need to calculate the horizontal/vertical/45-degree/135-degree diagonal gradients for each sample within one block. Instead, 1:2 subsampling is utilized.
5 Examples of a region-based adaptive loop filter in AVS2
ALF is the last stage of in-loop filtering. There are two stages in this process. The first stage is filter coefficient derivation. To train the filter coefficients, the encoder classifies reconstructed pixels of the luminance component into 16 regions, and one set of filter coefficients is trained for each category using Wiener-Hopf equations to minimize the mean squared error between the original frame and the reconstructed frame. To reduce the redundancy between these 16 sets of filter coefficients, the encoder will adaptively merge them based on the rate-distortion performance. At its maximum, 16 different filter sets can be assigned for the luminance component and only one for the chrominance components. The second stage is a filter decision, which includes both the frame level and the LCU level. First, the encoder decides whether frame-level adaptive loop filtering is performed. If frame-level ALF is on, then the encoder further decides whether LCU-level ALF is performed.
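The first stage amounts to solving least-squares normal equations per region. The sketch below is a simplified illustration under our own interface assumptions (a real encoder accumulates these statistics in fixed point and over exact codec-defined tap positions):

```python
import numpy as np

def train_wiener_filter(orig, recon, sample_coords, taps):
    """Least-squares (Wiener-Hopf) coefficients for one region/category.

    orig, recon: 2-D arrays; sample_coords: (y, x) positions belonging
    to the region; taps: (dy, dx) offsets of the filter support."""
    n = len(taps)
    A = np.zeros((n, n))         # autocorrelation of reconstructed patches
    b = np.zeros(n)              # cross-correlation with the original
    for y, x in sample_coords:
        patch = np.array([recon[y + dy, x + dx] for dy, dx in taps], float)
        A += np.outer(patch, patch)
        b += patch * orig[y, x]
    # Solving the normal equations minimizes the mean squared error
    # between the original and the filtered reconstruction.
    return np.linalg.lstsq(A, b, rcond=None)[0]
```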
5.1 Filter shape
The filter shape adopted in AVS-2 is a 7×7 cross shape superposing a 3×3 square shape, as illustrated in FIG. 5 for both luminance and chroma components. Each square in FIG. 5 corresponds to a sample. Therefore, a total of 17 samples are used to derive a filtered value for the sample at position C8. Considering the overhead of transmitting the coefficients, a point-symmetrical filter is utilized with only nine coefficients left, {C0, C1, ..., C8}, which halves the number of filter coefficients as well as the number of multiplications in filtering. The point-symmetrical filter also halves the computation for one filtered sample, e.g., only 9 multiplications and 14 add operations for one filtered sample.
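The point-symmetric structure can be sketched as follows. The particular mapping of C0..C7 to offsets is an assumption made for illustration; the text only fixes the shape (7×7 cross plus 3×3 square), the symmetry, and C8 as the center tap:

```python
# One offset per symmetric pair for C0..C7; the mirrored offset is implied.
# 8 pairs -> 16 samples, plus the centre = 17 support samples in total.
PAIR_OFFSETS = [(-3, 0), (-2, 0), (-1, -1), (-1, 0), (-1, 1),
                (0, -3), (0, -2), (0, -1)]

def avs2_filter_sample(R, c, y, x):
    """Apply coefficients c[0..8] at padded position (y, x)."""
    acc = c[8] * R[y][x]                       # centre tap, C8
    for ci, (dy, dx) in enumerate(PAIR_OFFSETS):
        # Point symmetry: sum the mirrored pair first, multiply once.
        acc += c[ci] * (R[y + dy][x + dx] + R[y - dy][x - dx])
    return acc                                 # 9 multiplications in total
```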
5.2 Region-based adaptive merge
In order to adapt to different coding errors, AVS-2 adopts region-based multiple adaptive loop filters for the luminance component. The luminance component is divided into 16 roughly-equal-size basic regions where each basic region is aligned with largest coding unit (LCU) boundaries as shown in FIG. 6, and one Wiener filter is derived for each region. The more filters that are used, the more distortion is reduced, but the bits used to encode the coefficients increase with the number of filters. In order to achieve the best rate-distortion performance, these regions can be merged into fewer larger regions, which share the same filter coefficients. In order to simplify the merging process, each region is assigned an index according to a modified Hilbert order based on the image prior correlations. Two regions with successive indices can be merged based on rate-distortion cost.
The mapping information between regions should be signaled to the decoder. In AVS-2, the number of basic regions is used to represent the merge results and the filter coefficients are compressed sequentially according to the region order. For example, when {0, 1}, {2, 3, 4}, {5, 6, 7, 8, 9} and the remaining basic regions are merged into one region each, respectively, only three integers are coded to represent this merge map, i.e., 2, 3, 5.
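The merge-map representation from this example can be captured in a few lines; the function names are ours:

```python
NUM_BASIC_REGIONS = 16

def encode_merge_map(groups):
    """groups: lists of consecutive region indices covering 0..15, e.g.
    [[0, 1], [2, 3, 4], [5, 6, 7, 8, 9], [10, ..., 15]] -> [2, 3, 5]."""
    return [len(g) for g in groups[:-1]]       # last group size is implied

def decode_merge_map(sizes):
    groups, start = [], 0
    for s in sizes:
        groups.append(list(range(start, start + s)))
        start += s
    groups.append(list(range(start, NUM_BASIC_REGIONS)))
    return groups

assert encode_merge_map(decode_merge_map([2, 3, 5])) == [2, 3, 5]
```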
5.3 Signaling of side information
Multiple switch flags are also used. The sequence switch flag, adaptive_loop_filter_enable, is used to control whether the adaptive loop filter is applied for the whole sequence. The image switch flags, picture_alf_enable[i], control whether ALF is applied for the corresponding i-th image component. Only if picture_alf_enable[i] is enabled will the corresponding LCU-level flags and filter coefficients for that color component be transmitted. The LCU-level flags, lcu_alf_enable[k], control whether ALF is enabled for the corresponding k-th LCU, and are interleaved into the slice data. The decisions for the flags at all levels are based on the rate-distortion cost. The high flexibility further makes the ALF improve the coding efficiency much more significantly.
In some embodiments, and for a luma component, there could be up to 16 sets of filter coefficients.
In some embodiments, and for each chroma component (Cb and Cr) , one set of filter coefficients may be transmitted.
6 Drawbacks of existing implementations
In some existing implementations (e.g., region-based ALF design in AVS-2) , the following problems are encountered:
(1) For the region-based ALF design, the correlation of filter coefficients between regions in the current picture and previously coded pictures is not utilized.
(2) The region size is fixed for all kinds of videos regardless of the video resolution. For a video with high resolution, e.g., 4096x2048, splitting into 16 regions may result in regions that are too large.
(3) For each LCU, a one-bit flag for each color component is signaled to indicate whether ALF is applied or not. However, there is some dependency between luma and chroma: when ALF is not applied to luma, it typically will not be applied to the corresponding chroma blocks.
The GALF design in VVC has the following problems:
(1) It was designed for the 4:2:0 color format. For the 4:4:4 color format, luma and chroma components may be of similar importance. How to better apply GALF is unknown.
7 Exemplary methods for improvements in adaptive loop filtering
Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies. The improvement of adaptive loop filtering based on the disclosed technology, which may enhance both existing and future video coding standards, is elucidated in the following examples described for various implementations. The examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
Example 1. It is proposed that filter coefficients of one region within the current slice/picture/tile group may be predicted/derived from those used in a (e.g., collocated) region in different pictures.
(a) In one example, one flag for a region may first be signaled to indicate whether the filter coefficients are predicted/derived from those used in a collocated region.
(b) In one example, the collocated region should be located in a reference picture of the current picture.
(c) Alternatively, furthermore, an index may be signaled to indicate from which picture the filter coefficients may be predicted/derived.
(d) In one example, another flag for a region may be signaled to indicate whether its filter coefficients are predicted/derived from the same picture as another region (e.g., its neighboring region) .
(i) In one example, additional information is signaled to indicate which region the filter coefficients are predicted/derived from.
Example 2. One flag may be signaled in a higher level (i.e., a larger set of video data, such as picture/slice/tile group/tile) to indicate whether all regions’ filter coefficients are predicted/derived from their corresponding collocated regions in different pictures.
(a) In one example, the different pictures should be reference pictures of the current picture.
(b) Alternatively, furthermore, an index may be signaled to indicate from which picture the filter coefficients may be predicted/derived.
Example 3. The ALF on/off flags of a region or CTU may be inherited from a (e.g., collocated) region or (e.g., collocated) CTU in different pictures.
(a) In one example, the collocated region should be located in a reference picture of the current picture.
(b) One flag is signaled in a higher level (i.e., a larger set of video data, such as picture/slice/tile group/tile) to indicate whether all regions’ on/off flags are inherited from their corresponding collocated regions in different pictures.
(c) An index may be signaled in picture/slice header/tile group header to indicate from which picture the on/off flags may be inherited.
Example 4. The region size or the number of regions may be signaled in an SPS, a VPS, a PPS, a picture header or a slice header.
(a) In one example, several sets of region numbers/sizes may be pre-defined. An index to the sets may be signaled.
(b) In one example, the number of regions or region sizes may be dependent on the width and/or height of the picture, and/or picture/slice types.
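A toy illustration of Example 4: choosing, from pre-defined candidate sets, a number of regions that grows with the picture size. The candidate set and the thresholds below are invented for illustration only:

```python
# Pre-defined candidate sets; the signaled index selects one of them.
CANDIDATE_REGION_COUNTS = [16, 32, 64]

def pick_region_count_index(width, height):
    """Return the index to signal, chosen from the picture dimensions."""
    luma_samples = width * height
    if luma_samples <= 1920 * 1080:
        return CANDIDATE_REGION_COUNTS.index(16)
    if luma_samples <= 3840 * 2160:
        return CANDIDATE_REGION_COUNTS.index(32)
    return CANDIDATE_REGION_COUNTS.index(64)   # e.g. 4096x2048 and above
```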
Example 5. Predictive coding of filter coefficients associated with two regions may be utilized.
(a) When signaling the filter coefficients of a first region, the differences compared to those of a second region may be signaled.
(i) The second region may be the one whose index is successive to that of the first region.
(ii) The second region may be the one with the largest index value of the previously coded regions with ALF enabled.
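A minimal sketch of the predictive coding in Example 5, with option (ii) for choosing the predictor region; the helper names are ours:

```python
def encode_coefficient_deltas(curr_coeffs, pred_coeffs):
    """Signal only the element-wise differences against the predictor."""
    return [c - p for c, p in zip(curr_coeffs, pred_coeffs)]

def decode_coefficients(deltas, pred_coeffs):
    return [d + p for d, p in zip(deltas, pred_coeffs)]

def pick_predictor_region(coded_alf_enabled_regions):
    """Option (ii) above: the previously coded ALF-enabled region with
    the largest index value."""
    return max(coded_alf_enabled_regions)
```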
Example 6. It is proposed that different regions even with non-successive indices could be merged.
(a) Merged regions may share the same set of selected filters.
(b) In one example, it is signaled in the picture header which regions are merged.
(c) In one example, for each region, an index of a set of selected filter coefficients may be transmitted.
Example 7. For a given CTU, the signaling of ALF on/off flags for a chroma component may be dependent on the on/off values for the luma component.
(a) The signaling of ALF on/off flags for a chroma component may be dependent on the on/off values for another chroma component, e.g., Cb depending on Cr, or Cr depending on Cb.
(b) In one example, when ALF is disabled for a color component, it is automatically disabled for another one or more color components for a CTU without any signaling.
(c) In one example, the ALF on/off values of one color component may be used as context for coding the ALF on/off values of another color component.
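Example 7 can be illustrated with the following hypothetical writer/reader pair, where `bs` is an assumed bitstream interface with write_flag/read_flag; only the conditional signaling structure is taken from the text, and other dependency variants (e.g., Cr on Cb, or context-coded flags) would change the layout:

```python
def write_ctu_alf_flags(bs, luma_on, cb_on, cr_on):
    bs.write_flag(luma_on)
    if luma_on:                  # chroma flags only sent when luma ALF is on
        bs.write_flag(cb_on)
        bs.write_flag(cr_on)
    # else: both chroma flags are inferred off, nothing is signaled

def read_ctu_alf_flags(bs):
    luma_on = bs.read_flag()
    cb_on = bs.read_flag() if luma_on else False
    cr_on = bs.read_flag() if luma_on else False
    return luma_on, cb_on, cr_on
```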
Example 8. How to handle ALF for chroma color components may depend on the color format.
(a) Whether to apply classification for chroma components is dependent on the color format. For example, for 4:4:4, block-based classification for chroma components may be applied, while for 4:2:0, it is disallowed.
(b) Whether to allow multiple sets of filters for chroma components is dependent on the color format. For example, for 4:4:4, multiple sets of filters for chroma components may be applied, while for 4:2:0, one set of filters is applied to both color components.
(c) Whether to allow different filters for the two chroma components is dependent on the color format. For example, for 4:4:4, at least two sets of filters for chroma components may be applied, with at least one set for each color component, respectively, while for 4:2:0, one set of filters is applied to both color components.
(d) In one example, when the color format is 4:4:4, the two chroma components may use different filters, or different sets of filters, or the selection of filters may be based on classification results of each color component.
The examples described above may be incorporated in the context of the methods described below, e.g., methods 700, 800 and 900, which may be implemented at a video decoder or a video encoder.
FIG. 7 shows a flowchart of an exemplary method for video processing. The method 700 includes, at step 710, determining a first set of filter coefficients for a current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video.
The method 700 includes, at step 720, reconstructing, based on performing a filtering operation using the first set of filter coefficients, the current region of video from a corresponding bitstream representation. In some embodiments, the filtering operations include loop filtering (or adaptive loop filtering) .
In some embodiments, and in the context of Example 1, the second region of video is from a different picture than a current picture of the current region of video. In other embodiments, the different picture is a reference picture of the current picture.
In some embodiments, and in the context of Example 5, the first set of filter coefficients is predicted from the second set of filter coefficients using a prediction operation. In an example, the prediction operation is controlled based on a flag in the bitstream representation.
In some embodiments, and in the context of Example 5, the first set of filter coefficients is based on the second set of filter coefficients and a set of differences between the first and second sets of filter coefficients. In an example, an index of the second region of video is consecutive to an index of the current region of video. In another example, an index of the second region of video corresponds to a largest index value of previously coded regions for which the filtering operation was enabled. In the context of Example 6, an index of the second  region of video is non-consecutive to an index of the current region of video.
FIG. 8 shows a flowchart of an exemplary method for video processing. The method 800 includes, at step 810, determining, for a first chroma component of a current region of video, a value of one or more flags in a bitstream representation of the current region of video based on a value corresponding to another color component. In some embodiments, the color component may be a luma component or another chroma component, e.g., Y, Cb and Cr for YUV files.
The method 800 includes, at step 820, configuring a filtering operation based on the value of the one or more flags. In some embodiments, the filtering operations include loop filtering (or adaptive loop filtering).
The method 800 includes, at step 830, reconstructing, using the filtering operation, the current region of video from the bitstream representation.
In some embodiments, and in the context of Example 7, the value of the one or more flags corresponding to the first chroma component are based on a value of one or more flags corresponding to a luma component of the current region of video.
In some embodiments, and in the context of Example 7, the value of the one or more flags corresponding to the first chroma component are based on a value of one or more flags corresponding to a second chroma component of the current region of video. In an example, the first chroma component is a blue-difference chroma component and the second chroma component is a red-difference chroma component. In another example, the first chroma component is a red-difference chroma component and the second chroma component is a blue-difference chroma component.
In some embodiments, and in the context of Example 8, the value of the one or more flags corresponding to the first chroma component are based on a color format of the current region of video.
FIG. 9 shows a flowchart of an exemplary method for video processing. The method 900 includes, at step 910, determining, based on a color format of a current region of video, a set of filter coefficients for a filtering operation. In some embodiments, the filtering operations include loop filtering (or adaptive loop filtering) .
The method 900 includes, at step 920, reconstructing, using the filtering operation, the current region of video from a corresponding bitstream representation.
In some embodiments, and in the context of Example 8, different sets of filter coefficients are used for the filtering operation for different chroma components of the current region of video. In other embodiments, multiple sets of filter coefficients are used for the filtering operation for at least one chroma component of the current region of video. In an example, the color format is 4:4:4.
FIG. 10 shows a flowchart of an exemplary method for video processing. The method 1000 includes determining (1002), for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and performing (1004) the conversion by performing a filtering operation using the first set of filter coefficients.
In some examples, the first set of filter coefficients is predicted or derived from the second set of filter coefficients.
In some examples, the filtering operation comprises loop filtering, and the first set of filter coefficients comprises the filter coefficients for adaptive loop filters of the loop filtering.
In some examples, the current region of video is from a first set of video data, the second region of video is from a second set of video data different from the first set of video data, the set of video data including one of a slice, a tile, a tile group, and a picture.
In some examples, the second region of video is from a different picture than a current picture of the current region of video.
In some examples, the different picture is a reference picture of the current picture.
In some examples, the method 1000 further comprises: for at least one region of the video, signaling a first flag for the region to indicate whether a set of filter coefficients for the region is predicted or derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
In some examples, the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain a first flag for the region to indicate whether a set of filter coefficients for the region is predicted or derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
In some examples, the method 1000 further comprises: for at least one region of the video, signaling an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
In some examples, the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
In some examples, the method 1000 further comprises: for at least one region of the video, signaling a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
In some examples, the other region is a neighboring region of the region.
In some examples, the method 1000 further comprises: signaling additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
In some examples, the method 1000 further comprises: for at least one region of the video, parsing the bitstream representation of the region to obtain a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
In some examples, the other region is a neighboring region of the region.
In some examples, the method 1000 further comprises: parsing the bitstream representation of the region to obtain additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
In some examples, the method 1000 further comprises: signaling a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
In some examples, the different pictures are reference pictures of the current picture.
In some examples, the method 1000 further comprises: signaling an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
In some examples, the method 1000 further comprises: parsing the bitstream representation of the region to obtain a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
In some examples, the different pictures are reference pictures of the current picture.
In some examples, the method 1000 further comprises: parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
FIG. 11 shows a flowchart of an exemplary method for video processing. The method 1100 includes determining (1102), for a conversion between a current processing unit of video and a bitstream representation of the current processing unit of video, a first flag indicating an on or off condition of an adaptive loop filter for the current processing unit of video based on a second processing unit of video that is collocated with the current processing unit of video; and performing (1104) the conversion by performing a filtering operation based on the first flag.
In some examples, the first flag for the current processing unit of video is inherited from the second processing unit of video.
In some examples, the filtering operation comprises loop filtering.
In some examples, the processing unit includes one of a region and a coding tree unit (CTU).
In some examples, the current processing unit of video is from a first set of video data, the second processing unit of video is from a second set of video data different from the first set of video data, the set of video data including one of a slice, a tile, a tile group, and a picture.
In some examples, the second processing unit of video is from a different picture than a current picture of the current processing unit of video.
In some examples, the different picture is a reference picture of the current picture.
In some examples, the method 1100 further comprises: signaling a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
In some examples, the method 1100 further comprises: parsing the bitstream representation of the region to obtain a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
In some examples, the method 1100 further comprises: signaling an index of a picture in a picture header, slice header, or tile group header to indicate from which picture the first flag of the current processing unit is inherited.
In some examples, the method 1100 further comprises: parsing the bitstream representation of the region to obtain an index of a picture in a picture header, slice header, or tile group header to indicate from which picture the first flag of the current processing unit is inherited.
FIG. 12 shows a flowchart of an exemplary method for video processing. The method 1200 includes signaling (1202), for a conversion between a picture of video and a bitstream representation of the video, information on the number and/or size of regions for the picture of video; splitting (1204) the picture into regions based on the information; and performing (1206) the conversion based on the split regions.
In some examples, the information on the number and/or size of regions is signaled in at least one of a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a picture header, and a slice header.
In some examples, the method 1200 further comprises: signaling an index to at least one of a plurality of sets of region numbers and/or sizes, wherein the plurality of sets of region numbers and/or sizes are pre-defined.
In some examples, the number and/or size of regions depends on the width and/or height of the picture and/or the slice type.
FIG. 13 shows a flowchart of an exemplary method for video processing. The method 1300 includes parsing (1302), for a conversion between a picture of video and a bitstream representation of the video, the bitstream representation of the video to obtain information on the number and/or size of regions for the picture of video; and performing (1304) the conversion based on the information.
In some examples, the bitstream representation of the video is parsed to obtain the information on the number and/or size of regions in at least one of a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a picture header, and a slice header.
In some examples, the method 1300 further comprises: parsing the bitstream representation of the video to obtain an index to at least one of a plurality of sets of region numbers and/or sizes, wherein the plurality of sets of region numbers and/or sizes are pre-defined.
In some examples, the number and/or size of regions depends on the width and/or height of the picture and/or the slice type.
FIG. 14 shows a flowchart of an exemplary method for video processing. The method 1400 includes determining (1402), for a conversion between a first region of video and a bitstream representation of the first region of video, a first set of filter coefficients for the first region of video based on a second set of filter coefficients for a second region of video and a set of differences between the first and second sets of filter coefficients; and performing (1404) the conversion by performing a filtering operation using the first set of filter coefficients.
In some examples, when determining the first set of filter coefficients of the first region, the set of differences is signaled.
In some examples, when determining the first set of filter coefficients of the first region, the bitstream representation of the first region of video is parsed to obtain the set of differences.
In some examples, an index of the second region of video is consecutive to an index of the first region of video.
In some examples, an index of the second region of video corresponds to a largest index value of previously coded regions for which the filtering operation was enabled.
In some examples, the filtering operation includes adaptive loop filtering.
FIG. 15 shows a flowchart of an exemplary method for video processing. The method 1500 includes merging (1502) at least two different regions of video to obtain merged regions; and performing (1504) a conversion between the merged regions of video and a bitstream representation of the merged regions by performing a filtering operation using the same selected filter coefficients, wherein an index of a first region in the at least two different regions of video is non-consecutive to an index of a second region in the at least two different regions of video.
In some examples, the merged regions share the same set of selected filter coefficients.
In some examples, the method 1500 further comprises: signaling in the picture header which regions of video are merged.
In some examples, for each region, an index of a set of selected filter coefficients is transmitted.
FIG. 16 shows a flowchart of an exemplary method for video processing. The method 1600 includes making a decision (1602) , for a current coding tree unit (CTU) of video, regarding values of first flags associated with adaptive loop filter for a first component; and signaling (1604) second flags associated with adaptive loop filter for a second component based  on the decision.
In some examples, the first component comprises a luma component and the second component comprises one or more chroma components.
In some examples, in response to the decision indicating that the adaptive loop filter for the luma component is disabled, the adaptive loop filter for one or more chroma components is automatically disabled for the CTU without any signaling.
In some examples, the first component is a blue-difference (Cb) chroma component and the second component is a red-difference (Cr) chroma component.
In some examples, the first component is a red-difference (Cr) chroma component and the second component is a blue-difference (Cb) chroma component.
In some examples, in response to the decision indicating that the adaptive loop filter for one chroma component is disabled, the adaptive loop filter for another one or more color components is automatically disabled for the CTU without any signaling.
In some examples, the values of the first flags associated with the adaptive loop filter for one color component are used as context for coding the values of the second flags associated with the adaptive loop filter for another color component.
In some examples, the method 1600 further comprises: determining an enabling/disabling of a filtering operation using the second flags; and performing, based on the determination, a conversion between the current CTU of video and a bitstream representation of the video including the current CTU.
FIG. 17 shows a flowchart of an exemplary method for video processing. The method 1700 includes parsing (1702) a bitstream representation of a current coding tree unit (CTU) of video to determine values of first flags for a first component of the CTU of video based on values of second flags corresponding to a second component of the CTU; configuring (1704) a filtering operation based on the values of the first flags; and performing (1706), using the filtering operation, a conversion between the current CTU of video and the bitstream representation of the video including the current CTU.
In some examples, the second component comprises a luma component and the first component comprises one or more chroma components.
In some examples, in response to the values of the second flags indicating that the adaptive loop filter for the luma component is disabled, the adaptive loop filter for one or more chroma components is automatically disabled for the CTU.
In some examples, the second component is a blue-difference (Cb) chroma component and the first component is a red-difference (Cr) chroma component.
In some examples, the second component is a red-difference (Cr) chroma component and the first component is a blue-difference (Cb) chroma component.
In some examples, in response to the values of the second flags indicating that the adaptive loop filter for one chroma component is disabled, the adaptive loop filter for another one or more color components is automatically disabled for the CTU.
In some examples, the values of the first flags associated with the adaptive loop filter for one color component are used as context for decoding the values of the first flags associated with the adaptive loop filter for another color component.
FIG. 18 shows a flowchart of an exemplary method for video processing. The method 1800 includes making a determination (1802) regarding a color format of a current region of video; and determining (1804) adaptive loop filters for one or more chroma components based on the determination.
In some examples, whether to apply classification for the one or more chroma components is based on the determination.
In some examples, whether to use multiple sets of filters for the one or more chroma components is based on the determination.
In some examples, whether to use different sets of filters for the two chroma components is based on the determination.
In some examples, in response to the determination that the color format is 4:4:4, the two chroma components use different filters, or different sets of filters, or the selection of filters is based on classification results of each color component.
In some examples, the method 1800 further comprises: performing a conversion between the current region of video and a bitstream representation of the current region by performing a filtering operation using the adaptive loop filters for the one or more chroma components.
In some examples, the filtering operation comprises loop filtering.
In some examples, the conversion generates the region of video from the bitstream representation.
In some examples, the conversion generates the bitstream representation from the region of video.
8 Example implementations of the disclosed technology
FIG. 19 is a block diagram of a video processing apparatus 1900. The apparatus 1900 may be used to implement one or more of the methods described herein. The apparatus 1900 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 1900 may include one or more processors 1902, one or more memories 1904 and video processing hardware 1906. The processor (s) 1902 may be configured to implement one or more methods (including, but not limited to,  methods  700, 800 and 900) described in the present document. The memory (memories) 1904 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 1906 may be used to implement, in hardware circuitry, some techniques described in the present document.
In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 19.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable  processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document) , in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code) . A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit) .
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example  semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Additionally, the use of “or” is intended to include “and/or” , unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims (55)

  1. A method for video processing, comprising:
    determining, for a conversion between a current region of video and a bitstream representation of the current region of video, a first set of filter coefficients for the current region of video based on a second set of filter coefficients for a second region of video that is collocated with the current region of video; and
    performing the conversion by performing a filtering operation using the first set of filter coefficients.
  2. The method of claim 1, wherein the first set of filter coefficients is predicted or derived from the second set of filter coefficients.
  3. The method of claim 1 or 2, wherein the filtering operation comprises loop filtering, and the first set of filter coefficients comprises filter coefficients for adaptive loop filters of the loop filtering.
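By way of illustration only, and not as a limitation of the claims, the derivation of claims 1-3 can be sketched in a few lines of Python; the list-based coefficient layout and the function name are assumptions of this sketch.

    def derive_coeffs(collocated_coeffs, deltas=None):
        # Start from the collocated region's set of filter coefficients
        # (claim 1); when refinement deltas are signalled, the first set
        # is predicted from the second set rather than copied (claim 2).
        coeffs = list(collocated_coeffs)
        if deltas is not None:
            coeffs = [c + d for c, d in zip(coeffs, deltas)]
        return coeffs

For example, derive_coeffs([3, -1, 7], [0, 2, -1]) returns [3, 1, 6].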
  4. The method of any one of claims 1 to 3, wherein the current region of video is from a first set of video data, the second region of video is from a second set of video data different from the first set of video data, and each set of video data comprises one of a slice, a tile, a tile group, or a picture.
  5. The method of claim 4, wherein the second region of video is from a different picture than a current picture of the current region of video.
  6. The method of claim 5, wherein the different picture is a reference picture of the current picture.
  7. The method of any of claims 1 to 6, further comprising:
    for at least one region of the video, signaling a first flag for the region to indicate whether a set of filter coefficients for the region is predicted or derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
  8. The method of any of claims 1 to 6, further comprising:
    for at least one region of the video, parsing the bitstream representation of the region to obtain a first flag for the region to indicate whether a set of filter coefficients for the region is predicted or derived based on a corresponding set of filter coefficients for a collocated region that is collocated with the region.
  9. The method of any of claims 1 to 6, further comprising:
    for at least one region of the video, signaling an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
  10. The method of any of claims 1 to 6, further comprising:
    for at least one region of the video, parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the set of filter coefficients of the region is predicted or derived.
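By way of illustration only: a decoder-side sketch of the parse order implied by claims 8 and 10. The BitReader class and the use of 0th-order Exp-Golomb coding for the picture index are assumptions of this sketch, not requirements of the claims.

    class BitReader:
        """Toy MSB-first bit reader, just enough for this sketch."""
        def __init__(self, data):
            self.data, self.pos = data, 0
        def read_bit(self):
            bit = (self.data[self.pos >> 3] >> (7 - (self.pos & 7))) & 1
            self.pos += 1
            return bit
        def read_flag(self):
            return bool(self.read_bit())
        def read_uvlc(self):
            # 0th-order Exp-Golomb decoding, assumed for the picture index.
            zeros = 0
            while self.read_bit() == 0:
                zeros += 1
            value = 1
            for _ in range(zeros):
                value = (value << 1) | self.read_bit()
            return value - 1

    def parse_region_alf_info(reader):
        # First flag of claims 7-8: predict or derive the region's filter
        # coefficients from the collocated region?
        predict_from_collocated = reader.read_flag()
        # Picture index of claims 9-10, only present when the flag is set.
        ref_pic_idx = reader.read_uvlc() if predict_from_collocated else None
        return predict_from_collocated, ref_pic_idx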
  11. The method of any of claims 1 to 4, further comprising:
    for at least one region of the video, signaling a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
  12. The method of claim 11, wherein the another region is a neighboring region of the region.
  13. The method of claim 11 or 12, further comprising:
    signaling additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
  14. The method of any of claims 1 to 4, further comprising:
    for at least one region of the video, parsing the bitstream representation of the region to obtain a second flag for the region to indicate whether the set of filter coefficients of the region is predicted or derived from the same picture as another region.
  15. The method of claim 14, wherein the another region is a neighboring region of the region.
  16. The method of claim 14 or 15, further comprising:
    parsing the bitstream representation of the region to obtain additional information for the region to indicate from which region the set of filter coefficients is predicted or derived.
  17. The method of claim 4, further comprising:
    signaling a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
  18. The method of claim 17, wherein the different pictures are reference pictures of the current picture.
  19. The method of claim 17 or 18, further comprising:
    signaling an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
  20. The method of claim 4, further comprising:
    parsing the bitstream representation of the region to obtain a third flag, at a level of the set of video data, to indicate whether filter coefficients of all regions within the first set of video data are predicted or derived from their corresponding collocated regions in different pictures.
  21. The method of claim 20, wherein the different pictures are reference pictures of the current picture.
  22. The method of claim 20 or 21, further comprising:
    parsing the bitstream representation of the region to obtain an index of a picture to indicate from which picture the filter coefficients of all regions are predicted or derived.
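Again for illustration only, the set-level gating of claims 17 and 20 can be sketched as follows; the function name and the fallback to per-region flags when the third flag is off are assumptions of this sketch.

    def resolve_region_flags(third_flag, per_region_flags):
        # Claims 17/20: a single flag at the level of the set of video
        # data indicates that every region predicts or derives its filter
        # coefficients from its collocated region.
        if third_flag:
            return [True] * len(per_region_flags)
        return list(per_region_flags)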
  23. A method for video processing, comprising:
    determining, for a conversion between a current processing unit of video and a bitstream representation of the current processing unit of video, a first flag indicating an on/off condition of an adaptive loop filter for the current processing unit of video based on a second processing unit of video that is collocated with the current processing unit of video; and
    performing the conversion by performing a filtering operation based on the first flag.
  24. The method of claim 23, wherein the first flag for the current processing unit of video is inherited from the second processing unit of video.
  25. The method of claim 23 or 24, wherein the filtering operation comprises loop filtering.
  26. The method of any one of claims 23 to 25, wherein the processing unit is one of a region or a coding tree unit (CTU).
  27. The method of any one of claims 23 to 26, wherein the current processing unit of video is from a first set of video data, the second processing unit of video is from a second set of video data different from the first set of video data, and each set of video data comprises one of a slice, a tile, a tile group, or a picture.
  28. The method of claim 23, wherein the second processing unit of video is from a different picture than a current picture of the current processing unit of video.
  29. The method of claim 28, wherein the different picture is a reference picture of the current picture.
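A sketch of the inheritance in claims 23-26, assuming for illustration that the processing unit is a CTU and that the collocated CTU is the one at the same raster position in the reference picture; neither assumption is mandated by the claims.

    def inherit_alf_on_off(ref_pic_flags, width_in_ctus, height_in_ctus):
        # Each CTU of the current picture takes its adaptive loop filter
        # on/off flag from the collocated CTU of the reference picture
        # (claims 23-24).
        return [[ref_pic_flags[y][x] for x in range(width_in_ctus)]
                for y in range(height_in_ctus)]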
  30. The method of claim 27, further comprising:
    signaling a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
  31. The method of claim 27, further comprising:
    parsing the bitstream representation of the region to obtain a second flag, at a level of the set of video data, to indicate whether the first flags of all processing units within the set of video data are inherited from their corresponding collocated processing units in different pictures.
  32. The method of claim 27, further comprising:
    signaling an index of a picture in a picture header, slice header, or tile group header to indicate from which picture the first flags of the processing units are inherited.
  33. The method of claim 27, further comprising:
    parsing the bitstream representation of the region to obtain an index of a picture in a picture header, slice header, or tile group header to indicate from which picture the first flags of the processing units are inherited.
  34. A method for video processing, comprising:
    signaling, for a conversion between a picture of video and a bitstream representation of the video, information on the number and/or size of regions for the picture of video;
    splitting the picture into regions based on the information; and
    performing the conversion based on the split regions.
  35. The method of claim 34, wherein the information on the number and/or size of regions is signaled in at least one of a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a picture header, or a slice header.
  36. The method of claim 34 or 35, further comprising:
    signaling an index to at least one of a plurality of sets of region numbers and/or sizes, wherein the plurality of sets of region numbers and/or sizes is pre-defined.
  37. The method of claim 34, wherein the number and/or size of regions depends on the width and/or height of the picture and/or on slice types.
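For illustration, one possible realization of the split of claims 34 and 37, assuming the signalled information is a region count per dimension and that the picture is split as evenly as possible; the claims leave both choices open.

    def split_picture(width, height, regions_x, regions_y):
        region_w = -(-width // regions_x)   # ceiling division
        region_h = -(-height // regions_y)
        regions = []                        # (x, y, w, h) rectangles
        for y in range(0, height, region_h):
            for x in range(0, width, region_w):
                regions.append((x, y,
                                min(region_w, width - x),
                                min(region_h, height - y)))
        return regions

For example, split_picture(1920, 1080, 4, 4) yields sixteen 480x270 regions, while non-divisible picture sizes leave smaller regions at the right and bottom borders, consistent with the dependence on picture width and height in claim 37.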
  38. A method for video processing, comprising:
    parsing, for a conversion between a picture of video and a bitstream representation of the video, the bitstream representation of the video to obtain information on the number and/or size of regions for the picture of video; and
    performing the conversion based on the information.
  39. The method of claim 38, wherein the information on the number and/or size of regions is parsed from at least one of a Sequence Parameter Set (SPS), a Video Parameter Set (VPS), a Picture Parameter Set (PPS), a picture header, or a slice header.
  40. The method of claim 38 or 39, further comprising:
    parsing the bitstream representation of the video to obtain an index to at least one of a plurality of sets of region numbers and/or sizes, wherein the plurality of sets of region numbers and/or sizes is pre-defined.
  41. The method of claim 38, wherein the number and/or size of regions depends on the width and/or height of the picture and/or on slice types.
  42. A method for video processing, comprising:
    determining, for a conversion between a first region of video and a bitstream representation of the first region of video, a first set of filter coefficients for the first region of video based on a second set of filter coefficients for a second region of video and a set of differences between the first and second sets of filter coefficients; and
    performing the conversion by performing a filtering operation using the first set of filter coefficients.
  43. The method of claim 42, wherein, when determining the first set of filter coefficients from the second set of filter coefficients, the set of differences is signaled.
  44. The method of claim 42, wherein, when determining the first set of filter coefficients from the second set of filter coefficients, the bitstream representation of the first region of video is parsed to obtain the set of differences.
  45. The method of claim 43 or 44, wherein an index of the second region of video is consecutive to an index of the first region of video.
  46. The method of any one of claims 43-45, wherein an index of the second region of video corresponds to a largest index value of previously coded regions for which the filtering operation was enabled.
  47. The method of claim 42, wherein the filtering operation includes adaptive loop filtering.
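The differential coding of claims 42-44 admits a short sketch; both helper names are illustrative, with the encoder-side helper included only for symmetry.

    def compute_differences(first_set, second_set):
        # Encoder side (claim 43): the differences to be signalled.
        return [a - b for a, b in zip(first_set, second_set)]

    def reconstruct_coeffs(second_set, differences):
        # Decoder side (claim 44): first set = second set + parsed differences.
        return [b + d for b, d in zip(second_set, differences)]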
  48. A method for video processing, comprising:
    merging at least two different regions of video to obtain a merged region; and
    performing a conversion between the merged region of video and a bitstream representation of the merged region by performing a filtering operation using the same selected filter coefficients, wherein an index of a first region among the at least two different regions of video is non-consecutive to an index of a second region among the at least two different regions of video.
  49. The method of claim 48, wherein the merged regions share one set of selected filter coefficients.
  50. The method of claim 48 or 49, further comprising: signaling, in a picture header, which regions of video are merged.
  51. The method of claim 48 or 49, further comprising: transmitting, for each region, an index of a set of selected filter coefficients.
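Illustration of the non-consecutive merging of claims 48-51: a hypothetical per-region index into a table of selected filter-coefficient sets (the index transmitted per region in claim 51) lets regions with non-consecutive indices share one set.

    def assign_filter_sets(region_filter_idx, filter_sets):
        # region_filter_idx[i] selects the coefficient set of region i;
        # regions mapping to the same entry are merged (claims 48-49).
        return [filter_sets[i] for i in region_filter_idx]

    # e.g. region_filter_idx = [0, 1, 1, 2, 0, 0] merges regions
    # {0, 4, 5} and {1, 2}, even though 0, 4 and 5 are non-consecutive.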
  52. The method of any one of claims 1 to 51, wherein the conversion generates the region of video from the bitstream representation.
  53. The method of any one of claims 1 to 51, wherein the conversion generates the bitstream representation from the region of video.
  54. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 53.
  55. A computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out the method in any one of claims 1 to 54.
PCT/CN2019/117149 2018-11-09 2019-11-11 Improvements for region based adaptive loop filter WO2020094154A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980072485.6A CN112997500B (en) 2018-11-09 2019-11-11 Improvements to region-based adaptive loop filters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2018/114834 2018-11-09
CN2018114834 2018-11-09

Publications (1)

Publication Number Publication Date
WO2020094154A1 (en) 2020-05-14

Family

ID=70610807

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2019/117145 WO2020094153A1 (en) 2018-11-09 2019-11-11 Component based loop filter
PCT/CN2019/117149 WO2020094154A1 (en) 2018-11-09 2019-11-11 Improvements for region based adaptive loop filter

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117145 WO2020094153A1 (en) 2018-11-09 2019-11-11 Component based loop filter

Country Status (2)

Country Link
CN (2) CN112997504B (en)
WO (2) WO2020094153A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022155922A1 (en) * 2021-01-22 2022-07-28 Oppo广东移动通信有限公司 Video coding method and system, video decoding method and system, video coder and video decoder
WO2024027808A1 (en) * 2022-08-04 2024-02-08 Douyin Vision Co., Ltd. Method, apparatus, and medium for video processing

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
WO2023020309A1 (en) * 2021-08-14 2023-02-23 Beijing Bytedance Network Technology Co., Ltd. Advanced fusion mode for adaptive loop filter in video coding
CN116433783A (en) * 2021-12-31 2023-07-14 中兴通讯股份有限公司 Method and device for video processing, storage medium and electronic device

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US9094658B2 (en) * 2010-05-10 2015-07-28 Mediatek Inc. Method and apparatus of adaptive loop filtering
GB2500347B (en) * 2011-05-16 2018-05-16 Hfi Innovation Inc Apparatus and method of sample adaptive offset for luma and chroma components
US9807403B2 (en) * 2011-10-21 2017-10-31 Qualcomm Incorporated Adaptive loop filtering for chroma components
GB201119206D0 (en) * 2011-11-07 2011-12-21 Canon Kk Method and device for providing compensation offsets for a set of reconstructed samples of an image
SI2697973T1 (en) * 2012-04-16 2017-11-30 Hfi Innovation Inc. Method and apparatus for loop filtering across slice or tile boundaries
GB2501535A (en) * 2012-04-26 2013-10-30 Sony Corp Chrominance Processing in High Efficiency Video Codecs
EP3761641A1 (en) * 2013-11-15 2021-01-06 MediaTek Inc. Method of block-based adaptive loop filtering
KR102298599B1 (en) * 2014-04-29 2021-09-03 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Encoder-side decisions for sample adaptive offset filtering
US10506230B2 (en) * 2017-01-04 2019-12-10 Qualcomm Incorporated Modified adaptive loop filter temporal prediction for temporal scalability support

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20010036320A1 (en) * 2000-03-24 2001-11-01 Matsushita Electric Industrial Co., Ltd. Method and apparatus for dynamic loop and post filtering
CN1725860A (en) * 2004-07-19 2006-01-25 三星电子株式会社 The filtering method that in audio-video codec, uses, equipment and medium
CN103141094A (en) * 2010-10-05 2013-06-05 联发科技股份有限公司 Method and apparatus of adaptive loop filtering
US20180139441A1 (en) * 2015-05-12 2018-05-17 Samsung Electronics Co., Ltd. Method and device for encoding or decoding image by using blocks determined by means of adaptive order
WO2016195586A1 (en) * 2015-06-05 2016-12-08 Telefonaktiebolaget Lm Ericsson (Publ) Filtering for video processing

Non-Patent Citations (1)

Title
KARCZEWICZ, MARTA ET AL.: "CE2-related: CTU Based Adaptive Loop Filtering", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 18 July 2018, pages 1-2

Also Published As

Publication number Publication date
CN112997500A (en) 2021-06-18
CN112997504A (en) 2021-06-18
CN112997500B (en) 2023-04-18
CN112997504B (en) 2023-04-18
WO2020094153A1 (en) 2020-05-14

Similar Documents

Publication Publication Date Title
US11523140B2 (en) Nonlinear adaptive loop filtering in video processing
US20230217021A1 (en) On adaptive loop filtering for video coding
US11516497B2 (en) Bidirectional optical flow based video coding and decoding
WO2020094154A1 (en) Improvements for region based adaptive loop filter
JPWO2020192644A5 (en)
JPWO2020192645A5 (en)
WO2021238828A1 (en) Indication of multiple transform matrices in coded video
WO2020200159A1 (en) Interactions between adaptive loop filtering and other coding tools
RU2812618C2 (en) Nonlinear adaptive contour filtering in video data processing
US11968368B2 (en) Cross-component prediction with multiple-parameter model
US20240040116A1 (en) Guided filter usage
US20220329816A1 (en) Cross-component prediction with multiple-parameter model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19881352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19881352

Country of ref document: EP

Kind code of ref document: A1