WO2022197375A1 - Adaptive non-linear mapping for sample offset - Google Patents

Adaptive non-linear mapping for sample offset

Info

Publication number
WO2022197375A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
filter
color component
current
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2022/014255
Other languages
English (en)
French (fr)
Other versions
WO2022197375A8 (en)
Inventor
Shan Liu
Xin Zhao
Yixin DU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent America LLC
Original Assignee
Tencent America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent America LLC
Priority to EP22771888.9A (EP4118822A4)
Priority to KR1020227039257A (KR20220165776A)
Priority to CN202280003144.5A (CN115606175B)
Priority to CN202411032979.0A (CN119071482A)
Priority to JP2022560887A (JP7500757B2)
Priority to JP2024091079A (JP7765549B2)
Publication of WO2022197375A1
Publication of WO2022197375A8
Anticipated expiration
Legal status: Ceased

Classifications

    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including the following subgroups:
    • H04N 19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/107: Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/176: Adaptive coding characterised by the coding unit, the unit being an image region that is a block, e.g. a macroblock
    • H04N 19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 19/503: Predictive coding involving temporal prediction
    • H04N 19/593: Predictive coding involving spatial prediction techniques
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • This disclosure generally describes a set of advanced video coding technologies, and is specifically related to sample offset filtering with local adaptability.
  • Uncompressed digital video can include a series of pictures, with each picture having a spatial dimension of, for example, 1920 x 1080 luminance samples and associated full or subsampled chrominance samples.
  • the series of pictures can have a fixed or variable picture rate (alternatively referred to as frame rate) of, for example, 60 pictures per second or 60 frames per second.
  • Uncompressed video has specific bitrate requirements for streaming or data processing. For example, video with a pixel resolution of 1920 x 1080, a frame rate of 60 frames/second, and a chroma subsampling of 4:2:0 at 8 bit per pixel per color channel requires close to 1.5 Gbit/s bandwidth. An hour of such video requires more than 600 GBytes of storage space.
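  • As a rough check of the figures above, the raw bitrate can be computed directly from the sampling parameters. The following back-of-the-envelope sketch (the 1.5 samples-per-pixel factor is the standard 4:2:0 accounting) reproduces the numbers quoted above:

```python
# Raw bitrate of 1920 x 1080, 60 fps, 4:2:0 video at 8 bits per sample.
width, height, fps, bit_depth = 1920, 1080, 60, 8
samples_per_pixel = 1.5  # one luma sample plus quarter-resolution Cb and Cr

bits_per_second = width * height * fps * samples_per_pixel * bit_depth
print(f"{bits_per_second / 1e9:.2f} Gbit/s")              # ~1.49 Gbit/s
print(f"{bits_per_second * 3600 / 8 / 1e9:.0f} GB/hour")  # ~672 GB/hour
```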
  • One purpose of video coding and decoding can be the reduction of redundancy in the uncompressed input video signal, through compression. Compression can help reduce the aforementioned bandwidth and/or storage space requirements, in some cases, by two orders of magnitude or more. Both lossless compression and lossy compression, as well as a combination thereof can be employed. Lossless compression refers to techniques where an exact copy of the original signal can be reconstructed from the compressed original signal via a decoding process. Lossy compression refers to coding/decoding process where original video information is not fully retained during coding and not fully recoverable during decoding.
  • the reconstructed signal may not be identical to the original signal, but the distortion between original and reconstructed signals is made small enough to render the reconstructed signal useful for the intended application albeit some information loss.
  • lossy compression is widely employed in many applications. The amount of tolerable distortion depends on the application. For example, users of certain consumer video streaming applications may tolerate higher distortion than users of cinematic or television broadcasting applications.
  • the compression ratio achievable by a particular coding algorithm can be selected or adjusted to reflect various distortion tolerances: higher tolerable distortion generally allows for coding algorithms that yield higher losses and higher compression ratios.
  • a video encoder and decoder can utilize techniques from several broad categories and steps, including, for example, motion compensation, Fourier transform, quantization, and entropy coding.
  • Video codec technologies can include techniques known as intra coding.
  • Intra pictures and their derivatives, such as independent decoder refresh pictures, can be used to reset the decoder state and can, therefore, be used as the first picture in a coded video bitstream and a video session, or as a still image.
  • the samples of a block after intra prediction can then be subject to a transform into frequency domain, and the transform coefficients so generated can be quantized before entropy coding.
  • Intra prediction represents a technique that minimizes sample values in the pre-transform domain. In some cases, the smaller the DC value after a transform is, and the smaller the AC coefficients are, the fewer the bits that are required at a given quantization step size to represent the block after entropy coding.
  • some newer video compression technologies include techniques that attempt coding/decoding of blocks based on, for example, surrounding sample data and/or metadata obtained during the encoding and/or decoding of spatially neighboring blocks that precede, in decoding order, the blocks being intra coded or decoded. Such techniques are henceforth called “intra prediction” techniques. Note that in at least some cases, intra prediction uses reference data only from the current picture under reconstruction and not from other reference pictures.
  • There can be many different forms of intra prediction. When more than one of such techniques are available in a given video coding technology, the technique in use can be referred to as an intra prediction mode.
  • One or more intra prediction modes may be provided in a particular codec. In certain cases, modes can have submodes and/or may be associated with various parameters, and mode/submode information and intra coding parameters for blocks of video can be coded individually or collectively included in mode codewords. Which codeword to use for a given mode, submode, and/or parameter combination can have an impact in the coding efficiency gain through intra prediction, and so can the entropy coding technology used to translate the codewords into a bitstream.
  • a predictor block can be formed using neighboring sample values that have become available. For example, available values of a particular set of neighboring samples along certain directions and/or lines may be copied into the predictor block. A reference to the direction in use can be coded in the bitstream or may itself be predicted.
  • Referring to FIG. 1A, depicted in the lower right is a subset of nine predictor directions from among the 33 possible intra predictor directions specified in H.265 (corresponding to the 33 angular modes of the 35 intra modes specified in H.265).
  • the point where the arrows converge (101) represents the sample being predicted.
  • the arrows represent the direction from which neighboring samples are used to predict the sample at 101.
  • arrow (102) indicates that sample (101) is predicted from a neighboring sample or samples to the upper right, at a 45 degree angle from the horizontal direction.
  • arrow (103) indicates that sample (101) is predicted from a neighboring sample or samples to the lower left of sample (101), at a 22.5 degree angle from the horizontal direction.
  • the square block (104) includes 16 samples, each labelled with an “S”, its position in the Y dimension (e.g., row index) and its position in the X dimension (e.g., column index).
  • sample S21 is the second sample in the Y dimension (from the top) and the first (from the left) sample in the X dimension.
  • sample S44 is the fourth sample in block (104) in both the Y and X dimensions.
  • S44 is at the bottom right.
  • Also shown are example reference samples that follow a similar numbering scheme.
  • a reference sample is labelled with an R, its Y position (e.g., row index) and X position (column index) relative to block (104). In both H.264 and H.265, prediction samples adjacently neighboring the block under reconstruction are used.
  • Intra picture prediction of block 104 may begin by copying reference sample values from the neighboring samples according to a signaled prediction direction. For example, assuming that the coded video bitstream includes signaling that, for this block 104, indicates a prediction direction of arrow (102) — that is, samples are predicted from a prediction sample or samples to the upper right, at a 45-degree angle from the horizontal direction. In such a case, samples S41, S32, S23, and S14 are predicted from the same reference sample R05. Sample S44 is then predicted from reference sample R08.
  • the values of multiple reference samples may be combined, for example through interpolation, in order to calculate a reference sample; especially when the directions are not evenly divisible by 45 degrees.
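  • A minimal sketch of the 45-degree copy described above may look as follows (the array layout and helper name are illustrative only; a real codec also interpolates for fractional directions and handles unavailable reference samples):

```python
import numpy as np

def predict_45deg(ref_top: np.ndarray, n: int = 4) -> np.ndarray:
    """Predict an n x n block from the top reference row at a 45-degree
    angle (the direction of arrow (102)): the sample at (row r, column c),
    0-indexed, copies ref_top[r + c + 1], where ref_top[0] holds R01."""
    pred = np.empty((n, n), dtype=ref_top.dtype)
    for r in range(n):
        for c in range(n):
            pred[r, c] = ref_top[r + c + 1]  # S41, S32, S23, S14 all read R05
    return pred

ref_top = np.arange(1, 9) * 10  # stand-in values for R01..R08
print(predict_45deg(ref_top))   # bottom-right sample S44 reads R08
```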
  • FIG. 1B shows a schematic (180) that depicts 65 intra prediction directions according to JEM to illustrate the increasing number of prediction directions in various encoding technologies developed over time.
  • the manner for mapping of bits representing intra prediction directions to the prediction directions in the coded video bitstream may vary from video coding technology to video coding technology; and can range, for example, from simple direct mappings of prediction direction to intra prediction mode, to codewords, to complex adaptive schemes involving most probable modes, and similar techniques. In all cases, however, there can be certain directions for intra prediction that are statistically less likely to occur in video content than certain other directions. As the goal of video compression is the reduction of redundancy, those less likely directions will, in a well-designed video coding technology, be represented by a larger number of bits than more likely directions.
  • Inter picture prediction, or inter prediction may be based on motion compensation.
  • In motion compensation, sample data from a previously reconstructed picture or part thereof (reference picture), after being spatially shifted in a direction indicated by a motion vector (MV henceforth), may be used for a prediction of a newly reconstructed picture or picture part (e.g., a block).
  • the reference picture can be the same as the picture currently under reconstruction.
  • MVs may have two dimensions X and Y, or three dimensions, with the third dimension being an indication of the reference picture in use (akin to a time dimension).
  • a current MV applicable to a certain area of sample data can be predicted from other MVs, for example from those other MVs that are related to other areas of the sample data that are spatially adjacent to the area under reconstruction and precede the current MV in decoding order. Doing so can substantially reduce the overall amount of data required for coding the MVs by relying on removing redundancy in correlated MVs, thereby increasing compression efficiency.
  • MV prediction can work effectively, for example, because when coding an input video signal derived from a camera (known as natural video) there is a statistical likelihood that areas larger than the area to which a single MV is applicable move in a similar direction in the video sequence and, therefore, can in some cases be predicted using a similar motion vector derived from MVs of a neighboring area. That results in the actual MV for a given area being similar or identical to the MV predicted from the surrounding MVs.
  • Such an MV in turn may be represented, after entropy coding, in a smaller number of bits than what would be used if the MV is coded directly rather than predicted from the neighboring MV(s).
  • MV prediction can be an example of lossless compression of a signal (namely: the MVs) derived from the original signal (namely: the sample stream).
  • MV prediction itself can be lossy, for example because of rounding errors when calculating a predictor from several surrounding MVs.
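  • As an illustration of MV prediction, the sketch below uses a component-wise median of spatially neighboring MVs, one classic predictor construction; the particular predictor and neighbor set are assumptions for illustration, not something this disclosure mandates:

```python
def predict_mv(neighbor_mvs):
    """Component-wise median of neighboring MVs, a classic spatial
    predictor; only the residual then needs to be entropy coded."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return xs[mid], ys[mid]

actual_mv = (14, -3)
pred = predict_mv([(12, -2), (15, -4), (13, -3)])  # e.g. left, top, top-right
residual = (actual_mv[0] - pred[0], actual_mv[1] - pred[1])
print(pred, residual)  # (13, -3) (1, 0): a small, cheap-to-code residual
```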
  • a current block (201) comprises samples that have been found by the encoder during the motion search process to be predictable from a previous block of the same size that has been spatially shifted.
  • the MV can be derived from metadata associated with one or more reference pictures, for example from the most recent (in decoding order) reference picture, using the MV associated with any one of five surrounding samples, denoted A0, A1, and B0, B1, B2 (202 through 206, respectively).
  • the MV prediction can use predictors from the same reference picture that the neighboring block uses.
  • aspects of the disclosure provide methods and apparatuses for cross sample offset filtering and local sample offset filtering in video encoding and decoding.
  • a method for in-loop filtering of a video stream may include obtaining at least one statistical property associated with reconstructed samples of at least a first color component in a current reconstructed data block of the video stream; selecting a target sample offset filter among a plurality of sample offset filters based on the at least one statistical property, the target sample offset filter comprising a non-linear mapping between sample delta measures and sample offset values; and filtering a current sample in a second color component of the current reconstructed data block using the target sample offset filter and reference samples in a third color component of the current reconstructed data block to generate a filtered reconstructed sample of the current sample.
  • the first color component may be a same color component as the third color component.
  • the first color component may be a same color component as the second color component.
  • the second color component may be a different color component from the third color component.
  • the second color component may be a same color component as the third color component.
  • the at least the first color component may include one, two, or three color components.
  • the at least one statistical property may include edge information of the current reconstructed data block.
  • the edge information of the current reconstructed data block comprises an edge direction derived in a Constrained Directional Enhanced Filtering (CDEF) process.
  • the plurality of sample offset filters comprise N sample offset filters corresponding to N CDEF edge directions, N being an integer between 1 and 8 inclusive.
  • the plurality of sample offset filters are signaled at a frame level in high level syntax (HLS).
  • the at least one statistical property comprises a smoothness measure of the current reconstructed data block.
  • the smoothness measure of the current reconstructed data block is mapped to one of M predefined smoothness levels characterized by M-1 smoothness level thresholds; the plurality of sample offset filters comprises M sample offset filters corresponding to the M predefined smoothness levels; and the target sample offset filter is selected from the M sample offset filters according to the one of the M predefined smoothness levels mapped to the smoothness measure.
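  • The mapping to M levels via M-1 thresholds described above can be read as a simple threshold scan, as in the following sketch (the scalar smoothness measure and the threshold values are hypothetical placeholders):

```python
def smoothness_level(measure: float, thresholds: list) -> int:
    """Map a smoothness measure to one of M levels using M-1 ascending
    thresholds; the level then indexes one of the M sample offset filters."""
    for level, t in enumerate(thresholds):
        if measure < t:
            return level
    return len(thresholds)  # the last level, M-1

thresholds = [2.0, 8.0, 32.0]              # hypothetical: M-1 = 3, so M = 4
print(smoothness_level(5.0, thresholds))   # -> level 1
```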
  • the at least one statistical property comprises coding information of the current reconstructed data block.
  • coding information comprises a current prediction mode of the current reconstructed data block; the plurality of sample offset filters correspond to different prediction modes; and the target sample offset filter is selected from the plurality of sample offset filters according to the current prediction mode of the current reconstructed data block.
  • the different prediction modes comprise at least one of intra DC mode, intra Planar mode, intra PAETH mode, intra SMOOTH mode, intra recursive filtering mode, and inter SKIP mode.
  • each of the plurality of sample offset filters is associated with a set of filter coefficients, a number of filter taps, and positions of the number of filter taps.
  • filtering the current sample in the second color component of the current reconstructed data block using the target sample offset filter and the reference samples in the third color component of the current reconstructed data block to generate the filtered reconstructed sample of the current sample may include determining a first location of the current sample of the second color component and second locations of a plurality of filter taps associated with the target sample offset filter; identifying reconstructed samples of the third color component at the first location and the second locations as the reference samples; determining a delta measure between the reference samples corresponding to the second locations and the reference sample corresponding to the first location, both in the third color component of the current reconstructed data block; extracting a sample offset value from the target sample offset filter based on the delta measure; and filtering the current sample of the second color component using the sample offset value to generate the filtered reconstructed sample of the current sample.
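  • The filtering steps listed above can be sketched as follows. This is a non-normative illustration: the tap pattern, the quantization of the delta measure, and the offset lookup table are hypothetical stand-ins for whatever the selected target sample offset filter actually defines:

```python
import numpy as np

def ccso_filter_sample(third_comp, second_comp, y, x, tap_offsets,
                       offset_lut, quant_step=16, bit_depth=8):
    """Filter one sample of the second color component at (y, x) using
    co-located reference samples of the third component, following the
    steps listed above (boundary handling omitted for brevity)."""
    center = int(third_comp[y, x])          # reference at the first location
    deltas = []
    for dy, dx in tap_offsets:              # second locations (filter taps)
        d = int(third_comp[y + dy, x + dx]) - center
        # quantize each delta to {-1, 0, 1}: one possible delta measure
        deltas.append(0 if abs(d) < quant_step else (1 if d > 0 else -1))
    offset = offset_lut[tuple(deltas)]      # non-linear offset mapping
    filtered = int(second_comp[y, x]) + offset
    return int(np.clip(filtered, 0, (1 << bit_depth) - 1))

taps = [(-1, 0), (1, 0)]  # hypothetical 2-tap vertical pattern
lut = {(a, b): a + b for a in (-1, 0, 1) for b in (-1, 0, 1)}  # toy offset table

Y = np.full((16, 16), 128); Y[9, 8] = 160  # third component (e.g. luma)
Cb = np.full((16, 16), 90)                 # second component (e.g. chroma)
print(ccso_filter_sample(Y, Cb, 8, 8, taps, lut))  # 90 + lut[(0, 1)] = 91
```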
  • the plurality of sample offset filters may be predetermined.
  • the plurality of sample offset filters and an index of the selected target sample offset filter among the plurality of sample offset filters may be signaled at a sequence level, a picture level, or coding tree unit level.
  • a video encoding or decoding device may include circuitry configured to implement any of the methods above.
  • aspects of the disclosure also provide non-transitory computer-readable mediums storing instructions which when executed by a computer for video decoding and/or encoding cause the computer to perform the methods for video decoding and/or encoding.
  • FIG. 1A shows a schematic illustration of an exemplary subset of intra prediction directional modes.
  • FIG. 1B shows an illustration of exemplary intra prediction directions.
  • FIG. 2 shows a schematic illustration of a current block and its surrounding spatial merge candidates for motion vector prediction in one example.
  • FIG. 3 shows a schematic illustration of a simplified block diagram of a communication system (300) in accordance with an example embodiment.
  • FIG. 4 shows a schematic illustration of a simplified block diagram of a communication system (400) in accordance with an example embodiment.
  • FIG. 5 shows a schematic illustration of a simplified block diagram of a video decoder in accordance with an example embodiment.
  • FIG. 6 shows a schematic illustration of a simplified block diagram of a video encoder in accordance with an example embodiment.
  • FIG. 7 shows a block diagram of a video encoder in accordance with another example embodiment.
  • FIG. 8 shows a block diagram of a video decoder in accordance with another example embodiment.
  • FIG. 9 shows exemplary adaptive loop filters according to embodiments of the disclosure.
  • FIGs. 10A-10D show examples of subsampled positions used for calculating gradients of a vertical direction, a horizontal direction, and two diagonal directions, respectively, according to embodiments of the disclosure.
  • FIG. 10E shows an example manner to determine block directionality based on various gradients for use by an Adaptive Loop Filter (ALF).
  • FIGs. 11A and 11B show modified block classifications at virtual boundaries according to example embodiments of the disclosure.
  • FIGs. 12A-12F show exemplary adaptive loop filters with padding operations at respective virtual boundaries according to embodiments of the disclosure.
  • FIG. 13 shows an example of largest coding unit aligned picture quadtree splitting according to an embodiment of the disclosure.
  • FIG. 14 shows a quadtree split pattern corresponding to FIG. 13 according to an example embodiment of the disclosure.
  • FIG. 15 shows cross-component filters used to generate chroma components according to an example embodiment of the disclosure.
  • FIG. 16 shows an example of a cross-component ALF filter according to an embodiment of the disclosure.
  • FIGs. 17A-17B show exemplary locations of chroma samples relative to luma samples according to embodiments of the disclosure.
  • FIG. 18 shows an example of direction search for a block according to an embodiment of the disclosure.
  • FIG. 19 shows an example of a subspace projection according to an embodiment of the disclosure.
  • FIG. 20 shows an example of a filter support area in a Cross-Component Sample Offset (CCSO) filter according to an embodiment of the disclosure.
  • FIGs. 21A-21C show an exemplary mapping used in a CCSO filter according to an embodiment of the disclosure.
  • FIG. 22 shows an example implementation of a CCSO filter according to an embodiment of the disclosure.
  • FIG. 23 shows four exemplary patterns for pixel classifications in an edge offset according to an embodiment of the disclosure.
  • FIG. 24 shows a flow chart outlining a process (2400) according to an embodiment of the disclosure.
  • FIG. 25 shows a schematic illustration of a computer system in accordance with an example embodiment.
  • FIG. 3 illustrates a simplified block diagram of a communication system (300).
  • the communication system (300) includes a plurality of terminal devices that can communicate with each other, via, for example, a network (350).
  • the communication system (300) includes a first pair of terminal devices (310) and (320) interconnected via the network (350).
  • the first pair of terminal devices (310) and (320) may perform unidirectional transmission of data.
  • the terminal device (310) may code video data (e.g., of a stream of video pictures that are captured by the terminal device (310)) for transmission to the other terminal device (320) via the network (350).
  • the encoded video data can be transmitted in the form of one or more coded video bitstreams.
  • the terminal device (320) may receive the coded video data from the network (350), decode the coded video data to recover the video pictures and display the video pictures according to the recovered video data.
  • Unidirectional data transmission may be implemented in media serving applications and the like.
  • the communication system (300) includes a second pair of terminal devices (330) and (340) that perform bidirectional transmission of coded video data that may be implemented, for example, during a videoconferencing application.
  • each terminal device of the terminal devices (330) and (340) may code video data (e.g., of a stream of video pictures that are captured by the terminal device) for transmission to the other terminal device of the terminal devices (330) and (340) via the network (350).
  • Each terminal device of the terminal devices (330) and (340) also may receive the coded video data transmitted by the other terminal device of the terminal devices (330) and (340), and may decode the coded video data to recover the video pictures and may display the video pictures at an accessible display device according to the recovered video data.
  • the network (350) represents any number or types of networks that convey coded video data among the terminal devices (310), (320), (330) and (340), including for example wireline (wired) and/or wireless communication networks.
  • the communication network (350) may exchange data in circuit-switched, packet-switched, and/or other types of channels.
  • Representative networks include telecommunications networks, local area networks, wide area networks and/or the Internet.
  • the architecture and topology of the network (350) may be immaterial to the operation of the present disclosure unless explicitly explained herein.
  • FIG. 4 illustrates, as an example for an application for the disclosed subject matter, a placement of a video encoder and a video decoder in a video streaming environment.
  • the disclosed subject matter may be equally applicable to other video applications, including, for example, video conferencing, digital TV broadcasting, gaming, virtual reality, storage of compressed video on digital media including CD, DVD, memory stick and the like, and so on.
  • a video streaming system may include a video capture subsystem (413) that can include a video source (401), e.g., a digital camera, for creating a stream of video pictures or images (402) that are uncompressed.
  • the stream of video pictures (402) includes samples that are recorded by a digital camera of the video source 401.
  • the stream of video pictures (402), depicted as a bold line to emphasize a high data volume when compared to encoded video data (404) (or coded video bitstreams), can be processed by an electronic device (420) that includes a video encoder (403) coupled to the video source (401).
  • the video encoder (403) can include hardware, software, or a combination thereof to enable or implement aspects of the disclosed subject matter as described in more detail below.
  • the encoded video data (404) (or encoded video bitstream (404)), depicted as a thin line to emphasize a lower data volume when compared to the stream of uncompressed video pictures (402), can be stored on a streaming server (405) for future use or provided directly to downstream video devices (not shown).
  • One or more streaming client subsystems such as client subsystems (406) and (408) in FIG. 4 can access the streaming server (405) to retrieve copies (407) and (409) of the encoded video data (404).
  • a client subsystem (406) can include a video decoder (410), for example, in an electronic device (430).
  • the video decoder (410) decodes the incoming copy (407) of the encoded video data and creates an outgoing stream of video pictures (411) that are uncompressed and that can be rendered on a display (412) (e.g., a display screen) or other rendering devices (not depicted).
  • the video decoder 410 may be configured to perform some or all of the various functions described in this disclosure.
  • the encoded video data (404), (407), and (409) (e.g., video bitstreams) can be encoded according to certain video coding/compression standards. Examples of those standards include ITU-T Recommendation H.265.
  • a video coding standard under development is informally known as Versatile Video Coding (VVC).
  • the electronic devices (420) and (430) can include other components (not shown).
  • the electronic device (420) can include a video decoder (not shown) and the electronic device (430) can include a video encoder (not shown) as well.
  • FIG. 5 shows a block diagram of a video decoder (510) according to an embodiment of the present disclosure.
  • the video decoder (510) can be included in an electronic device (530).
  • the electronic device (530) can include a receiver (531) (e.g., receiving circuitry).
  • the video decoder (510) can be used in place of the video decoder (410) in the example of FIG. 4.
  • the receiver (531) may receive one or more coded video sequences to be decoded by the video decoder (510). In the same or another embodiment, one coded video sequence may be decoded at a time, where the decoding of each coded video sequence is independent from other coded video sequences. Each video sequence may be associated with multiple video frames or images.
  • the coded video sequence may be received from a channel (501), which may be a hardware/software link to a storage device which stores the encoded video data or a streaming source which transmits the encoded video data.
  • the receiver (531) may receive the encoded video data with other data such as coded audio data and/or ancillary data streams, that may be forwarded to their respective processing circuitry (not depicted).
  • the receiver (531) may separate the coded video sequence from the other data.
  • a buffer memory (515) may be disposed in between the receiver (531) and an entropy decoder / parser (520) ("parser (520)" henceforth).
  • the buffer memory (515) may be implemented as part of the video decoder (510). In other applications, it can be outside of and separate from the video decoder (510) (not depicted). In still other applications, there can be a buffer memory (not depicted) outside of the video decoder (510) for the purpose of, for example, combating network jitter, and there may be another additional buffer memory (515) inside the video decoder (510), for example to handle playback timing.
  • the buffer memory (515) may not be needed, or can be small.
  • the buffer memory (515) of sufficient size may be required, and its size can be comparatively large.
  • Such buffer memory may be implemented with an adaptive size, and may at least partially be implemented in an operating system or similar elements (not depicted) outside of the video decoder (510).
  • the video decoder (510) may include the parser (520) to reconstruct symbols (521) from the coded video sequence. Categories of those symbols include information used to manage operation of the video decoder (510), and potentially information to control a rendering device such as a display (512) (e.g., a display screen) that may or may not be an integral part of the electronic device (530) but can be coupled to the electronic device (530), as is shown in FIG. 5.
  • the control information for the rendering device(s) may be in the form of Supplemental Enhancement Information (SEI messages) or Video Usability Information (VUI) parameter set fragments (not depicted).
  • the parser (520) may parse/entropy-decode the coded video sequence that is received by the parser (520).
  • the entropy coding of the coded video sequence can be in accordance with a video coding technology or standard, and can follow various principles, including variable length coding, Huffman coding, arithmetic coding with or without context sensitivity, and so forth.
  • the parser (520) may extract from the coded video sequence a set of subgroup parameters for at least one of the subgroups of pixels in the video decoder, based upon at least one parameter corresponding to the subgroups.
  • the subgroups can include Groups of Pictures (GOPs), pictures, tiles, slices, macroblocks, Coding Units (CUs), blocks, Transform Units (TUs), Prediction Units (PUs) and so forth.
  • the parser (520) may also extract from the coded video sequence information such as transform coefficients (e.g., Fourier transform coefficients), quantizer parameter values, motion vectors, and so forth.
  • the parser (520) may perform an entropy decoding / parsing operation on the video sequence received from the buffer memory (515), so as to create symbols (521).
  • Reconstruction of the symbols (521) can involve multiple different processing or functional units depending on the type of the coded video picture or parts thereof (such as: inter and intra picture, inter and intra block), and other factors.
  • the units that are involved and how they are involved may be controlled by the subgroup control information that was parsed from the coded video sequence by the parser (520).
  • the flow of such subgroup control information between the parser (520) and the multiple processing or functional units below is not depicted for simplicity.
  • the video decoder (510) can be conceptually subdivided into a number of functional units as described below. In a practical implementation operating under commercial constraints, many of these functional units interact closely with each other and can, at least partly, be integrated with one another. However, for the purpose of describing the various functions of the disclosed subject matter with clarity, the conceptual subdivision into the functional units is adopted in the disclosure below.
  • a first unit may include the scaler / inverse transform unit (551).
  • the scaler / inverse transform unit (551) may receive a quantized transform coefficient as well as control information, including information indicating which type of inverse transform to use, block size, quantization factor/parameters, quantization scaling matrices, and the like as symbol(s) (521) from the parser (520).
  • the scaler / inverse transform unit (551) can output blocks comprising sample values that can be input into aggregator (555).
  • the output samples of the scaler / inverse transform (551) can pertain to an intra coded block, i.e., a block that does not use predictive information from previously reconstructed pictures, but can use predictive information from previously reconstructed parts of the current picture.
  • Such predictive information can be provided by an intra picture prediction unit (552).
  • the intra picture prediction unit (552) may generate a block of the same size and shape of the block under reconstruction using surrounding block information that is already reconstructed and stored in the current picture buffer (558).
  • the current picture buffer (558) buffers, for example, partly reconstructed current picture and/or fully reconstructed current picture.
  • the aggregator (555) may add, on a per sample basis, the prediction information the intra prediction unit (552) has generated to the output sample information as provided by the scaler / inverse transform unit (551).
  • a motion compensation prediction unit (553) can access reference picture memory (557) to fetch samples used for inter-picture prediction. After motion compensating the fetched samples in accordance with the symbols (521) pertaining to the block, these samples can be added by the aggregator (555) to the output of the scaler / inverse transform unit (551) (the output of unit (551) may be referred to as the residual samples or residual signal) so as to generate output sample information.
  • the addresses within the reference picture memory (557) from where the motion compensation prediction unit (553) fetches prediction samples can be controlled by motion vectors, available to the motion compensation prediction unit (553) in the form of symbols (521) that can have, for example X, Y components (shift), and reference picture components (time).
  • Motion compensation may also include interpolation of sample values as fetched from the reference picture memory (557) when sub-sample exact motion vectors are in use, and may also be associated with motion vector prediction mechanisms, and so forth.
  • the output samples of the aggregator (555) can be subject to various loop filtering techniques in the loop filter unit (556).
  • Video compression technologies can include in-loop filter technologies that are controlled by parameters included in the coded video sequence (also referred to as coded video bitstream) and made available to the loop filter unit (556) as symbols (521) from the parser (520), but can also be responsive to meta-information obtained during the decoding of previous (in decoding order) parts of the coded picture or coded video sequence, as well as responsive to previously reconstructed and loop-filtered sample values.
  • Several types of loop filters may be included as part of the loop filter unit (556) in various orders, as will be described in further detail below.
  • the output of the loop filter unit (556) can be a sample stream that can be output to the rendering device (512) as well as stored in the reference picture memory (557) for use in future inter-picture prediction.
  • Certain coded pictures once fully reconstructed, can be used as reference pictures for future inter-picture prediction. For example, once a coded picture corresponding to a current picture is fully reconstructed and the coded picture has been identified as a reference picture (by, for example, the parser (520)), the current picture buffer (558) can become a part of the reference picture memory (557), and a fresh current picture buffer can be reallocated before commencing the reconstruction of the following coded picture.
  • the video decoder (510) may perform decoding operations according to a predetermined video compression technology adopted in a standard, such as ITU-T Rec. H.265.
  • the coded video sequence may conform to a syntax specified by the video compression technology or standard being used, in the sense that the coded video sequence adheres to both the syntax of the video compression technology or standard and the profiles as documented in the video compression technology or standard.
  • a profile can select certain tools from all the tools available in the video compression technology or standard as the only tools available for use under that profile.
  • the complexity of the coded video sequence may be within bounds as defined by the level of the video compression technology or standard.
  • levels restrict the maximum picture size, maximum frame rate, maximum reconstruction sample rate (measured in, for example megasamples per second), maximum reference picture size, and so on. Limits set by levels can, in some cases, be further restricted through Hypothetical Reference Decoder (HRD) specifications and metadata for HRD buffer management signaled in the coded video sequence.
  • the receiver (531) may receive additional (redundant) data with the encoded video.
  • the additional data may be included as part of the coded video sequence(s).
  • the additional data may be used by the video decoder (510) to properly decode the data and/or to more accurately reconstruct the original video data.
  • Additional data can be in the form of, for example, temporal, spatial, or signal noise ratio (SNR) enhancement layers, redundant slices, redundant pictures, forward error correction codes, and so on.
  • FIG. 6 shows a block diagram of a video encoder (603) according to an example embodiment of the present disclosure.
  • the video encoder (603) may be included in an electronic device (620).
  • the electronic device (620) may further include a transmitter (640) (e.g., transmitting circuitry).
  • the video encoder (603) can be used in place of the video encoder (403) in the example of FIG. 4.
  • the video encoder (603) may receive video samples from a video source (601).
  • the video source (601) may be implemented as a portion of the electronic device (620).
  • the video source (601) may provide the source video sequence to be coded by the video encoder (603) in the form of a digital video sample stream that can be of any suitable bit depth (for example: 8 bit, 10 bit, 12 bit, ...), any colorspace (for example, BT.601 YCrCb, RGB, XYZ...), and any suitable sampling structure (for example YCrCb 4:2:0, YCrCb 4:4:4).
  • the video source (601) may be a storage device capable of storing previously prepared video.
  • the video source (601) may be a camera that captures local image information as a video sequence.
  • Video data may be provided as a plurality of individual pictures or images that impart motion when viewed in sequence.
  • the pictures themselves may be organized as a spatial array of pixels, wherein each pixel can comprise one or more samples depending on the sampling structure, color space, and the like being in use.
  • a person having ordinary skill in the art can readily understand the relationship between pixels and samples. The description below focuses on samples.
  • the video encoder may code and compress the pictures of the source video sequence into a coded video sequence (643) in real time or under any other time constraints as required by the application.
  • Enforcing appropriate coding speed constitutes one function of a controller (650).
  • the controller (650) may be functionally coupled to and control other functional units as described below. The coupling is not depicted for simplicity.
  • Parameters set by the controller (650) can include rate control related parameters (picture skip, quantizer, lambda value of rate-distortion optimization techniques, ...), picture size, group of pictures (GOP) layout, maximum motion vector search range, and the like.
  • the controller (650) can be configured to have other suitable functions that pertain to the video encoder (603) optimized for a certain system design.
  • the video encoder (603) may be configured to operate in a coding loop.
  • the coding loop can include a source coder (630) (e.g., responsible for creating symbols, such as a symbol stream, based on an input picture to be coded, and a reference picture(s)), and a (local) decoder (633) embedded in the video encoder (603).
  • the decoder (633) reconstructs the symbols to create the sample data in a similar manner as a (remote) decoder would, even though the embedded decoder (633) processes the coded video stream from the source coder (630) without entropy coding (as any compression between symbols and coded video bitstream in entropy coding may be lossless in the video compression technologies considered in the disclosed subject matter).
  • the reconstructed sample stream (sample data) is input to the reference picture memory (634).
  • the content of the reference picture memory (634) is also bit exact between the local encoder and a remote decoder.
  • the prediction part of an encoder "sees” as reference picture samples exactly the same sample values as a decoder would "see” when using prediction during decoding.
  • This fundamental principle of reference picture synchronicity (and resulting drift, if synchronicity cannot be maintained, for example because of channel errors) is used to improve coding quality.
  • The operation of the “local” decoder (633) can be the same as that of a “remote” decoder, such as the video decoder (510), which has already been described in detail above in conjunction with FIG. 5.
  • the entropy decoding parts of the video decoder (510), including the buffer memory (515), and parser (520) may not be fully implemented in the local decoder (633) in the encoder.
  • the source coder (630) may perform motion compensated predictive coding, which codes an input picture predictively with reference to one or more previously coded pictures from the video sequence that were designated as “reference pictures.” In this manner, the coding engine (632) codes differences (or residue) in the color channels between pixel blocks of an input picture and pixel blocks of reference picture(s) that may be selected as prediction reference(s) to the input picture.
  • the local video decoder (633) may decode coded video data of pictures that may be designated as reference pictures, based on symbols created by the source coder (630). Operations of the coding engine (632) may advantageously be lossy processes.
  • the coded video data may be decoded at a video decoder (not shown in FIG. 6)
  • the reconstructed video sequence typically may be a replica of the source video sequence with some errors.
  • the local video decoder (633) replicates decoding processes that may be performed by the video decoder on reference pictures and may cause reconstructed reference pictures to be stored in the reference picture cache (634). In this manner, the video encoder (603) may store copies of reconstructed reference pictures locally that have common content as the reconstructed reference pictures that will be obtained by a far-end (remote) video decoder (absent transmission errors).
  • the predictor (635) may perform prediction searches for the coding engine (632).
  • the predictor (635) may search the reference picture memory (634) for sample data (as candidate reference pixel blocks) or certain metadata such as reference picture motion vectors, block shapes, and so on, that may serve as an appropriate prediction reference for the new pictures.
  • the predictor (635) may operate on a sample block-by-pixel block basis to find appropriate prediction references.
  • an input picture may have prediction references drawn from multiple reference pictures stored in the reference picture memory (634).
  • the controller (650) may manage coding operations of the source coder (630).
  • Output of all aforementioned functional units may be subjected to entropy coding in the entropy coder (645).
  • the entropy coder (645) translates the symbols as generated by the various functional units into a coded video sequence, by lossless compression of the symbols according to technologies such as Huffman coding, variable length coding, arithmetic coding, and so forth.
  • the transmitter (640) may buffer the coded video sequence(s) as created by the entropy coder (645) to prepare for transmission via a communication channel (660), which may be a hardware/software link to a storage device which would store the encoded video data.
  • the transmitter (640) may merge coded video data from the video coder (603) with other data to be transmitted, for example, coded audio data and/or ancillary data streams (sources not shown).
  • the controller (650) may manage operation of the video encoder (603).
  • the controller (650) may assign to each coded picture a certain coded picture type, which may affect the coding techniques that may be applied to the respective picture. For example, pictures often may be assigned as one of the following picture types:
  • An Intra Picture may be one that may be coded and decoded without using any other picture in the sequence as a source of prediction.
  • Some video codecs allow for different types of intra pictures, including, for example Independent Decoder Refresh (“IDR”) Pictures.
  • a predictive picture may be one that may be coded and decoded using intra prediction or inter prediction using at most one motion vector and reference index to predict the sample values of each block.
  • a bi-directionally predictive picture may be one that may be coded and decoded using intra prediction or inter prediction using at most two motion vectors and reference indices to predict the sample values of each block. Similarly, multiple-predictive pictures can use more than two reference pictures and associated metadata for the reconstruction of a single block.
  • Source pictures commonly may be subdivided spatially into a plurality of sample coding blocks (for example, blocks of 4 x 4, 8 x 8, 4 x 8, or 16 x 16 samples each) and coded on a block-by-block basis. Blocks may be coded predictively with reference to other (already coded) blocks as determined by the coding assignment applied to the blocks’ respective pictures.
  • blocks of I pictures may be coded non-predictively or they may be coded predictively with reference to already coded blocks of the same picture (spatial prediction or intra prediction).
  • Pixel blocks of P pictures may be coded predictively, via spatial prediction or via temporal prediction with reference to one previously coded reference picture.
  • Blocks of B pictures may be coded predictively, via spatial prediction or via temporal prediction with reference to one or two previously coded reference pictures.
  • the source pictures or the intermediate processed pictures may be subdivided into other types of blocks for other purposes. The division of coding blocks and the other types of blocks may or may not follow the same manner, as described in further detail below.
  • the video encoder (603) may perform coding operations according to a predetermined video coding technology or standard, such as ITU-T Rec. H.265. In its operation, the video encoder (603) may perform various compression operations, including predictive coding operations that exploit temporal and spatial redundancies in the input video sequence.
  • the coded video data may accordingly conform to a syntax specified by the video coding technology or standard being used.
  • the transmitter (640) may transmit additional data with the encoded video.
  • the source coder (630) may include such data as part of the coded video sequence.
  • the additional data may comprise temporal/spatial/SNR enhancement layers, other forms of redundant data such as redundant pictures and slices, SEI messages, VUI parameter set fragments, and so on.
  • a video may be captured as a plurality of source pictures (video pictures) in a temporal sequence.
  • Intra-picture prediction (often abbreviated to intra prediction) utilizes spatial correlation in a given picture
  • inter-picture prediction utilizes temporal or other correlation between the pictures.
  • a specific picture under encoding/decoding which is referred to as a current picture, may be partitioned into blocks.
  • a block in the current picture when similar to a reference block in a previously coded and still buffered reference picture in the video, may be coded by a vector that is referred to as a motion vector.
  • the motion vector points to the reference block in the reference picture, and can have a third dimension identifying the reference picture, in case multiple reference pictures are in use.
  • a bi-prediction technique can be used for inter-picture prediction. According to such a bi-prediction technique, two reference pictures, such as a first reference picture and a second reference picture that both precede the current picture in the video in decoding order (but may be in the past or future, respectively, in display order), are used.
  • a block in the current picture can be coded by a first motion vector that points to a first reference block in the first reference picture, and a second motion vector that points to a second reference block in the second reference picture.
  • the block can be jointly predicted by a combination of the first reference block and the second reference block.
  • a merge mode technique may be used in the inter-picture prediction to improve coding efficiency.
  • predictions are performed in the unit of blocks.
  • a picture in a sequence of video pictures is partitioned into coding tree units (CTUs) for compression; the CTUs in a picture may have the same size, such as 64 x 64 pixels, 32 x 32 pixels, or 16 x 16 pixels.
  • a CTU may include three parallel coding tree blocks (CTBs): one luma CTB and two chroma CTBs.
  • Each CTU can be recursively quadtree split into one or multiple coding units (CUs).
  • a CTU of 64 x 64 pixels can be split into one CU of 64 x 64 pixels, or 4 CUs of 32 x 32 pixels.
  • Each of the 32 x 32 CUs may be further split into 4 CUs of 16 x 16 pixels, as sketched below.
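For illustration, the recursive quadtree splitting described above can be sketched in a few lines of Python; the should_split() callback is a hypothetical stand-in for an encoder's rate-distortion decision, not a function defined in this disclosure:

```python
# Minimal sketch of recursive CTU quadtree splitting into CUs.
# should_split(x, y, size) is a hypothetical encoder decision callback.

def split_ctu(x, y, size, min_size, should_split):
    """Return a list of (x, y, size) leaf CUs covering the CTU at (x, y)."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]          # leaf CU, e.g. 64x64, 32x32, 16x16
    half = size // 2
    cus = []
    for dy in (0, half):               # four equal quadrants,
        for dx in (0, half):           # visited in z-scan order
            cus += split_ctu(x + dx, y + dy, half, min_size, should_split)
    return cus

# Example: split a 64x64 CTU once, then refine only the top-left quadrant.
cus = split_ctu(0, 0, 64, 16,
                lambda x, y, s: s == 64 or (s == 32 and x < 32 and y < 32))
print(cus)  # four 16x16 CUs in the top-left quadrant, three 32x32 CUs
```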
  • each CU may be analyzed during encoding to determine a prediction type for the CU among various prediction types such as an inter prediction type or an intra prediction type.
  • the CU may be split into one or more prediction units (PUs) depending on the temporal and/or spatial predictability.
  • each PU includes a luma prediction block (PB), and two chroma PBs.
  • a prediction operation in coding is performed in the unit of a prediction block.
  • the split of a CU into PUs may be performed in various spatial patterns.
  • a luma or chroma PB may include a matrix of values (e.g., luma values) for samples, such as 8 x 8 samples, 16 x 16 samples, 8 x 16 samples, 16 x 8 samples, and the like.
  • FIG. 7 shows a diagram of a video encoder (703) according to another example embodiment of the disclosure.
  • the video encoder (703) is configured to receive a processing block (e.g., a prediction block) of sample values within a current video picture in a sequence of video pictures, and encode the processing block into a coded picture that is part of a coded video sequence.
  • the example video encoder (703) may be used in place of the video encoder (403) in the FIG. 4 example.
  • the video encoder (703) receives a matrix of sample values for a processing block, such as a prediction block of 8 x 8 samples, and the like.
  • the video encoder (703) determines whether the processing block is best coded using intra mode, inter mode, or bi-prediction mode using, for example, rate-distortion optimization (RDO).
  • the video encoder (703) may use an intra prediction technique to encode the processing block into the coded picture; and when the processing block is determined to be coded in inter mode or bi-prediction mode, the video encoder (703) may use an inter prediction or bi-prediction technique, respectively, to encode the processing block into the coded picture.
  • a merge mode may be used as a submode of the inter picture prediction where the motion vector is derived from one or more motion vector predictors without the benefit of a coded motion vector component outside the predictors.
  • a motion vector component applicable to the subject block may be present. Accordingly, the video encoder (703) may include components not explicitly shown in FIG. 7, such as a mode decision module to determine the prediction mode of the processing blocks.
  • the video encoder (703) includes an inter encoder (730), an intra encoder (722), a residue calculator (723), a switch (726), a residue encoder (724), a general controller (721), and an entropy encoder (725) coupled together as shown in the example arrangement in FIG. 7.
  • the inter encoder (730) is configured to receive the samples of the current block (e.g., a processing block), compare the block to one or more reference blocks in reference pictures (e.g., blocks in previous pictures and later pictures in display order), generate inter prediction information (e.g., description of redundant information according to inter encoding technique, motion vectors, merge mode information), and calculate inter prediction results (e.g., predicted block) based on the inter prediction information using any suitable technique.
  • the reference pictures are decoded reference pictures that are decoded based on the encoded video information by the decoding unit (633) embedded in the example encoder (620) of FIG. 6 (corresponding to the residue decoder (728) of FIG. 7, as described in further detail below).
  • the intra encoder (722) is configured to receive the samples of the current block (e.g., a processing block), compare the block to blocks already coded in the same picture, and generate quantized coefficients after transform, and in some cases also to generate intra prediction information (e.g., an intra prediction direction information according to one or more intra encoding techniques).
  • the intra encoder (722) may calculate intra prediction results (e.g., a predicted block) based on the intra prediction information and reference blocks in the same picture.
  • the general controller (721) may be configured to determine general control data and control other components of the video encoder (703) based on the general control data. In an example, the general controller (721) determines the prediction mode of the block, and provides a control signal to the switch (726) based on the prediction mode.
  • when the prediction mode for the block is the intra mode, the general controller (721) controls the switch (726) to select the intra mode result for use by the residue calculator (723), and controls the entropy encoder (725) to select the intra prediction information and include the intra prediction information in the bitstream; and when the prediction mode for the block is the inter mode, the general controller (721) controls the switch (726) to select the inter prediction result for use by the residue calculator (723), and controls the entropy encoder (725) to select the inter prediction information and include the inter prediction information in the bitstream.
  • the residue calculator (723) may be configured to calculate a difference (residue data) between the received block and prediction results selected from the intra encoder (722) or the inter encoder (730).
  • the residue encoder (724) may be configured to encode the residue data to generate transform coefficients.
  • the residue encoder (724) may be configured to convert the residue data from a spatial domain to a frequency domain to generate the transform coefficients.
  • the transform coefficients are then subject to quantization processing to obtain quantized transform coefficients.
  • the video encoder (703) also includes a residue decoder (728).
  • the residue decoder (728) is configured to perform an inverse transform and generate the decoded residue data.
  • the decoded residue data can be suitably used by the intra encoder (722) and the inter encoder (730).
  • the inter encoder (730) can generate decoded blocks based on the decoded residue data and inter prediction information
  • the intra encoder (722) can generate decoded blocks based on the decoded residue data and the intra prediction information.
  • the decoded blocks are suitably processed to generate decoded pictures and the decoded pictures can be buffered in a memory circuit (not shown) and used as reference pictures.
  • the entropy encoder (725) may be configured to format the bitstream to include the encoded block and perform entropy coding.
  • the entropy encoder (725) is configured to include in the bitstream various information.
  • the entropy encoder (725) may be configured to include the general control data, the selected prediction information (e.g., intra prediction information or inter prediction information), the residue information, and other suitable information in the bitstream.
  • FIG. 8 shows a diagram of an example video decoder (810) according to another embodiment of the disclosure.
  • the video decoder (810) is configured to receive coded pictures that are part of a coded video sequence, and decode the coded pictures to generate reconstructed pictures.
  • the video decoder (810) may be used in place of the video decoder (410) in the example of FIG. 4.
  • the video decoder (810) includes an entropy decoder (871), an inter decoder (880), a residue decoder (873), a reconstruction module (874), and an intra decoder (872) coupled together as shown in the example arrangement of FIG. 8.
  • the entropy decoder (871) can be configured to reconstruct, from the coded picture, certain symbols that represent the syntax elements of which the coded picture is made up. Such symbols can include, for example, the mode in which a block is coded (e.g., intra mode, inter mode, bi-predicted mode, merge submode or another submode), prediction information (e.g., intra prediction information or inter prediction information) that can identify certain sample or metadata used for prediction by the intra decoder (872) or the inter decoder (880), residual information in the form of, for example, quantized transform coefficients, and the like.
  • the inter prediction information is provided to the inter decoder (880); and when the prediction type is the intra prediction type, the intra prediction information is provided to the intra decoder (872).
  • the residual information can be subject to inverse quantization and is provided to the residue decoder (873).
  • the inter decoder (880) may be configured to receive the inter prediction information, and generate inter prediction results based on the inter prediction information.
  • the intra decoder (872) may be configured to receive the intra prediction information, and generate prediction results based on the intra prediction information.
  • the residue decoder (873) may be configured to perform inverse quantization to extract de-quantized transform coefficients, and process the de-quantized transform coefficients to convert the residual from the frequency domain to the spatial domain.
  • the residue decoder (873) may also utilize certain control information (including the Quantizer Parameter (QP)), which may be provided by the entropy decoder (871) (data path not depicted, as this may be low-volume control information only).
  • the reconstruction module (874) may be configured to combine, in the spatial domain, the residual as output by the residue decoder (873) and the prediction results (as output by the inter or intra prediction modules as the case may be) to form a reconstructed block forming part of the reconstructed picture as part of the reconstructed video. It is noted that other suitable operations, such as a deblocking operation and the like, may also be performed to improve the visual quality.
  • the video encoders (403), (603), and (703), and the video decoders (410), (510), and (810) can be implemented using any suitable technique.
  • the video encoders (403), (603), and (703), and the video decoders (410), (510), and (810) can be implemented using one or more integrated circuits.
  • the video encoders (403), (603), and (703), and the video decoders (410), (510), and (810) can be implemented using one or more processors that execute software instructions.
  • loop filters may be included in the encoders and decoders for reducing encoding artifacts and improving quality of the decoded pictures.
  • loop filters (555) may be included as part of the decoder (530) of FIG. 5.
  • loop filters may be part of the embedded decoder unit (633) in the encoder (620) of FIG. 6. These filters are referred to as loop filters because they are included in the decoding loop for video blocks in decoders or encoders.
  • Each loop filter may be associated with one or more filtering parameters. Such filtering parameters may be predefined or may be derived by the encoder during the encoding process.
  • filtering parameters if derived by the encoder or their indices (if predefined) may be included in the final bitstream in encoded form.
  • a decoder may then parse these filtering parameters from the bitstream and perform loop filtering based on the parsed filtering parameters during decoding.
  • loop filters may be used for reducing coding artifact and improving decoded video quality in different aspects.
  • Such loop filters may include, but are not limited to, one or more deblocking filters, Adaptive Loop Filters (ALFs), Cross-Component Adaptive Loop Filters (CC-ALFs), Constrained Directional Enhancement Filters (CDEFs), Sample Adaptive Offset (SAO) filters, Cross-Component Sample Offset (CCSO) filters, and Local Sample Offset (LSO) filters.
  • An Adaptive Loop Filter (ALF) with block-based filter adaption can be applied by encoders/decoders to reduce artifacts.
  • ALF is adaptive in the sense that the filtering coefficients/parameters or their indices are signaled in the bitstream and can be designed based on image content and distortion of the reconstructed picture.
  • ALF may be applied to reduce distortion introduced by the encoding process and improve the reconstructed image quality.
  • one of a plurality of filters may be selected for a luma block (e.g., a 4 x 4 luma block), for example, based on a direction and activity of local gradients.
  • the filter coefficients of these filters may be derived by the encoder during encoding process and signaled to the decoder in the bitstream.
  • An ALF can have any suitable shape and size.
  • ALFs (910)-(911) may have a diamond shape, such as a 5x5 diamond shape for the ALF (910) and a 7x7 diamond shape for the ALF (911).
  • thirteen (13) elements (920)-(932) can be used in the filtering process and form a diamond shape.
  • Seven values (e.g., C0-C6) can be used and arranged in the illustrated example manner for the 13 elements (920)-(932).
  • twenty-five (25) elements (940)-(964) can be used in the filtering process and form a diamond shape.
  • Thirteen (13) values (e.g., C0-C12) can be used for the 25 elements (940)-(964) in the illustrated example manner.
  • ALF filters of one of the two diamond shapes (910)-(911) may be selected for processing a luma or chroma block.
  • the 5x5 diamond-shaped filter (910) can be applied for chroma components (e.g., chroma blocks, chroma CBs), and the 7x7 diamond-shaped filter (911) can be applied for a luma component (e.g., a luma block, a luma CB).
  • Other suitable shape(s) and size(s) can be used in the ALF.
  • a 9 x 9 diamond-shaped filter can be used.
  • Filter coefficients at locations indicated by the values can be non-zero.
  • clipping values at the locations can be non-zero.
  • the clipping function may be used to limit the upper bound of the filter value in the luma or chroma blocks.
  • a specific ALF to be applied to a particular block of a luma component may be based on a classification of the luma block.
  • a 4 x 4 block (or luma block, luma CB) can be categorized or classified as one of multiple (e.g., 25) classes, corresponding to, e.g., 25 different ALFs (e.g., 25 7x7 ALFs with different filter coefficients).
  • a classification index C can be derived based on a directionality parameter D and a quantized value Â of an activity value A using Eq. (1).
  • gradients gv, gh, gd1, and gd2 of a vertical, a horizontal, and two diagonal directions can be calculated using a 1-D Laplacian as follows.
  • indices i and j refer to coordinates of an upper-left sample within the 4 x 4 block, and R(k, l) indicates a reconstructed sample at a coordinate (k, l).
  • the directions (e.g., d1 and d2) refer to the two diagonal directions.
  • FIGs. 10A-10D show examples of subsampled positions used for calculating the gradients gv, gh, gd1, and gd2 of the vertical (FIG. 10A), the horizontal (FIG. 10B), and the two diagonal directions d1 (FIG. 10C) and d2 (FIG. 10D), respectively.
  • In FIG. 10A, labels ‘V’ show the subsampled positions used to calculate the vertical gradient gv.
  • In FIG. 10B, labels ‘H’ show the subsampled positions used to calculate the horizontal gradient gh.
  • labels ‘D1’ show the subsampled positions used to calculate the d1 diagonal gradient gd1.
  • labels ‘D2’ show the subsampled positions used to calculate the d2 diagonal gradient gd2.
  • FIGs. 10A and 10B show that the same subsampled positions can be used for gradient calculation of the different directions.
  • a different subsampling scheme can be used for all directions.
  • different subsampling schemes can be used for different directions.
  • a maximum value g^max_h,v and a minimum value g^min_h,v of the gradients of the horizontal and vertical directions gv and gh can be set as:
  • a maximum value g^max_d1,d2 and a minimum value g^min_d1,d2 of the gradients of the two diagonal directions gd1 and gd2 can be set as:
  • the directionality parameter D can be derived based on the above values and two thresholds t1 and t2 as below.
  • Step 2: If g^max_h,v / g^min_h,v > g^max_d1,d2 / g^min_d1,d2, continue to Step 3; otherwise continue to Step 4.
  • the directionality parameter D is denoted by several discrete levels determined based on the gradient value spread for the luma block between the horizontal and vertical directions and between the two diagonal directions, as illustrated in FIG. 10E.
  • the activity value A can be calculated as:
  • the activity value A thus represents a composite measure of horizonal and vertical 1-D Laplacians.
  • the activity value A for the luma block can be further quantized to a range of, for example, 0 to 4, inclusive, and the quantized value is denoted as Â.
  • the classification index C as calculated above may then be used to select one of the multiple classes (e.g., 25 classes) of diamond-shaped ALF filters.
  • no block classification may be applied, and thus a single set of ALF coefficients can be applied for each chroma component.
  • the determination of an ALF coefficient may not be dependent on any classification of a chroma block.
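A compact sketch of this luma block classification follows. The thresholds t1 and t2, the activity quantizer, and the final index C = 5*D + A_hat are assumptions following a common VVC-style derivation; the patent's own Eq. (1) and gradient equations are referenced above but not reproduced in this text:

```python
import numpy as np

def classify_4x4(R, i0, j0, bit_depth=8, t1=2.0, t2=4.5):
    """Sketch of ALF luma block classification for the 4x4 block at (i0, j0).
    R is a 2-D array of reconstructed samples with enough padding. The
    thresholds t1, t2 and the index C = 5*D + A_hat are assumed here."""
    gv = gh = gd1 = gd2 = 0
    # 1-D Laplacians over the block neighborhood (the standard subsamples
    # these positions; the full loop is kept here for clarity).
    for i in range(i0 - 2, i0 + 6):
        for j in range(j0 - 2, j0 + 6):
            c = 2 * int(R[i, j])
            gv  += abs(c - int(R[i - 1, j]) - int(R[i + 1, j]))
            gh  += abs(c - int(R[i, j - 1]) - int(R[i, j + 1]))
            gd1 += abs(c - int(R[i - 1, j - 1]) - int(R[i + 1, j + 1]))
            gd2 += abs(c - int(R[i - 1, j + 1]) - int(R[i + 1, j - 1]))
    hv_max, hv_min = max(gv, gh), min(gv, gh)
    d_max, d_min = max(gd1, gd2), min(gd1, gd2)
    # Directionality D (Steps 1-4); ratios are compared by cross-multiplying
    # to avoid division by zero.
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        D = 0                                  # no dominant direction
    elif hv_max * d_min > d_max * hv_min:      # horizontal/vertical dominates
        D = 2 if hv_max > t2 * hv_min else 1
    else:                                      # a diagonal dominates
        D = 4 if d_max > t2 * d_min else 3
    A = gv + gh                                # activity from h/v Laplacians
    A_hat = min(4, A // (1 << (bit_depth + 3)))  # illustrative quantizer, 0..4
    return 5 * D + A_hat                       # one of 25 classes
```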
  • Geometric transformations can be applied to filter coefficients and corresponding filter clipping values (also referred to as clipping values). Before filtering a block (e.g., a 4x4 luma block), geometric transformations such as rotation or diagonal and vertical flipping can be applied to the filter coefficients f(k, l) and the corresponding filter clipping values c(k, l), for example, depending on gradient values (e.g., gv, gh, gd1, and/or gd2) calculated for the block.
  • the geometric transformations applied to the filter coefficients f(k, l) and the corresponding filter clipping values c(k, l) can be equivalent to applying the geometric transformations to samples in a region supported by the filter.
  • the geometric transformations can make different blocks to which an ALF is applied more similar by aligning the respective directionality.
  • K represents a size of the ALF or the filter
  • 0 ⁇ k, 1 ⁇ K — 1 are coordinates of coefficients.
  • a location (0, 0) is at an upper-left corner and a location (K − 1, K − 1) is at a lower-right corner of the filter f or a clipping value matrix (or clipping matrix) c.
  • the transformations can be applied to the filter coefficients f(k, l) and the clipping values c(k, l) depending on the gradient values calculated for the block.
  • An example of a relationship between the transformation and the four gradients is summarized in Table 1.
  • Table 1 Mapping of the gradient calculated for a block and the transformation
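Since the body of Table 1 is not reproduced in this text, the sketch below illustrates the three geometric transformations on a K x K coefficient (or clipping) matrix under a (row k, column l) convention; the gradient-condition-to-transformation mapping is an assumed common assignment standing in for Table 1:

```python
import numpy as np

def transform_filter(f, gv, gh, gd1, gd2):
    """Geometrically transform a K x K coefficient (or clipping) matrix f
    based on block gradients (mapping assumed, standing in for Table 1)."""
    if gd2 < gd1 and gh < gv:
        return f                 # no transformation
    if gd2 < gd1 and gv <= gh:
        return f.T               # diagonal flip: f_D(k, l) = f(l, k)
    if gd1 <= gd2 and gh < gv:
        return f[:, ::-1]        # vertical flip: f_V(k, l) = f(k, K - 1 - l)
    return f.T[:, ::-1]          # rotation: f_R(k, l) = f(K - 1 - l, k)

f = np.arange(9).reshape(3, 3)
print(transform_filter(f, gv=1, gh=5, gd1=4, gd2=2))  # diagonal flip of f
```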
  • ALF filter parameters derived by the encoder may be signaled in an Adaptation Parameter Set (APS) for a picture.
  • one or more sets (e.g., up to 25 sets) of luma filter coefficients and clipping value indexes can be signaled; they may be indexed in the APS.
  • a set of the one or more sets can include luma filter coefficients and one or more clipping value indexes.
  • One or more sets (e.g., up to 8 sets) of chroma filter coefficients and clipping value indexes may be derived by the encoder and signaled.
  • filter coefficients of different classifications for luma components can be merged.
  • indices of the APS’s used for a current slice can be signaled.
  • the signaling of ALF may be CTU based.
  • a clipping value index (also referred to as clipping index) can be decoded from the APS.
  • the clipping value index can be used to determine a corresponding clipping value, for example, based on a relationship between the clipping value index and the corresponding clipping value.
  • the relationship can be pre-defined and stored in a decoder.
  • the relationship is described by one or more tables, such as a table (e.g., used for a luma CB) of the clipping value index and the corresponding clipping value for a luma component, and a table (e.g., used for a chroma CB) of the clipping value index and the corresponding clipping value for a chroma component.
  • the clipping value can be dependent on a bit depth B.
  • the bit depth B may refer to an internal bit depth, a bit depth of reconstructed samples in a CB to be filtered, or the like.
  • a table of clipping values (e.g., for luma and/or for chroma) may be obtained using Eq. (12).
  • n is the clipping value index (also referred to as clipping index or clipldx).
  • the clipping index n can be 0, 1, 2, and 3 in Table 2 (up to N − 1).
  • Table 2 can be used for luma blocks or chroma blocks.
  • Table 2 - AlfClip can depend on the bit depth B and clipIdx
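Eq. (12) itself is not reproduced in this text; the sketch below assumes one published formulation, AlfClip[n] = round(2^(B*(N − n)/N)) for clipping index n in 0..N−1, which yields a bit-depth-dependent table of N = 4 values:

```python
def alf_clip_table(bit_depth, num_values=4):
    """Bit-depth-dependent clipping table (assumed formulation for Eq. (12)):
    AlfClip[n] = round(2 ** (B * (N - n) / N)) for clipping index n."""
    B, N = bit_depth, num_values
    return [round(2 ** (B * (N - n) / N)) for n in range(N)]

print(alf_clip_table(8))   # [256, 64, 16, 4]
print(alf_clip_table(10))  # [1024, 181, 32, 6]
```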
  • one or more APS indices (e.g., up to 7 APS indices) can be signaled to specify luma filter sets that can be used for the current slice.
  • the filtering process can be controlled at one or more suitable levels, such as a picture level, a slice level, a CTB level, and/or the like. In an example embodiment, the filtering process can be further controlled at a CTB level.
  • a flag can be signaled to indicate whether the ALF is applied to a luma CTB.
  • the luma CTB can choose a filter set among a plurality of fixed filter sets (e.g., 16 fixed filter sets) and the filter set(s) (e.g., up to 25 filters derived by the encoder, as described above, and also referred to as signaled filter set(s)) that are signaled in the APS’s.
  • a filter set index can be signaled for the luma CTB to indicate the filter set (e.g., the filter set among the plurality of fixed filter sets and the signaled filter set(s)) to be applied.
  • the plurality of fixed filter sets can be pre-defined and hard-coded in an encoder and a decoder, and can be referred to as pre-defined filter sets. The pre-defined filter coefficients thus need not be signaled.
  • an APS index can be signaled in the slice header to indicate the chroma filter sets to be used for the current slice.
  • a filter set index can be signaled for each chroma CTB if there is more than one chroma filter set in the APS.
  • the filter coefficients can be quantized with a norm equal to 128.
  • a bitstream conformance constraint can be applied so that the coefficient value of a non-central position is in a range of −2^7 to 2^7 − 1, inclusive.
  • the central position coefficient is not signaled in the bitstream and can be considered as equal to 128.
  • alf_luma_clip_idx[ sfIdx ][ j ] can be used to specify the clipping index of the clipping value to use before multiplying by the j-th coefficient of the signaled luma filter indicated by sfIdx.
  • alf_chroma_clip_idx[ altIdx ][ j ] can be used to specify the clipping index of the clipping value to use before multiplying by the j-th coefficient of the alternative chroma filter with index altIdx.
  • the filtering process can be described as below.
  • a sample R(i,j) within a CU (or CB) of the CTB can be filtered, resulting in a filtered sample value R'(i,j) as shown below using Eq.
  • each sample in the CU is filtered.
  • f(k,l) denotes the decoded filter coefficients
  • K(x, y) is a clipping function
  • c(k, l) denotes the decoded clipping parameters (or clipping values).
  • the clipping function K(x, y) = min(y, max(−y, x)) corresponds to the clipping function Clip3(−y, y, x).
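A minimal sketch of the nonlinear filtering step built from this clipping function follows; the (acc + 64) >> 7 renormalization assumes the norm-128 coefficient quantization mentioned above, and `taps` is a hypothetical container for the decoded f(k, l) and c(k, l) pairs:

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def K(x, y):
    """Clipping function K(x, y) = min(y, max(-y, x)) = Clip3(-y, y, x)."""
    return clip3(-y, y, x)

def alf_filter_sample(R, i, j, taps):
    """Filter sample R[i][j]; `taps` maps each non-zero offset (k, l) to a
    (coefficient f, clipping value c) pair. The >> 7 renormalization
    assumes coefficients quantized with a norm of 128 (= 2**7)."""
    acc = 0
    for (k, l), (f, c) in taps.items():
        acc += f * K(R[i + k][j + l] - R[i][j], c)
    return R[i][j] + ((acc + 64) >> 7)
```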
  • the selected clipping values can be coded in an "alf_data" syntax element as follows: a suitable encoding scheme (e.g., a Golomb encoding scheme) can be used to encode a clipping index corresponding to the selected clipping value, such as shown in Table 2.
  • the encoding scheme can be the same encoding scheme used for encoding the filter set index.
  • a virtual boundary filtering process can be used to reduce a line buffer requirement of the ALF. Accordingly, modified block classification and filtering can be employed for samples near CTU boundaries (e.g., a horizontal CTU boundary).
  • a virtual boundary (1130) can be defined as a line obtained by shifting a horizontal CTU boundary (1120) by "N_samples" samples, as shown in FIG. 11A, where N_samples can be a positive integer. In an example, N_samples is equal to 4 for a luma component, and N_samples is equal to 2 for a chroma component.
  • a modified block classification can be applied for a luma component.
  • for a 1-D Laplacian gradient calculation of a 4x4 block (1110) above the virtual boundary (1130), only samples above the virtual boundary (1130) are used.
  • in FIG. 11B, for a 1-D Laplacian gradient calculation of a 4x4 block (1111) below a virtual boundary (1131) that is shifted from a CTU boundary (1121), only samples below the virtual boundary (1131) are used.
  • the quantization of an activity value A can accordingly be scaled by taking into account the reduced number of samples used in the 1-D Laplacian gradient calculation.
  • FIGs. 12A-12F illustrate examples of such modified ALF filtering for a luma component at virtual boundaries.
  • in FIG. 12A, a neighboring sample C0 can be padded with a sample C2 that is located below a virtual boundary (1210).
  • a neighboring sample C0 can be padded with a sample C2 that is located above a virtual boundary (1220).
  • neighboring samples C1-C3 can be padded with samples C5-C7, respectively, that are located below a virtual boundary (1230).
  • Sample C0 can be padded with sample C6.
  • neighboring samples C1-C3 can be padded with samples C5-C7, respectively, that are located above a virtual boundary (1240).
  • Sample C0 can be padded with sample C6.
  • neighboring samples C4-C8 can be padded with samples C10, C11, C12, C11, and C10, respectively, that are located below a virtual boundary (1250).
  • Samples C1-C3 can be padded with samples C11, C12, and C11.
  • Sample C0 can be padded with sample C12.
  • neighboring samples C4-C8 can be padded with samples C10, C11, C12, C11, and C10, respectively, that are located above a virtual boundary (1260).
  • Samples C1-C3 can be padded with samples C11, C12, and C11.
  • Sample C0 can be padded with sample C12.
  • the above description can be suitably adapted when sample(s) and neighboring sample(s) are located to the left (or to the right) and to the right (or to the left) of a virtual boundary.
  • a largest coding unit (LCU)-aligned picture quadtree splitting can be used.
  • a coding unit synchronous picture quadtree-based adaptive loop filter can be used in video coding.
  • a luma picture may be split into multiple multi-level quadtree partitions, and each partition boundary is aligned to boundaries of largest coding units (LCUs).
  • Each partition can have a filtering process, and thus can be referred to as a filter unit or filtering unit (FU).
  • An example 2-pass encoding flow is described as follows. At a first pass, a quadtree split pattern and the best filter (or an optimal filer) of each FU can be decided. Filtering distortions can be estimated by a fast filtering distortion estimation (FFDE) during the decision process.
  • FFDE fast filtering distortion estimation
  • a reconstructed picture can be filtered.
  • a CU synchronous ALF on/off control can be performed. According to the ALF on/off results, the first filtered picture is partially recovered by the reconstructed picture.
  • a top-down splitting strategy can be adopted to divide a picture into multi level quadtree partitions by using a rate-distortion criterion.
  • Each partition can be referred to as a FU.
  • the splitting process can align quadtree partitions with LCU boundaries, as shown in FIG. 13.
  • FIG. 13 shows an example of LCU-aligned picture quadtree splitting according to an embodiment of the disclosure.
  • an encoding order of FUs follows a z-scan order. For example, referring to FIG. 13, a picture is split into ten FUs (e.g., FU0-FU9, with a splitting depth of 2, with FU0, FU1, and FU9 being the first depth level FUs; FU2, FU6, FU7, and FU8 being the second depth level FUs; and FU3-FU5 being the third depth level FUs), and the encoding order is from FU0 to FU9, e.g., FU0, FU1, FU2, FU3, FU4, FU5, FU6, FU7, FU8, and FU9.
  • split flags (“1” representing a quadtree split, and “0” representing no quadtree split) can be encoded and transmitted in a z-scan order.
  • FIG. 14 shows a quadtree split pattern corresponding to FIG. 13 according to an embodiment of the disclosure. As shown in the example of FIG. 14, the quadtree split flags are encoded in a z-scan order.
  • a filter of each FU can be selected from two filter sets based on a rate- distortion criterion.
  • the first set can have 1/2-symmetric square-shaped and rhombus-shaped filters newly derived for a current FU.
  • the second set can be from time-delayed filter buffers.
  • the time-delayed filter buffers can store filters previously derived for FUs in prior pictures.
  • the filter with the minimum rate-distortion cost of the two filter sets can be chosen for the current FU.
  • the rate-distortion costs of the four children FUs can be calculated.
  • the picture quadtree split pattern can be determined (in other words, whether the quadtree split of the current FU should stop).
  • a maximum quadtree split level or depth may be limited to a predefined number.
  • the maximum quadtree split level or depth may be 2, and thus a maximum number of FUs may be 16 (i.e., 4 raised to the power of the maximum split depth).
  • correlation values for deriving Wiener coefficients of the 16 FUs at the bottom quadtree level (smallest FUs) can be reused.
  • the remaining FUs can derive their Wiener filters from the correlations of the 16 FUs at the bottom quadtree level. Therefore, in an example, there is only one frame buffer access for deriving the filter coefficients of all FUs.
  • the CU synchronous ALF on/off control can be performed.
  • a leaf CU can explicitly switch ALF on/off in a corresponding local region.
  • the coding efficiency may be further improved by redesigning filter coefficients according to the ALF on/off results.
  • the redesigning process needs additional frame buffer accesses.
  • a cross-component filtering process can apply cross-component filters, such as cross-component adaptive loop filters (CC-ALFs).
  • the cross-component filter can use luma sample values of a luma component (e.g., a luma CB) to refine a chroma component (e.g., a chroma CB corresponding to the luma CB).
  • the luma CB and the chroma CB are included in a CU.
  • FIG. 15 shows cross-component filters (e.g., CC-ALFs) used to generate chroma components according to an example embodiment of the disclosure.
  • FIG. 15 shows filtering processes for a first chroma component (e.g., a first chroma CB), a second chroma component (e.g., a second chroma CB), and a luma component (e.g., a luma CB).
  • the luma component can be filtered by a sample adaptive offset (SAO) filter (1510) to generate a SAO filtered luma component (1541).
  • the SAO filtered luma component (1541) can be further filtered by an ALF luma filter (1516) to become a filtered luma CB (1561).
  • the first chroma component can be filtered by a SAO filter (1512) and an ALF chroma filter (1518) to generate a first intermediate component (1552).
  • the SAO filtered luma component (1541) can be filtered by a cross-component filter (e.g., CC- ALF) (1521) for the first chroma component to generate a second intermediate component (1542).
  • subsequently, a filtered first chroma component (1562) (e.g., ‘Cb’) can be generated based on at least one of the second intermediate component (1542) and the first intermediate component (1552). For example, the filtered first chroma component (1562) can be generated by combining the second intermediate component (1542) and the first intermediate component (1552) with an adder (1522).
  • the example cross-component adaptive loop filtering process for the first chroma component thus can include a step performed by the CC-ALF (1521) and a step performed by, for example, the adder (1522).
  • the second chroma component can be filtered by a SAO filter (1514) and the ALF chroma filter (1518) to generate a third intermediate component (1553). Further, the SAO filtered luma component (1541) can be filtered by a cross-component filter (e.g., a CC-ALF) (1531) for the second chroma component to generate a fourth intermediate component (1543). Subsequently, a filtered second chroma component (1563) (e.g., ‘Cr’) can be generated based on at least one of the fourth intermediate component (1543) and the third intermediate component (1553).
  • the filtered second chroma component (1563) (e.g., ‘Cr’) can be generated by combining the fourth intermediate component (1543) and the third intermediate component (1553) with an adder (1532).
  • the cross-component adaptive loop filtering process for the second chroma component thus can include a step performed by the CC-ALF (1531) and a step performed by, for example, the adder (1532).
  • a cross-component filter (e.g., the CC-ALF (1521), the CC-ALF (1531)) can operate by applying a linear filter having any suitable filter shape to the luma component (or a luma channel) to refine each chroma component (e.g., the first chroma component, the second chroma component).
  • the CC-ALF utilizes correlation across color components to reduce coding distortion in one color component based on samples from another color component, as sketched below.
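A minimal sketch of this cross-component refinement, assuming 4:2:0 subsampling and the 2^10 fixed-point coefficient scaling mentioned further below; the (dy, dx)-keyed coefficient map is a hypothetical layout for the diamond support:

```python
def cc_alf_refine(luma_sao, chroma_alf, y, x, coeffs, sub_h=2, sub_w=2, shift=10):
    """Refine one ALF-filtered chroma sample with an offset computed from
    co-located SAO-filtered luma samples. `coeffs` maps (dy, dx) luma
    offsets inside the diamond support to integer coefficients assumed to
    be scaled by 2**shift."""
    yl, xl = y * sub_h, x * sub_w    # co-located luma position (4:2:0 factors)
    acc = sum(c * luma_sao[yl + dy][xl + dx] for (dy, dx), c in coeffs.items())
    # Rounded fixed-point renormalization; the result is a correction offset.
    return chroma_alf[y][x] + ((acc + (1 << (shift - 1))) >> shift)
```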
  • FIG. 16 shows an example of a CC-ALF filter (1600) according to an embodiment of the disclosure.
  • the filter (1600) can include non-zero filter coefficients and zero filter coefficients.
  • the filter (1600) has a diamond shape (1620) formed by filter coefficients (1610) (indicated by circles having black fill).
  • the non-zero filter coefficients in the filter (1600) are included in the filter coefficients (1610), and filter coefficients not included in the filter coefficients (1610) are zero.
  • the non-zero filter coefficients in the filter (1600) are included in the diamond shape (1620), and the filter coefficients not included in the diamond shape (1620) are zero.
  • a number of the filter coefficients of the filter (1600) is equal to a number of the filter coefficients (1610), which is 18 in the example shown in FIG. 16.
  • the CC-ALF can include any suitable filter coefficients (also referred to as the CC-ALF filter coefficients). Referring back to FIG. 15, the CC-ALF (1521) and the CC-ALF (1531) can have a same filter shape, such as the diamond shape (1620) shown in FIG. 16.
  • values of the filter coefficients in the CC-ALF (1521) are different from values of the filter coefficients in the CC-ALF (1531).
  • filter coefficients (e.g., non-zero filter coefficients), as derived by the encoder, can be scaled by a factor (e.g., 2^10) and can be rounded for a fixed-point representation.
  • Application of a CC-ALF can be controlled on a variable block size and signaled by a context-coded flag (e.g., a CC-ALF enabling flag) received for each block of samples.
  • the context-coded flag such as the CC-ALF enabling flag, can be signaled at any suitable level, such as a block level.
  • the block size along with the CC-ALF enabling flag can be received at a slice-level for each chroma component. In some examples, block sizes (in chroma samples) 16 x 16, 32 x 32, and 64 x 64 can be supported.
  • alf_ctb_cross_component_cb_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ] equal to 0 can indicate that the cross-component Cb filter is not applied to a block of Cb color component samples at a luma location ( xCtb, yCtb ).
  • alf_ctb_cross_component_cb_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ] not equal to 0 can indicate that the alf_ctb_cross_component_cb_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ]-th cross-component Cb filter is applied to the block of Cb color component samples at the luma location ( xCtb, yCtb ).
  • alf_ctb_cross_component_cr_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ] equal to 0 can indicate that the cross-component Cr filter is not applied to a block of Cr color component samples at the luma location ( xCtb, yCtb ).
  • alf_ctb_cross_component_cr_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ] not equal to 0 can indicate that the alf_ctb_cross_component_cr_idc[ xCtb >> CtbLog2SizeY ][ yCtb >> CtbLog2SizeY ]-th cross-component Cr filter is applied to the block of Cr color component samples at the luma location ( xCtb, yCtb ).
  • a luma block can correspond to one or more chroma blocks, such as two chroma blocks.
  • a number of samples in each of the chroma block(s) can be less than a number of samples in the luma block.
  • a chroma subsampling format (e.g., specified by chroma_format_idc) can indicate a chroma horizontal subsampling factor (e.g., SubWidthC) and a chroma vertical subsampling factor (e.g., SubHeightC) between each of the chroma block(s) and the corresponding luma block.
  • A chroma subsampling scheme may be specified as a 4:x:y format for a nominal 4 (horizontal) by 4 (vertical) block, with x being the number of chroma samples retained in the first row of the block and y being the number of chroma samples retained in the second row of the block.
  • the chroma subsampling format may be 4:2:0, indicating that the chroma horizontal subsampling factor (e.g., SubWidthC) and the chroma vertical subsampling factor (e.g., SubHeightC) are both 2, as shown in FIGs. 17A-17B.
  • the chroma subsampling format may be 4:2:2, indicating that the chroma horizontal subsampling factor (e.g., SubWidthC) is 2, and the chroma vertical subsampling factor (e.g., SubHeightC) is 1.
  • the chroma subsampling format may be 4:4:4, indicating that the chroma horizontal subsampling factor (e.g., SubWidthC) and the chroma vertical subsampling factor (e.g., SubHeightC) are 1.
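The three formats above reduce to the following (SubWidthC, SubHeightC) factors; a small helper makes the mapping concrete:

```python
# Mapping of the chroma subsampling formats discussed above to their
# horizontal/vertical subsampling factors (SubWidthC, SubHeightC).
SUBSAMPLING = {
    "4:2:0": (2, 2),  # half resolution horizontally and vertically
    "4:2:2": (2, 1),  # half resolution horizontally only
    "4:4:4": (1, 1),  # full-resolution chroma
}

def chroma_plane_size(luma_w, luma_h, fmt):
    sw, sh = SUBSAMPLING[fmt]
    # Integer division assumes luma dimensions divisible by the factors.
    return luma_w // sw, luma_h // sh

print(chroma_plane_size(1920, 1080, "4:2:0"))  # (960, 540)
```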
  • a chroma sample format or type (also referred to as a chroma sample position) can indicate a relative position of a chroma sample in the chroma block with respect to at least one corresponding luma sample in the luma block.
  • FIGs. 17A-17B show exemplary locations of chroma samples relative to luma samples according to embodiments of the disclosure.
  • the luma samples (1701) are located in rows (1711)-(1718).
  • the luma samples (1701) shown in FIG. 17A can represent a portion of a picture.
  • a luma block e.g., a luma CB
  • the luma block can correspond to two chroma blocks having the chroma subsampling format of 4:2:0.
  • each chroma block includes chroma samples (1703).
  • Each chroma sample corresponds to four luma samples (e.g., the luma samples (1701(1))-(1701(4))).
  • the four luma samples are the top-left sample (1701(1)), the top-right sample (1701(2)), the bottom-left sample (1701(3)), and the bottom-right sample (1701(4)).
  • the chroma sample (e.g., (1703(1))) may be located at a left center position that is between the top-left sample (1701(1)) and the bottom-left sample (1701(3)), and a chroma sample type of the chroma block having the chroma samples (1703) can be referred to as a chroma sample type 0.
  • the chroma sample type 0 indicates a relative position 0 corresponding to the left center position in the middle of the top-left sample (1701(1)) and the bottom-left sample (1701(3)).
  • the four luma samples (e.g., (1701(1))-(1701(4))) can be referred to as neighboring luma samples of the chroma sample (1703(1)).
  • each chroma block may include chroma samples (1704).
  • each of the chroma samples (1704) can be located at a center position of four corresponding luma samples, and a chroma sample type of the chroma block having the chroma samples (1704) can be referred to as a chroma sample type 1.
  • the chroma sample type 1 indicates a relative position 1 corresponding to the center position of the four luma samples (e.g., (1701(1))-(1701(4))).
  • one of the chroma samples (1704) can be located at a center portion of the luma samples (1701(1))-(1701(4)).
  • each chroma block includes chroma samples (1705).
  • Each of the chroma samples (1705) can be located at a top left position that is co-located with the top-left sample of the four corresponding luma samples (1701), and a chroma sample type of the chroma block having the chroma samples (1705) can be referred to as a chroma sample type 2.
  • each of the chroma samples (1705) is co-located with the top left sample of the four luma samples (1701) corresponding to the respective chroma sample.
  • the chroma sample type 2 indicates a relative position 2 corresponding to the top left position of the four luma samples (1701).
  • one of the chroma samples (1705) can be located at a top left position of the luma samples (1701(1))-(1701(4)).
  • each chroma block includes chroma samples (1706).
  • Each of the chroma samples (1706) can be located at a top center position between a corresponding top-left sample and a corresponding top-right sample, and a chroma sample type of the chroma block having the chroma samples (1706) can be referred to as a chroma sample type 3.
  • the chroma sample type 3 indicates a relative position 3 corresponding to the top center position between the top-left sample and the top-right sample.
  • one of the chroma samples (1706) can be located at a top center position of the luma samples (1701(1))- (1701(4)).
  • each chroma block includes chroma samples (1707).
  • Each of the chroma samples (1707) can be located at a bottom left position that is co-located with the bottom-left sample of the four corresponding luma samples (1701), and a chroma sample type of the chroma block having the chroma samples (1707) can be referred to as a chroma sample type 4.
  • each of the chroma samples (1707) is co-located with the bottom left sample of the four luma samples (1701) corresponding to the respective chroma sample.
  • the chroma sample type 4 indicates a relative position 4 corresponding to the bottom left position of the four luma samples (1701).
  • one of the chroma samples (1707) can be located at a bottom left position of the luma samples (1701(1))-(1701(4)).
  • each chroma block includes chroma samples (1708).
  • Each of the chroma samples (1708) is located at a bottom center position between the bottom-left sample and the bottom-right sample, and a chroma sample type of the chroma block having the chroma samples (1708) can be referred to as a chroma sample type 5.
  • the chroma sample type 5 indicates a relative position 5 corresponding to the bottom center position between the bottom-left sample and the bottom-right sample of the four luma samples (1701).
  • one of the chroma samples (1708) can be located between the bottom-left sample and the bottom-right sample of the luma samples (1701(1))-(1701(4)).
  • any suitable chroma sample type can be used for a chroma subsampling format.
  • the chroma sample types 0-5 provide exemplary chroma sample types described with the chroma subsampling format 4:2:0. Additional chroma sample types may be used for the chroma subsampling format 4:2:0. Further, other chroma sample types and/or variations of the chroma sample types 0-5 can be used for other chroma subsampling formats, such as 4:2:2, 4:4:4, or the like. In an example, a chroma sample type combining the chroma samples (1705) and (1707) may be used for the chroma subsampling format 4:2:2.
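For reference, the chroma sample types 0-5 described above can be summarized as (horizontal, vertical) fractional offsets, in luma-sample units, from the top-left luma sample of each 2x2 luma group:

```python
# Chroma sample types 0-5 expressed as (horizontal, vertical) offsets, in
# luma-sample units, from the top-left luma sample of the 2x2 group
# (derived from the positions described above for 4:2:0).
CHROMA_SAMPLE_OFFSET = {
    0: (0.0, 0.5),  # left center, between top-left and bottom-left
    1: (0.5, 0.5),  # center of the four luma samples
    2: (0.0, 0.0),  # co-located with the top-left luma sample
    3: (0.5, 0.0),  # top center, between top-left and top-right
    4: (0.0, 1.0),  # co-located with the bottom-left luma sample
    5: (0.5, 1.0),  # bottom center, between bottom-left and bottom-right
}
```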
  • the luma block is considered to have alternating rows, such as the rows (1711)-(1712), that include the top two samples (e.g., (1701(1))-(1701(2))) and the bottom two samples (e.g., (1701(3))-(1701(4))) of the four luma samples (e.g., (1701(1))-(1701(4))), respectively.
  • the rows (1711), (1713), (1715), and (1717) can be referred to as current rows (also referred to as a top field), and the rows (1712), (1714), (1716), and (1718) can be referred to as next rows (also referred to as a bottom field).
  • the four luma samples (e.g., (1701(1))-(1701(4))) are located at the current row (e.g., (1711)) and the next row (e.g., (1712)).
  • the relative chroma positions 2-3 above are located in the current rows, the relative chroma positions 0-1 above are located between each current row and the respective next row, and the relative chroma positions 4-5 above are located in the next rows.
  • the chroma samples (1703), (1704), (1705), (1706), (1707), or (1708) are located in rows (1751)-(1754) in each chroma block. Specific locations of the rows (1751)-(1754) can depend on the chroma sample type of the chroma samples. For example, for the chroma samples (1703)-(1704) having the respective chroma sample types 0-1, the row (1751) is located between the rows (1711)-(1712). For the chroma samples (1705)-(1706) having the respective chroma sample types 2-3, the row (1751) is co-located with the current row (1711).
  • the row (1751) is co-located with the next row (1712).
  • the above descriptions can be suitably adapted to the rows (1752)-(1754), and the detailed descriptions are omitted for brevity.
  • Any suitable scanning method can be used for displaying, storing, and/or transmitting the luma block and the corresponding chroma block(s) described above in FIGs. 17A-17B.
  • progressive scanning may be used.
  • an interlaced scan may be used, as shown in FIG. 17B.
  • the chroma subsampling format may be 4:2:0 (e.g., chroma_format_idc is equal to 1).
  • a variable chroma location type (e.g., ChromaLocType) may indicate the current rows (e.g., ChromaLocType is chroma_sample_loc_type_top_field) or the next rows (e.g., ChromaLocType is chroma_sample_loc_type_bottom_field).
  • the current rows (1711), (1713), (1715), and (1717) and the next rows (1712), (1714), (1716), and (1718) can be scanned separately.
  • the current rows (1711), (1713), (1715), and (1717) can be scanned first followed by the next rows (1712), (1714), (1716), and (1718) being scanned.
  • the current rows can include the luma samples (1701) while the next rows can include the luma samples (1702).
  • the corresponding chroma block can be scanned in an interlaced manner.
  • the rows (1751) and (1753) including the chroma samples (1703), (1704), (1705), (1706), (1707), or (1708) with no fill can be referred to as current rows (or current chroma rows), and the rows (1752) and (1754) including the chroma samples (1703), (1704), (1705), (1706), (1707), or (1708) with gray fill can be referred to as next rows (or next chroma rows).
  • the rows (1751) and (1753) may be scanned first followed by scanning the rows (1752) and (1754).
  • CDEF may also be used for loop filtering in video coding.
  • An in-loop CDEF may be used to filter out coding artifacts such as quantization ringing artifacts while retaining details of an image.
  • a sample adaptive offset (SAO) algorithm may be employed to achieve a similar goal by defining signal offsets for different classes of pixels.
  • SAO sample adaptive offset
  • a CDEF is a non-linear spatial filter.
  • the design of the CDEF filter is constrained to be easily vectorizable (e.g., implementable with single instruction, multiple data (SIMD) operations), which was not the case for other non-linear filters such as a median filter and a bilateral filter.
  • the CDEF design originates from the following observations. In some situations, an amount of ringing artifacts in a coded image can be approximately proportional to a quantization step size. The smallest detail retained in the quantized image is also proportional to the quantization step size. As such, retaining image details would demand smaller quantization step size which would yield higher undesirable quantization ringing artifacts. Fortunately, for a given quantization step size, the amplitude of the ringing artifacts can be less than the amplitude of the details, thereby affording an opportunity for designing a CDEF to strike a balance to filter out the ringing artifacts while maintaining sufficient details.
  • a CDEF can first identify a direction of each block. The CDEF can then adaptively filter along the identified direction and to a lesser degree along directions rotated 45° from the identified direction.
  • the filter strengths can be signaled explicitly, allowing a high degree of control over blurring of details.
  • An efficient encoder search can be designed for the filter strengths.
  • CDEF can be based on two in-loop filters and the combined filter can be used for video coding. In some example implementations, the CDEF filter(s) may follow deblocking filter(s) for in-loop filtering.
  • the direction search can operate on reconstructed pixels (or samples), for example, after a deblocking filter, as illustrated in FIG. 18. Since the reconstructed pixels are available to a decoder, the directions may not require signaling.
  • the direction search can operate on blocks having a suitable size (e.g., 8 x 8 blocks) that are small enough to adequately handle non-straight edges (so that the edges appear sufficiently straight within the filtering blocks) and are large enough to reliably estimate directions when applied to a quantized image. Having a constant direction over an 8x8 region can make vectorization of the filter easier.
  • the direction that best matches a pattern in the block can be determined by minimizing a difference measure, such as a sum of squared differences (SSD), RMS error, and the like, between the quantized block and each of the perfectly directional blocks (e.g., one of (1820) of FIG. 18).
  • FIG. 18 shows an example of direction search for an 8 x 8 block (1810) according to an example embodiment of the disclosure.
  • the 45-degree direction (1823) among a set of directions (1820) is selected because the 45-degree direction (1823) can minimize the error (1840).
  • the error for the 45-degree direction is 12 and is the smallest among the errors ranging from 12 to 87 indicated by a row (1840).
  • Identifying the direction can help align filter taps along the identified direction to reduce ringing artifacts while preserving the directional edges or patterns.
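A conceptual sketch of this direction search follows; it evaluates only four directions with simplified line assignments (the codec's exact eight-direction partition tables are not reproduced here) and selects the direction whose perfectly directional approximation minimizes the SSD:

```python
import numpy as np

def best_direction(block):
    """Simplified direction search: for each candidate direction, group the
    block's pixels into lines along that direction; the SSD between the
    block and its best 'perfectly directional' approximation (constant per
    line) is the per-line residual energy around each line mean."""
    n = block.shape[0]
    line_of = {
        "horizontal": lambda i, j: i,
        "vertical": lambda i, j: j,
        "diag_down": lambda i, j: i - j + (n - 1),
        "diag_up": lambda i, j: i + j,
    }
    errors = {}
    for name, line in line_of.items():
        buckets = {}
        for i in range(n):
            for j in range(n):
                buckets.setdefault(line(i, j), []).append(float(block[i, j]))
        err = 0.0
        for vals in buckets.values():
            mean = sum(vals) / len(vals)
            err += sum((v - mean) ** 2 for v in vals)
        errors[name] = err
    return min(errors, key=errors.get)

demo = np.add.outer(np.arange(8), np.arange(8))  # intensity constant along i+j
print(best_direction(demo))                      # -> "diag_up"
```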
  • directional filtering alone cannot sufficiently reduce ringing artifacts.
  • additional filter taps can be treated more conservatively.
  • a CDEF can define primary taps and secondary taps.
  • a complete two-dimensional (2-D) CDEF filter may be expressed as
  • D represents a damping parameter
  • S^(p) and S^(s) represent the strengths of the primary and secondary taps, respectively, and the function round(·) rounds ties away from zero
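The complete 2-D filter expression is not reproduced in this text. As an illustration of how the damping D and a strength S act on a sample difference, the sketch below implements the widely published AV1-style constraint function, offered as an assumption rather than the patent's exact formula:

```python
def constraint(diff, strength, damping):
    """AV1-style CDEF constraint (assumed form): pass small differences
    through and progressively deamplify large ones, so strong edges
    survive while ringing near them is smoothed."""
    if strength == 0:
        return 0
    # damping - floor(log2(strength)), clamped at zero
    shift = max(0, damping - (strength.bit_length() - 1))
    mag = max(0, strength - (abs(diff) >> shift))
    clipped = min(abs(diff), mag)
    return -clipped if diff < 0 else clipped
```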
  • a set of in-loop restoration schemes may be used in video coding post deblocking to generally de-noise and enhance the quality of edges beyond a deblocking operation.
  • the set of in-loop restoration schemes can be switchable within a frame (or a picture) per suitably sized tile.
  • Some examples of the in loop restoration schemes are described below based on separable symmetric Wiener filters and dual self-guided filters with subspace projection. Because content statistics can vary substantially within a frame, the filters can be integrated within a switchable framework where different filters can be triggered in different regions of the frame.
  • the encoder can be configured to estimate H and M from realizations in the deblocked frame and the source, and send the resultant filter F to a decoder.
  • F may be constrained to be separable so that the filtering can be implemented as separable horizontal and vertical w-tap convolutions.
  • each of the horizontal and vertical filters are constrained to be symmetric.
  • the horizontal and vertical filter coefficients may each be constrained to sum to 1.
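A minimal sketch of such a separable symmetric filter: each 1-D kernel is mirrored from a half-kernel (center tap first) and normalized so its taps sum to 1, matching the constraints just described:

```python
import numpy as np

def separable_wiener(img, h_half, v_half):
    """Apply a separable symmetric filter. h_half/v_half hold one half of
    each kernel, center tap first; each full kernel is mirrored to be
    symmetric and normalized so its taps sum to 1."""
    def full(half):
        half = np.asarray(half, float)
        k = np.concatenate([half[:0:-1], half])  # mirror to make symmetric
        return k / k.sum()                       # normalize: taps sum to 1
    h, v = full(h_half), full(v_half)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, v, mode="same"), 0, tmp)

img = np.random.default_rng(0).integers(0, 255, (16, 16)).astype(float)
out = separable_wiener(img, h_half=[4, 2, 1], v_half=[4, 2, 1])  # 5-tap kernels
```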
  • Dual self-guided filtering with subspace projection may also be used as one of the switchable filters for in-loop restoration and is described below.
  • guided filtering can be used in image filtering where a local linear model is used to compute a filtered output y from an unfiltered sample x.
  • F and G can be determined based on statistics of a degraded image and a guidance image (also referred to as a guide image) in a neighborhood of the filtered pixel. If the guide image is identical to the degraded image, the resultant self-guided filtering can have the effect of edge preserving smoothing.
  • the specific form of self-guided filtering may depend on two parameters: a radius r and a noise parameter e, and is enumerated as follows:
  • the dual self-guided filtering may be controlled by the radius r and the noise parameter e, where a larger radius r can imply a higher spatial variance and a higher noise parameter e can imply a higher range variance.
  • FIG. 19 shows an example of a subspace projection according to an example embodiment of the disclosure.
  • the subspace projection may use cheap restorations X1 and X2 to produce a final restoration Xf closer to a source Y.
  • even when the cheap restorations X1 and X2 are not close to the source Y,
  • appropriate multipliers (α, β) can bring the cheap restorations X1 and X2 much closer to the source Y if the cheap restorations X1 and X2 move in the right direction.
  • the final restoration Xf may be obtained based on Eq. (17) below, e.g., Xf = X + α(X1 − X) + β(X2 − X), where X is the degraded (unrestored) frame.
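  • The multipliers can be understood as a least-squares projection onto the subspace spanned by the two restoration differences; a small NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def subspace_project(x, x1, x2, y):
    """Given the degraded frame x, two cheap restorations x1 and x2, and the
    source y, solve least-squares for multipliers (alpha, beta) so that
    xf = x + alpha*(x1 - x) + beta*(x2 - x) is as close to y as possible."""
    A = np.stack([(x1 - x).ravel(), (x2 - x).ravel()], axis=1)
    b = (y - x).ravel()
    (alpha, beta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x + alpha * (x1 - x) + beta * (x2 - x), (alpha, beta)
```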
  • a loop filtering method referred to as a Cross-Component Sample Offset (CCSO) filter or CCSO may also be implemented in the loop filtering process to reduce distortion of reconstructed samples (also referred to as reconstruction samples).
  • the CCSO filter may be placed anywhere within the loop filtering stage.
  • a non-linear mapping can be used to determine an output offset based on processed input reconstructed samples of a first color component. The output offset can be added to a reconstructed sample of a second color component in a filtering process of CCSO.
  • the input reconstructed samples can be from the first color component located in a filter support area, as shown in FIG. 20.
  • FIG. 20 shows an example of the filter support area in a CCSO filter according to an embodiment of the disclosure.
  • the filter support area can include four reconstructed samples: p0, p1, p2, and p3.
  • the four input reconstructed samples in the example of FIG. 20 follow a cross-shape in a vertical direction and a horizontal direction.
  • a center sample (denoted by c) in the first color component and a sample (denoted by f) to be filtered in the second color component are co-located.
  • Step 1: Delta values (e.g., differences) between the four reconstructed samples p0, p1, p2, and p3 and the center sample c are computed, and are denoted as m0, m1, m2, and m3, respectively.
  • the delta value between p0 and c is m0.
  • Step 2: The delta values m0 to m3 can be further quantized into a number of (e.g., 4) discrete values.
  • the quantized values can be denoted, for example, as d0, d1, d2, and d3 for m0, m1, m2, and m3, respectively.
  • N is a quantization step size used in the delta quantization of Step 2.
  • the quantized values dO to d3 can be used to identify a combination of the non-linear mapping.
  • the CCSO filter has four filter inputs d0 to d3, and each filter input can have one of three quantized values (e.g., −1, 0, and 1), and thus the total number of combinations is 81 (e.g., 3⁴).
  • FIGs. 21 A-21C show an example of the 81 combinations according to an embodiment of the disclosure.
  • the last column can represent the output offset value for each combination.
  • the output offset values can be integers, such as 0, 1, -1, 3, -3, 5, -5, -7, and the like.
  • the first column represents indices assigned to these combinations of quantized d0, d1, d2, and d3.
  • the middle columns represent all possible combinations of the quantized d0, d1, d2, and d3.
  • the final filtering process of the CCSO filter can be applied as follows, where the offset s retrieved from the LUT is added to the to-be-filtered sample f and the result is clipped to the valid sample range (e.g., f′ = clip(f + s)), as in the sketch below.
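  • Putting Steps 1 and 2 and the LUT lookup together, a compact Python sketch of per-sample CCSO filtering follows. The 3-level quantizer rule shown here (comparing the delta against ±N) is one plausible choice, not necessarily the exact codec rule, and all names are illustrative.

```python
def quantize_delta(m: int, n: int) -> int:
    # One plausible 3-level quantizer with step size n: deltas within the
    # step of zero map to 0, otherwise to the sign of the delta.
    if m < -n:
        return -1
    if m > n:
        return 1
    return 0

def ccso_filter_sample(p, c, f, lut, n, max_val=255):
    """p: the four cross-shaped reconstructed samples [p0, p1, p2, p3] of the
    first color component; c: the co-located center sample; f: the sample of
    the second color component to be filtered; lut: maps each quantized-delta
    combination (d0, d1, d2, d3) to an offset."""
    d = tuple(quantize_delta(pk - c, n) for pk in p)  # Steps 1 and 2
    s = lut[d]                                        # non-linear mapping
    return min(max(f + s, 0), max_val)                # add offset, then clip
```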
  • a Local Sample Offset (LSO) method or an LSO filtering process can be used in video coding.
  • in LSO, a filtering approach similar to that used in CCSO can be applied.
  • an output offset value can be applied to the same color component from which the input reconstructed samples used in the filtering process are taken.
  • the input reconstructed samples (e.g., p0-p3 and c) used in the filtering process and the reconstructed sample to be filtered (e.g., f) are in a same component, such as a luma component, a chroma component, or any suitable component.
  • An LSO filter can use a filter shape (such as shown in FIG. 20) that is similar or identical to that of a CCSO filter.
  • the example CCSO filtering of reconstructed sample f in the second color component to be filtered, corresponding to sample c of the first color component, with p0, p1, p2, and p3 of the first color component, as shown in FIG. 20, may be referred to as a 5-tap CCSO filter design.
  • CCSO designs with a different number of filter taps may be used.
  • a lower-complexity 3-tap CCSO design can be used in video coding.
  • FIG. 22 shows an example implementation of CCSO according to an embodiment of the disclosure. Any of the eight different example filter shapes may be defined for a 3-tap CCSO implementation.
  • Each of the filter shapes can define positions of the three reconstructed samples (also referred to as three taps) in a first component (also referred to as a first color component).
  • the three reconstructed samples can include a center sample (denoted as c) and two symmetrically located samples, denoted with the same number (one of 1-8) in FIG. 22.
  • a reconstructed sample in a second color component to be filtered is co-located with the center sample c.
  • the reconstructed sample in the second color component to be filtered is not shown in FIG. 22.
  • a Sample Adaptive Offset (SAO) filter can be used in video coding.
  • a SAO filter or a SAO filtering process can be applied to a reconstruction signal after a deblocking filter by using offset values given, for example, in a slice header.
  • an encoder can determine whether the SAO filter is applied for a current slice. If the SAO filter is enabled, a current picture can be recursively split into four sub-regions and one of six SAO types (e.g., SAO types 1-6) can be selected for each sub- region, as shown in Table 4.
  • the SAO filter can classify reconstructed pixels into a plurality of categories and reduce the distortion by adding an offset to pixels of each category in a current sub-region.
  • Edge properties can be used for pixel classification in the SAO types 1-4, and a pixel intensity can be used for pixel classification in the SAO types 5-6.
  • a band offset can be used to classify pixels (e.g., all pixels) of a sub-region into multiple bands, where each band can include pixels in a same intensity interval.
  • An intensity range can be equally divided into a plurality of intervals (e.g., 32 intervals) from a minimum intensity value (e.g., zero) to a maximum intensity value (e.g., 255 for 8-bit pixels), and each interval can have an offset.
  • the plurality of intervals or bands (e.g., 32 bands) can be divided into two groups: one group can include the 16 central bands, and the other group can include the 16 remaining bands. In an example, only offsets in one group are transmitted.
  • the five most significant bits of each pixel can be directly used as a band index.
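  • A one-line illustration of the band-index derivation (Python; the parameterization by bit depth is an assumption added for generality):

```python
def band_index(pixel: int, bit_depth: int = 8, num_bands: int = 32) -> int:
    # For 32 equal bands over an 8-bit range, the 5 most significant bits of
    # the pixel value directly give the band index (0..255 -> bands 0..31).
    return pixel >> (bit_depth - num_bands.bit_length() + 1)
```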
  • An edge offset (EO) can use four 1-D, 3-pixel patterns for pixel classification with consideration of edge directional information, as shown in FIG. 23.
  • FIG. 23 shows examples of the four 1-D, 3-pixel patterns for the pixel classification in the EO. From left to right, the four patterns correspond to a 1-D 0-degree pattern (2310), a 1-D 90-degree pattern (2320), a 1-D 135-degree pattern (2330), and a 1-D 45-degree pattern (2340), respectively.
  • For each sub-region of a picture (e.g., the current picture), one of the four patterns can be selected to classify pixels into multiple categories by comparing each pixel with its two neighboring pixels. The selection can be sent in a bit-stream as side information.
  • Table 5 shows the pixel classification rule for the EO.
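  • A small Python sketch of the usual EO classification follows (the category numbering is an assumption matching common SAO descriptions, since Table 5 is not reproduced here):

```python
def eo_category(c: int, n0: int, n1: int) -> int:
    """Edge-offset classification of sample c against its two neighbors
    n0 and n1 along the selected 1-D pattern."""
    if c < n0 and c < n1:
        return 1  # local minimum
    if (c < n0 and c == n1) or (c == n0 and c < n1):
        return 2  # concave corner
    if (c > n0 and c == n1) or (c == n0 and c > n1):
        return 3  # convex corner
    if c > n0 and c > n1:
        return 4  # local maximum
    return 0      # none of the above: no offset applied
```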
  • pixels of the top and the bottom rows in each largest coding unit (LCU) are not SAO processed when the 90-degree, the 135-degree, and the 45-degree classification patterns are chosen, and pixels of the leftmost and rightmost columns in each LCU are not SAO processed when the 0-degree, the 135-degree, and the 45-degree patterns are chosen.
  • Table 6 below describes syntax elements that may be signaled for a CTU if the parameters are not merged from a neighboring CTU.
  • a correspondence between combinations of quantized delta values (or their indices) and the cross-component sample offset values may be referred to as a CCSO lookup table (LUT).
  • One or more LUTs may potentially be used in a CCSO filtering process during video encoding or decoding.
  • a selection from the multiple LUTs may be dynamically and adaptively made by the encoder or decoder during the loop filtering process at various levels (e.g., frame level, CTU/SB level, coded block level, or filter unit level).
  • Each of these LUTs may be based on any suitable number of taps (e.g., 5 taps or 3 taps, or any other number of taps) and delta quantization levels (e.g., the 3-level delta quantization described above, or any other number of delta quantization levels).
  • these LUTs may be predefined.
  • these predefined CCSO LUTs may be pre-trained offline using training image data for general use by CCSO filtering processes.
  • Such predefined LUTs may be fixed and constant (i.e., fixed constant offsets for the various predefined quantized delta value combinations), and thus the contents of these predefined LUTs may not need to be signaled in a video bitstream from an encoder to a decoder. Instead, these LUTs may be prestored, hardwired, or hard coded for use by the CCSO filtering process in a video encoder or video decoder.
  • CCSO LUTs besides the predefined/pre-trained LUTs that are used during the CCSO filtering process may be derived by the encoder during the encoding process rather than offline-trained. These CCSO LUTs are not pre-defined, and thus their contents would need to be explicitly signaled in the bitstream.
  • the signaling of these encoder-derived LUTs is usually expensive since it involves significant overhead per frame, particularly for large LUTs, thereby potentially causing significant and undesirable overall bitrate loss. As such, it may be desirable to devise an efficient scheme for organizing, encoding, and signaling these LUTs in a bitstream.
  • only predefined LUTs may be used in the CCSO filtering process when encoding or decoding a video.
  • only encoder-derived LUTs may be used in the CCSO filtering process when encoding or decoding a video.
  • both predefined LUTs and encoder-derived LUTs may be used in the CCSO filtering process when encoding or decoding a video, and CCSO filtering of a particular filter unit (FU) may use any LUT selected from the predefined and encoder-derived LUTs.
  • the CCSO process refers to a filtering process which uses the reconstructed samples of a first color component as input (e.g., Y, Cb, or Cr; in other words, including the luma component and not just limited to chroma components), and the output is applied on a second color component, which is a color component different from the first color component, according to a particular CCSO LUT.
  • An example 5-tap filter shape of a CCSO filter is shown in FIG. 20 and a corresponding example LUT is shown in FIGs. 21A-21C.
  • the LSO filter process may use reconstructed samples of a first color component as input (e.g., Y or Cb or Cr), and the output is applied on the same first color component according to a particular LSO LUT.
  • the particular LSO LUT may be selected from one or more LUTs for LSO and used to determine the local sample offset, similar to the determination of the cross-component sample offset in the CCSO process.
  • LSO LUTs may be predefined (offline-trained) as fixed constant LUTs or may be derived by the encoder during the encoding process.
  • the encoder-derived LSO LUTs would need to be signaled in the bitstream, whereas the predefined/fixed/constant/offline-trained LSO LUTs may be pre-stored, hardwired, or hard coded in an encoder or decoder and may not need to be signaled, similar to the predefined CCSO LUTs described above.
  • one or more pre-defined lookup tables are provided.
  • LUTs may be defined for CCSO and/or LSO, and these lookup tables may be used to derive the offset values that are added to the reconstructed sample value of a particular color component to calculate the CCSO- or LSO-filtered sample value according to, for example, Eq. (21).
  • These predefined CCSO and/or LSO LUTs are shared and made known among the encoders and decoders ahead of time. These LUTs thus may be stored, hardwired, or hard coded in any of the encoder or decoder devices.
  • Each LUT represents a particular non-linear mapping between certain delta values and sample offset values.
  • selection of the LUTs at various filter levels (FU, or any other level) for each color component may be made in the encoder and signaled to the decoder.
  • the selection of the LUTs may be implemented with various local adaptivity.
  • the local adaptivity may be based on a certain statistic derived from the coded information of the block area within which CCSO or LSO filtering is applied.
  • These statistical characteristics of the filtering block area may represent image characteristics in the reconstructed samples that correlate with potential distortions that may be adaptively dealt with by sample offset filter selection (e.g., selection of the CCSO or LSO lookup tables).
  • These statistics may include but are not limited to one or more of the following.
  • Edge direction derived in CDEF as described above, or edge direction derived using other edge detection methods such as a Canny edge detector or a Sobel edge detector.
  • the smoothness may be computed by a range of sample values within the sample area.
  • the range of sample values may be defined as the absolute difference between the maximum and the minimum sample values.
  • the smoothness may be computed by interquartile range of sample values of pixels located within the sample area.
  • the interquartile range of sample values is defined as the A-th percentile minus the B-th percentile of sample values.
  • An example value of A may be 75% and an example value of B may be 25%.
  • the smoothness may be computed by the variance of sample values within the sample area.
  • the variance can be computed as S^2 = (1/n) · Σ_{i=1}^{n} (x_i − x̄)^2, where S^2 is the variance, x_i refers to the i-th sample of the current sample area, which covers sample 1 to sample n, x̄ represents the sample mean value, and n represents the total number of samples within the sample area to which CCSO and/or LSO is applied.
  • the smoothness may be computed by the standard deviation of sample values within the sample area.
  • the standard deviation can be computed as the square root of the variance above, e.g., S = sqrt( (1/n) · Σ_{i=1}^{n} (x_i − x̄)^2 ).
  • the smoothness may be computed by a range of gradient values within the sample area.
  • the range of gradient values may be defined as the absolute difference between the maximum and the minimum gradient values.
  • the gradient value for each sample location may be calculated as a difference between the value of the sample located at the current position and the values of samples located at its neighboring positions.
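  • The smoothness measures above can be computed directly from the samples of the filtering block area; a NumPy sketch follows (the horizontal-neighbor difference is one simple gradient choice among the neighbor differences described above, and the function name is illustrative):

```python
import numpy as np

def smoothness_stats(area: np.ndarray) -> dict:
    """Candidate smoothness measures for a filtering block area, any one of
    which could drive the CCSO/LSO LUT selection."""
    a = area.astype(np.float64)
    grad = np.abs(np.diff(a, axis=1))  # simple horizontal neighbor gradients
    return {
        "range": float(a.max() - a.min()),
        "iqr": float(np.percentile(a, 75) - np.percentile(a, 25)),  # A=75, B=25
        "variance": float(a.var()),  # (1/n) * sum((x_i - mean)^2)
        "stddev": float(a.std()),
        "gradient_range": float(grad.max() - grad.min()),
    }
```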
  • Coded/coding information, including but not limited to the prediction mode (e.g., whether intra DC mode, intra Planar mode, intra PAETH mode, intra SMOOTH mode, intra recursive filtering mode, or inter SKIP modes are applied) signaled for the block to which CCSO or LSO is applied, information related to coefficient coding (such as the coded block flag), the block size, quantization parameters, and the motion vector (magnitude) associated with the current block.
  • CCSO or LSO filtering (or the CCSO or LSO LUTs) for a block may be selected depending on the statistics of the block.
  • the statistics may include but are not limited to I, II, III, and IV described above.
  • the color component that is the input of CCSO may be used to derive the statistics.
  • the color component that is filtered by the output of CCSO may be used to derive the statistics.
  • the LUT used for the current block in CCSO may be selected based on the edge information of the current block.
  • when the edge direction is derived as in CDEF, there may be a predefined number (e.g., eight) of LUTs in total for CCSO and/or LSO.
  • Each of the LUTs may correspond to one of the, e.g., eight, edge directions which are outputs of an example CDEF edge derivation process.
  • One of the LUTs may be selected according to this correspondence for the current block based on the edge direction.
  • the eight example LUTs may be signaled (if not predefined) at various levels, e.g., at the frame level in HLS (APS, slice header, frame header, PPS).
  • alternatively, there may be N LUTs in total, each of which corresponds to more than one of the eight edge directions which are outputs of an example CDEF edge derivation process.
  • One of the LUTs may be selected according to this correspondence for the current block based on the edge direction.
  • Example values of N are integers with 1 ≤ N < 8.
  • the N LUTs may be signaled (if not predefined) at the frame level in HLS (APS, slice header, frame header, PPS).
  • the selection of the LUT for the current block in CCSO and/or LSO may be based on whether the measured smoothness value is smaller than (or greater than) one or more given thresholds.
  • the standard deviation (S) of the current block may be compared with a predefined number of threshold values for LUT selection. For example, there may be a number of LUTs to select from, e.g., 4 LUTs. Correspondingly, there may be three threshold values (S1, S2, S3). As such, LUT selection from the four LUTs may be based on the following criteria: LUT1 may be selected if S ≤ S1, LUT2 may be selected if S1 < S ≤ S2, LUT3 may be selected if S2 < S ≤ S3, and LUT4 may be selected if S > S3, as in the sketch below.
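  • A minimal sketch of such threshold-based selection (Python, generalized to any ordered threshold list; names are illustrative):

```python
def select_lut(s: float, thresholds, luts):
    """Pick a LUT by comparing the block's standard deviation s against
    ordered thresholds, e.g. thresholds=(S1, S2, S3) selecting among 4 LUTs:
    LUT1 if s <= S1, LUT2 if S1 < s <= S2, LUT3 if S2 < s <= S3, else LUT4."""
    for t, lut in zip(thresholds, luts):
        if s <= t:
            return lut
    return luts[-1]  # smoothness exceeds every threshold
```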
  • the selection of the LUT used for the current block in CCSO and/or LSO may be based on the coding/coded information.
  • the LUT for the current block may be selected based on a prediction mode of the current block.
  • Various prediction modes may correspond to various LUTs.
  • the corresponding LUT may be selected.
  • Example prediction modes include but are not limited to intra DC mode, intra Planar mode, intra PAETH mode, intra SMOOTH mode, intra recursive filtering mode, inter SKIP modes, and the like.
  • the various statistics information above may be used to determine the filter used in CCSO, including but not limited to filter coefficients, filter tap positions, and the number of filter taps. Such determination may be equivalent to the selection of LUTs as described above, when such filtering parameters are embodied in different lookup tables. Alternatively, if composite LUTs are constructed that include variations of these parameters within a particular composite LUT, then the selection may involve selecting entries in the LUT rather than selecting among LUTs.
  • the statistics for selecting a LUT may be based on statistics of any color components. It may be based on a single color component or a combination of multiple color components. It may be based on a same color component as the color component to be filtered or on another color component different from the color component to be filtered.
  • the color component(s) on which the statistic is performed may be referred to as “at least a first color component.”
  • the input color component to the selected sample offset filter and to be filtered may be referred to as “a second color component”.
  • the delta values used for looking up sample offset values in the selected sample offset filter may be taken from samples co-located with the samples being filtered in any of the color components, referred to as “a third color component”.
  • the first, second, and the third color components may be the same or may be different.
  • the second and third color component may be different (hence “cross component sample offset” filtering) or the same (hence “local sample offset” filtering).
  • a number (e.g., M) of LUTs may be signaled for a color component to which CCSO or LSO is applied, and the index of the selected LUT to apply for the current block is also signaled.
  • Example values of M are any integers within 1 to 1024.
  • the number of LUTs (e.g., M) and the actual LUTs (if not predefined) may be signaled at various levels, including but not limited to sequence level, or picture level, or CTU/SB level in HLS (APS, Slice header, frame header, PPS, SPS, VPS).
  • the index of the selected LUT for the current block may be signaled at various levels, including but not limited to CTU/SB level, or coded block level, or filter unit level in HLS (APS, Slice header, frame header, PPS, SPS, VPS).
  • a loop filtering parameter refers to a parameter signaled in the bitstream indicating various characteristics of the CCSO or LSO filters, including but not limited to flags indicating the type of LUTs (e.g., either predefined or encoder-derived), indices to LUTs, the number of delta levels in the LUTs, the quantization steps for the deltas, the number of taps for each filter, tap positions relative to the filtered sample position for each filter, and the like.
  • FIG. 24 shows a flow chart (2400) of an example method, following the principles underlying the implementations above, for in-loop cross-component sample offset filtering or local sample offset filtering.
  • the example method flow starts at S2401.
  • In S2410, at least one statistical property associated with reconstructed samples of at least a first color component in a current reconstructed data block of a video stream is obtained.
  • In S2420, a target sample offset filter among a plurality of sample offset filters is selected based on the at least one statistical property, the target sample offset filter including a non-linear mapping between sample delta measures and sample offset values.
  • In S2430, a current sample in a second color component of the current reconstructed data block is filtered using the target sample offset filter and reference samples in a third color component of the current reconstructed data block to generate a filtered reconstructed sample of the current sample.
  • the example method flow ends at S2499.
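  • For illustration, the three steps can be composed as follows (a Python sketch reusing the hypothetical helpers sketched above; the data-access methods on `recon` are assumptions, not an actual API):

```python
def sample_offset_filter_block(recon, thresholds, lut_candidates, n):
    """Illustrative composition of S2410-S2430: derive a statistic from the
    first color component, select a target LUT, then offset-filter the
    second color component using deltas taken from the third color component."""
    stats = smoothness_stats(recon.first_component_samples())      # S2410
    lut = select_lut(stats["stddev"], thresholds, lut_candidates)  # S2420
    filtered = []
    for p, c, f in recon.colocated_taps():                         # S2430
        filtered.append(ccso_filter_sample(p, c, f, lut, n))
    return filtered
```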
  • Embodiments in the disclosure may be used separately or combined in any order. Further, each of the methods (or embodiments), an encoder, and a decoder may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). In one example, the one or more processors execute a program that is stored in a non-transitory computer-readable medium. Embodiments in the disclosure may be applied to a luma block or a chroma block.
  • FIG. 25 shows a computer system (2500) suitable for implementing certain embodiments of the disclosed subject matter.
  • the computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like.
  • the instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.
  • Computer system (2500) may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), olfactory input (not depicted).
  • the human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), and video (such as two-dimensional video, three-dimensional video including stereoscopic video).
  • Input human interface devices may include one or more of (only one of each depicted): keyboard (2501), mouse (2502), trackpad (2503), touch screen (2510), data- glove (not shown), joystick (2505), microphone (2506), scanner (2507), camera (2508).
  • Computer system (2500) may also include certain human interface output devices.
  • Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste.
  • Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (2510), data-glove (not shown), or joystick (2505), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (2509), headphones (not depicted)), visual output devices (such as screens (2510), including CRT screens, LCD screens, plasma screens, and OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability — some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted).
  • Computer system (2500) can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW (2520) with CD/DVD or the like media (2521), thumb-drive (2522), removable hard drive or solid state drive (2523), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like.
  • Computer system (2500) can also include an interface (2554) to one or more communication networks (2555).
  • Networks can for example be wireless, wireline, optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CAN bus, and so forth.
  • Certain networks commonly require external network interface adapters that attach to certain general-purpose data ports or peripheral buses (2549) (such as, for example, USB ports of the computer system (2500)); others are commonly integrated into the core of the computer system (2500) by attachment to a system bus as described below (for example, an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system).
  • Using any of these networks, computer system (2500) can communicate with other entities.
  • Such communication can be uni-directional, receive-only (for example, broadcast TV), uni-directional send-only (for example, CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks.
  • Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above.
  • Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (2540) of the computer system (2500).
  • the core (2540) can include one or more Central Processing Units (CPU) (2541), Graphics Processing Units (GPU) (2542), Field Programmable Gate Arrays (FPGA) (2543), hardware accelerators (2544), and so forth.
  • the screen (2510) can be connected to the graphics adapter (2550).
  • Architectures for a peripheral bus include PCI, USB, and the like.
  • CPUs (2541), GPUs (2542), FPGAs (2543), and accelerators (2544) can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM (2545) or RAM (2546). Transitional data can also be stored in RAM (2546), whereas permanent data can be stored, for example, in the internal mass storage (2547). Fast storage and retrieval to and from any of the memory devices can be enabled through the use of cache memory, which can be closely associated with one or more CPUs (2541), GPUs (2542), mass storage (2547), ROM (2545), RAM (2546), and the like.
  • the computer readable media can have computer code thereon for performing various computer-implemented operations.
  • the media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts.
  • the computer system (2500), and specifically the core (2540), can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media.
  • Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core (2540) that is of a non-transitory nature, such as core-internal mass storage (2547) or ROM (2545).
  • the software implementing various embodiments of the present disclosure can be stored in such devices and executed by core (2540).
  • a computer-readable medium can include one or more memory devices or chips, according to particular needs.
  • the software can cause the core (2540) and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM (2546) and modifying such data structures according to the processes defined by the software.
  • the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator (2544)), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein.
  • Reference to software can encompass logic, and vice versa, where appropriate.
  • Reference to a computer-readable media can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate.
  • VVC versatile video coding
  • HEVC High Efficiency Video Coding
  • CPUs Central Processing Units
  • GPUs Graphics Processing Units
  • OLED Organic Light-Emitting Diode
  • CD Compact Disc
  • RAM Random Access Memory
  • ASIC Application-Specific Integrated Circuit
  • PLD Programmable Logic Device
  • GSM Global System for Mobile communications
  • CANBus Controller Area Network Bus
  • USB Universal Serial Bus
  • PCI Peripheral Component Interconnect
  • HDR high dynamic range
  • VPS Video Parameter Set
  • ALF Adaptive Loop Filter
  • CC-ALF Cross-Component Adaptive Loop Filter
  • CDEF Constrained Directional Enhancement Filter

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
PCT/US2022/014255 2021-03-19 2022-01-28 Adaptive non-linear mapping for sample offset Ceased WO2022197375A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP22771888.9A EP4118822A4 (en) 2021-03-19 2022-01-28 ADAPTIVE NONLINEAR MAPPING FOR SAMPLING OFFSET
KR1020227039257A KR20220165776A (ko) 2021-03-19 2022-01-28 샘플 오프셋에 대한 적응적 비선형 매핑
CN202280003144.5A CN115606175B (zh) 2021-03-19 2022-01-28 用于视频流的环路内滤波的方法和装置
CN202411032979.0A CN119071482A (zh) 2021-03-19 2022-01-28 用于视频流的环路内滤波的方法和装置
JP2022560887A JP7500757B2 (ja) 2021-03-19 2022-01-28 サンプル・オフセットのための適応非線形マッピング
JP2024091079A JP7765549B2 (ja) 2021-03-19 2024-06-05 サンプル・オフセットのための適応非線形マッピング

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163163707P 2021-03-19 2021-03-19
US63/163,707 2021-03-19
US17/568,565 2022-01-04
US17/568,565 US11683530B2 (en) 2021-03-19 2022-01-04 Adaptive non-linear mapping for sample offset

Publications (2)

Publication Number Publication Date
WO2022197375A1 true WO2022197375A1 (en) 2022-09-22
WO2022197375A8 WO2022197375A8 (en) 2023-01-05

Family

ID=83284306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/014255 Ceased WO2022197375A1 (en) 2021-03-19 2022-01-28 Adaptive non-linear mapping for sample offset

Country Status (6)

Country Link
US (2) US11683530B2 (en)
EP (1) EP4118822A4 (en)
JP (2) JP7500757B2 (en)
KR (1) KR20220165776A (en)
CN (2) CN115606175B (en)
WO (1) WO2022197375A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021107964A1 (en) * 2019-11-26 2021-06-03 Google Llc Vector quantization for prediction residual coding
US11818343B2 (en) * 2021-03-12 2023-11-14 Tencent America LLC Sample offset with predefined filters
US11683530B2 (en) * 2021-03-19 2023-06-20 Tencent America LLC Adaptive non-linear mapping for sample offset
JP7685069B2 (ja) * 2021-03-30 2025-05-28 テレフオンアクチーボラゲット エルエム エリクソン(パブル) 組み合わせられたループフィルタ処理
CN117769834A (zh) * 2021-07-27 2024-03-26 北京达佳互联信息技术有限公司 跨分量样点自适应偏移中的编解码增强
JP2024537126A (ja) * 2021-11-19 2024-10-10 ベイジン ダジア インターネット インフォメーション テクノロジー カンパニー リミテッド クロス成分サンプル適応オフセット関連出願の相互参照
US11997298B2 (en) * 2022-08-03 2024-05-28 Qualcomm Incorporated Systems and methods of video decoding with improved buffer storage and bandwidth efficiency
CN120530622A (zh) * 2022-09-28 2025-08-22 联发科技股份有限公司 用于通过转置索引对色度分类器进行适应性环路滤波器的视频编解码的方法和设备
US12334004B2 (en) * 2023-01-05 2025-06-17 Meta Platforms Technologies, Llc Display stream compression (DCS) with built-in high pass filter
US12262019B2 (en) 2023-01-13 2025-03-25 Tencent America LLC Adaptive bands for filter offset selection in cross-component sample offset
US20240283926A1 (en) * 2023-02-21 2024-08-22 Tencent America LLC Adaptive Cross-Component Sample Offset Filtering Parameters
US12388991B2 (en) 2023-02-27 2025-08-12 Tencent America LLC Feature based cross-component sample offset optimization and signaling improvement
US20240414333A1 (en) * 2023-06-08 2024-12-12 Qualcomm Incorporated Mapping table derivation for fixed filter sets in video coding
US12425598B2 (en) 2023-08-03 2025-09-23 Tencent America LLC Cross component sample offset filtering with asymmetric quantizer
US12413720B2 (en) 2023-08-31 2025-09-09 Tencent America LLC Cross component sample offset filtering with interpolated filter taps
WO2025152690A1 (en) * 2024-01-17 2025-07-24 Mediatek Inc. Method and apparatus of adaptive for in-loop filtering of reconstructed video

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190045186A1 (en) * 2018-05-31 2019-02-07 Intel Corporation Constrained directional enhancement filter selection for video coding
US20190182482A1 (en) * 2016-04-22 2019-06-13 Vid Scale, Inc. Prediction systems and methods for video coding based on filtering nearest neighboring pixels

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050100235A1 (en) * 2003-11-07 2005-05-12 Hao-Song Kong System and method for classifying and filtering pixels
US8576906B2 (en) 2008-01-08 2013-11-05 Telefonaktiebolaget L M Ericsson (Publ) Adaptive filtering
WO2015163046A1 (ja) * 2014-04-23 2015-10-29 ソニー株式会社 画像処理装置及び画像処理方法
CN104702963B (zh) 2015-02-13 2017-11-24 北京大学 一种自适应环路滤波的边界处理方法及装置
JP2017085496A (ja) * 2015-10-30 2017-05-18 キヤノン株式会社 動画像符号化装置及びその制御方法、コンピュータプログラム
US11405611B2 (en) 2016-02-15 2022-08-02 Qualcomm Incorporated Predicting filter coefficients from fixed filters for video coding
CN112236999B (zh) 2018-03-29 2022-12-13 弗劳恩霍夫应用研究促进协会 依赖性量化
US10674151B2 (en) * 2018-07-30 2020-06-02 Intel Corporation Adaptive in-loop filtering for video coding
US20200162736A1 (en) 2018-11-15 2020-05-21 Electronics And Telecommunications Research Institute Method and apparatus for image processing using quantization parameter
EP3935845A4 (en) * 2019-03-05 2022-11-09 ZTE Corporation CROSS COMPONENT QUANTIFICATION IN VIDEO CODING
EP3987813A4 (en) * 2019-06-24 2023-03-08 Sharp Kabushiki Kaisha SYSTEMS AND METHODS FOR REDUCING RECONSTRUCTION ERROR IN VIDEO CODING BASED ON INTER-COMPONENT CORRELATION
FI4024858T3 (fi) * 2019-08-29 2025-09-10 Lg Electronics Inc Komponenttien väliseen adaptiiviseen silmukkasuodatukseen perustuva kuvan koodauslaitteisto ja menetelmä
US11451834B2 (en) * 2019-09-16 2022-09-20 Tencent America LLC Method and apparatus for cross-component filtering
KR20220073745A (ko) * 2019-10-14 2022-06-03 바이트댄스 아이엔씨 비디오 처리에서 크로마 잔차 및 필터링의 공동 코딩
EP4042692A4 (en) * 2019-10-29 2022-11-30 Beijing Bytedance Network Technology Co., Ltd. COMPONENT ADAPTIVE LOOP FILTER
WO2021088835A1 (en) * 2019-11-04 2021-05-14 Beijing Bytedance Network Technology Co., Ltd. Cross-component adaptive loop filter
WO2021104409A1 (en) * 2019-11-30 2021-06-03 Beijing Bytedance Network Technology Co., Ltd. Cross-component adaptive filtering and subblock coding
CN115956362A (zh) 2020-08-20 2023-04-11 北京达佳互联信息技术有限公司 跨分量样点自适应偏移中的色度编码增强
US20220101095A1 (en) 2020-09-30 2022-03-31 Lemon Inc. Convolutional neural network-based filter for video coding
US12335470B2 (en) * 2021-02-26 2025-06-17 Alibaba Group Holding Limited Directional cross component filter for video coding
US11818343B2 (en) * 2021-03-12 2023-11-14 Tencent America LLC Sample offset with predefined filters
US11683530B2 (en) * 2021-03-19 2023-06-20 Tencent America LLC Adaptive non-linear mapping for sample offset
US20220321919A1 (en) 2021-03-23 2022-10-06 Sharp Kabushiki Kaisha Systems and methods for signaling neural network-based in-loop filter parameter information in video coding
US12323608B2 (en) 2021-04-07 2025-06-03 Lemon Inc On neural network-based filtering for imaging/video coding
US20230156185A1 (en) * 2021-11-15 2023-05-18 Tencent America LLC Generalized sample offset
US12382101B2 (en) * 2021-11-17 2025-08-05 Tencent America LLC Adaptive application of generalized sample offset

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190182482A1 (en) * 2016-04-22 2019-06-13 Vid Scale, Inc. Prediction systems and methods for video coding based on filtering nearest neighboring pixels
US20190045186A1 (en) * 2018-05-31 2019-02-07 Intel Corporation Constrained directional enhancement filter selection for video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4118822A4

Also Published As

Publication number Publication date
KR20220165776A (ko) 2022-12-15
US20230224503A1 (en) 2023-07-13
US20220303586A1 (en) 2022-09-22
CN115606175A (zh) 2023-01-13
JP7765549B2 (ja) 2025-11-06
CN115606175B (zh) 2024-06-25
JP2023521683A (ja) 2023-05-25
WO2022197375A8 (en) 2023-01-05
JP2024116223A (ja) 2024-08-27
CN119071482A (zh) 2024-12-03
JP7500757B2 (ja) 2024-06-17
US12389044B2 (en) 2025-08-12
EP4118822A4 (en) 2023-05-10
US11683530B2 (en) 2023-06-20
EP4118822A1 (en) 2023-01-18

Similar Documents

Publication Publication Date Title
US12389044B2 (en) Adaptive non-linear mapping for sample offset
US11818343B2 (en) Sample offset with predefined filters
US20250030850A1 (en) Cross-component sample offset
US11546638B2 (en) Method and apparatus for video filtering
WO2022132230A1 (en) Method and apparatus for video filtering
US11750846B2 (en) Method and apparatus for video filtering
US20220368897A1 (en) Method and apparatus for boundary handling in video coding
US12034978B2 (en) Lower-complexity sample offset filter
US20250337896A1 (en) Filter shape for sample offset

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022560887

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2022771888

Country of ref document: EP

Effective date: 20221010

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22771888

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20227039257

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE