WO2012155553A1 - Apparatus and method of sample adaptive offset for luma and chroma components - Google Patents

Apparatus and method of sample adaptive offset for luma and chroma components

Info

Publication number
WO2012155553A1
Authority
WO
WIPO (PCT)
Prior art keywords
loop filter
chroma
block
information
blocks
Prior art date
Application number
PCT/CN2012/071147
Other languages
English (en)
Inventor
Chih-Ming Fu
Ching-Yeh Chen
Chia-Yang Tsai
Yu-Wen Huang
Shaw-Min Lei
Original Assignee
Mediatek Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/158,427 external-priority patent/US9055305B2/en
Priority claimed from US13/311,953 external-priority patent/US20120294353A1/en
Application filed by Mediatek Inc. filed Critical Mediatek Inc.
Priority to DE112012002125.8T priority Critical patent/DE112012002125T5/de
Priority to GB1311592.8A priority patent/GB2500347B/en
Priority to CN201280022870.8A priority patent/CN103535035B/zh
Publication of WO2012155553A1 publication Critical patent/WO2012155553A1/fr
Priority to ZA2013/05528A priority patent/ZA201305528B/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/156: Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/189: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/90: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96: Tree coding, e.g. quad-tree coding

Definitions

  • the present invention relates to video processing.
  • the present invention relates to apparatus and method for adaptive in-loop filtering including sample adaptive offset compensation and adaptive loop filter.
  • the video data are subject to various processing such as prediction, transform, quantization, deblocking, and adaptive loop filtering.
  • certain characteristics of the processed video data may be altered from the original video data due to the operations applied to video data.
  • the mean value of the processed video may be shifted. Intensity shift may cause visual impairment or artifacts, which are especially noticeable when the intensity shift varies from frame to frame. Therefore, the pixel intensity shift has to be carefully compensated or restored to reduce the artifacts.
  • Some intensity offset schemes have been used in the field.
  • an intensity offset scheme termed as sample adaptive offset (SAO) classifies each pixel in the processed video data into one of multiple categories according to a context selected.
  • the conventional SAO scheme is only applied to the luma component. It is desirable to extend SAO processing to the chroma components as well.
  • the SAO scheme usually requires incorporating SAO information in the video bitstream, such as partition information to divide a picture or slice into blocks and the SAO offset values for each block so that a decoder can operate properly.
  • the SAO information may take up a noticeable portion of the bitrate of compressed video and it is desirable to develop efficient coding to incorporate the SAO information.
  • The adaptive loop filter (ALF) is another in-loop processing applied to reconstructed video to improve video quality, and the associated ALF information also has to be incorporated in the video bitstream so that a decoder can operate properly. Therefore, it is also desirable to develop efficient coding to incorporate the ALF information in the video bitstream.
  • a method and apparatus for processing reconstructed video using an in-loop filter in a video decoder are disclosed. The method comprises deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise a luma component and chroma components; receiving a chroma in-loop filter indication from the video bitstream if a luma in-loop filter indication in the video bitstream indicates that in-loop filter processing is applied to the luma component; determining chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components; and applying the in-loop filter processing to the chroma components according to the chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components.
  • the chroma components may use a single chroma in-loop filter flag or each of the chroma components may use its own chroma in-loop filter flag to control whether the in-loop filter processing is applied.
  • An entire picture may share the in-loop filter information. Alternatively, the picture may be divided into blocks and each block uses its own in-loop filter information.
  • the in-loop filter information for a current block may be derived from neighboring blocks in order to increase coding efficiency.
  • Various aspects of in-loop filter information are taken into consideration for efficient coding, such as the properties of quadtree-based partition, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing to a set of in-loop filter information, and prediction of in-loop filter information.
  • a method and apparatus for processing reconstructed video using in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in- loop filter is applied to the blocks are disclosed.
  • the method and apparatus comprise deriving reconstructed block from a video bitstream; receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition; deriving the in-loop filter information from a target block if the current reconstructed block is not said new partition, wherein the current reconstructed block is merged with the target block selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and applying in-loop filter processing to the current reconstructed block using the in-loop filter information.
  • a merge flag in the video bitstream may be used for the current block to indicate the in-loop filter information sharing with one of neighboring blocks if more than one neighboring block exists. If only one neighboring block exists, the in-loop filter information sharing is inferred without the need for the merge flag.
  • a candidate block may be eliminated from merging with the current reconstructed block so as to increase coding efficiency.
  • a method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder are disclosed. Furthermore, a method and apparatus for processing reconstructed video using in-loop filter in a corresponding video encoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks, are also disclosed.
  • Fig. 1 illustrates a system block diagram of an exemplary video encoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF).
  • Fig. 2 illustrates a system block diagram of an exemplary video decoder incorporating a reconstruction loop, where the in-loop filter processing includes deblocking filter (DF), sample adaptive offset (SAO) and adaptive loop filter (ALF).
  • Fig. 3 illustrates an example of sample adaptive offset (SAO) coding for current block C using information from neighboring blocks A, D, B and E.
  • Fig. 4A illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) processing.
  • Fig. 4B illustrates an example of LCU-based picture partition for sample adaptive offset (SAO) processing.
  • Fig. 5A illustrates an example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
  • Fig. 5B illustrates another example of allowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
  • Fig. 5C illustrates an example of unallowable quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
  • Fig. 6A illustrates an example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
  • Fig. 6B illustrates another example of allowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
  • Fig. 6C illustrates an example of unallowable quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
  • Fig. 7 illustrates an exemplary syntax design to incorporate a flag in SPS to indicate whether SAO is enabled or disabled for the sequence.
  • Fig. 8 illustrates an exemplary syntax design for sao_param(), where separate SAO information is allowed for the chroma components.
  • Fig. 9 illustrates an exemplary syntax design for sao_split_param(), where syntax sao_split_param() includes "component" as a parameter and "component" indicates either the luma component or one of the chroma components.
  • Fig. 10 illustrates an exemplary syntax design for sao_offset_param(), where syntax sao_offset_param() includes "component" as a parameter and "component" indicates either the luma component or one of the chroma components.
  • Fig. 11 illustrates an example of quadtree-based picture partition for sample adaptive offset (SAO) type determination.
  • Fig. 12A illustrates an example of picture-based sample adaptive offset (SAO), where the entire picture uses same SAO parameters.
  • Fig. 12B illustrates an example of LCU-based sample adaptive offset (SAO), where each LCU uses its own SAO parameters.
  • Fig. 13 illustrates an example of using a run equal to two for SAO information sharing of the first three LCUs.
  • Fig. 14 illustrates an example of using run signals and merge-above flags to encode SAO information sharing.
  • Fig. 15 illustrates an example of using run signals, run prediction and merge-above flags to encode SAO information sharing.
  • In High Efficiency Video Coding (HEVC), a technique named Adaptive Offset (AO) is introduced to compensate the offset of reconstructed video, and AO is applied inside the reconstruction loop.
  • a method and system for offset compensation is disclosed in US Non-Provisional Patent Application, Serial No. 13/158,427, entitled "Apparatus and Method of Sample Adaptive Offset for Video Coding". The method and system classify each pixel into a category and apply intensity shift compensation or restoration to processed video data based on the category of each pixel.
  • The adaptive loop filter (ALF) has also been introduced in HEVC to improve video quality. ALF applies a spatial filter to reconstructed video inside the reconstruction loop. Both AO and ALF are considered types of in-loop filter in this disclosure.
  • Intra-prediction 110 is responsible for providing prediction data based on video data in the same picture.
  • motion estimation (ME) and motion compensation (MC) 112 are used to provide prediction data based on video data from another picture or pictures.
  • Switch 114 selects intra-prediction or inter-prediction data and the selected prediction data are supplied to adder 116 to form prediction errors, also called residues.
  • the prediction errors are then processed by transformation (T) 118 followed by quantization (Q) 120.
  • the transformed and quantized residues are then coded by entropy coding 122 to form a bitstream corresponding to the compressed video data.
  • the bitstream associated with the transform coefficients is then packed with side information such as motion, mode, and other information associated with the image area.
  • the side information may also be subject to entropy coding to reduce required bandwidth. Accordingly the data associated with the side information are provided to entropy coding 122 as shown in Fig. 1.
  • Furthermore, the transformed and quantized residues are processed by inverse quantization (IQ) 124 and inverse transformation (IT) 126 to recover the residues.
  • the residues are then added back to prediction data 136 at reconstruction (REC) 128 to reconstruct video data.
  • the reconstructed video data may be stored in reference picture buffer 134 and used for prediction of other frames.
  • incoming video data undergo a series of processing in the encoding system.
  • the reconstructed video data from REC 128 may be subject to intensity shift and other noises due to the series of processing.
  • deblocking filter 130, sample adaptive offset (SAO) 131 and adaptive loop filter (ALF) 132 are applied to the reconstructed video data before the reconstructed video data are stored in the reference picture buffer 134 in order to improve video quality.
  • the adaptive offset information and adaptive loop filter information may have to be transmitted in the bitstream so that a decoder can properly recover the required information in order to apply the adaptive offset and adaptive loop filter.
  • adaptive offset information from AO 131 and adaptive loop filter information from ALF 132 are provided to entropy coding 122 for incorporation into the bitstream.
  • the encoder may need access to the original video data in order to derive AO information and ALF information.
  • the paths from the input to AO 131 and ALF 132 are not explicitly shown in Fig. 1.
  • Fig. 2 illustrates a system block diagram of an exemplary video decoder including deblocking filter and adaptive loop filter. Since the encoder also contains a local decoder for reconstructing the video data, most decoder components, except for the entropy decoder 222, are already used in the encoder. Furthermore, only motion compensation 212 is required at the decoder side.
  • the switch 214 selects intra-prediction or inter-prediction and the selected prediction data are supplied to reconstruction (REC) 128 to be combined with recovered residues.
  • entropy decoding 222 is also responsible for entropy decoding of side information and provides the side information to respective blocks.
  • intra mode information is provided to intra- prediction 110
  • inter mode information is provided to motion compensation 212
  • adaptive offset information is provided to SAO 131
  • adaptive loop filter information is provided to ALF 132
  • residues are provided to inverse quantization 124.
  • the residues are processed by IQ 124, IT 126 and subsequent reconstruction process to reconstruct the video data.
  • reconstructed video data from REC 128 undergo a series of processing including IQ 124 and IT 126 as shown in Fig. 2 and are subject to intensity shift.
  • the reconstructed video data are further processed by deblocking filter 130, sample adaptive offset 131 and adaptive loop filter 132.
  • the in-loop filtering is only applied to the luma component of reconstructed video according to the current HEVC standard. It is beneficial to apply in-loop filtering to chroma components of reconstructed video as well.
  • the information associated with in-loop filtering for the chroma components may be sizeable.
  • a chroma component typically results in much smaller compressed data than the luma component. Therefore, it is desirable to develop a method and apparatus for applying in-loop filtering to the chroma components efficiently. Accordingly, an efficient method and apparatus of SAO for chroma component are disclosed.
  • In one embodiment, an indication is provided for signaling whether in-loop filtering is turned ON or not for the chroma components when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, SAO for the chroma components is also not turned ON and no flag needs to be signaled; otherwise, a flag is signaled to indicate whether SAO for chroma is turned ON or not.
  • The flag indicating whether SAO for chroma is turned ON is called the chroma in-loop filter indication, since it can be used for SAO as well as ALF.
  • SAO is one example of in-loop filter processing, where the in-loop filter processing may be ALF.
  • individual indications are provided for signaling whether in-loop filtering is turned ON or not for chroma components Cb and Cr when SAO for the luma component is turned ON. If SAO for the luma component is not turned ON, the SAO for the two chroma components is also not turned ON. Therefore, there is no need to provide the individual indications for signaling whether in-loop filtering is turned ON or not for the two chroma components in this case.
  • An example of pseudo code for the embodiment mentioned above is shown below. If SAO for the luma component is turned ON:
  • a first flag is signaled to indicate whether SAO for Cb is turned ON or not; and
  • a second flag is signaled to indicate whether SAO for Cr is turned ON or not.
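The conditional signaling described above can be sketched in Python as follows. The bit reader is a hypothetical stand-in for the actual entropy decoding, and the returned structure is illustrative, not the normative syntax.

```python
# Sketch of the conditional chroma SAO signaling: the Cb/Cr flags are
# present in the bitstream only when the luma SAO flag is ON.
# read_flag is a hypothetical helper returning the next 1-bit flag.
def parse_sao_flags(read_flag):
    sao_luma = bool(read_flag())
    if not sao_luma:
        # Luma SAO off: chroma SAO is inferred OFF, no chroma flags coded.
        return {"luma": False, "cb": False, "cr": False}
    return {"luma": True, "cb": bool(read_flag()), "cr": bool(read_flag())}

# Example: flags 1, 1, 0 -> luma ON, Cb ON, Cr OFF.
bits = iter([1, 1, 0])
flags = parse_sao_flags(lambda: next(bits))
```

Note that in the luma-OFF case the sketch consumes only a single bit, which is exactly the saving the embodiment aims for.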
  • Fig. 3 illustrates an example of utilizing neighboring blocks to reduce SAO information.
  • Block C is the current block being processed by SAO.
  • Blocks B, D, E and A are previously processed neighboring blocks around C, as shown in Fig. 3.
  • the block-based syntax represents the parameters of current processing block.
  • a block can be a coding unit (CU), a largest coding unit (LCU), or multiple LCUs.
  • a flag can be used to indicate that the current block shares the SAO parameters with neighboring blocks to reduce the rate. If the processing order of blocks is raster scan, the parameters of blocks D, B, E, and A are available when the parameters of block C are encoded. When the block parameters are available from neighboring blocks, these block parameters can be used to encode the current block. The amount of data required to send the flag to indicate SAO parameter sharing is usually much less than that for SAO parameters. Therefore, efficient SAO is achieved. While SAO is used as an example of in-loop filter to illustrate parameter sharing based on neighboring blocks, the technique can also be applied to other in-loop filter such as ALF.
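An encoder-side view of this sharing decision is sketched below. The parameter representation and candidate names are illustrative assumptions; a real encoder would typically decide by rate-distortion cost rather than exact matching.

```python
# Illustrative encoder-side decision: signal a cheap share/merge flag
# when a previously coded neighbor already carries identical SAO
# parameters, otherwise fall back to coding the parameters explicitly.
def choose_sao_coding(current_params, neighbors):
    """neighbors maps candidate names (e.g. "A", "B") to their SAO
    parameters; only previously processed blocks are included."""
    for name, params in neighbors.items():
        if params == current_params:
            return ("share", name)        # only the share flag is coded
    return ("explicit", current_params)   # full SAO parameters are coded

# Example: block C has the same parameters as its left neighbor A.
sao_c = {"type": 2, "offsets": [1, 0, -1, 0]}
decision = choose_sao_coding(sao_c, {"A": sao_c, "B": {"type": 0, "offsets": []}})
```

The flag-only path costs on the order of one bit, versus several syntax elements for an explicit SAO type index plus offsets, which is the rate saving the passage describes.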
  • the quadtree-based algorithm can be used to adaptively divide a picture region into four sub-regions to achieve better performance.
  • the encoding algorithm for the quadtree-based SAO partition has to be efficiently designed.
  • the SAO parameters (SAOP) include SAO type index and offset values of the selected type.
  • An exemplary quadtree-based SAO partition is shown in Figs. 4A and 4B.
  • Fig. 4A represents a picture being partitioned using quadtree partition, where each small square corresponds to an LCU.
  • the first partition (depth-0 partition) is indicated by split_0().
  • a value 0 implies no split and a value 1 indicates a split is applied.
  • the picture consists of twelve LCUs labeled P1, P2, ..., P12 in Fig. 4B.
  • the depth-0 quadtree partition, split_0(1), splits the picture into four regions: upper-left, upper-right, lower-left and lower-right. Since the lower-left and lower-right regions have only one row of blocks, no further quadtree partition is applied. Therefore, depth-1 quadtree partition is only considered for the upper-left and upper-right regions.
  • the example in Fig. 4A shows that the upper-left region is not split, as indicated by split_1(0), and the upper-right region is further split into four regions, as indicated by split_1(1). Accordingly, the quadtree partition results in seven partitions labeled P'0, P'1, ..., P'6 in Fig. 4A, where:
  • SAOP of P1 is the same as SAOP for P2, P5, and P6;
  • SAOP of P9 is the same as SAOP for P10;
  • SAOP of P11 is the same as SAOP for P12.
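The recursive split signaling of Figs. 4A-B can be sketched as follows. The region bookkeeping, the split condition, and the helper names are assumptions for illustration, not the normative parsing process.

```python
# Sketch of quadtree SAO partition parsing: one split flag per region,
# until maximum depth (or a region too small to split) is reached.
def parse_quadtree(read_flag, region, depth, max_depth, leaves):
    """region is (x, y, w, h) in LCU units; leaves collects the
    resulting partitions in parsing order."""
    x, y, w, h = region
    if depth < max_depth and min(w, h) > 1 and read_flag():
        hw, hh = w // 2, h // 2
        for sub in ((x, y, hw, hh), (x + hw, y, w - hw, hh),
                    (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)):
            parse_quadtree(read_flag, sub, depth + 1, max_depth, leaves)
    else:
        leaves.append(region)

# Example: a 4x4-LCU picture, split at depth 0 and nowhere else.
bits = iter([1, 0, 0, 0, 0])
leaves = []
parse_quadtree(lambda: next(bits), (0, 0, 4, 4), 0, 2, leaves)
```

Each leaf region would then carry one set of SAO parameters (SAOP), matching the P'0..P'6 partitions of the example above.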
  • each LCU can be a new partition or merged with other LCUs. If the current LCU is merged, several merge candidates can be selected.
  • To illustrate an exemplary syntax design to allow information sharing, only two merge candidates are allowed for the quadtree partitioning of Fig. 3. While two candidates are illustrated in the example, more candidates from the neighboring blocks may be used to practice the present invention.
  • The syntax design is illustrated as follows: one flag is used to indicate whether block C is a new partition. If the flag indicates a new partition, the SAO information for block C is signaled. Otherwise, a second flag indicates the selected merge candidate, i.e., whether block C is merged with block A or with block B.
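A decoder-side sketch of this two-candidate syntax is given below; the flag values and return convention are illustrative assumptions rather than a normative codeword assignment.

```python
# Sketch of the two-candidate merge syntax: a first flag signals a new
# partition; if not new, a second flag selects merging with A or B.
def decode_sao_merge(read_flag, read_sao_params):
    if read_flag():                  # newPartitionFlag = 1
        return ("new", read_sao_params())
    # merged: a second flag picks the candidate block
    return ("merge_A", None) if read_flag() else ("merge_B", None)

# Example: flags 0, 1 -> block C merges with block A.
bits = iter([0, 1])
decision = decode_sao_merge(lambda: next(bits), lambda: {"type": 1})
```

The merged cases cost two bits and the new-partition case costs one bit plus the explicit SAO parameters, illustrating why sharing is cheap.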
  • the relation with neighboring blocks (LCUs) and the properties of quadtree partition are used to reduce the amount of data required to transmit SAO related information.
  • the boundary condition of a picture region such as a slice may introduce some redundancy in dependency among neighboring blocks and the boundary condition can be used to reduce the amount of data required to transmit SAO related information.
  • the relation among neighboring blocks may also introduce redundancy in dependency among neighboring blocks and the relation among neighboring blocks may be used to reduce the amount of data required to transmit SAO related information.
  • An example of redundancy in dependency among neighboring blocks is illustrated in Figs. 5A-C.
  • If blocks D and A are in the same partition and block B is in another partition, blocks A and C will be in different partitions, as shown in Fig. 5A and Fig. 5B.
  • the case shown in Fig. 5C is not allowed in a quadtree partition. Therefore, the merge candidate in Fig. 5C is redundant and there is no need to assign a code to represent the merge flag corresponding to Fig. 5C.
  • Exemplary pseudo codes to implement the merge algorithm are shown as follows:
  • Send newPartitionFlag to indicate that block C is a new partition.
  • Block C is a new partition as shown in Fig. 5A.
  • Block C is merged with block B without signaling as shown in Fig. 5B.
  • Send newPartitionFlag to indicate that block C is a new partition.
  • Block C is a new partition as shown in Fig. 6A.
  • Block C is merged with block A without signaling as shown in Fig. 6B.
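The two pseudo codes above can be combined into one decoder-side sketch. The partition ids, the `read_bit` interface, and the fallback return value are hypothetical; only the inference rules come from the description of Figs. 5A-C and 6A-C.

```python
def decode_block_c(part_a, part_b, part_d, read_bit):
    """Decide block C's partition using the Figs. 5A-C / 6A-C redundancy.

    part_a, part_b, part_d are the partition ids of the left, above and
    above-left neighbors; read_bit() returns the next newPartitionFlag.
    """
    if part_d == part_a and part_b != part_a:
        # merging C with A would contradict the quadtree (the Fig. 5C case
        # is not allowed), so one flag picks "new partition" vs "merge with B"
        return "new" if read_bit() else part_b
    if part_d == part_b and part_a != part_b:
        # symmetric case of Figs. 6A-C: merge-with-B is the redundant choice
        return "new" if read_bit() else part_a
    return None  # other neighbor layouts use the general merge syntax
```

In both branches a single newPartitionFlag suffices, since the redundant merge direction never needs a code word.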
  • Figs. 5A-C and Figs. 6A-C illustrate two examples of utilizing redundancy in dependency among neighboring blocks to further reduce the transmitted data associated with SAO information for the current block.
  • the system can take advantage of the redundancy in dependency among neighboring blocks. For example, if blocks A, B and D are in the same partition, then block C cannot be in another partition. Therefore, block C must be in the same partition as A, B, and D and there is no need to transmit an indication of SAO information sharing.
  • the LCU block at the slice boundary can be taken into consideration to reduce the transmitted data associated with SAO information for the current block. For example, if block A does not exist, only one direction can be merged.
  • if block B does not exist, only one direction can be merged as well. If both blocks A and B do not exist, there is no need to transmit a flag to indicate block C as a new partition.
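A sketch of these slice-boundary rules follows; the function name and the returned dictionary layout are illustrative.

```python
def merge_syntax_at_boundary(a_exists, b_exists):
    """Which merge syntax block C needs, given neighbor availability
    at a slice boundary."""
    if not a_exists and not b_exists:
        # no candidate exists: C is inferred as a new partition, no flag sent
        return {"send_new_partition_flag": False, "candidates": []}
    candidates = [name for name, ok in (("A", a_exists), ("B", b_exists)) if ok]
    # with a single available candidate, the merge direction needs no signaling
    return {"send_new_partition_flag": True, "candidates": candidates}
```

Only the interior case, with both candidates available, requires the full flag set described earlier.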
  • a flag can be used to indicate that the current slice uses only one SAO type without any LCU-based signaling. When the slice is a single partition, the number of transmitted syntax elements can also be reduced. While the LCU is used as a unit of block in the above examples, other block configurations (such as block size and shape) may also be used. While the slice is mentioned here as an example of a picture area in which blocks are grouped to share common information, other picture areas, such as a group of slices or a picture, may also be used.
  • chroma and luma components may share the same SAO information for color video data.
  • the SAO information may also be shared between chroma components.
  • chroma components Cb and Cr
  • Cb and Cr may use the partition information of luma so that there is no need to signal the partition information for the chroma components.
  • Cb and Cr may share the same SAO parameters (SAOP) and therefore only one set of SAOP needs to be transmitted for Cb and Cr to share.
  • SAO syntax for luma can be used for chroma components where the SAO syntax may include quadtree syntax and LCU-based syntax.
  • the examples of utilizing redundancy in dependency among neighboring blocks as shown in Figs. 5A-C and Fig. 6A-C to reduce transmitted data associated with SAO information can also be applied to the chroma components.
  • the SAOP including SAO type and SAO offset values of the selected type can be coded before partitioning information, and therefore an SAO parameter set (SAOPS) can be formed. Accordingly, indexing can be used to identify SAO parameters from the SAOPS for the current block where the data transmitted for the index is typically less than the data transmitted for the SAO parameters.
  • partition information is encoded, the selection among SAOPS can be encoded at the same time. The number of SAOPS can be increased dynamically.
  • the number of SAOP in SAOPS will be increased by one.
  • the number of bits can be dynamically adjusted to match the data range. For example, three bits are required to represent SAOPS having five to eight members.
  • if the number of SAOP in the SAOPS grows to nine, four bits will be needed to represent the SAOPS having nine members.
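The dynamically adjusted code length described above behaves like ceil(log2(n)). A small sketch, with the single-member case handled as an assumption since the text does not specify it:

```python
from math import ceil, log2

def saops_index_bits(num_members):
    """Bits needed for a fixed-length index into a SAOPS with the given
    number of members; the code length grows as the set grows."""
    return max(1, ceil(log2(num_members)))
```

For example, a SAOPS with five to eight members needs 3 index bits, and adding a ninth member pushes the index to 4 bits, matching the description above.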
  • SAO parameters can be transmitted in a predicted form, such as the difference between SAO parameters for a current block and the SAO parameters for a neighboring block or neighboring blocks.
  • Another embodiment according to the present invention is to reduce SAO parameters for chroma.
  • Edge-based Offset (EO) classification classifies each pixel into four categories for the luma component.
  • the number of EO categories for the chroma components can be reduced to two to reduce the transmitted data associated with SAO information for the current block.
  • the number of bands for band offset (BO) classification is usually sixteen for the luma component.
  • the number of bands for band offset (BO) classification may be reduced to eight for the chroma components.
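Assuming the bands partition the intensity range uniformly (an assumption; the text only states the band counts), the band index for 8-bit samples can be sketched as:

```python
from math import log2

def bo_band(pixel, num_bands, bit_depth=8):
    """Uniform band-offset classification; num_bands must be a power of two."""
    shift = bit_depth - int(log2(num_bands))  # keep the log2(num_bands) MSBs
    return pixel >> shift
```

Halving the band count from sixteen (luma) to eight (chroma) halves the number of band offsets that must be transmitted, at the cost of coarser classification.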
  • the example in Fig. 3 illustrates a case that current block C has four merge candidates, i.e., blocks A, B, D and E.
  • the number of merge candidates can be reduced if some merge candidates are in the same partition. Accordingly, the number of bits to indicate which merge candidate is selected can be reduced or saved. If the SAO processing refers to data located in another slice, SAO will avoid fetching data from the other slice and will skip the current processing pixel. In addition, a flag may be used to control whether the SAO processing avoids fetching data from any other slice.
  • the control flag regarding whether the SAO processing avoids fetching data from any other slice can be incorporated in a sequence level or a picture level.
  • the control flag regarding whether the SAO processing avoids fetching data from any other slice can also be shared with the non-crossing slice boundary flag of adaptive loop filter (ALF) or deblocking filter (DF).
  • ALF adaptive loop filter
  • DF deblocking filter
  • the ON/OFF control of chroma SAO may depend on luma SAO ON/OFF information.
  • the category of chroma SAO can be a subset of luma SAO for a specific SAO type.
  • Fig. 7 illustrates an example of incorporating sao_used_flag in the sequence level data, such as Sequence Parameter Set (SPS).
  • SPS Sequence Parameter Set
  • sao_used_flag has a value 0
  • SAO is disabled for the sequence.
  • sao_used_flag has a value 1
  • SAO is enabled for the sequence.
  • An exemplary syntax for SAO parameters is shown in Fig. 8, where the sao_param( ) syntax can be incorporated in the Adaptation Parameter Set (APS) or the Picture Parameter Set (PPS).
  • the syntax will include split parameter sao_split_param( 0, 0, 0, 0 ) and offset parameter sao_offset_param( 0, 0, 0, 0 ) for the luma component. Furthermore, the syntax also includes SAO flag sao_flag_cb for the Cb component and SAO flag sao_flag_cr for the Cr component.
  • if sao_flag_cb indicates that the SAO for the Cb component is enabled, the syntax will include split parameter sao_split_param( 0, 0, 0, 1 ) and offset parameter sao_offset_param( 0, 0, 0, 1 ) for chroma component Cb. If sao_flag_cr indicates that the SAO for the Cr component is enabled, the syntax will include split parameter sao_split_param( 0, 0, 0, 2 ) and offset parameter sao_offset_param( 0, 0, 0, 2 ) for chroma component Cr.
  • Fig. 9 illustrates an exemplary syntax for sao_split_param( rx, ry, Depth, component ), where the syntax is similar to a conventional sao_split_param( ) except that an additional parameter "component" is added, where "component" is used to indicate the luma or one of the chroma components.
  • Fig. 10 illustrates an exemplary syntax for sao_offset_param( rx, ry, Depth, component ), where the syntax is similar to a conventional sao_offset_param( ) except that an additional parameter "component" is added.
  • the syntax includes sao_type_idx[ component ][ Depth ][ ry ][ rx ] if the split flag sao_split_flag[ component ][ Depth ][ ry ][ rx ] indicates the region is not further split.
  • Syntax sao_type_idx[ component ][ Depth ][ ry ][ rx ] specification is shown in Table 1.
  • the sample adaptive offset (SAO) adopted in HM-3.0 uses a quadtree-based syntax, which divides a picture region into four sub-regions using a split flag recursively, as shown in Fig. 11.
  • Each leaf region has its own SAO parameters (SAOP), where the SAOP includes the information of SAO type and the offset values to be applied for the region.
  • SAOP SAO parameters
  • Fig. 11 illustrates an example where the picture is divided into seven leaf regions, 1110 through 1170, where band offset (BO) type SAO is applied to leaf regions 1110 and 1150, edge offset (EO) type SAO is applied to leaf regions 1130, 1140 and 1160, and SAO is turned off for leaf regions 1120 and 1170.
  • BO band offset
  • EO edge offset
  • a syntax design incorporating an embodiment according to the present invention uses a picture-level flag to switch between picture-based SAO and block-based SAO, where the block may be an LCU or other block sizes.
  • Fig. 12A illustrates an example of picture-based SAO
  • Fig. 12B illustrates a block-based SAO, where each region is one LCU and there are fifteen LCUs in the picture.
  • in picture-based SAO, the entire picture shares one SAOP.
  • slice-based SAO so that the entire slice or multiple slices share one SAOP.
  • each LCU has its own SAOP and SAOP1 through SAOP15 are used by the fifteen LCUs (LCU1 through LCU15) respectively.
  • SAOP for each LCU may be shared by following LCUs.
  • the number of consecutive subsequent LCUs sharing the same SAOP may be indicated by a run signal.
  • Fig. 13 illustrates an example where SAOP1, SAOP2 and SAOP3 are the same.
  • the SAOP of the first LCU is SAOP1
  • SAOP1 is used for the subsequent two LCUs.
  • the LCU in a following row according to the raster scan order may share the SAOP of a current LCU.
  • a merge-above flag may be used to indicate the case that the current LCU shares the SAOP of the LCU above if the above LCU is available. If the merge-above flag is set to "1", the current LCU will use the SAOP of the LCU above.
  • the merge-above syntax has a value 0 for blocks associated with SAOP1, SAOP3 and SAOP4.
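The run and merge-above signaling of Figs. 13-14 can be modeled as follows. The symbol tuples are an illustrative stand-in for the actual bitstream syntax, not the patent's coded representation.

```python
def decode_lcu_row(symbols, above_row=None):
    """Reconstruct the SAOP of every LCU in one row.

    Each symbol is either ("saop", params, run), meaning params is shared by
    this LCU and the next `run` consecutive LCUs, or ("merge_above", run),
    meaning the co-located SAOPs of the row above are copied for 1 + run LCUs.
    """
    row = []
    for sym in symbols:
        if sym[0] == "merge_above":
            run = sym[1]
            row.extend(above_row[len(row):len(row) + 1 + run])
        else:
            _, params, run = sym
            row.extend([params] * (1 + run))
    return row
```

In the Fig. 13 example, SAOP1 is sent once with a run of 2 and reused by the two subsequent LCUs, so only one parameter set is transmitted for three LCUs.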
  • the run signal of the above LCU can be used as a predictor for the run signal of the current LCU.
  • the difference of the two run signals is encoded, where the difference is denoted as d_run as shown in Fig. 15.
  • the run prediction value can be the run of the above LCU group subtracted by the number of LCUs that are prior to the above LCU in the same LCU group.
  • the first LCU sharing SAOP3 has a run value of 2 and the first LCU above also has a run value of 2 (sharing SAOP1).
  • d_run for the LCU sharing SAOP3 has a value of 0.
  • the first LCU sharing SAOP4 has a run value of 4 and the first LCU above also has a run value of 2 (sharing SAOP3). Accordingly, d_run for the LCU sharing SAOP4 has a value of 2.
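The run prediction of Fig. 15 can be sketched as follows; the position bookkeeping argument is an assumption drawn from the description above.

```python
def predicted_d_run(current_run, above_group_run, lcus_before_above_in_group):
    """Difference coding of run values (d_run).

    The predictor is the run of the LCU group above, minus the number of
    LCUs that precede the co-located above LCU within its own group.
    """
    predictor = above_group_run - lcus_before_above_in_group
    return current_run - predictor
```

With the numbers above: a run of 2 predicted by the run of 2 above gives d_run = 0 (the SAOP3 case), and a run of 4 predicted by a run of 2 gives d_run = 2 (the SAOP4 case).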
  • if the predictor of a run is not available, the run may be encoded by using an unsigned variable length code (U_VLC).
  • U_VLC unsigned variable length code
  • S_VLC signed variable length code
  • the U_VLC and S_VLC can be k-th order Exp-Golomb coding, Golomb-Rice coding, or a binarization process of CABAC coding.
  • a flag may be used to indicate that all SAOPs in the current LCU row are the same as those in the above LCU row.
  • a flag, RepeatedRow, for each LCU row can be used to indicate that all SAOPs in this LCU row are the same as those in the above LCU row. If the RepeatedRow flag is equal to 1, no more information needs to be coded.
  • the related SAOP is copied from the LCU in the above LCU row. If RepeatedRow flag is equal to 0, the SAOPs of this LCU row are coded.
  • a flag may be used to signal whether RepeatedRow flag is used or not.
  • the EnableRepeatedRow flag can be used to indicate whether RepeatedRow flag is used or not.
  • the EnableRepeatedRow flag can be signaled at a slice or picture level. If EnableRepeatedRow is equal to 0, the RepeatedRow flag is not coded for each LCU row. If EnableRepeatedRow is equal to 1, the RepeatedRow flag is coded for each LCU row.
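A decoder-side sketch of the EnableRepeatedRow / RepeatedRow flags follows; the input layout (one tuple per LCU row) is illustrative.

```python
def decode_saop_rows(coded_rows, enable_repeated_row):
    """Each entry of coded_rows is (repeated_flag, saops); saops is only
    present (non-None) when the row's SAOPs are actually coded."""
    rows = []
    for repeated_flag, saops in coded_rows:
        if enable_repeated_row and repeated_flag and rows:
            rows.append(list(rows[-1]))  # copy every SAOP from the row above
        else:
            rows.append(list(saops))     # SAOPs coded for this LCU row
    return rows
```

The `and rows` guard reflects the boundary rule above: the first LCU row of a picture or slice has no row above, so its RepeatedRow flag can be saved.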
  • the RepeatedRow flag at the first LCU row of a picture or a slice can be saved.
  • the RepeatedRow flag of the first LCU row can be saved.
  • the RepeatedRow flag of the first LCU row in a slice can be saved; otherwise, the RepeatedRow flag will be signaled.
  • the method of saving RepeatedRow flag at the first LCU row of one picture or one slice can also be applied to the case where the EnableRepeatedRow flag is used.
  • an embodiment according to the present invention uses a run signal to indicate that all of the SAOPs in the following LCU rows are the same as those in the above LCU row. For example, for N consecutive LCU rows containing the same SAOP, the SAOP and a run signal equal to N-1 are signaled at the first LCU row of the N consecutive repeated LCU rows.
  • the maximum and minimum runs of the repeated LCU rows in one picture or slice can be derived and signaled at slice or picture level. Based on the maximum and minimum values, the run number can be coded using a fixed-length code word. The word length of the fixed-length code can be determined according to the maximum and minimum run values and thus can be adaptively changed at slice or picture level.
  • the run number in the first LCU row of a picture or a slice is coded.
  • a run is coded to indicate the number of LCUs sharing the SAOP. If the predictor of a run is not available, the run can be encoded by using unsigned variable length code (U_VLC) or fixed-length code word.
  • U_VLC unsigned variable length code
  • the word length can be coded adaptively based on the image width, the coded runs, or the remaining LCUs, or the word length can be fixed based on the image width or signaled to the decoder.
  • if there are N LCUs in one coded picture and k LCUs have already been coded, the maximum possible run is N-1-k.
  • the word length of the to-be-coded run is floor(log2(N-1-k))+1.
  • the maximum and minimum number of run in a slice or picture can be calculated first. Based on the maximum and minimum value, the word length of the fixed-length code can be derived and coded.
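The adaptive word length above can be sketched as follows; the handling of the degenerate case where no run can follow is an assumption.

```python
from math import floor, log2

def run_word_length(total_lcus, coded_lcus):
    """Fixed-length code size for the next run: after k of N LCUs are coded,
    the largest possible run is N-1-k, needing floor(log2(N-1-k)) + 1 bits."""
    max_run = total_lcus - 1 - coded_lcus
    # when no run can follow, nothing needs to be coded (an assumption)
    return floor(log2(max_run)) + 1 if max_run > 0 else 0
```

As more LCUs are coded, max_run shrinks and the word length adapts downward, which is the saving this scheme targets.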
  • the information for the number of runs and delta-runs can be incorporated at slice level.
  • the number of runs, delta-runs, or the number of LCUs, NumSaoRun, is signaled at slice level.
  • the number of LCUs for the current coding SAOP can be specified using the NumSaoRun flag.
  • the number of runs and delta-runs or the number of LCUs can be predicted using the number of LCUs in one coding picture.
  • NumTBsInPicture is the number of LCUs in one picture and sao_num_run_info is the predicted residual value.
  • sao_num_run_info can be coded using a signed or unsigned variable-length code.
  • sao_num_run_info may also be coded using a signed or unsigned fixed-length code word.
  • Embodiments of the in-loop filter according to the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • an embodiment of the present invention can be a circuit integrated into a video compression chip or program codes integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program codes to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • DSP Digital Signal Processor
  • the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • the software code or firmware codes may be developed in different programming languages and different formats or styles.
  • the software code may also be compiled for different target platforms.
  • different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for processing reconstructed video using an in-loop filter in a video coding system are disclosed. In the method, a chroma in-loop filter indication is used to indicate whether the chroma components are processed by the in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to the entire picture using the same in-loop filter information, or to each block of the picture using individual in-loop filter information. Various embodiments to improve the efficiency are also disclosed, where various aspects of the in-loop filter information are taken into consideration for efficient coding, such as the properties of quadtree partitioning, boundary conditions of a block, sharing of in-loop filter information between luma and chroma components, indexing into a set of in-loop filter information, and prediction of in-loop filter information.
PCT/CN2012/071147 2011-05-16 2012-02-15 Appareil et procédé de décalage adaptatif d'échantillon pour composants de luminance et de chrominance WO2012155553A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
DE112012002125.8T DE112012002125T5 (de) 2011-05-16 2012-02-15 Vorrichtung und Verfahren für einen abtastungsadaptiven Offset für Luminanz- und Chrominanz-Komponenten
GB1311592.8A GB2500347B (en) 2011-05-16 2012-02-15 Apparatus and method of sample adaptive offset for luma and chroma components
CN201280022870.8A CN103535035B (zh) 2011-05-16 2012-02-15 用于亮度和色度分量的样本自适应偏移的方法和装置
ZA2013/05528A ZA201305528B (en) 2011-05-16 2013-07-22 Apparatus and method of sample adaptive offset for luma and chroma components

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201161486504P 2011-05-16 2011-05-16
US61/486,504 2011-05-16
US13/158,427 US9055305B2 (en) 2011-01-09 2011-06-12 Apparatus and method of sample adaptive offset for video coding
US13/158,427 2011-06-12
US201161498949P 2011-06-20 2011-06-20
US61/498,949 2011-06-20
US201161503870P 2011-07-01 2011-07-01
US61/503,870 2011-07-01
US13/311,953 2011-12-06
US13/311,953 US20120294353A1 (en) 2011-05-16 2011-12-06 Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components

Publications (1)

Publication Number Publication Date
WO2012155553A1 true WO2012155553A1 (fr) 2012-11-22

Family

ID=47176199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/071147 WO2012155553A1 (fr) 2011-05-16 2012-02-15 Appareil et procédé de décalage adaptatif d'échantillon pour composants de luminance et de chrominance

Country Status (5)

Country Link
CN (3) CN106028050B (fr)
DE (1) DE112012002125T5 (fr)
GB (1) GB2500347B (fr)
WO (1) WO2012155553A1 (fr)
ZA (1) ZA201305528B (fr)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013012845A (ja) * 2011-06-28 2013-01-17 Sony Corp 画像処理装置および方法
JP2013255252A (ja) * 2011-06-27 2013-12-19 Panasonic Corp 画像復号方法、及び画像復号装置
EP2723073A2 (fr) * 2011-06-14 2014-04-23 LG Electronics Inc. Procédé permettant de coder et de décoder des informations d'image
EP2725790A1 (fr) * 2011-06-24 2014-04-30 LG Electronics Inc. Procédé de codage et de décodage d'informations d'image
JP2014523183A (ja) * 2011-06-28 2014-09-08 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
US9106931B2 (en) 2011-11-07 2015-08-11 Canon Kabushiki Kaisha Method and device for providing compensation offsets for a set of reconstructed samples of an image
JP2016015753A (ja) * 2015-08-31 2016-01-28 ソニー株式会社 画像処理装置および方法、プログラム、並びに記録媒体
TWI554082B (zh) * 2013-04-12 2016-10-11 高通公司 於視訊寫碼處理中用於係數階層寫碼之萊斯(rice)參數更新
JP2017112637A (ja) * 2017-02-14 2017-06-22 ソニー株式会社 画像処理装置および方法、プログラム、並びに記録媒体
US10021419B2 (en) 2013-07-12 2018-07-10 Qualcomm Incorported Rice parameter initialization for coefficient level coding in video coding process
CN109565594A (zh) * 2016-08-02 2019-04-02 高通股份有限公司 基于几何变换的自适应环路滤波
CN110651473A (zh) * 2017-05-09 2020-01-03 华为技术有限公司 视频压缩中的编码色度样本
US10623738B2 (en) 2017-04-06 2020-04-14 Futurewei Technologies, Inc. Noise suppression filter
WO2021141714A1 (fr) * 2020-01-08 2021-07-15 Tencent America LLC Procédé et appareil de codage vidéo
CN114391255A (zh) * 2019-09-11 2022-04-22 夏普株式会社 用于基于交叉分量相关性来减小视频编码中的重构误差的系统和方法
TWI768414B (zh) * 2019-07-26 2022-06-21 寰發股份有限公司 用於視訊編解碼的跨分量適應性環路濾波的方法及裝置
EP3991435A4 (fr) * 2019-08-07 2022-08-31 Huawei Technologies Co., Ltd. Procédé et appareil de filtre en boucle à décalage adaptatif d'échantillon avec contrainte de taille de région d'application
RU2782516C1 (ru) * 2020-01-08 2022-10-28 Тенсент Америка Ллс Способ и устройство для видеокодирования
US11991353B2 (en) 2019-03-08 2024-05-21 Canon Kabushiki Kaisha Adaptive loop filter

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
WO2017063168A1 (fr) * 2015-10-15 2017-04-20 富士通株式会社 Procédé et appareil de codage d'image et dispositif de traitement d'image
US20180359486A1 (en) * 2017-06-07 2018-12-13 Mediatek Inc. Non-local adaptive loop filter processing
CN110662065A (zh) * 2018-06-29 2020-01-07 财团法人工业技术研究院 图像数据解码方法及解码器、图像数据编码方法及编码器
EP3868109A4 (fr) 2018-10-23 2022-08-17 HFI Innovation Inc. Procédé et appareil pour une réduction de tampon de filtre en boucle
CN112997500B (zh) * 2018-11-09 2023-04-18 北京字节跳动网络技术有限公司 对基于区域的自适应环路滤波器的改进
KR102630411B1 (ko) 2019-04-20 2024-01-29 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 크로마 잔차 조인트 코딩을 위한 구문 요소의 시그널링
CN113785574B (zh) * 2019-05-30 2022-10-11 北京字节跳动网络技术有限公司 色度分量的自适应环路滤波
US11930169B2 (en) 2019-06-27 2024-03-12 Hfi Innovation Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
CN117221538A (zh) 2019-07-08 2023-12-12 Lg电子株式会社 解码和编码设备、数据发送设备及计算机可读存储介质
KR20220041898A (ko) * 2019-08-29 2022-04-01 엘지전자 주식회사 적응적 루프 필터링 기반 영상 코딩 장치 및 방법
WO2021101345A1 (fr) * 2019-11-22 2021-05-27 한국전자통신연구원 Procédé et dispositif de filtrage en boucle adaptative
CN116366859A (zh) * 2020-07-28 2023-06-30 北京达佳互联信息技术有限公司 对视频信号进行解码的方法、设备和介质
US11849117B2 (en) 2021-03-14 2023-12-19 Alibaba (China) Co., Ltd. Methods, apparatus, and non-transitory computer readable medium for cross-component sample adaptive offset
CN116433783A (zh) * 2021-12-31 2023-07-14 中兴通讯股份有限公司 用于视频处理的方法及装置、存储介质及电子装置

Citations (3)

Publication number Priority date Publication date Assignee Title
US20060227883A1 (en) * 2005-04-11 2006-10-12 Intel Corporation Generating edge masks for a deblocking filter
EP1944974A1 (fr) * 2007-01-09 2008-07-16 Matsushita Electric Industrial Co., Ltd. Algorithmes d'optimisation post-filtre dépendants de la position
CN101517909A (zh) * 2006-09-15 2009-08-26 飞思卡尔半导体公司 带有选择性色度去块滤波的视频信息处理系统

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN101371571B (zh) * 2006-01-12 2013-06-19 Lg电子株式会社 处理多视图视频
CN105721881B (zh) * 2007-01-11 2019-07-09 汤姆森许可贸易公司 对mpeg-4avc高层编码中简档使用语法的方法和装置
US8938009B2 (en) * 2007-10-12 2015-01-20 Qualcomm Incorporated Layered encoded bitstream structure
WO2010123855A1 (fr) * 2009-04-20 2010-10-28 Dolby Laboratories Licensing Corporation Sélection de filtre pour un prétraitement vidéo dans des applications vidéo
JP5763210B2 (ja) * 2011-04-21 2015-08-12 メディアテック インコーポレイテッド 改良されたループ型フィルタリング処理のための方法と装置
US9008170B2 (en) * 2011-05-10 2015-04-14 Qualcomm Incorporated Offset type and coefficients signaling method for sample adaptive offset
PL3361725T3 (pl) * 2011-06-23 2020-07-13 Huawei Technologies Co., Ltd. Urządzenie do dekodowania przesunięcia, urządzenie do kodowania przesunięcia, urządzenie do filtrowania obrazu i struktura danych

Non-Patent Citations (1)

Title
MADHUKAR BUDAGAVI ET AL.: "Chroma ALF with reduced vertical filter size", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 5TH MEETING, JCTVC-E287, 16 March 2011 (2011-03-16) - 23 March 2011 (2011-03-23), GENEVA, CH, pages 1 - 6 *

Cited By (63)

Publication number Priority date Publication date Assignee Title
US9300982B2 (en) 2011-06-14 2016-03-29 Lg Electronics Inc. Method for encoding and decoding image information
US11671630B2 (en) 2011-06-14 2023-06-06 Lg Electronics Inc. Method for encoding and decoding image information
EP2723073A2 (fr) * 2011-06-14 2014-04-23 LG Electronics Inc. Procédé permettant de coder et de décoder des informations d'image
US11418815B2 (en) 2011-06-14 2022-08-16 Lg Electronics Inc. Method for encoding and decoding image information
US10924767B2 (en) 2011-06-14 2021-02-16 Lg Electronics Inc. Method for encoding and decoding image information
US10798421B2 (en) 2011-06-14 2020-10-06 Lg Electronics Inc. Method for encoding and decoding image information
EP2723073A4 (fr) * 2011-06-14 2014-11-26 Lg Electronics Inc Procédé permettant de coder et de décoder des informations d'image
US10531126B2 (en) 2011-06-14 2020-01-07 Lg Electronics Inc. Method for encoding and decoding image information
US9992515B2 (en) 2011-06-14 2018-06-05 Lg Electronics Inc. Method for encoding and decoding image information
US9565453B2 (en) 2011-06-14 2017-02-07 Lg Electronics Inc. Method for encoding and decoding image information
US10944968B2 (en) 2011-06-24 2021-03-09 Lg Electronics Inc. Image information encoding and decoding method
US11700369B2 (en) 2011-06-24 2023-07-11 Lg Electronics Inc. Image information encoding and decoding method
EP2725790A1 (fr) * 2011-06-24 2014-04-30 LG Electronics Inc. Procédé de codage et de décodage d'informations d'image
US10547837B2 (en) 2011-06-24 2020-01-28 Lg Electronics Inc. Image information encoding and decoding method
EP2725790A4 (fr) * 2011-06-24 2014-12-10 Lg Electronics Inc Procédé de codage et de décodage d'informations d'image
US9253489B2 (en) 2011-06-24 2016-02-02 Lg Electronics Inc. Image information encoding and decoding method
US9294770B2 (en) 2011-06-24 2016-03-22 Lg Electronics Inc. Image information encoding and decoding method
US10091505B2 (en) 2011-06-24 2018-10-02 Lg Electronics Inc. Image information encoding and decoding method
US11303893B2 (en) 2011-06-24 2022-04-12 Lg Electronics Inc. Image information encoding and decoding method
US9743083B2 (en) 2011-06-24 2017-08-22 Lg Electronics Inc. Image information encoding and decoding method
JP2013255252A (ja) * 2011-06-27 2013-12-19 Panasonic Corp 画像復号方法、及び画像復号装置
US9426482B2 (en) 2011-06-28 2016-08-23 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
US10542273B2 (en) 2011-06-28 2020-01-21 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
US9462288B2 (en) 2011-06-28 2016-10-04 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
JP2013012845A (ja) * 2011-06-28 2013-01-17 Sony Corp 画像処理装置および方法
JP2016197890A (ja) * 2011-06-28 2016-11-24 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
JP2015128333A (ja) * 2011-06-28 2015-07-09 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
EP2727360A1 (fr) * 2011-06-28 2014-05-07 Sony Corporation Dispositif et procédé de traitement d'image
US9438921B2 (en) 2011-06-28 2016-09-06 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
US9426483B2 (en) 2011-06-28 2016-08-23 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
JP2014523183A (ja) * 2011-06-28 2014-09-08 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
EP2727360A4 (fr) * 2011-06-28 2015-03-11 Sony Corp Dispositif et procédé de traitement d'image
US9438922B2 (en) 2011-06-28 2016-09-06 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
US10038911B2 (en) 2011-06-28 2018-07-31 Samsung Electronics Co., Ltd. Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor
JP2015164331A (ja) * 2011-06-28 2015-09-10 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
JP2015128332A (ja) * 2011-06-28 2015-07-09 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
US10187664B2 (en) 2011-06-28 2019-01-22 Sony Corporation Image processing device and method
JP2015128334A (ja) * 2011-06-28 2015-07-09 サムスン エレクトロニクス カンパニー リミテッド ピクセル分類によるオフセット調整を利用するビデオ符号化方法及びその装置、並びに該ビデオ復号化方法及びその装置
US10085042B2 (en) 2011-11-07 2018-09-25 Canon Kabushiki Kaisha Method, device and program for encoding and decoding a sequence of images using area-by-area loop filtering
US9118931B2 (en) 2011-11-07 2015-08-25 Canon Kabushiki Kaisha Method and device for optimizing encoding/decoding of compensation offsets for a set of reconstructed samples of an image
US9106931B2 (en) 2011-11-07 2015-08-11 Canon Kabushiki Kaisha Method and device for providing compensation offsets for a set of reconstructed samples of an image
EP2777255B1 (fr) * 2011-11-07 2017-03-29 Canon Kabushiki Kaisha Procédé et dispositif pour optimiser le codage/décodage d'écarts de compensation pour un ensemble d'échantillons reconstitués d'une image
US9936200B2 (en) 2013-04-12 2018-04-03 Qualcomm Incorporated Rice parameter update for coefficient level coding in video coding process
TWI554082B (zh) * 2013-04-12 2016-10-11 高通公司 於視訊寫碼處理中用於係數階層寫碼之萊斯(rice)參數更新
US10021419B2 (en) 2013-07-12 2018-07-10 Qualcomm Incorported Rice parameter initialization for coefficient level coding in video coding process
JP2016015753A (ja) * 2015-08-31 2016-01-28 ソニー株式会社 画像処理装置および方法、プログラム、並びに記録媒体
CN109565594A (zh) * 2016-08-02 2019-04-02 高通股份有限公司 基于几何变换的自适应环路滤波
JP2017112637A (ja) * 2017-02-14 2017-06-22 ソニー株式会社 画像処理装置および方法、プログラム、並びに記録媒体
US10623738B2 (en) 2017-04-06 2020-04-14 Futurewei Technologies, Inc. Noise suppression filter
CN110651473A (zh) * 2017-05-09 2020-01-03 华为技术有限公司 视频压缩中的编码色度样本
CN110651473B (zh) * 2017-05-09 2022-04-22 华为技术有限公司 视频压缩中的编码色度样本
RU2805997C2 (ru) * 2019-03-07 2023-10-24 ЭлДжи ЭЛЕКТРОНИКС ИНК. Кодирование видео или изображений на основе преобразования сигнала яркости с масштабированием сигнала цветности
US11991353B2 (en) 2019-03-08 2024-05-21 Canon Kabushiki Kaisha Adaptive loop filter
TWI768414B (zh) * 2019-07-26 2022-06-21 寰發股份有限公司 用於視訊編解碼的跨分量適應性環路濾波的方法及裝置
US11997266B2 (en) 2019-07-26 2024-05-28 Hfi Innovation Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
EP3991435A4 (fr) * 2019-08-07 2022-08-31 Huawei Technologies Co., Ltd. Method and apparatus of sample adaptive offset in-loop filter with application region size constraint
US20220345698A1 (en) * 2019-09-11 2022-10-27 Sharp Kabushiki Kaisha Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
CN114391255A (zh) * 2019-09-11 2022-04-22 Sharp Kabushiki Kaisha Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
CN114391255B (zh) * 2019-09-11 2024-05-17 Sharp Kabushiki Kaisha Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
RU2782516C1 (ru) * 2020-01-08 2022-10-28 Tencent America LLC Method and apparatus for video coding
WO2021141714A1 (fr) * 2020-01-08 2021-07-15 Tencent America LLC Method and apparatus for video coding
US11736710B2 (en) 2020-01-08 2023-08-22 Tencent America LLC Method and apparatus for video coding
US11303914B2 (en) 2020-01-08 2022-04-12 Tencent America LLC Method and apparatus for video coding

Also Published As

Publication number Publication date
GB2500347A (en) 2013-09-18
CN106028050A (zh) 2016-10-12
CN103535035A (zh) 2014-01-22
GB2500347B (en) 2018-05-16
CN105120270A (zh) 2015-12-02
CN103535035B (zh) 2017-03-15
GB201311592D0 (en) 2013-08-14
ZA201305528B (en) 2014-10-29
DE112012002125T5 (de) 2014-02-20
CN105120270B (zh) 2018-09-04
CN106028050B (zh) 2019-04-26

Similar Documents

Publication Publication Date Title
US10405004B2 (en) Apparatus and method of sample adaptive offset for luma and chroma components
US10116967B2 (en) Method and apparatus for coding of sample adaptive offset information
WO2012155553A1 (fr) Apparatus and method of sample adaptive offset for luma and chroma components
US9872015B2 (en) Method and apparatus for improved in-loop filtering
AU2013248857B2 (en) Method and apparatus for loop filtering across slice or tile boundaries
AU2012327672B2 (en) Method and apparatus for non-cross-tile loop filtering
KR101752612B1 (ko) Method of sample adaptive offset processing for video coding
CN113612998A (zh) Method and apparatus of sample adaptive offset processing for video coding

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 12786530

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 1311592

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20120215

WWE Wipo information: entry into national phase

Ref document number: 1311592.8

Country of ref document: GB

WWE Wipo information: entry into national phase

Ref document number: 112012002125

Country of ref document: DE

Ref document number: 1120120021258

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12786530

Country of ref document: EP

Kind code of ref document: A1