CN103535035A - Apparatus and method of sample adaptive offset for luma and chroma components - Google Patents

Apparatus and method of sample adaptive offset for luma and chroma components

Info

Publication number
CN103535035A
CN103535035A (Application No. CN201280022870.8A)
Authority
CN
China
Prior art keywords
loop filtering
information
loop
block
chroma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280022870.8A
Other languages
Chinese (zh)
Other versions
CN103535035B (en)
Inventor
傅智铭
陈庆晔
蔡家扬
黄毓文
雷少民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HFI Innovation Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/158,427 (US 9,055,305 B2)
Priority claimed from US 13/311,953 (US 2012/0294353 A1)
Application filed by MediaTek Inc
Priority to CN201510473630.5A (CN105120270B)
Priority to CN201610409900.0A (CN106028050B)
Publication of CN103535035A
Application granted
Publication of CN103535035B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/156: Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/176: the region being a block, e.g. a macroblock
    • H04N 19/182: the unit being a pixel
    • H04N 19/186: the unit being a colour or a chrominance component
    • H04N 19/189: characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N 19/196: being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/463: by compressing encoding parameters before transmission
    • H04N 19/60: using transform coding
    • H04N 19/61: in combination with predictive coding
    • H04N 19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82: involving filtering within a prediction loop
    • H04N 19/85: using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86: involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/90: using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/96: Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for processing reconstructed video using an in-loop filter in a video coding system are disclosed. The method uses a chroma in-loop filter indication to indicate whether the chroma components are processed by the in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to an entire picture using the same in-loop filter information, or to each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of the in-loop filter information are taken into consideration for efficient coding, such as the properties of the quadtree-based partition, the boundary conditions of a block, in-loop filter information sharing between the luma and chroma components, indexing to a set of in-loop filter information, and prediction of the in-loop filter information.

Description

Method and apparatus of sample adaptive offset for luma and chroma components
Cross Reference
This application claims priority to the following U.S. Provisional Applications: Application No. 61/486,504, filed May 16, 2011, entitled "Sample Adaptive Offset for Luma and Chroma Components"; Application No. 61/498,949, filed June 20, 2011, entitled "LCU-based Syntax for Sample Adaptive Offset"; and Application No. 61/503,870, filed July 1, 2011, entitled "LCU-based Syntax for Sample Adaptive Offset". This application also claims priority to the following U.S. patent applications: Application No. 13/158,427, filed June 12, 2011, entitled "Apparatus and Method of Sample Adaptive Offset for Video Coding"; and Application No. 13/311,953, filed December 6, 2011, entitled "Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components". The above U.S. provisional applications and patent applications are incorporated herein by reference.
Technical Field
The present invention relates to video processing, and more particularly to adaptive in-loop filtering apparatus and methods including sample adaptive offset (SAO) compensation and adaptive loop filtering (ALF) for luma and chroma components.
Background
In a video coding system, video data are subject to various processing such as prediction, transform, quantization, deblocking, and adaptive loop filtering. Along the processing path of the video coding system, certain characteristics of the processed video data may be altered from the original video data as a result of these operations. For example, the mean value of the processed video may be shifted. An intensity offset may cause visual impairment or artifacts, which become especially noticeable when the offset varies from frame to frame. Therefore, the pixel intensity offset has to be carefully compensated or restored to alleviate the artifacts. Various intensity compensation schemes are used in the field. For example, an intensity compensation scheme termed sample adaptive offset (SAO) classifies each pixel of the processed video data into one of multiple categories, typically selected according to context. The conventional SAO scheme is only applied to the luma component. It is desirable to extend the SAO scheme to process the chroma components as well. The SAO scheme usually requires that SAO information (for example, the partition information for dividing a picture or slice into blocks, and the SAO offset values of each block) be incorporated in the video bitstream so that a decoder can operate properly. The SAO information may occupy a noticeable portion of the bit rate of the compressed video, and it is desirable to develop efficient methods of encoding the SAO information. Besides SAO, adaptive loop filtering is another type of in-loop filter that is usually applied to the reconstructed video to improve video quality. Similarly, it is desirable to apply adaptive loop filtering to the chroma components to improve video quality. Furthermore, adaptive loop filtering information (for example, partition information and filter parameters) has to be incorporated in the video bitstream so that a decoder can operate properly. Therefore, it is also desirable to develop efficient methods of encoding a video bitstream containing adaptive loop filtering information.
Summary of the Invention
The present invention provides a method and apparatus for processing reconstructed video using an in-loop filter in a video decoder. The method and apparatus according to embodiments of the present invention comprise: deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise a luma component and chroma components; receiving a chroma in-loop filter indication from the video bitstream if a luma in-loop filter indication in the video bitstream indicates that in-loop filter processing is applied to the luma component; determining chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components; and applying the in-loop filter processing to the chroma components according to the chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components. The chroma components may use a single chroma in-loop filter flag, or each chroma component may use its own in-loop filter flag, to control whether the in-loop filter processing is applied. An entire picture may use the same in-loop filter information. Alternatively, the picture may be divided into blocks, with each block using its own in-loop filter information. When the in-loop filter processing is applied to blocks, the in-loop filter information for a current block can be derived from neighboring blocks to improve coding efficiency. Various embodiments to increase efficiency are disclosed, wherein various aspects of the in-loop filter information are taken into consideration for efficient coding, such as the properties of the quadtree-based partition, the boundary conditions of a block, in-loop filter information sharing between the luma and chroma components, indexing to a set of in-loop filter information, and prediction of the in-loop filter information.
The present invention also provides a method and apparatus for processing reconstructed video using an in-loop filter in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks. The method and apparatus comprise: deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise reconstructed blocks; receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition; deriving the in-loop filter information from a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and applying in-loop filter processing to the current reconstructed block using the in-loop filter information. To improve coding efficiency, if there is more than one neighboring block, a merge flag in the video bitstream can be used for the current block to indicate that the in-loop filter information is shared with one of the neighboring blocks. If there is only one neighboring block, the sharing of the in-loop filter information can be inferred without a merge flag. According to the properties of the quadtree partition and the merge information of the one or more candidate blocks, a candidate block may be excluded from merging with the current reconstructed block to improve coding efficiency.
The present invention further provides a method and apparatus for processing reconstructed video using an in-loop filter in a corresponding video encoder. In addition, a method and apparatus for processing reconstructed video using an in-loop filter in a corresponding video encoder are also provided, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filter is applied to the blocks.
Brief Description of the Drawings
Fig. 1 is a system block diagram of a video encoder including a reconstruction loop, where the in-loop filter processing includes a deblocking filter, sample adaptive offset, and adaptive loop filtering.
Fig. 2 is a system block diagram of a video decoder including a reconstruction loop, where the in-loop filter processing includes a deblocking filter, sample adaptive offset, and adaptive loop filtering.
Fig. 3 illustrates an embodiment according to the present invention where the information of neighboring blocks A, D, B, E is used for SAO coding of the current block C.
Fig. 4A illustrates an embodiment of quadtree-based picture partition for SAO processing according to the present invention.
Fig. 4B illustrates an embodiment of LCU-based picture partition for SAO processing according to the present invention.
Fig. 5A illustrates an embodiment where quadtree partition is allowed for block C, with blocks A and D in the same partition and block B in a different partition.
Fig. 5B illustrates another embodiment where quadtree partition is allowed for block C, with blocks A and D in the same partition and block B in a different partition.
Fig. 5C illustrates an embodiment where quadtree partition is not allowed for block C, with blocks A and D in the same partition and block B in a different partition.
Fig. 6A illustrates an embodiment where quadtree partition is allowed for block C, with blocks B and D in the same partition and block A in a different partition.
Fig. 6B illustrates another embodiment where quadtree partition is allowed for block C, with blocks B and D in the same partition and block A in a different partition.
Fig. 6C illustrates an embodiment where quadtree partition is not allowed for block C, with blocks B and D in the same partition and block A in a different partition.
Fig. 7 illustrates a syntax design including a flag in the Sequence Parameter Set (SPS), where the flag indicates whether SAO is enabled or disabled for the sequence.
Fig. 8 illustrates a syntax design for the SAO parameter sao_param(), where separate SAO information is allowed for the chroma components.
Fig. 9 illustrates a syntax design for the SAO split parameter sao_split_param(), where sao_split_param() includes a "component" parameter, and "component" can be the luma component or one of the chroma components.
Fig. 10 illustrates a syntax design for the SAO offset parameter sao_offset_param(), where sao_offset_param() includes a "component" parameter, and "component" can be the luma component or one of the chroma components.
Fig. 11 illustrates an embodiment of quadtree-based picture partition for SAO type determination.
Fig. 12A illustrates an embodiment of picture-based SAO, where the entire picture uses the same SAO parameters.
Fig. 12B illustrates an embodiment of LCU-based SAO, where each LCU uses its own SAO parameters.
Fig. 13 illustrates an example where the run for the SAO information of the first three LCUs is equal to 2.
Fig. 14 illustrates an embodiment of coding shared SAO information using run signals and merge-above flags.
Fig. 15 illustrates an embodiment of coding shared SAO information using run signals, run prediction, and merge-above flags.
Detailed Description
In the field of High Efficiency Video Coding (HEVC), a technique named Adaptive Offset (AO) has been introduced to compensate the offset of the reconstructed video, and AO is applied inside the reconstruction loop. U.S. Patent Application No. 13/158,427, entitled "Apparatus and Method of Sample Adaptive Offset for Video Coding", discloses an offset compensation method and system, in which each pixel is classified into a category, and intensity offset compensation or restoration is applied to the processed video data based on the category of each pixel. Besides adaptive offset, Adaptive Loop Filter (ALF) has also been introduced in the HEVC field to improve video quality. ALF applies a spatial filter to the reconstructed video inside the reconstruction loop. In this disclosure, both AO and ALF are regarded as types of in-loop filters.
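To make the classification step concrete, the following is a minimal sketch of SAO edge-offset classification for a 1-D row of pixels: each pixel is compared with its two neighbors along one direction, assigned a category, and adjusted by a per-category offset. This is an illustration of the general SAO idea under assumed category numbering, not the patent's normative algorithm.

```python
def sao_edge_category(left, cur, right):
    """Classify a pixel into an edge-offset category by comparing it
    with its two neighbors along one direction (illustrative sketch)."""
    sign = lambda x: (x > 0) - (x < 0)
    s = sign(cur - left) + sign(cur - right)
    # s == -2: local minimum, -1: concave edge, 0: flat or monotonic,
    # 1: convex edge, 2: local maximum (category numbering is assumed)
    return {-2: 1, -1: 2, 0: 0, 1: 3, 2: 4}[s]

def sao_apply(row, offsets):
    """Apply per-category offsets to the interior pixels of a 1-D row.
    `offsets` holds one offset per category; category 0 is unmodified
    here by passing offset 0 for it."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = sao_edge_category(row[i - 1], row[i], row[i + 1])
        out[i] = row[i] + offsets[cat]
    return out
```

For example, in the row `[10, 8, 10]` the middle pixel is a local minimum (category 1), so with offsets `[0, 2, 1, -1, -2]` it is raised by 2, smoothing the dip.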
Fig. 1 illustrates an exemplary encoder system using intra/inter-prediction. The intra-prediction unit 110 provides prediction data based on video data of the same picture. For inter-prediction, the ME/MC unit 112 performs motion estimation (ME) and motion compensation (MC) to provide prediction data based on video data of other pictures. The switch 114 selects intra- or inter-prediction data, and the selected prediction data are supplied to the adder 116 to form prediction errors, also called residues. The prediction errors are then processed by transform (T) 118 and quantization (Q) 120. The transformed and quantized residues are then coded by the entropy coding unit 122 to form a bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is then packed with side information. The side information may be motion, mode, and other information associated with the image area. The side information is also entropy coded to reduce the required bandwidth; accordingly, the data associated with the side information are provided to the entropy coding unit 122, as shown in Fig. 1. When an inter-prediction mode is used, a reference picture or reference pictures have to be reconstructed at the encoder end. Consequently, the transformed and quantized residues are processed by inverse quantization (IQ) 124 and inverse transform (IT) 126 to recover the residues. The residues are then added back to the prediction data 136 at reconstruction (REC) 128 to reconstruct the video data. The reconstructed video data may be stored in the reference picture buffer 134 and used for prediction of other frames. As shown in Fig. 1, the incoming video data undergo a series of processing in the encoding system. The reconstructed video data from REC 128 may be subject to intensity offset and other noises due to this series of processing. Therefore, before the reconstructed data are stored in the reference picture buffer 134, a deblocking filter (DF) 130, sample adaptive offset (SAO) 131, and adaptive loop filter (ALF) 132 are applied to the reconstructed video data to improve video quality. The sample adaptive offset information and the adaptive loop filter information have to be incorporated in the bitstream so that a decoder can properly recover the information required to apply the sample adaptive offset and adaptive loop filtering. Therefore, the sample adaptive offset information from SAO 131 and the adaptive loop filter information from ALF 132 are provided to the entropy coder 122 for incorporation into the bitstream. The encoder may need to access the original video data in order to derive the sample adaptive offset information and the adaptive loop filter information; the paths from the input to SAO 131 and ALF 132 are not shown explicitly in Fig. 1.
Fig. 2 is a system block diagram of an embodiment of a video decoder including a deblocking filter and adaptive loop filtering. Since the encoder also contains a local decoder for reconstructing the video data, some decoder components are already used in the encoder, except for the entropy decoder 222. Furthermore, only the motion compensation unit 212 is required at the decoder end. The switch 214 selects intra- or inter-prediction mode, and the selected prediction data are supplied to REC 128 to be combined with the recovered residues. Besides performing entropy decoding on the compressed video data, the entropy decoding unit 222 also entropy decodes the side information and provides the side information to the respective blocks. For example, intra mode information is provided to the intra-prediction unit 110, inter mode information is provided to the motion compensation unit 212, sample adaptive offset information is provided to SAO 131, adaptive loop filter information is provided to ALF 132, and the residues are provided to IQ 124. The residues are processed by IQ 124, IT 126, and the subsequent reconstruction process to reconstruct the video data. Again, as shown in Fig. 2, the reconstructed video data from REC 128 have undergone a series of processing including IQ 124 and IT 126 and are subject to intensity offset. The reconstructed video data are further processed by the deblocking filter 130, SAO 131, and ALF 132.
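The decoder-side filtering order described above (deblocking, then SAO, then ALF) can be sketched as a simple pipeline. The stage bodies below are placeholders chosen only to make the chaining order testable; they are not the actual filter algorithms.

```python
def deblock(frame):
    # Placeholder deblocking stage (identity, for illustration only).
    return frame

def sao(frame, sao_info):
    # Placeholder SAO stage: add a constant offset to every pixel.
    return [[p + sao_info["offset"] for p in row] for row in frame]

def alf(frame, alf_info):
    # Placeholder ALF stage (identity, for illustration only).
    return frame

def in_loop_filter(frame, sao_info, alf_info):
    """Apply the in-loop filters in the order used in Fig. 2:
    DF -> SAO -> ALF, before the frame enters the reference buffer."""
    return alf(sao(deblock(frame), sao_info), alf_info)
```

The same chained structure applies at the encoder (Fig. 1), since the encoder's reconstruction loop mirrors the decoder.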
According to the existing HEVC standard, in-loop filtering is only applied to the luma component of the reconstructed video. It would be beneficial to apply in-loop filtering to the chroma components of the reconstructed video as well. The information associated with in-loop filtering for the chroma components can be sizeable. However, the chroma components usually result in much less compressed data than the luma component. Therefore, it is desirable to develop a method and apparatus for efficiently applying in-loop filtering to the chroma components. Accordingly, the present invention discloses a method and apparatus for efficiently applying SAO to the chroma components.
In one embodiment of the invention, when SAO for the luma component is enabled, an indication is provided to signal whether loop filtering for the chroma components is enabled or disabled. If SAO for the luma component is not enabled, SAO for the chroma components cannot be enabled either; in this case, there is no need to provide an indication to signal whether loop filtering for the chroma components is on or off. An example of pseudo code for this embodiment is as follows:
[Figure BDA0000412249580000071: pseudo code]
The flag indicating whether SAO for the chroma components is enabled is called the chroma loop-filter indication, because it can be used for ALF as well as for SAO. SAO is one embodiment of loop-filter processing; the loop-filter processing may also be ALF. In another embodiment of the invention, when SAO for the luma component is enabled, individual indications are provided to signal whether loop filtering for each of multiple chroma components (e.g., Cb and Cr) is enabled or disabled. If SAO for the luma component is not enabled, SAO for the two chroma components cannot be enabled either; in this case, there is no need to provide individual indications to signal whether loop filtering for the two chroma components is on or off. An example of pseudo code for this embodiment is as follows:
[Figures BDA0000412249580000072 and BDA0000412249580000081: pseudo code]
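A minimal sketch of the conditional signaling described above, with individual indications for Cb and Cr. The function and flag names are illustrative, not the normative HEVC syntax elements, and the bitstream is modeled as a plain list of bits:

```python
def write_sao_flags(bitstream, sao_luma_enabled, sao_cb_enabled, sao_cr_enabled):
    """Signal chroma SAO on/off only when luma SAO is enabled.

    Hypothetical flag layout: luma flag first; Cb and Cr flags follow
    only when luma SAO is on. When luma SAO is off, chroma SAO is
    inferred off and no chroma bits are sent.
    """
    bitstream.append(int(sao_luma_enabled))
    if sao_luma_enabled:
        # Individual indications for each chroma component (Cb and Cr).
        bitstream.append(int(sao_cb_enabled))
        bitstream.append(int(sao_cr_enabled))
    return bitstream
```

The single-flag variant of the earlier embodiment would simply replace the two chroma appends with one shared flag.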
As mentioned above, there is a need to develop an efficient loop-filtering method to reduce the amount of data, for example by reducing the information required when SAO is enabled, such as the indication of whether SAO is on and the SAO parameters. Since neighboring blocks usually have similar characteristics, neighboring blocks can be used to reduce the required SAO information. Fig. 3 illustrates an embodiment of using neighboring blocks to reduce the SAO information. Block C is the current block being processed by SAO. As shown in Fig. 3, blocks B, D, E, and A are neighboring blocks around block C that were processed before block C. The block-based syntax represents the parameters of the currently processed block. A block can be a coding unit (CU), a largest coding unit (LCU), or multiple LCUs. Using a flag to indicate that the current block shares the SAO parameters with a neighboring block can reduce the bit rate required for the current block. If the blocks are processed in raster scan order, the parameters of blocks D, B, E, and A are available when the parameters of block C are coded. When the parameters of neighboring blocks are available, they can be used to code the current block. The amount of data needed to send the flag indicating SAO parameter sharing is usually far less than the amount of data of the SAO parameters themselves; therefore, efficient SAO can be achieved. Although SAO is used here as an example of loop filtering based on parameter sharing among neighboring blocks, the technique can also be applied to other loop filters such as ALF.
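The sharing scheme above can be sketched as follows. This is an illustrative encoder-side helper, assuming the merge candidates are the previously processed neighbors of Fig. 3; the symbol names are not normative syntax:

```python
def code_sao_params(current_params, neighbor_params_list):
    """Return a symbol list coding the current block's SAO parameters.

    If a previously processed neighbor (e.g., blocks A, B, D, E in Fig. 3)
    carries identical SAO parameters, a cheap 'merge' flag plus candidate
    index is emitted instead of the full parameter set.
    """
    for idx, neighbor in enumerate(neighbor_params_list):
        if neighbor is not None and neighbor == current_params:
            return ["merge", idx]            # flag + candidate index
    return ["new"] + list(current_params)    # full SAO parameters
```

The merge path typically costs a few bits, while the "new" path carries the type index and offsets, which is why sharing saves rate.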
In the current HEVC standard, a quadtree-based algorithm can be used to adaptively divide an image region into four sub-regions to achieve better performance. To preserve the coding gain of SAO, the coding algorithm for quadtree-based SAO partitioning needs to be designed efficiently. The SAO parameters (SAOP) include the type index and the offset values of the selected type. Fig. 4A and Fig. 4B illustrate an embodiment of quadtree-based SAO partitioning. Fig. 4A shows a picture partitioned using the quadtree partition method, where each small square corresponds to an LCU. The first partition (depth-0 partition) is represented by split_0(): the value 0 means no split, and the value 1 means a split is applied. In Fig. 4B, the picture consists of 12 LCUs, labeled P1, P2, ..., P12. The depth-0 quadtree partition, split_0(1), splits the picture into four parts: upper-left, upper-right, lower-left, and lower-right. Since the lower-left and lower-right regions contain only one row of blocks, no further quadtree partitioning can be applied to them; therefore, depth-1 quadtree partitioning is considered only for the upper-left and upper-right regions. In the embodiment shown in Fig. 4A, the upper-left region is not split, represented by split_1(0); the upper-right region is further split into four regions, represented by split_1(1). Accordingly, the quadtree partition in Fig. 4A results in 7 partitions, labeled P'0, ..., P'6, where:
the SAO parameters of P1 are identical to those of P2, P5, and P6;
the SAO parameters of P9 are identical to those of P10; and
the SAO parameters of P11 are identical to those of P12.
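The partition result of Figs. 4A/4B can be reproduced as a small sketch. The grouping below simply encodes the split decisions stated in the text (split_0(1), split_1(0) for the upper-left, split_1(1) for the upper-right) as data:

```python
def quadtree_partitions():
    """Return the 7 SAO partitions of the 4x3 LCU picture P1..P12.

    Depth-0 split_0(1) yields upper-left, upper-right, lower-left, and
    lower-right regions; the one-row bottom regions cannot be split
    further. split_1(0) keeps the upper-left whole, while split_1(1)
    splits the upper-right into its four LCUs.
    """
    upper_left = ["P1", "P2", "P5", "P6"]            # split_1(0): one region
    upper_right = [["P3"], ["P4"], ["P7"], ["P8"]]   # split_1(1): four regions
    lower_left = ["P9", "P10"]
    lower_right = ["P11", "P12"]
    return [upper_left] + upper_right + [lower_left, lower_right]
```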
According to the SAO partition information, each LCU can be a new partition or can be merged with other LCUs into one partition. If the current LCU is indicated as merged, multiple merge candidates can be selected. To illustrate the syntax design for allowing information sharing, only two merge candidates are allowed in the quadtree partition shown in Fig. 3. Although only two candidates are used in this embodiment, multiple candidates from neighboring blocks can be selected in other embodiments of the invention. The syntax design is as follows:
[Figure BDA0000412249580000091: syntax design]
According to another embodiment of the invention, the relation between neighboring blocks (LCUs) and the properties of the quadtree partition can be used to reduce the amount of SAO-related information to be transmitted. Furthermore, the boundary conditions of an image region (e.g., a slice) may contain relation information between neighboring blocks, which can be used to reduce the amount of SAO-related information to be transmitted. The relation between neighboring blocks may also introduce redundancy that depends on the neighboring blocks, and this redundancy can likewise be used to reduce the amount of SAO-related information to be transmitted.
Fig. 5 A-Fig. 5 C is the embodiment that depends on the redundancy of adjacent block.As shown in Fig. 5 A and Fig. 5 B, according to the attribute of quaternary tree subregion, if block D and A are in identical subregion, and piece B is in another subregion, and piece A and piece C will be in different subregions so.In other words, according to quaternary tree subregion, the situation shown in Fig. 5 C is not allow to occur.Therefore, the merging candidate in Fig. 5 C is redundancy (redundant), and there is no need as representing that the merging sign of corresponding diagram 5C distributes a code.The pseudo-code embodiment that implements merge algorithm is as follows:
As described in the above embodiment, only two cases are allowed: block C is a new partition, or block C is merged with block B. Therefore, a single bit representing newPartitionFlag is sufficient to identify the two cases. In another embodiment, as shown in Figs. 6A and 6B, if blocks D and B are in the same partition while block A is in another partition, blocks B and C must be in different partitions. In other words, according to the quadtree partition, the case shown in Fig. 6C cannot occur. Therefore, the merge candidate in Fig. 6C is redundant, and there is no need to assign a code to the merge flag corresponding to Fig. 6C. An example of pseudo code implementing the merge algorithm is as follows:
[Figure BDA0000412249580000102: pseudo code]
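The candidate pruning of Figs. 5 and 6 can be sketched as one function. This is an illustrative decoder/encoder helper under the neighbor layout of Fig. 3 (D above-left, B above, A left); `part_x` denotes the hypothetical partition id of neighbor X:

```python
def allowed_merge_candidates(part_a, part_b, part_d):
    """Prune redundant merge candidates for block C.

    By the quadtree property: if D and A share a partition while B
    differs, C cannot be in A's partition (Fig. 5C is impossible);
    if D and B share a partition while A differs, C cannot be in B's
    partition (Fig. 6C is impossible).
    """
    candidates = ["A", "B"]
    if part_d == part_a and part_b != part_d:
        candidates.remove("A")   # Fig. 5C case eliminated
    if part_d == part_b and part_a != part_d:
        candidates.remove("B")   # Fig. 6C case eliminated
    return candidates
```

With one candidate pruned, a single newPartitionFlag bit distinguishes "new partition" from "merge with the remaining candidate", as the text notes.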
Fig. 5 A-5C has illustrated two embodiment of the SAO information that the current block that uses the redundancy that depends on adjacent block further to reduce required transmission is relevant with Fig. 6 A-6C.The redundancy that system utilization depends on adjacent block also has many other conditions.For example, if block A, B and D are in identical subregion, and piece C just can not be in other subregion so.Therefore, piece C must be with piece A, B and D in identical subregion, and also there is no need to transmit the indication of SAO information sharing.The LCU piece on sheet border also can be considered for reducing the relevant SAO information of current block of required transmission.For example, if block A does not exist, and that just only has a direction to merge.If block B does not exist, and equally only has a direction to merge yet.If block A and B do not exist, and just there is no need to transmit mark and come indicator collet C as a new subregion.In order further to reduce the quantity of transfer syntax element (syntax elements), can show only to apply a kind of SAO type when anter with a mark, and without any need for the signaling based on LCU (signaling).When above-mentioned be single subregion, the quantity of transfer syntax element also can reduce.Yet in the above-described embodiments, LCU is the unit as piece, also can use other piece configuration (for example, the size and shape of piece).For example, although using sheet here as an example of image-region, the piece in image-region divides into groups to share common information, certainly also can use other image-region: one group of sheet and piece image.
In addition, the chroma and luma components can share the same SAO information for color video data. SAO information can also be shared between multiple chroma components. For example, multiple chroma components (Cb and Cr) can use the luma partition information, so there is no need to provide partition information for the chroma components. In another embodiment, the chroma components Cb and Cr can share the same SAO parameters; in this case, only one set of SAO parameters needs to be transmitted for Cb and Cr to share. The SAO syntax of the luma component can be used for the chroma components, where the SAO syntax may include quadtree syntax and LCU-based syntax.
The embodiments of exploiting the redundancy depending on neighboring blocks to reduce the SAO-related data to be transmitted, as shown in Fig. 5A-5C and Fig. 6A-6C, can also be applied to the chroma components. The SAO parameters include the SAO type index and the SAO offset values of the selected type. The SAO parameters can be coded before the partition information, so that a SAO parameter set (SAOPS) can be formed. Accordingly, an index can be used to identify the SAO parameters for the current block from the SAOPS, where transmitting the index usually requires less data than transmitting the SAO parameters. When the partition information is coded, the index selecting the SAO parameters is coded together with it. The number of entries in the SAOPS can grow dynamically; for example, after a new set of SAO parameters is signaled, the number of SAO parameters in the SAOPS is increased by one. To represent the number of entries in the SAOPS, the number of bits can be dynamically adjusted to match the data range. For example, a SAOPS containing 5 to 8 entries requires 3 bits. After a new set of SAO parameters is signaled, the number of entries in the SAOPS may increase to 9, at which point 4 bits are needed to represent a SAOPS containing 9 entries.
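The dynamic bit-length rule stated above (5-8 entries need 3 bits, 9 entries need 4) is just the ceiling of log2 of the set size. A minimal sketch:

```python
def saops_index_bits(set_size):
    """Bits needed to index a SAO parameter set with `set_size` entries.

    The bit length is adjusted dynamically as new parameter sets are
    signaled: ceil(log2(set_size)), with at least 1 bit.
    """
    assert set_size >= 1
    return max(1, (set_size - 1).bit_length())
```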
If the SAO processing involves data in other slices, SAO can use a padding technique to avoid fetching data from any other slice, or change the pattern to replace the data from other slices. To reduce the data of the required SAO information, the SAO parameters can be transmitted in predicted form; for example, the difference between the SAO parameters of the current block and those of a neighboring block is transmitted. According to another embodiment of the invention, the SAO parameters for the chroma components are reduced. For example, edge-based offset (EO) classification classifies each pixel of the chroma components into four categories. The number of EO categories for the chroma components can be reduced to two to reduce the transmitted data related to the SAO information of the current block. The number of bands of the band offset (BO) classification of the luma component is normally 16. In another embodiment, the number of BO bands for the chroma components can be reduced to 8.
The embodiment shown in Fig. 3 illustrates that the current block C has four merge candidates (i.e., blocks A, B, D, and E). If some merge candidate blocks are located in the same partition, the number of merge candidates can be reduced. Accordingly, the number of bits used to indicate which merge candidate is selected can be reduced or saved. If the SAO processing involves data in other slices, SAO will avoid fetching data from any other slice and skip the current pixel being processed to avoid the data from other slices. Furthermore, a flag can be used to control whether the SAO processing avoids fetching data from any other slice. The control flag regarding whether SAO processing avoids fetching data from other slices can be incorporated at the sequence level or the picture level. This control flag can also be shared with the non-crossing-slice-boundary flag of the adaptive loop filter or the deblocking filter. To further reduce the transmitted data related to SAO information, the on/off control of chroma SAO can depend on the on/off information of luma SAO. The categories of chroma SAO can be a subset of the luma SAO categories for a specific SAO type.
Syntax design examples according to various embodiments of the invention are described below. Fig. 7 illustrates the sao_used_flag incorporated in the sequence-level data, such as the sequence parameter set (SPS). When sao_used_flag has a value of 0, SAO is disabled for the sequence; when sao_used_flag has a value of 1, SAO is enabled for the sequence. Fig. 8 shows an example syntax for the SAO parameters, where the sao_param() syntax can be incorporated in the adaptation parameter set (APS), the picture parameter set (PPS), or the slice header. Besides the PPS, the APS is another picture-level header, which contains parameters that are likely to change from picture to picture. If sao_flag indicates that SAO is enabled, the syntax includes the partition parameters sao_split_param(0, 0, 0, 0) and the offset parameters sao_offset_param(0, 0, 0, 0) for the luma component. Furthermore, the syntax includes the SAO flag sao_flag_cb for the Cb component and the SAO flag sao_flag_cr for the Cr component. If sao_flag_cb indicates that SAO for the Cb component is enabled, the syntax includes the partition parameters sao_split_param(0, 0, 0, 1) and the offset parameters sao_offset_param(0, 0, 0, 1) for the chroma component Cb. If sao_flag_cr indicates that SAO for the Cr component is enabled, the syntax includes the partition parameters sao_split_param(0, 0, 0, 2) and the offset parameters sao_offset_param(0, 0, 0, 2) for the chroma component Cr.
Fig. 9 shows an example syntax for sao_split_param(rx, ry, Depth, component), which is similar to the conventional sao_split_param() except for the additional "component" parameter, where "component" indicates either the luma component or one of the chroma components. Fig. 10 shows an example syntax for sao_offset_param(rx, ry, Depth, component), which is likewise similar to the conventional sao_offset_param() except for the additional "component" parameter. In sao_offset_param(rx, ry, Depth, component), if the split flag sao_split_flag[component][Depth][ry][rx] indicates that the region is not further split, the syntax includes sao_type_idx[component][Depth][ry][rx]. The semantics of sao_type_idx[component][Depth][ry][rx] are shown in Table 1.
Table 1
As shown in Fig. 11, the sample adaptive offset (SAO) in HM-3.0 uses a quadtree-based syntax, which recursively divides an image region into four sub-regions using split flags. Each leaf region has its own SAO parameters, where the SAO parameters include the SAO type and the offset values applied to the region. In the embodiment shown in Fig. 11, the picture is divided into 7 leaf regions, 1110 to 1170, where band-offset type SAO is applied to leaf regions 1110 and 1150, edge-offset type SAO is applied to leaf regions 1130, 1140, and 1160, and SAO is turned off for leaf regions 1120 and 1170. To improve the coding gain, the syntax design according to an embodiment of the invention uses a picture-level flag to switch between picture-based SAO and block-based SAO, where a block can be an LCU or another block size. Fig. 12A shows an embodiment of picture-based SAO, and Fig. 12B shows an embodiment of block-based SAO, where each region is an LCU and there are 15 LCUs in the picture. In picture-based SAO, the whole picture shares one set of SAO parameters (SAOP). Slice-based SAO can also be used, so that a whole slice or multiple slices share one SAOP. In LCU-based SAO, each LCU has its own SAOP, and the 15 LCUs (LCU1-LCU15) use SAOP1-SAOP15 respectively.
According to another embodiment of the invention, the SAOP of each LCU can be shared by subsequent LCUs. The number of subsequent consecutive LCUs sharing the same SAOP can be indicated by a run signal. In the embodiment shown in Fig. 13, SAOP1, SAOP2, and SAOP3 are identical. In other words, the SAOP of the first LCU is SAOP1, and SAOP1 is used for the two subsequent LCUs. In this case, the syntax "run = 2" is coded to indicate the number of subsequent consecutive LCUs sharing the same SAOP. Since the SAOPs of the next two LCUs need not be transmitted, the bit rate of coding these SAOPs is saved. According to yet another embodiment of the invention, besides using the run signal, an LCU in the next row according to the raster scan order can share the SAOP of the LCU above. If the LCU above is available, a merge-above flag can be used to indicate that the current LCU shares the SAOP of the LCU above. If the merge-above flag is set to 1, the current LCU uses the SAOP of the LCU above. As shown in Fig. 14, SAOP2 is shared by four LCUs, i.e., 1410-1440, where "run = 1" and "no merge-above" indicate that LCUs 1410 and 1420 share SAOP2 without sharing the SAOP of the LCUs above them, and "run = 1" and "merge-above = 1" indicate that LCUs 1430 and 1440 share SAOP2 by sharing the SAOP of the LCUs above them. In addition, SAOP1 and SAOP3 are each shared by two subsequent LCUs, and SAOP4 is shared by four subsequent LCUs. Accordingly, the run signals of SAOP1 and SAOP3 are 2, and the run signal of SAOP4 is 4. Since these LCUs do not share the SAOP of the LCUs above, the merge-above syntax values for SAOP1, SAOP3, and SAOP4 are 0.
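A decoder-side sketch of the run and merge-above mechanism described above. The container format is illustrative (entries are (run, merge_above, saop) triples per row), not the actual bitstream layout:

```python
def decode_lcu_saop(rows):
    """Expand (run, merge_above, saop) entries into per-LCU SAOPs.

    `rows` is a list of LCU rows. For each entry, `run` additional
    consecutive LCUs repeat the entry's SAOP; merge_above = 1 means the
    SAOP is copied from the co-located LCU in the row above instead of
    being read from the entry.
    """
    picture = []
    for r, row in enumerate(rows):
        out = []
        for run, merge_above, saop in row:
            value = picture[r - 1][len(out)] if merge_above else saop
            out.extend([value] * (run + 1))  # entry covers run+1 LCUs
        picture.append(out)
    return picture
```

For example, a second row entry (0, 1, None) after two decoded LCUs copies the SAOP of the third LCU of the row above, as with LCUs 1430-1440 in Fig. 14.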
To reduce the bit rate of the run signals, the run signal of the row above can be used as a predictor for the run signal of the current LCU. Instead of coding the run signal directly, the difference between the two run signals is coded, where the difference is denoted d_run in Fig. 15. When the LCU above is not the first LCU with a run value in its LCU group, the run prediction value can be the run value of the LCU group above minus the number of LCUs in that same group preceding the LCU above. For example, the run value of the first LCU sharing SAOP3 is 2, and the run value of the LCU sharing SAOP1 above that first LCU is also 2, so the d_run value of the LCUs sharing SAOP3 is 0. The run value of the first LCU sharing SAOP4 is 4, and the run value of the LCU sharing SAOP3 above that first LCU is 2; accordingly, the d_run value of the LCUs sharing SAOP4 is 2. If the run predictor is not available, the run can be coded using an unsigned variable-length code (U_VLC). If the predictor exists, the run difference (delta run), d_run, can be coded using a signed variable-length code (S_VLC). The U_VLC and S_VLC can be the k-th order exp-Golomb coding, Golomb-Rice coding, or the binarization process in CABAC coding.
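A minimal sketch of the U_VLC/S_VLC choice using 0th-order exp-Golomb codes (one of the variable-length codes the text names). The function names and the zigzag mapping for the signed variant are the conventional ones, assumed here for illustration:

```python
def ue_vlc(value):
    """0th-order exp-Golomb code for an unsigned value (U_VLC sketch)."""
    assert value >= 0
    code = value + 1
    leading_zeros = code.bit_length() - 1
    return "0" * leading_zeros + format(code, "b")

def se_vlc(value):
    """Signed exp-Golomb (S_VLC sketch) via the usual zigzag mapping:
    positive v maps to 2v-1, non-positive v maps to -2v."""
    mapped = 2 * value - 1 if value > 0 else -2 * value
    return ue_vlc(mapped)

def code_run(run, predictor=None):
    """Code a run directly with U_VLC when no predictor is available;
    otherwise code the difference d_run = run - predictor with S_VLC."""
    if predictor is None:
        return ue_vlc(run)
    return se_vlc(run - predictor)
```

With the Fig. 15 example, a run of 4 predicted from a run of 2 codes d_run = 2.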
According to one embodiment of the invention, a flag can be used to indicate that all the SAOPs of the current LCU row are identical to those of the LCU row above. For example, a RepeatedRow flag for each LCU row is used to indicate that all the SAOPs of the current LCU row are identical to those of the row above. If the RepeatedRow flag equals 1, no further information needs to be coded, and the SAOP of each LCU in the current LCU row is copied from the corresponding LCU in the row above. If the RepeatedRow flag equals 0, the SAOPs of the current LCU row need to be coded.
According to another embodiment of the invention, a flag can be used to signal whether the RepeatedRow flag is enabled. For example, an EnableRepeatedRow flag can be used to indicate whether the RepeatedRow flag is enabled. The EnableRepeatedRow flag can be signaled at the slice or picture level. If EnableRepeatedRow equals 0, the RepeatedRow flag is not coded for each LCU row; if EnableRepeatedRow equals 1, the RepeatedRow flag is coded for each LCU row.
According to yet another embodiment of the invention, the RepeatedRow flag of the first LCU row of a picture or slice can be saved (omitted). In the case where a picture contains only one slice, the RepeatedRow flag of the first LCU row can be saved. In the case where a picture contains multiple slices, if the SAO processing is a slice-independent operation, the RepeatedRow flag of the first LCU row can be saved; otherwise, the RepeatedRow flag needs to be coded. The method of saving the RepeatedRow flag of the first LCU row of a picture or slice can also be applied to the case where the EnableRepeatedRow flag is used.
To reduce the data related to SAOP transmission, according to one embodiment of the invention, a run signal is used to indicate that all the SAOPs of the LCU rows below are identical to those of the LCU row above. For example, if N consecutive LCU rows contain the same SAOPs, the SAOPs of one LCU row among the N identical consecutive rows are signaled, together with a run signal equal to N-1. The maximum and minimum runs of repeated LCU rows in a picture or slice can be derived and signaled at the slice or picture level. Based on the maximum and minimum values, the run number can be coded using a fixed-length code word. The length of the fixed-length code word can be determined according to the maximum and minimum run values and can therefore change adaptively at the slice or picture level.
According to another embodiment of the invention, the run numbers of the first LCU row of a picture or slice are coded. In the method of entropy coding the runs and run differences of the LCU rows in a picture or slice mentioned above, if the SAOP is repeated in consecutive LCUs, a run is coded to indicate the number of LCUs sharing the SAOP. If the run predictor is not available, the run can be coded using an unsigned variable-length code (U_VLC) or a fixed-length code word. If a fixed-length code word is used, the word length can be coded adaptively based on the picture width, the coded run, or the remaining LCUs, or the word length can be fixed based on the picture width or signaled to the decoder. For example, consider an LCU row in a picture containing N LCUs per row, where the LCU being processed by SAO is the k-th LCU of the row, with k = 0, ..., N-1. If a run needs to be coded, the maximum run number is N-1-k, and the word length of the run to be coded is floor(log2(N-1-k) + 1). In another example, the maximum and minimum run numbers in a slice or picture can first be calculated; based on the maximum and minimum values, the word length of the fixed-length code word can be derived and coded.
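The word-length formula above can be sketched directly; the helper name is illustrative:

```python
from math import floor, log2

def run_word_length(n_lcus_in_row, k):
    """Fixed-length code word size for a run coded at the k-th LCU of a
    row with N LCUs: the maximum possible run is N-1-k, so the word
    length is floor(log2(N-1-k) + 1), as stated in the text."""
    max_run = n_lcus_in_row - 1 - k
    assert max_run >= 1
    return floor(log2(max_run) + 1)
```

For a row of 16 LCUs, a run coded at the first LCU needs 4 bits (maximum run 15), while one coded at the ninth LCU needs only 3 bits (maximum run 7).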
According to yet another embodiment of the invention, information on the run numbers and run-difference numbers can be incorporated at the slice level. The run number, the run-difference number, or the number of LCUs (NumSaoRun) can be signaled at the slice level. The number of LCUs coded with the current SAOP can be specified with the NumSaoRun flag. Furthermore, the run number, the run-difference number, or the number of LCUs can be predicted using the number of LCUs in a coded picture. The prediction equation is as follows:
NumSaoRun = sao_num_run_info + NumTBsInPicture
where NumTBsInPicture is the number of LCUs in a picture and sao_num_run_info is the prediction residual value. The syntax element sao_num_run_info can be coded using a signed or unsigned variable-length code. The syntax element sao_num_run_info can also be coded using a signed or unsigned fixed-length code word.
The embodiments of loop filtering according to the invention described above can be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the invention can be a circuit integrated into a video compression chip, or program code integrated into a video compression system, to perform the corresponding processing. An embodiment of the invention can also be program code executed on a digital signal processor (DSP) to perform the corresponding processing. The invention may also involve a number of functions performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the embodiments of the invention. The software code or firmware code may be developed in different programming languages and different formats or styles, and may be compiled for different target platforms. However, different code formats, styles, and languages of software code, as well as other means of configuring code to perform the tasks in accordance with the invention, do not depart from the spirit of the invention and fall within its protection scope.
Although the invention has been described above in terms of preferred embodiments, they are not intended to limit the invention. Those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention. Therefore, the protection scope of the invention shall be defined by the appended claims.

Claims (59)

1. A method of processing reconstructed video using loop filtering, for a video decoder, the method comprising:
deriving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise a luma component and a plurality of chroma components;
receiving a chroma loop-filter indication from the video bitstream if a luma loop-filter indication in the video bitstream indicates that loop-filter processing is applied to the luma component;
determining chroma loop-filter information if the chroma loop-filter indication indicates that the loop-filter processing is applied to the plurality of chroma components; and
applying the loop-filter processing to the plurality of chroma components according to the chroma loop-filter information if the chroma loop-filter indication indicates that the loop-filter processing is applied to the plurality of chroma components.
2. The method of claim 1, wherein the chroma loop-filter indication uses a single chroma loop-filter flag shared by the plurality of chroma components.
3. The method of claim 1, wherein the chroma loop-filter indication uses individual chroma loop-filter flags corresponding to the plurality of chroma components respectively.
4. The method of claim 1, wherein a chroma image region of the reconstructed video is divided into a plurality of chroma blocks, and the chroma loop filtering is applied to the plurality of chroma blocks; the chroma loop-filter information is derived from the video bitstream if a current reconstructed chroma block corresponding to one of the plurality of chroma components is a new partition; the chroma loop-filter information is derived from a target chroma block if the current reconstructed chroma block is not the new partition; and the current reconstructed chroma block is merged with the target chroma block, wherein the target chroma block is selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
5. The method of claim 4, wherein the chroma loop-filter information is determined based on a merge flag in the video bitstream if said one or more neighboring chroma blocks comprise more than one neighboring chroma block; and the chroma loop-filter information is inferred if said one or more neighboring chroma blocks comprise one neighboring chroma block.
6. The method of claim 5, wherein at least one of said one or more candidate chroma blocks is eliminated from merging with the current reconstructed chroma block according to quadtree partition properties and merge information of said one or more candidate chroma blocks.
7. The method of claim 1, wherein an image region of the reconstructed video is divided into a plurality of blocks; the luma loop filtering is applied to a plurality of luma blocks and the chroma loop filtering is applied to a plurality of chroma blocks; and partition information of the plurality of chroma components is derived from partition information of the luma component.
8. The method of claim 7, wherein the plurality of chroma components share the chroma loop-filter information.
9. The method of claim 7, wherein the image region of the reconstructed video is divided using quadtree partitioning; and quadtree-based syntax for the plurality of chroma components is derived from quadtree-based syntax for the luma component.
10. The method of claim 1, wherein an image region of the reconstructed video is divided into a plurality of blocks; the luma loop filtering is applied to a plurality of luma blocks using luma loop-filter information and the chroma loop filtering is applied to a plurality of chroma blocks using chroma loop-filter information; and the luma loop-filter information associated with each luma block is coded using an index pointing to a first set of the luma loop-filter information, or the chroma loop-filter information associated with each chroma block is coded using an index pointing to a second set of the chroma loop-filter information.
11. The method of claim 10, wherein a first set size corresponding to the number of luma loop-filter information entries in the first set is updated when new luma loop-filter information is signaled, or a second set size corresponding to the number of chroma loop-filter information entries in the second set is updated when new chroma loop-filter information is signaled.
12. The method of claim 11, wherein a first bit length representing the first set size is dynamically adjusted to accommodate the first set size, or a second bit length representing the second set size is dynamically adjusted to accommodate the second set size.
13. The method of claim 1, wherein the loop-filter processing applied to the plurality of chroma components replaces external data from one or more other chroma image regions with known data or with data of the current chroma image region, or the loop-filter processing is skipped if the loop-filter processing for the current chroma image region involves the external data.
14. The method of claim 13, wherein a control flag is used to indicate whether the loop-filter processing replaces the external data, or whether the loop-filter processing is skipped if the loop-filter processing for the current chroma image region involves the external data.
15. The method of claim 14, wherein the control flag is a sequence-level flag or a picture-level flag.
16. The method of claim 14, wherein the control flag is shared by a plurality of loop filters.
17. The method of claim 1, wherein an image region of the reconstructed video is divided into a plurality of blocks; the luma loop filtering is applied to a plurality of luma blocks using luma loop-filter information and the chroma loop filtering is applied to a plurality of chroma blocks using chroma loop-filter information; and the luma loop-filter information of a current block is predicted from the luma loop-filter information of one or more other blocks, or the chroma loop-filter information of the current block is predicted from the chroma loop-filter information of one or more other blocks.
18. methods as claimed in claim 17, it is characterized in that, the described brightness loop information of described current block is to predict according to the described brightness loop information of the one or more adjacent blocks corresponding to described current block, or the described colourity loop information of described current block is to predict according to the described colourity loop information of the one or more adjacent blocks corresponding to described current block.
19. the method for claim 1, is characterized in that, described loop filtering is to select from the combination of the skew of sample self adaptation, sample loop filter or the skew of sample self adaptation and sample loop filter.
20. A method of processing reconstructed video using in-loop filtering for a video decoder, wherein a picture area of the reconstructed video is divided into a plurality of blocks and the in-loop filtering is applied to the blocks, the method comprising:
receiving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise reconstructed blocks;
receiving in-loop filtering information from the video bitstream if a current reconstructed block is a new partition;
deriving the in-loop filtering information from a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
applying in-loop filtering to the current reconstructed block using the in-loop filtering information.
21. The method of claim 20, wherein, if the one or more neighboring blocks comprise more than one neighboring block, the in-loop filtering information is derived based on a merge flag in the video bitstream; and, if the one or more neighboring blocks comprise a single neighboring block, the in-loop filtering information is inferred.
22. The method of claim 20, wherein at least one of the one or more candidate blocks is excluded from merging with the current reconstructed block according to quadtree partition attributes and merge information of the one or more candidate blocks.
23. A method of processing reconstructed video using in-loop filtering for a video encoder, the method comprising:
obtaining reconstructed video data comprising a luma component and chroma components;
incorporating a chroma in-loop filtering indication in a video bitstream if a luma in-loop filtering indication indicates that in-loop filtering is applied to the luma component;
incorporating chroma in-loop filtering information in the video bitstream if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components; and
applying the in-loop filtering to the chroma components according to the chroma in-loop filtering information if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components.
24. The method of claim 23, wherein a chroma picture area of the reconstructed video is divided into a plurality of chroma blocks and the chroma in-loop filtering is applied to the chroma blocks; the chroma in-loop filtering information is incorporated in the video bitstream if a current reconstructed chroma block of one of the chroma components is a new partition; the chroma in-loop filtering information is derived from a target chroma block if the current reconstructed chroma block is not a new partition; and the current reconstructed chroma block is merged with the target chroma block, the target chroma block being selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
25. The method of claim 23, wherein a picture area of the reconstructed video is divided into a plurality of blocks; the luma in-loop filtering is applied to a plurality of luma blocks and the chroma in-loop filtering is applied to a plurality of chroma blocks; and partition information for the chroma components is derived from partition information for the luma component.
26. The method of claim 23, wherein a picture area of the reconstructed video is divided into a plurality of blocks; the luma in-loop filtering is applied to a plurality of luma blocks using luma in-loop filtering information, and the chroma in-loop filtering is applied to a plurality of chroma blocks using chroma in-loop filtering information; and the luma in-loop filtering information associated with each luma block is coded using an index pointing to a first set of luma in-loop filtering information, or the chroma in-loop filtering information associated with each chroma block is coded using an index pointing to a second set of chroma in-loop filtering information.
27. The method of claim 23, wherein a picture area of the reconstructed video is divided into a plurality of blocks; the luma in-loop filtering is applied to a plurality of luma blocks using luma in-loop filtering information, and the chroma in-loop filtering is applied to a plurality of chroma blocks using chroma in-loop filtering information; and the luma in-loop filtering information of a current block is predicted from the luma in-loop filtering information of one or more other blocks, or the chroma in-loop filtering information of the current block is predicted from the chroma in-loop filtering information of one or more other blocks.
28. The method of claim 23, wherein the in-loop filtering is selected from sample adaptive offset, adaptive loop filter, or a combination of sample adaptive offset and adaptive loop filter.
29. A method of processing reconstructed video using in-loop filtering for a video encoder, wherein a picture area of the reconstructed video is divided into a plurality of blocks and the in-loop filtering is applied to the blocks, the method comprising:
obtaining reconstructed video data;
incorporating in-loop filtering information in a video bitstream if a current reconstructed block is a new partition;
incorporating the in-loop filtering information in the video bitstream based on a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
applying in-loop filtering to the current reconstructed block using the in-loop filtering information.
30. An apparatus for processing reconstructed video using in-loop filtering for a video decoder, the apparatus comprising:
means for receiving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise a luma component and chroma components;
means for receiving a chroma in-loop filtering indication from the video bitstream if a luma sample adaptive offset indication in the video bitstream indicates that in-loop filtering is applied to the luma component;
means for determining chroma in-loop filtering information if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components; and
means for applying the in-loop filtering to the chroma components according to the chroma in-loop filtering information if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components.
31. The apparatus of claim 30, wherein a chroma picture area of the reconstructed video is divided into a plurality of chroma blocks and the chroma in-loop filtering is applied to the chroma blocks; the chroma in-loop filtering information is derived from the video bitstream if a current reconstructed chroma block of one of the chroma components is a new partition; the chroma in-loop filtering information is derived from a target chroma block if the current reconstructed chroma block is not a new partition; and the current reconstructed chroma block is merged with the target chroma block, the target chroma block being selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
32. The apparatus of claim 30, wherein the in-loop filtering is selected from sample adaptive offset, adaptive loop filter, or a combination of sample adaptive offset and adaptive loop filter.
33. An apparatus for processing reconstructed video using in-loop filtering for a video decoder, wherein a picture area of the reconstructed video is divided into a plurality of blocks and the in-loop filtering is applied to the blocks, the apparatus comprising:
means for receiving reconstructed video data from a video bitstream, wherein the reconstructed video data comprise reconstructed blocks;
means for receiving in-loop filtering information from the video bitstream if a current reconstructed block is a new partition;
means for deriving the in-loop filtering information from a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
means for applying in-loop filtering to the current reconstructed block using the in-loop filtering information.
34. The apparatus of claim 33, wherein, if the one or more neighboring blocks comprise more than one neighboring block, the in-loop filtering information is derived based on a merge flag in the video bitstream; and, if the one or more neighboring blocks comprise a single neighboring block, the in-loop filtering information is inferred.
35. The apparatus of claim 33, wherein at least one of the one or more candidate blocks is excluded from merging with the current reconstructed block according to quadtree partition attributes and merge information of the one or more candidate blocks.
36. An apparatus for processing reconstructed video using in-loop filtering for a video encoder, the apparatus comprising:
means for obtaining reconstructed video data comprising a luma component and chroma components;
means for incorporating a chroma in-loop filtering indication in a video bitstream if a luma in-loop filtering indication indicates that in-loop filtering is applied to the luma component;
means for incorporating chroma in-loop filtering information in the video bitstream if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components; and
means for applying the in-loop filtering to the chroma components according to the chroma in-loop filtering information if the chroma in-loop filtering indication indicates that the in-loop filtering is applied to the chroma components.
37. The apparatus of claim 36, wherein the in-loop filtering is selected from sample adaptive offset, adaptive loop filter, or a combination of sample adaptive offset and adaptive loop filter.
38. An apparatus for processing reconstructed video using in-loop filtering for a video encoder, wherein a picture area of the reconstructed video is divided into a plurality of blocks and the in-loop filtering is applied to the blocks, the apparatus comprising:
means for obtaining reconstructed video data;
means for incorporating in-loop filtering information in a video bitstream if a current reconstructed block is a new partition;
means for incorporating the in-loop filtering information in the video bitstream based on a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and
means for applying in-loop filtering to the current reconstructed block using the in-loop filtering information.
39. A method of processing reconstructed video using in-loop filtering for a video decoder, the method comprising:
receiving reconstructed video data from a video bitstream;
receiving an in-loop filtering flag at a picture level or a slice level;
applying picture-based in-loop filtering or slice-based in-loop filtering, using the same in-loop filtering information, to an entire picture or an entire slice of the reconstructed video data respectively, if the in-loop filtering flag indicates picture-based in-loop filtering or slice-based in-loop filtering; and
applying block-based in-loop filtering to blocks of the reconstructed video data if the in-loop filtering flag indicates block-based in-loop filtering, wherein the picture is divided into a plurality of blocks and each block uses in-loop filtering information associated with that block.
40. The method of claim 39, wherein sharing of the in-loop filtering information associated with a current block by one or more subsequent blocks is indicated by a run signal.
41. The method of claim 40, wherein the run signal is incorporated at the slice level.
42. The method of claim 39, wherein a merge-above flag indicates that a current block shares the in-loop filtering information of an above block.
43. The method of claim 39, wherein sharing of the in-loop filtering information associated with a current block by one or more subsequent blocks is indicated by a run difference signal; wherein the run difference signal is determined based on a difference between a current run and a predicted run; the current run relates to a first number of subsequent blocks of the current block, said subsequent blocks sharing the in-loop filtering information of the current block; and the predicted run relates to a second number of subsequent blocks of an above block, said subsequent blocks of the above block sharing the in-loop filtering information of the above block.
44. The method of claim 43, wherein the run difference signal is set to the current run if the predicted run is unavailable.
45. The method of claim 44, wherein the current run is coded using an unsigned variable-length code if the predicted run is unavailable; and the run difference signal is coded using a signed variable-length code if the predicted run is available.
46. The method of claim 45, wherein the unsigned variable-length code or the signed variable-length code is selected from the group consisting of k-th order Exp-Golomb code, Golomb-Rice code, and CABAC code.
47. The method of claim 39, wherein each block is a largest coding unit.
48. The method of claim 39, wherein, if a repeat-row flag indicates row repetition, all blocks in a current row share the in-loop filtering information of the blocks in an above row; and, if the repeat-row flag indicates no row repetition, the in-loop filtering information is incorporated in the video bitstream.
49. The method of claim 48, wherein an enable-repeat-row flag is incorporated in the video bitstream to indicate whether the repeat-row flag for each row of blocks is incorporated in the video bitstream.
50. The method of claim 48, wherein the repeat-row flag is omitted from the video bitstream when a picture contains a single slice, or when a first block in a picture containing multiple slices uses slice-independent in-loop filtering.
51. The method of claim 39, wherein sharing of the in-loop filtering information associated with blocks of a current row by blocks of one or more subsequent rows is indicated by a row-run signal related to the number of the one or more subsequent rows.
52. The method of claim 51, wherein the row-run signal is represented using a fixed-length code; and a bit length of the fixed-length code is determined based on a minimum row run and a maximum row run.
53. A method of processing reconstructed video using in-loop filtering for a video encoder, the method comprising:
obtaining reconstructed video data;
incorporating an in-loop filtering flag at a picture level or a slice level;
applying picture-based in-loop filtering or slice-based in-loop filtering, using the same in-loop filtering information, to an entire picture or an entire slice of the reconstructed video data respectively, if the in-loop filtering flag indicates picture-based in-loop filtering or slice-based in-loop filtering; and
applying block-based in-loop filtering to blocks in the reconstructed video data if the in-loop filtering flag indicates block-based in-loop filtering, wherein a picture in the reconstructed video data is divided into a plurality of blocks and each block uses in-loop filtering information associated with that block.
54. The method of claim 53, further comprising:
incorporating a run signal in a video bitstream to indicate the number of one or more subsequent blocks that share the in-loop filtering information associated with a current block.
55. The method of claim 54, wherein the run signal is incorporated at the slice level.
56. The method of claim 54, further comprising:
incorporating a merge-above flag in the video bitstream to indicate that a current block shares the in-loop filtering information of an above block.
57. The method of claim 53, further comprising:
incorporating a run difference signal to indicate one or more subsequent blocks that share the in-loop filtering information associated with a current block; wherein the run difference signal is determined based on a difference between a current run and a predicted run; the current run relates to a first number of subsequent blocks of the current block, said subsequent blocks sharing the in-loop filtering information of the current block; and the predicted run relates to a second number of subsequent blocks of an above block, said subsequent blocks of the above block sharing the in-loop filtering information of the above block.
58. An apparatus for processing reconstructed video using in-loop filtering for a video decoder, the apparatus comprising:
means for receiving reconstructed video data from a video bitstream;
means for receiving an in-loop filtering flag at a picture level or a slice level;
means for applying picture-based in-loop filtering or slice-based in-loop filtering, using the same in-loop filtering information, to an entire picture or an entire slice of the reconstructed video data respectively, if the in-loop filtering flag indicates picture-based in-loop filtering or slice-based in-loop filtering; and
means for applying block-based in-loop filtering to blocks of the reconstructed video data if the in-loop filtering flag indicates block-based in-loop filtering, wherein the picture is divided into a plurality of blocks and each block uses in-loop filtering information associated with that block.
59. An apparatus for processing reconstructed video using in-loop filtering for a video encoder, the apparatus comprising:
means for obtaining reconstructed video data;
means for incorporating an in-loop filtering flag at a picture level or a slice level;
means for applying picture-based in-loop filtering or slice-based in-loop filtering, using the same in-loop filtering information, to an entire picture or an entire slice of the reconstructed video data respectively, if the in-loop filtering flag indicates picture-based in-loop filtering or slice-based in-loop filtering; and
means for applying block-based in-loop filtering to blocks of the reconstructed video data if the in-loop filtering flag indicates block-based in-loop filtering, wherein a picture in the reconstructed video data is divided into a plurality of blocks and each block uses in-loop filtering information associated with that block.
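The claims above cover signaling of sample adaptive offset (SAO) parameters; the filtering operation itself classifies each reconstructed sample against its neighbors and adds a signaled offset per category. The following is an illustrative sketch only, not the patented implementation: it shows the classic four-category edge-offset classification used in HEVC-style SAO for a single horizontal (0-degree) direction, with 8-bit clipping. The function names and the `offsets` dictionary are hypothetical conveniences for the example.

```python
# Illustrative sketch (not the patented implementation): HEVC-style SAO
# edge-offset classification. Each sample is compared with its two
# neighbors along one direction and assigned an edge category 1..4
# (or 0 for "no offset"); the decoder then adds the offset signaled
# for that category to the reconstructed sample.

def edge_category(left: int, cur: int, right: int) -> int:
    """Classify a sample into SAO edge categories 1..4, or 0 (no offset)."""
    if cur < left and cur < right:
        return 1  # local minimum
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2  # concave corner
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3  # convex corner
    if cur > left and cur > right:
        return 4  # local maximum
    return 0      # monotonic region: sample is left unchanged

def apply_edge_offset(row, offsets):
    """Apply edge offsets to one row of 8-bit reconstructed samples.

    `offsets` maps categories 1..4 to signed offsets. Border samples
    are left untouched, which mirrors the claims' option of skipping
    the filtering when required neighbor (external) data is unavailable.
    """
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = edge_category(row[i - 1], row[i], row[i + 1])
        if cat:
            out[i] = max(0, min(255, row[i] + offsets[cat]))
    return out
```

For example, a local minimum such as the middle sample of `[10, 5, 10]` falls into category 1 and is raised by that category's offset, which is exactly the smoothing effect SAO is designed to achieve on ringing artifacts.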
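Claims 39 to 44 describe run-based sharing of block-level filter parameters: a run of N signaled for a block means the next N blocks reuse that block's parameters, and a run predicted from the above row allows only a run difference to be transmitted. As a minimal sketch under those assumptions (the helper names are hypothetical and do not reflect the patent's actual syntax), the decoding side can be modeled as:

```python
# Illustrative sketch of run-based parameter sharing (claims 39-44).
# Not the patent's bitstream syntax; just the arithmetic it describes.

def expand_runs(params_and_runs, num_blocks):
    """Expand (params, run) pairs into one parameter entry per block.

    `params_and_runs` is a list of (params, run) tuples, where `run` is
    the number of subsequent blocks sharing `params` with the block that
    signaled them.
    """
    out = []
    for params, run in params_and_runs:
        out.extend([params] * (1 + run))  # the signaling block + `run` followers
    assert len(out) == num_blocks, "runs must cover the blocks exactly"
    return out

def decode_run(run_diff, predicted_run):
    """Recover the current run from a signaled run difference.

    Per claim 44: when no predicted run is available, the signaled value
    carries the current run itself; otherwise it is a signed difference
    added to the run predicted from the above row.
    """
    if predicted_run is None:
        return run_diff
    return predicted_run + run_diff
```

A row of five blocks signaled as `[("A", 2), ("B", 1)]` thus expands to three blocks using parameters `A` followed by two blocks using `B`, which is why the signed difference coding of claims 45 and 46 pays off when consecutive rows have similar run structure.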
CN201280022870.8A 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components Active CN103535035B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510473630.5A CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus of processing reconstructed video using sample adaptive offset
CN201610409900.0A CN106028050B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US201161486504P 2011-05-16 2011-05-16
US61/486,504 2011-05-16
US13/158,427 2011-06-12
US13/158,427 US9055305B2 (en) 2011-01-09 2011-06-12 Apparatus and method of sample adaptive offset for video coding
US201161498949P 2011-06-20 2011-06-20
US61/498,949 2011-06-20
US201161503870P 2011-07-01 2011-07-01
US61/503,870 2011-07-01
US13/311,953 2011-12-06
US13/311,953 US20120294353A1 (en) 2011-05-16 2011-12-06 Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components
PCT/CN2012/071147 WO2012155553A1 (en) 2011-05-16 2012-02-15 Apparatus and method of sample adaptive offset for luma and chroma components

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN201510473630.5A Division CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus of processing reconstructed video using sample adaptive offset
CN201610409900.0A Division CN106028050B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components

Publications (2)

Publication Number Publication Date
CN103535035A true CN103535035A (en) 2014-01-22
CN103535035B CN103535035B (en) 2017-03-15

Family

ID=47176199

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201280022870.8A Active CN103535035B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components
CN201610409900.0A Active CN106028050B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components
CN201510473630.5A Active CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus of processing reconstructed video using sample adaptive offset

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201610409900.0A Active CN106028050B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components
CN201510473630.5A Active CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus of processing reconstructed video using sample adaptive offset

Country Status (5)

Country Link
CN (3) CN103535035B (en)
DE (1) DE112012002125T5 (en)
GB (1) GB2500347B (en)
WO (1) WO2012155553A1 (en)
ZA (1) ZA201305528B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103765904A (en) * 2011-06-24 2014-04-30 Lg电子株式会社 Image information encoding and decoding method
WO2017063168A1 (en) * 2015-10-15 2017-04-20 富士通株式会社 Image coding method and apparatus, and image processing device
CN110662065A (en) * 2018-06-29 2020-01-07 财团法人工业技术研究院 Image data decoding method, image data decoding device, image data encoding method, and image data encoding device
WO2020239119A1 (en) * 2019-05-30 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Adaptive loop filtering for chroma components
CN112997504A (en) * 2018-11-09 2021-06-18 北京字节跳动网络技术有限公司 Component-based loop filter
CN114402597A (en) * 2019-07-08 2022-04-26 Lg电子株式会社 Video or image coding using adaptive loop filter
US11490124B2 (en) 2019-04-20 2022-11-01 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma and luma syntax elements in video coding

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102349348B1 (en) 2011-06-14 2022-01-10 엘지전자 주식회사 Method for encoding and decoding image information
ES2807351T3 (en) * 2011-06-27 2021-02-22 Sun Patent Trust Image encoding procedure, image decoding procedure, image encoding device, image decoding device and image encoding / decoding device
JP5907367B2 (en) * 2011-06-28 2016-04-26 ソニー株式会社 Image processing apparatus and method, program, and recording medium
MX337235B (en) * 2011-06-28 2016-02-18 Samsung Electronics Co Ltd Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor.
GB201119206D0 (en) 2011-11-07 2011-12-21 Canon Kk Method and device for providing compensation offsets for a set of reconstructed samples of an image
US9936200B2 (en) * 2013-04-12 2018-04-03 Qualcomm Incorporated Rice parameter update for coefficient level coding in video coding process
US10021419B2 2013-07-12 2018-07-10 Qualcomm Incorporated Rice parameter initialization for coefficient level coding in video coding process
JP6094838B2 (en) * 2015-08-31 2017-03-15 ソニー株式会社 Image processing apparatus and method, program, and recording medium
US20180041778A1 (en) * 2016-08-02 2018-02-08 Qualcomm Incorporated Geometry transformation-based adaptive loop filtering
JP6341304B2 (en) * 2017-02-14 2018-06-13 ソニー株式会社 Image processing apparatus and method, program, and recording medium
US10623738B2 (en) * 2017-04-06 2020-04-14 Futurewei Technologies, Inc. Noise suppression filter
US10531085B2 (en) * 2017-05-09 2020-01-07 Futurewei Technologies, Inc. Coding chroma samples in video compression
US20180359486A1 (en) * 2017-06-07 2018-12-13 Mediatek Inc. Non-local adaptive loop filter processing
EP4192016A1 (en) * 2018-10-23 2023-06-07 HFI Innovation Inc. Method and apparatus for reduction of in-loop filter buffer
JP7256874B2 (en) 2019-03-08 2023-04-12 Canon Inc. Adaptive loop filter
EP3981160A4 (en) 2019-06-27 2023-05-24 HFI Innovation Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
WO2021021590A1 (en) 2019-07-26 2021-02-04 Mediatek Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
WO2021025597A1 (en) * 2019-08-07 2021-02-11 Huawei Technologies Co., Ltd. Method and apparatus of sample adaptive offset in-loop filter with application region size constraint
CN118118669A (en) * 2019-08-29 2024-05-31 LG Electronics Inc. Image coding apparatus and method based on adaptive loop filtering
CN114391255B (en) * 2019-09-11 2024-05-17 夏普株式会社 System and method for reducing reconstruction errors in video coding based on cross-component correlation
CN114731399A (en) * 2019-11-22 2022-07-08 Electronics and Telecommunications Research Institute Adaptive in-loop filtering method and apparatus
US11303914B2 (en) * 2020-01-08 2022-04-12 Tencent America LLC Method and apparatus for video coding
US11800124B2 (en) 2020-07-28 2023-10-24 Beijing Dajia Internet Information Technology Co., Ltd. Chroma coding enhancement in cross-component sample adaptive offset
US11849117B2 (en) 2021-03-14 2023-12-19 Alibaba (China) Co., Ltd. Methods, apparatus, and non-transitory computer readable medium for cross-component sample adaptive offset
CN116433783A (en) * 2021-12-31 2023-07-14 中兴通讯股份有限公司 Method and device for video processing, storage medium and electronic device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517909A (en) * 2006-09-15 2009-08-26 Freescale Semiconductor Inc. Video information processing system with selective chroma deblock filtering

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8149926B2 (en) * 2005-04-11 2012-04-03 Intel Corporation Generating edge masks for a deblocking filter
CN101371571B (en) * 2006-01-12 2013-06-19 LG Electronics Inc. Processing multiview video
EP1944974A1 (en) * 2007-01-09 2008-07-16 Matsushita Electric Industrial Co., Ltd. Position dependent post-filter hints
EP2105025B1 (en) * 2007-01-11 2021-04-07 InterDigital VC Holdings, Inc. Methods and apparatus for using syntax for the coded_block_flag syntax element and the coded_block_pattern syntax element for the cavlc 4:4:4 intra, high 4:4:4 intra, and high 4:4:4 predictive profiles in mpeg-4 avc high level coding
US8938009B2 (en) * 2007-10-12 2015-01-20 Qualcomm Incorporated Layered encoded bitstream structure
CN104954789A (en) * 2009-04-20 2015-09-30 Dolby Laboratories Licensing Corporation Filter selection for video pre-processing in video applications
WO2012142966A1 (en) * 2011-04-21 2012-10-26 Mediatek Inc. Method and apparatus for improved in-loop filtering
US9008170B2 (en) * 2011-05-10 2015-04-14 Qualcomm Incorporated Offset type and coefficients signaling method for sample adaptive offset
DK2725797T3 (en) * 2011-06-23 2019-01-02 Huawei Tech Co Ltd OFFSET DECODER DEVICE, OFFSET ENCODER DEVICE, PICTURE FILTER DEVICE AND DATA STRUCTURE

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517909A (en) * 2006-09-15 2009-08-26 Freescale Semiconductor Inc. Video information processing system with selective chroma deblock filtering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHIH-MING FU et al.: "CE13: Sample Adaptive Offset with LCU-Independent Decoding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 *
YU-WEN HUANG et al.: "A Technical Description of MediaTek's Proposal to the JCT-VC CfP", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 *
YU-WEN HUANG et al.: "In-Loop Adaptive Restoration", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11303893B2 (en) 2011-06-24 2022-04-12 Lg Electronics Inc. Image information encoding and decoding method
US11700369B2 (en) 2011-06-24 2023-07-11 Lg Electronics Inc. Image information encoding and decoding method
US9743083B2 (en) 2011-06-24 2017-08-22 Lg Electronics Inc. Image information encoding and decoding method
US10091505B2 (en) 2011-06-24 2018-10-02 Lg Electronics Inc. Image information encoding and decoding method
US10944968B2 (en) 2011-06-24 2021-03-09 Lg Electronics Inc. Image information encoding and decoding method
US10547837B2 (en) 2011-06-24 2020-01-28 Lg Electronics Inc. Image information encoding and decoding method
CN103765904A (en) * 2011-06-24 2014-04-30 Lg电子株式会社 Image information encoding and decoding method
WO2017063168A1 (en) * 2015-10-15 2017-04-20 富士通株式会社 Image coding method and apparatus, and image processing device
CN110662065A (en) * 2018-06-29 2020-01-07 财团法人工业技术研究院 Image data decoding method, image data decoding device, image data encoding method, and image data encoding device
CN112997504B (en) * 2018-11-09 2023-04-18 北京字节跳动网络技术有限公司 Component-based loop filter
CN112997504A (en) * 2018-11-09 2021-06-18 北京字节跳动网络技术有限公司 Component-based loop filter
US11490124B2 (en) 2019-04-20 2022-11-01 Beijing Bytedance Network Technology Co., Ltd. Signaling of chroma and luma syntax elements in video coding
US11575939B2 (en) 2019-04-20 2023-02-07 Beijing Bytedance Network Technology Co., Ltd. Signaling of syntax elements for joint coding of chrominance residuals
WO2020239119A1 (en) * 2019-05-30 2020-12-03 Beijing Bytedance Network Technology Co., Ltd. Adaptive loop filtering for chroma components
CN113785574B (en) * 2019-05-30 2022-10-11 北京字节跳动网络技术有限公司 Adaptive loop filtering of chrominance components
US11477449B2 (en) 2019-05-30 2022-10-18 Beijing Bytedance Network Technology Co., Ltd. Adaptive loop filtering for chroma components
CN113785574A (en) * 2019-05-30 2021-12-10 北京字节跳动网络技术有限公司 Adaptive loop filtering of chrominance components
CN114402597A (en) * 2019-07-08 2022-04-26 Lg电子株式会社 Video or image coding using adaptive loop filter
CN114402597B (en) * 2019-07-08 2023-10-31 Lg电子株式会社 Video or image coding using adaptive loop filters
US11889071B2 (en) 2019-07-08 2024-01-30 Lg Electronics Inc. Video or image coding applying adaptive loop filter

Also Published As

Publication number Publication date
WO2012155553A1 (en) 2012-11-22
GB201311592D0 (en) 2013-08-14
CN106028050B (en) 2019-04-26
GB2500347B (en) 2018-05-16
DE112012002125T5 (en) 2014-02-20
CN105120270B (en) 2018-09-04
ZA201305528B (en) 2014-10-29
GB2500347A (en) 2013-09-18
CN106028050A (en) 2016-10-12
CN105120270A (en) 2015-12-02
CN103535035B (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN103535035A (en) Apparatus and method of sample adaptive offset for luma and chroma components
US10116967B2 (en) Method and apparatus for coding of sample adaptive offset information
US10405004B2 (en) Apparatus and method of sample adaptive offset for luma and chroma components
CN105120271B (en) Video coding-decoding method and device
AU2013248857B2 (en) Method and apparatus for loop filtering across slice or tile boundaries
US10419764B2 (en) In-loop filtering method and apparatus for same
CN107257458B (en) Method and apparatus for processing video using in-loop filtering
WO2013042884A1 (en) Method for encoding/decoding image and device thereof
CN103891292A (en) Method and apparatus for non-cross-tile loop filtering
CN103733627A (en) Method for encoding and decoding image information
KR20200020986A (en) Intra-prediction method, and encoder and decoder using same
CN110063057B (en) Method and apparatus for sample adaptive offset processing for video coding and decoding
EP3342169B1 (en) Method and apparatus of palette index map coding for screen content coding
CN114009036B (en) Image encoding apparatus and method, image decoding apparatus and method, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160919

Address after: Hsinchu County, Taiwan, China

Applicant after: HFI Innovation Inc.

Address before: Dusing Road, Hsinchu Science Park, Hsinchu City, Taiwan, China

Applicant before: MediaTek Inc.

C14 Grant of patent or utility model
GR01 Patent grant