CN106028050A - Apparatus and method of sample adaptive offset for luma and chroma components - Google Patents


Info

Publication number
CN106028050A
Authority
CN
China
Prior art keywords
sample adaptive offset
block
information
SAO
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610409900.0A
Other languages
Chinese (zh)
Other versions
CN106028050B (en)
Inventor
傅智铭
陈庆晔
蔡家扬
黄毓文
雷少民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HFI Innovation Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/158,427 external-priority patent/US9055305B2/en
Priority claimed from US13/311,953 external-priority patent/US20120294353A1/en
Application filed by MediaTek Inc filed Critical MediaTek Inc
Publication of CN106028050A publication Critical patent/CN106028050A/en
Application granted granted Critical
Publication of CN106028050B publication Critical patent/CN106028050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations involving filtering within a prediction loop
    • H04N19/85 Using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/10 Using adaptive coding
    • H04N19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/134 Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/156 Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/169 Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 The coding unit being an image region, e.g. an object
    • H04N19/176 The coding unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182 The coding unit being a pixel
    • H04N19/186 The coding unit being a colour or a chrominance component
    • H04N19/189 Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/463 Embedding additional information by compressing encoding parameters before transmission
    • H04N19/60 Using transform coding
    • H04N19/61 Using transform coding in combination with predictive coding
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/90 Using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 Tree coding, e.g. quad-tree coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and apparatus for processing reconstructed video using an in-loop filter in a video coding system are disclosed. The method uses a chroma in-loop filter indication to indicate whether the chroma components are processed by the in-loop filter when the luma in-loop filter indication indicates that in-loop filter processing is applied to the luma component. An additional flag may be used to indicate whether the in-loop filter processing is applied to the entire picture using the same in-loop filter information or to each block of the picture using individual in-loop filter information. Various embodiments according to the present invention to increase efficiency are disclosed, wherein various aspects of in-loop filter information are taken into consideration for efficient coding, such as the properties of quadtree-based partitioning, boundary conditions of a block, in-loop filter information sharing between luma and chroma components, indexing into a set of in-loop filter information, and prediction of in-loop filter information.

Description

Method and apparatus of sample adaptive offset for luma and chroma components
Cross reference
This application claims priority to the following U.S. Provisional Applications: Application No. 61/486,504, filed on May 16, 2011, entitled "Sample Adaptive Offset for Luma and Chroma Components"; Application No. 61/498,949, filed on June 20, 2011, entitled "LCU-based Syntax for Sample Adaptive Offset"; and Application No. 61/503,870, filed on July 1, 2011, entitled "LCU-based Syntax for Sample Adaptive Offset". This application also claims priority to the following U.S. patent applications: Application No. 13/158,427, filed on June 12, 2011, entitled "Apparatus and Method of Sample Adaptive Offset for Video Coding"; and Application No. 13/311,953, filed on December 6, 2011, entitled "Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components". The above U.S. provisional applications and patent applications are incorporated herein by reference.
Technical field
The present invention relates to video processing, and in particular to an apparatus and method for adaptive in-loop filtering, including sample adaptive offset (SAO) compensation and adaptive loop filter (ALF) processing.
Background
In a video coding system, the video data is subject to various processing such as prediction, transform, quantization, deblocking, and adaptive loop filtering. Along the processing path of the video coding system, certain characteristics of the processed video data may be altered from the original video data due to the operations applied to the video data. For example, the mean value of the processed video may be shifted. Such intensity shift may cause visual impairment or artifacts, which are especially noticeable when the shift varies from frame to frame. Therefore, the pixel intensity shift has to be carefully compensated or restored to alleviate the artifacts. Some intensity compensation schemes have been used in the field. For example, an intensity offset scheme termed sample adaptive offset (SAO) usually classifies each pixel of the processed video data into one of multiple categories according to a context selection. The conventional SAO scheme is only applied to the luma component, and it is desirable to extend the SAO scheme to process the chroma components as well. The SAO scheme typically requires SAO information to be incorporated in the video bitstream (for example, the partition information used to partition a picture or slice into blocks and the SAO offset values of each block) so that a decoder can operate properly. The SAO information may occupy a noticeable portion of the bitrate of the compressed video, and it is desirable to develop efficient coding of the SAO information. Besides SAO, adaptive loop filtering is another type of in-loop filter that is often applied to the reconstructed video to improve video quality. Similarly, it is desirable to apply adaptive loop filtering to the chroma components to improve video quality. Again, the video bitstream needs to include adaptive loop filter information (such as partition information and filter parameters) so that a decoder can operate properly. Therefore, it is also desirable to develop efficient coding of the video bitstream that includes the adaptive loop filter information.
Summary of the invention
The present invention provides a method and apparatus for processing reconstructed video using in-loop filtering in a video decoder. The method and apparatus according to embodiments of the present invention comprise: deriving reconstructed video data from a video bitstream, wherein the reconstructed video data includes a luma component and chroma components; receiving a chroma in-loop filter indication from the video bitstream if a luma in-loop filter indication in the video bitstream indicates that in-loop filter processing is applied to the luma component; determining chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components; and applying the in-loop filter processing to the chroma components according to the chroma in-loop filter information if the chroma in-loop filter indication indicates that the in-loop filter processing is applied to the chroma components. The chroma components may use a single chroma in-loop filter flag, or each chroma component may use its own in-loop filter flag, to control whether the in-loop filter processing is applied. The entire picture may use the same in-loop filter information. Alternatively, the picture may be partitioned into blocks, and each block may use its own in-loop filter information. When the in-loop filter processing is applied to blocks, the in-loop filter information of the current block can be derived from a neighboring block to improve coding efficiency. Various embodiments to improve efficiency are disclosed, wherein various aspects of the in-loop filter information are taken into consideration for efficient coding, such as the properties of quadtree-based partitioning, boundary conditions of a block, in-loop filter information sharing between the luma and chroma components, indexing into a set of in-loop filter information, and prediction of in-loop filter information.
The present invention also provides a method and apparatus for processing reconstructed video using in-loop filtering in a video decoder, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filtering is applied to the blocks. The method and apparatus comprise: deriving reconstructed video data from a video bitstream, wherein the reconstructed video data includes reconstructed blocks; receiving in-loop filter information from the video bitstream if a current reconstructed block is a new partition; deriving the in-loop filter information from a target block if the current reconstructed block is not a new partition, wherein the current reconstructed block is merged with the target block, and the target block is selected from one or more candidate blocks corresponding to one or more neighboring blocks of the current reconstructed block; and applying the in-loop filter processing to the current reconstructed block using the in-loop filter information. In order to improve coding efficiency, if more than one neighboring block exists, a merge flag in the video bitstream can be used for the current block to indicate sharing of the in-loop filter information with a neighboring block. If only one neighboring block exists, the shared in-loop filter information can be inferred without a merge flag. According to the properties of the quadtree partition and the merge information of the one or more candidate blocks, a candidate block may be excluded from merging with the current reconstructed block to improve coding efficiency.
The present invention further provides a method and apparatus for processing reconstructed video using in-loop filtering in a corresponding video encoder. In addition, a method and apparatus for processing reconstructed video using in-loop filtering in a corresponding video encoder are provided, wherein a picture area of the reconstructed video is partitioned into blocks and the in-loop filtering is applied to the blocks.
Brief Description of the Drawings
Fig. 1 is a system block diagram of a video encoder incorporating a reconstruction loop, where the in-loop filter processing includes a deblocking filter, sample adaptive offset, and an adaptive loop filter.
Fig. 2 is a system block diagram of a video decoder incorporating a reconstruction loop, where the in-loop filter processing includes a deblocking filter, sample adaptive offset, and an adaptive loop filter.
Fig. 3 is an embodiment of the present invention in which the information of neighboring blocks A, D, B, and E is used for SAO coding of the current block C.
Fig. 4A is an embodiment of quadtree-based picture partitioning for SAO processing according to the present invention.
Fig. 4B is an embodiment of LCU-based picture partitioning for SAO processing according to the present invention.
Fig. 5A is an embodiment of an allowed quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
Fig. 5B is another embodiment of an allowed quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
Fig. 5C is an embodiment of a disallowed quadtree partition for block C, where blocks A and D are in the same partition and block B is in a different partition.
Fig. 6A is an embodiment of an allowed quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
Fig. 6B is another embodiment of an allowed quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
Fig. 6C is an embodiment of a disallowed quadtree partition for block C, where blocks B and D are in the same partition and block A is in a different partition.
Fig. 7 is a syntax design incorporating a flag in the sequence parameter set (SPS), where the flag indicates whether SAO is enabled or disabled for the sequence.
Fig. 8 is a syntax design of the SAO parameters sao_param(), where separate SAO information is allowed for the chroma components.
Fig. 9 is a syntax design of the SAO split parameters sao_split_param(), where sao_split_param() includes a "component" parameter, and "component" can be the luma component or one of the chroma components.
Fig. 10 is a syntax design of the SAO offset parameters sao_offset_param(), where sao_offset_param() includes a "component" parameter, and "component" can be the luma component or one of the chroma components.
Fig. 11 is an embodiment of quadtree-based picture partitioning for SAO type determination.
Fig. 12A is an embodiment of picture-based SAO, where the entire picture uses the same SAO parameters.
Fig. 12B is an embodiment of LCU-based SAO, where each LCU uses its own SAO parameters.
Fig. 13 is an example of run-based sharing of SAO information for the first three LCUs, where the run is equal to 2.
Fig. 14 is an embodiment of coding shared SAO information using run signals and merge-above flags.
Fig. 15 is an embodiment of coding shared SAO information using run signals, run prediction, and merge-above flags.
Detailed description of the invention
In the field of High Efficiency Video Coding (HEVC), a technique named Adaptive Offset (AO) has been introduced to compensate for the offset of the reconstructed video, and the adaptive offset is applied within the reconstruction loop. U.S. Patent Application No. 13/158,427, entitled "Apparatus and Method of Sample Adaptive Offset for Video Coding", discloses an offset compensation method and system. The method and system classify each pixel into a category and apply intensity offset compensation or restoration to the processed video data based on the category of each pixel. Besides adaptive offset, Adaptive Loop Filter (ALF) has also been introduced in the field of high efficiency video coding to improve video quality. ALF applies a spatial filter to the reconstructed video within the reconstruction loop. In the present disclosure, both AO and ALF are regarded as types of in-loop filtering.
The exemplary encoder shown in Fig. 1 represents a system using intra/inter prediction. The intra prediction unit 110 is responsible for providing prediction data based on video data of the same picture. For inter prediction, the ME/MC unit 112, i.e., motion estimation (ME) and motion compensation (MC), provides prediction data based on video data of other pictures. The switch 114 selects intra or inter prediction data, and the selected prediction data is supplied to the adder 116 to form prediction errors, also called residues. The prediction errors are then processed by transform (T) 118 followed by quantization (Q) 120. The transformed and quantized residues are coded by the entropy coding unit 122 to form a bitstream corresponding to the compressed video data. The bitstream associated with the transform coefficients is packed with side information, such as motion, mode, and other information associated with the image area. The side information is also entropy coded to reduce the required bandwidth; accordingly, the data associated with the side information is provided to the entropy coding unit 122 as shown in Fig. 1. When an inter prediction mode is used, a reference picture or reference pictures have to be reconstructed at the encoder end. Consequently, the transformed and quantized residues are processed by inverse quantization (IQ) 124 and inverse transform (IT) 126 to recover the residues. At reconstruction (REC) 128, the residues are then added back to the prediction data 136 to reconstruct the video data. The reconstructed video data may be stored in the reference picture buffer 134 and used for prediction of other frames. As shown in Fig. 1, the incoming video data undergoes a series of processing steps in the encoding system. The reconstructed video data from REC 128 may be subject to intensity shift and other noises due to the series of processing steps. Therefore, before the reconstructed data is stored in the reference picture buffer 134, the deblocking filter (DF) 130, sample adaptive offset (SAO) 131, and adaptive loop filter (ALF) 132 are applied to the reconstructed video data to improve video quality. The sample adaptive offset information and the adaptive loop filter information have to be transmitted in the bitstream so that a decoder can properly recover the required information to apply the sample adaptive offset and adaptive loop filtering. Therefore, the sample adaptive offset information from SAO 131 and the adaptive loop filter information from ALF 132 are provided to the entropy coding unit 122 for incorporation into the bitstream. The encoder may need to access the original video data in order to derive the sample adaptive offset information and the adaptive loop filter information; the path from the input to SAO 131 and ALF 132 is not explicitly shown in Fig. 1.
Fig. 2 is a system block diagram of an embodiment of a video decoder including a deblocking filter and adaptive loop filtering. Since the encoder also contains a local decoder for reconstructing the video data, some decoder components are already used in the encoder, except for the entropy decoder 222. Furthermore, only the motion compensation unit 212 is required at the decoder side. The switch 214 selects the intra or inter prediction mode, and the selected prediction data is supplied to reconstruction (REC) 128 to be combined with the recovered residues. Besides performing entropy decoding of the compressed video data, the entropy decoding unit 222 also performs entropy decoding of the side information and provides the side information to the respective blocks. For example, intra mode information is provided to the intra prediction unit 110, inter mode information is provided to the motion compensation unit 212, sample adaptive offset information is provided to SAO 131, adaptive loop filter information is provided to ALF 132, and the residues are provided to IQ 124. The residues are processed by IQ 124, IT 126, and the subsequent reconstruction process to reconstruct the video data. Again, as shown in Fig. 2, the reconstructed video data from REC 128 undergoes a series of processing steps including IQ 124 and IT 126 and is subject to intensity shift. The reconstructed video data is further processed by the deblocking filter 130, SAO 131, and ALF 132.
According to the existing HEVC practice, in-loop filtering is only applied to the luma component of the reconstructed video. It would be beneficial to apply in-loop filtering to the chroma components of the reconstructed video as well. The information associated with in-loop filtering of the chroma components may be sizeable; however, the chroma components usually result in much smaller compressed data than the luma component. Therefore, it is desirable to develop a method and apparatus for efficiently applying in-loop filtering to the chroma components. Accordingly, the present invention discloses an efficient SAO method and apparatus for the chroma components.
In one embodiment of the present invention, when SAO for the luma component is turned on, an indication is signaled to indicate whether in-loop filtering for the chroma components is turned on or off. If SAO for the luma component is not turned on, SAO for the chroma components is not turned on either. Therefore, in this case there is no need to provide an indication for signaling whether in-loop filtering for the chroma components is turned on or off. An example of pseudo code for this embodiment is shown below.
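The patent's own pseudo code is not reproduced in this text. The following C-like sketch illustrates the described signaling, using assumed identifier names (sao_enable_luma, sao_enable_chroma, read_flag, bs):

    /* Sketch: the chroma in-loop filter indication is parsed only when luma SAO is on
     * (all identifier names are assumptions, not the patent's own syntax). */
    if (sao_enable_luma) {
        sao_enable_chroma = read_flag(bs);  /* chroma on/off indication */
    } else {
        sao_enable_chroma = 0;              /* inferred off; nothing is signaled */
    }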
The flag indicating whether SAO for the chroma components is turned on is referred to as a chroma in-loop filter indication, because it may be used for ALF as well as for SAO. SAO is one embodiment of in-loop filter processing; the in-loop filter processing may also be ALF. In another embodiment of the present invention, when SAO for the luma component is turned on, individual indications are signaled to indicate whether in-loop filtering for the respective chroma components (e.g., Cb and Cr) is turned on or off. If SAO for the luma component is not turned on, SAO for the two chroma components is not turned on either. Therefore, in this case there is no need to provide individual indications for signaling whether in-loop filtering for the two chroma components is turned on or off. An example of pseudo code for this embodiment is shown below.
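Again as a hedged sketch with assumed names, the per-component variant signals separate Cb and Cr indications only when luma SAO is on:

    /* Sketch: separate Cb/Cr indications, present only when luma SAO is on. */
    if (sao_enable_luma) {
        sao_enable_cb = read_flag(bs);      /* Cb on/off indication */
        sao_enable_cr = read_flag(bs);      /* Cr on/off indication */
    } else {
        sao_enable_cb = 0;                  /* both inferred off; nothing is signaled */
        sao_enable_cr = 0;
    }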
As described above, it is desirable to develop efficient in-loop filtering methods to reduce the amount of data, for example, the indication of whether SAO is turned on and the SAO parameters required when SAO is turned on. Since neighboring blocks often have similar characteristics, neighboring blocks can be used to reduce the required SAO information. Fig. 3 shows an embodiment of using neighboring blocks to reduce the SAO information. Block C is the current block being processed by SAO. As shown in Fig. 3, blocks B, D, E, and A are neighboring blocks around block C that have been processed before block C. Block-based syntax is used to represent the parameters of the currently processed block. A block can be a coding unit (CU), a largest coding unit (LCU), or multiple LCUs. Using a flag to indicate that the current block shares the SAO parameters with a neighboring block can reduce the bitrate required for the current block. If the processing order of the blocks is the raster scan order, the parameters of blocks D, B, E, and A are available when the parameters of block C are coded. When the parameters of the neighboring blocks are available, they can be used to code the current block. The amount of data needed to transmit a flag indicating shared SAO parameters is usually much smaller than the amount of data of the SAO parameters themselves, so efficient SAO can be achieved. While SAO is used here as an example of in-loop filtering to illustrate parameter sharing based on neighboring blocks, the technique can also be applied to other in-loop filters such as ALF.
In the current HEVC standard, a quadtree-based algorithm can be used to adaptively divide a picture area into four sub-regions to achieve better performance. In order to maintain the coding gain of SAO, the coding algorithm for the quadtree-based SAO partition has to be designed efficiently. The SAO parameters (SAOP) include the SAO type index and the offset values of the selected type. Figs. 4A and 4B show an embodiment of quadtree-based SAO partitioning. Fig. 4A represents a picture partitioned using the quadtree partition method, where each small square corresponds to an LCU. The first partition (depth-0 partition) is indicated by split_0(). A value of 0 means no split, and a value of 1 means a split is applied. In Fig. 4B, the picture consists of twelve LCUs, labeled P1, P2, ..., P12. The depth-0 quadtree partition split_0(1) splits the picture into four regions: upper-left, upper-right, lower-left, and lower-right. Since the lower-left and lower-right regions have only one row of blocks, no further quadtree partition is applied to them. Therefore, the depth-1 quadtree partition is only considered for the upper-left and upper-right regions. In the embodiment shown in Fig. 4A, the upper-left region is not split, indicated by split_1(0), and the upper-right region is further split into four regions, indicated by split_1(1). Accordingly, the quadtree partition in Fig. 4A results in seven partitions, labeled P'0, ..., P'6, where:
the SAO parameters of P1 are the same as those of P2, P5, and P6;
the SAO parameters of P9 are the same as those of P10; and
the SAO parameters of P11 are the same as those of P12.
According to the SAO partition information, each LCU can be a new partition or can be merged with other LCUs into one partition. If the current LCU is indicated to be merged, several merge candidates can be selected. To illustrate the syntax design that allows information sharing, only two merge candidates are allowed for the quadtree partition shown in Fig. 3. Although only two candidates are used in this embodiment, more candidates from the neighboring blocks may be selected in other embodiments of the present invention. An example of the syntax design is shown below.
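The syntax table referred to above is not reproduced in this text. A C-like parsing sketch of a two-candidate merge scheme of the kind described is shown below, with assumed element and helper names (sao_merge_flag, sao_merge_left_flag, read_flag, parse_saop):

    /* Sketch: LCU-level SAO merge with two candidates (left and above). */
    sao_merge_flag = read_flag(bs);              /* 1: share SAOP with a neighboring LCU */
    if (sao_merge_flag) {
        sao_merge_left_flag = read_flag(bs);     /* choose between the two candidates */
        saop[cur] = sao_merge_left_flag ? saop[left(cur)] : saop[above(cur)];
    } else {
        saop[cur] = parse_saop(bs);              /* current LCU starts a new partition */
    }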
According to another embodiment of the present invention, the relation between neighboring blocks (LCUs) and the properties of the quadtree partition can be used to reduce the amount of SAO-related information that needs to be transmitted. Furthermore, the boundary conditions of a picture area (such as a slice) may imply relations between neighboring blocks and can be used to reduce the amount of SAO-related information that needs to be transmitted. Relations between neighboring blocks may also introduce redundancy that depends on the neighboring blocks, and such relations can be used to reduce the amount of SAO-related information that needs to be transmitted.
Figs. 5A-5C show examples of redundancy that depends on neighboring blocks. As shown in Figs. 5A and 5B, according to the properties of the quadtree partition, if blocks D and A are in the same partition and block B is in another partition, then blocks A and C must be in different partitions. In other words, the case shown in Fig. 5C is not allowed to occur according to the quadtree partition. Therefore, the merge candidate of Fig. 5C is redundant, and there is no need to assign a code to a merge flag representing the case of Fig. 5C. An example of pseudo code implementing the merge algorithm is shown below.
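A minimal C-like sketch of this case, in which candidate A is excluded and a single newPartitionFlag distinguishes the two remaining possibilities (newPartitionFlag follows the text; the other names are assumptions):

    /* Blocks D and A in the same partition, B in a different one:
     * merging C with A is impossible, so only one bit is needed. */
    newPartitionFlag = read_flag(bs);
    if (newPartitionFlag) {
        saop[C] = parse_saop(bs);   /* block C is a new partition */
    } else {
        saop[C] = saop[B];          /* block C merges with block B */
    }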
As described in the above embodiment, there are only two allowed cases, i.e., block C is a new partition or block C is merged with block B. Therefore, a single bit for newPartitionFlag is sufficient to identify the two cases. In another embodiment, as shown in Figs. 6A and 6B, when blocks D and B are in the same partition and block A is in another partition, blocks B and C must be in different partitions. In other words, the case shown in Fig. 6C is not allowed to occur according to the quadtree partition. Therefore, the merge candidate of Fig. 6C is redundant, and there is no need to assign a code to a merge flag representing the case of Fig. 6C. An example of pseudo code implementing the merge algorithm is shown below.
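The mirrored sketch for this case, where candidate B is excluded (same assumed names as above):

    /* Blocks D and B in the same partition, A in a different one:
     * merging C with B is impossible, so only one bit is needed. */
    newPartitionFlag = read_flag(bs);
    if (newPartitionFlag) {
        saop[C] = parse_saop(bs);   /* block C is a new partition */
    } else {
        saop[C] = saop[A];          /* block C merges with block A */
    }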
Figs. 5A-5C and Figs. 6A-6C illustrate two examples of using the redundancy that depends on neighboring blocks to further reduce the SAO information of the current block that needs to be transmitted. There are many other conditions under which a system can exploit such redundancy. For example, if blocks A, B, and D are in the same partition, block C cannot be in a different partition by itself. Therefore, block C must be in the same partition as blocks A, B, and D, and there is no need to transmit an indication of SAO information sharing. LCU blocks at a slice boundary can also be taken into account to reduce the SAO information of the current block that needs to be transmitted. For example, if block A does not exist, there is only one direction to merge. If block B does not exist, there is also only one direction to merge. If neither block A nor block B exists, there is no need to transmit a flag indicating that block C is a new partition. To further reduce the number of transmitted syntax elements, a flag can be used to indicate that the current slice applies only one SAO type without any LCU-based signaling. The number of transmitted syntax elements can also be reduced when the slice is a single partition. While the LCU is used as the unit of a block in the above embodiments, other block configurations (such as other block sizes and shapes) can also be used. While the slice is used here as an example of a picture area over which blocks are grouped to share common information, other picture areas, such as a group of slices or a whole picture, may also be used.
Furthermore, the chroma and luma components may share the same SAO information for color video data. The SAO information may also be shared between the chroma components. For example, the chroma components (Cb and Cr) may use the luma partition information, so that there is no need to provide partition information for the chroma components. In another embodiment, the chroma components Cb and Cr may share the same SAO parameters; therefore, only one set of SAO parameters needs to be transmitted for Cb and Cr to share. The SAO syntax of the luma component may be used for the chroma components, where the SAO syntax may include the quadtree syntax and the LCU-based syntax.
The embodiments shown in Figs. 5A-5C and Figs. 6A-6C, which exploit the redundancy depending on neighboring blocks to reduce the data associated with the SAO information to be transmitted, can also be applied to the chroma components. The SAO parameters include the SAO type index and the SAO offset values of the selected type. The SAO parameters can be coded before the partition information, so that an SAO parameter set (SAOPS) can be formed. Accordingly, an index can be used to identify the SAO parameters of the current block within the SAO parameter set, where the data needed to transmit the index is usually smaller than the data needed to transmit the SAO parameters. When the partition information is coded, the index used to select the SAO parameters can be coded together with it. The number of entries in the SAO parameter set can grow dynamically. For example, after a new set of SAO parameters is signaled, the number of entries in the SAO parameter set increases by one. To represent the index into the SAO parameter set, the number of bits can be dynamically adjusted to match the data range. For example, three bits are needed to index an SAO parameter set containing five to eight entries. After a new set of SAO parameters is signaled, the number of entries may grow to nine, and four bits are then needed to index the set.
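A small sketch of the adaptive index width described above, assuming the index is coded with the smallest number of bits that covers the current set size (the function name is an assumption):

    /* Sketch: number of bits used to index an SAO parameter set with numSaops entries. */
    int saops_index_bits(int numSaops)
    {
        int bits = 0;
        while ((1 << bits) < numSaops)  /* smallest width with 2^bits >= numSaops */
            bits++;
        return bits;                    /* e.g. 5..8 entries -> 3 bits, 9 entries -> 4 bits */
    }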
If the SAO processing would involve data from other slices, the SAO processing can use padding techniques to avoid fetching data from any other slice, or it can change the classification pattern to replace the data from other slices. In order to reduce the data required for the SAO information, the SAO parameters can be transmitted in a predicted form, for example, as the difference between the SAO parameters of the current block and the SAO parameters of a neighboring block. Another embodiment of the present invention reduces the SAO parameters for the chroma components. For example, edge-based offset (EO) classification classifies each pixel into one of four categories. The number of EO categories for the chroma components can be reduced to two in order to reduce the data associated with the SAO information of the current block to be transmitted. The number of bands for band offset (BO) classification of the luma component is typically 16. In another embodiment, the number of bands for BO classification can be reduced to 8 for the chroma components.
The embodiment shown in Fig. 3 illustrates the case of four merge candidates for the current block C (i.e., blocks A, B, D, and E). If some merge candidate blocks are located in the same partition, the number of merge candidates can be reduced. Accordingly, the number of bits used to indicate which merge candidate is selected can be reduced or saved. If the SAO processing would involve data from other slices, SAO will avoid fetching data from any other slice and skip the currently processed pixels to avoid using data from other slices. Furthermore, a flag can be used to control whether the SAO processing avoids fetching data from any other slice. The control flag regarding whether the SAO processing avoids fetching data from other slices can be incorporated at the sequence level or the picture level. The control flag regarding whether the SAO processing avoids fetching data from other slices can also be shared with the non-crossing-slice-boundary flag of adaptive loop filtering or deblocking filtering. To further reduce the data associated with the SAO information to be transmitted, the on/off control of chroma SAO can depend on the on/off information of luma SAO. The SAO categories of the chroma components can be a subset of the luma SAO categories for a specific SAO type.
Examples of syntax design according to various embodiments of the present invention are described as follows. Fig. 7 shows a sao_used_flag incorporated in the sequence-level data, such as the sequence parameter set (SPS). When sao_used_flag has a value of 0, SAO is disabled for the sequence. When sao_used_flag has a value of 1, SAO is enabled for the sequence. Fig. 8 shows a syntax example of the SAO parameters, where the sao_param() syntax can be incorporated in the adaptation parameter set (APS), the picture parameter set (PPS), or the slice header. The APS is another picture-level header besides the PPS and includes parameters that are likely to change from picture to picture. If sao_flag indicates that SAO is enabled, the syntax includes the split parameters sao_split_param(0, 0, 0, 0) and the offset parameters sao_offset_param(0, 0, 0, 0) for the luma component. In addition, the syntax includes the SAO flag sao_flag_cb for the Cb component and the SAO flag sao_flag_cr for the Cr component. If sao_flag_cb indicates that SAO is enabled for the Cb component, the syntax includes the split parameters sao_split_param(0, 0, 0, 1) and the offset parameters sao_offset_param(0, 0, 0, 1) for the chroma component Cb. If sao_flag_cr indicates that SAO is enabled for the Cr component, the syntax includes the split parameters sao_split_param(0, 0, 0, 2) and the offset parameters sao_offset_param(0, 0, 0, 2) for the chroma component Cr. Fig. 9 shows a syntax example of sao_split_param(rx, ry, Depth, component), which is similar to the conventional sao_split_param() except for the additional "component" parameter, where "component" indicates the luma component or one of the chroma components. Fig. 10 shows a syntax example of sao_offset_param(rx, ry, Depth, component), which is similar to the conventional sao_offset_param() except for the additional "component" parameter. In sao_offset_param(rx, ry, Depth, component), if the split flag sao_split_flag[component][Depth][ry][rx] indicates that the region is not further split, the syntax includes sao_type_idx[component][Depth][ry][rx]. The meaning of sao_type_idx[component][Depth][ry][rx] is shown in Table 1.
Table 1
As shown in Fig. 11, the sample adaptive offset (SAO) adopted in HM-3.0 uses a quadtree-based syntax, which recursively divides a picture area into four sub-regions using split flags. Each leaf region has its own SAO parameters, where the SAO parameters include the SAO type and the offset values applied to the region. In the embodiment disclosed in Fig. 11, the picture is divided into seven leaf regions, 1110 to 1170, where band-offset (BO) type SAO is applied to leaf regions 1110 and 1150, edge-offset (EO) type SAO is applied to leaf regions 1130, 1140, and 1160, and SAO is turned off for leaf regions 1120 and 1170. In order to improve the coding gain, a syntax design according to an embodiment of the present invention uses a picture-level flag to switch between picture-based SAO and block-based SAO, where a block can be an LCU or another block size. Fig. 12A shows an embodiment of picture-based SAO and Fig. 12B shows an embodiment of block-based SAO, where each region is an LCU and there are fifteen LCUs in the picture. In picture-based SAO, the whole picture shares one set of SAO parameters (SAOP). Slice-based SAO can also be used, so that a whole slice or multiple slices share one SAOP. In LCU-based SAO, each LCU has its own SAOP, and the fifteen LCUs (LCU1-LCU15) use SAOP1-SAOP15, respectively.
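A C-like sketch of the picture-level switch between picture-based and LCU-based SAO described above (flag and helper names are assumptions):

    /* Sketch: one picture-level flag selects shared or per-LCU SAO parameters. */
    sao_lcu_based_flag = read_flag(bs);
    if (!sao_lcu_based_flag) {
        shared = parse_saop(bs);            /* one SAOP for the whole picture (or slice) */
        for (i = 0; i < numLcus; i++)
            saop[i] = shared;
    } else {
        for (i = 0; i < numLcus; i++)
            saop[i] = parse_saop(bs);       /* each LCU carries its own SAOP */
    }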
In another embodiment according to the present invention, the SAOP of an LCU can be shared by the following LCUs. The number of consecutive subsequent LCUs sharing the same SAOP can be indicated by a run signal. In the embodiment shown in Fig. 13, SAOP1, SAOP2, and SAOP3 are identical. In other words, the SAOP of the first LCU is SAOP1, and SAOP1 is also used for the following two LCUs. In this case, a syntax element "run = 2" is coded to signal the number of subsequent consecutive LCUs sharing the same SAOP. Since the SAOPs of the following two LCUs do not need to be transmitted, the bitrate for coding these SAOPs can be saved. In another embodiment according to the present invention, besides using the run signal, an LCU in the next row in raster scan order can share the SAOP of the current LCU. If the above LCU is available, a merge-above flag can be used to indicate that the current LCU shares the SAOP of the above LCU. If the merge-above flag is set to 1, the current LCU uses the SAOP of the above LCU. As shown in Fig. 14, SAOP2 is shared by four LCUs, i.e., 1410-1440, where "run = 1" and "no merge-above" indicate that LCUs 1410 and 1420 share SAOP2 and do not share the SAOP of the LCUs above them. Furthermore, "run = 1" and "merge-above = 1" indicate that LCUs 1430 and 1440 share SAOP2 and also share the SAOP of the LCUs above them. In addition, SAOP1 and SAOP3 are each shared by two subsequent LCUs, and SAOP4 is shared by four subsequent LCUs. Accordingly, the run signals of SAOP1 and SAOP3 are 2, and the run signal of SAOP4 is 4. Since these LCUs do not share the SAOP of the above LCUs, the value of the merge-above syntax for SAOP1, SAOP3, and SAOP4 is 0.
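A C-like sketch of the run and merge-above signaling described above (element names follow the text where possible; the helpers are assumptions):

    /* Sketch: assign SAOPs to LCUs in raster order using run and merge-above. */
    i = 0;
    while (i < numLcus) {
        merge_above_flag = above_available(i) ? read_flag(bs) : 0;
        run = read_run(bs);                        /* number of following LCUs in this group */
        if (merge_above_flag) {
            for (k = 0; k <= run; k++)
                saop[i + k] = saop[above(i + k)];  /* copy from the LCU directly above */
        } else {
            shared = parse_saop(bs);               /* one new SAOP for the whole group */
            for (k = 0; k <= run; k++)
                saop[i + k] = shared;
        }
        i += run + 1;
    }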
In order to reduce the bitrate of the run signals, the run signal of the above LCU row can serve as a predictor for the run signal of the current LCU. Instead of coding the run signal directly, the difference between the two run signals is coded, where the difference is denoted as d_run in Fig. 15. When the above LCU is not the first LCU of an LCU group with a run value, the run prediction value can be the run value of the above LCU group minus the number of LCUs of the same group that precede the above LCU. The run value of the first LCU sharing SAOP3 is 2, and the run value of the above LCU sharing SAOP1 is also 2, so the d_run value of the LCUs sharing SAOP3 is 0. The run value of the first LCU sharing SAOP4 is 4, and the run value of the above LCU sharing SAOP3 is 2; accordingly, the d_run value of the LCUs sharing SAOP4 is 2. If the run predictor is not available, the run can be coded using an unsigned variable-length code (U_VLC). If the predictor exists, the run difference (delta run), i.e., d_run, can be coded using a signed variable-length code (S_VLC). The U_VLC and S_VLC can be k-th order Exp-Golomb coding, Golomb-Rice coding, or the binarization process of CABAC coding.
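A sketch of the run prediction described above (d_run follows the text; the remaining names are assumptions, and the VLC routines stand for any of the binarizations mentioned):

    /* Sketch: code the run of the current LCU group, predicted from the above LCU row. */
    if (!run_predictor_available(i)) {
        write_uvlc(bs, run);                      /* no predictor: unsigned VLC */
    } else {
        pred = run_of_above_group(i)              /* run of the above group ... */
             - lcus_before_in_above_group(i);     /* ... minus its LCUs before the above LCU */
        d_run = run - pred;                       /* e.g. SAOP3: 2 - 2 = 0, SAOP4: 4 - 2 = 2 */
        write_svlc(bs, d_run);                    /* signed VLC for the difference */
    }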
According to one embodiment of the present invention, a flag can be used to indicate that all SAOPs of the current LCU row are the same as those of the above LCU row. For example, a RepeatedRow flag for each LCU row can be used to indicate that all SAOPs of the current LCU row are the same as those of the above LCU row. If the RepeatedRow flag is equal to 1, no further information needs to be coded: the SAOP of each LCU in the current LCU row is copied from the corresponding LCU of the above LCU row. If the RepeatedRow flag is equal to 0, the SAOPs of the current LCU row have to be coded.
In another embodiment according to the present invention, a flag can be used to indicate whether the RepeatedRow flag is used. For example, an EnableRepeatedRow flag can be used to indicate whether the RepeatedRow flag is used, and it can be signaled at the slice or picture level. If EnableRepeatedRow is equal to 0, the RepeatedRow flag is not coded for each LCU row. If EnableRepeatedRow is equal to 1, the RepeatedRow flag is coded for each LCU row.
In another embodiment according to the present invention, the RepeatedRow flag of the first LCU row of a picture or a slice can be saved, i.e., not transmitted. In the case that a picture has only one slice, the RepeatedRow flag of the first LCU row can be saved. In the case that a picture has multiple slices, if the SAO processing is a slice-independent operation, the RepeatedRow flag of the first LCU row can be saved; otherwise, the RepeatedRow flag has to be coded. The method of saving the RepeatedRow flag of the first LCU row of a picture or a slice can also be applied to the case where the EnableRepeatedRow flag is used.
To reduce the data associated with the SAOPs to be transmitted, according to one embodiment of the present invention, a run signal is used to indicate that all SAOPs of subsequent LCU rows are the same as those of the above LCU row. For example, if N consecutive LCU rows have identical SAOPs, the SAOPs of the first of the N repeated LCU rows are transmitted together with a run signal equal to N-1. The maximum and minimum runs of repeated LCU rows in a picture or a slice can be derived and transmitted at the slice or picture level. Based on the maximum and minimum values, the run number can be coded with a fixed-length code word. The length of the fixed-length code word can be determined from the maximum and minimum run values and can therefore be adaptively changed at the slice or picture level.
In another embodiment according to the present invention, the run numbers of the first LCU row of a picture or a slice are coded. In the above-mentioned method of entropy coding the runs and run differences of the LCU rows of a picture or a slice, if consecutive LCUs repeat the same SAOP, a run is coded to indicate the number of LCUs sharing the SAOP. If the run predictor is not available, the run can be coded using an unsigned variable-length code (U_VLC) or a fixed-length code word. If a fixed-length code word is used, the word length can be coded adaptively based on the picture width, the coded runs, or the remaining LCUs, or the word length can be fixed based on the picture width or transmitted to the decoder. For example, assume an LCU row of a picture includes N LCUs and the LCU being processed by SAO is the k-th LCU of the LCU row, where k = 0, ..., N-1. If a run needs to be coded, the maximum possible run is N-1-k, and the word length of the run to be coded is floor(log2(N-1-k)+1). In another example, the maximum and minimum runs in a slice or a picture are first calculated; based on the maximum and minimum values, the word length of the fixed-length code word can be derived and coded.
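A sketch of the fixed-length word-length computation in the example above, i.e. floor(log2(N-1-k)) + 1 bits for a maximum run of N-1-k (the function name is an assumption):

    /* Sketch: bits used to code the run at the k-th of N LCUs in a row. */
    int run_word_length(int N, int k)
    {
        int maxRun = N - 1 - k;         /* largest run still possible in this row */
        int bits = 0;
        while ((1 << bits) <= maxRun)   /* smallest 'bits' with 2^bits > maxRun */
            bits++;
        return bits;
    }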
In another embodiment according to the invention, information on the number of runs and the number of run differences can be incorporated at the slice level. The number of runs, the number of run differences, or the number of LCUs (NumSaoRun) can be transmitted at the slice level. The number of LCUs covered by the currently coded SAOP can be indicated by the NumSaoRun syntax. Furthermore, the number of runs, the number of run differences, or the number of LCUs can be predicted using the number of LCUs in a coded picture. The prediction equation is as follows:
NumSaoRun = sao_num_run_info + NumTBsInPicture
where NumTBsInPicture is the number of LCUs in one picture and sao_num_run_info is the prediction residual value. The syntax element sao_num_run_info can be coded using a signed or an unsigned variable-length code. Alternatively, sao_num_run_info can be coded using a signed or an unsigned fixed-length codeword.
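As a purely illustrative example of the prediction equation above: a picture with NumTBsInPicture = 120 LCUs in which NumSaoRun = 115 LCUs are covered gives a residual sao_num_run_info = -5, which is why a signed code may be needed. The two helpers below (hypothetical names) just mirror the equation for the decoder and encoder sides.

    // Decoder side: reconstruct NumSaoRun from the coded residual and the known
    // LCU count of the picture, following NumSaoRun = sao_num_run_info + NumTBsInPicture.
    int predictNumSaoRun(int sao_num_run_info, int numTBsInPicture)
    {
        return sao_num_run_info + numTBsInPicture;   // NumSaoRun
    }

    // Encoder side: the transmitted residual, which may be negative and is coded
    // with a signed or unsigned variable-length or fixed-length code.
    int saoNumRunInfoResidual(int numSaoRun, int numTBsInPicture)
    {
        return numSaoRun - numTBsInPicture;          // sao_num_run_info
    }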
The embodiments of loop filter processing according to the present invention described above can be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip, or program code integrated into video compression software, to perform the processing described herein. An embodiment of the present invention may also be program code executed on a digital signal processor (DSP) to perform the processing described herein. The invention may also involve a number of functions performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). By executing machine-readable software code or firmware code that defines the embodiments of the present invention, these processors can be configured to perform particular tasks according to the invention. The software code or firmware code may be developed in different programming languages and in different formats or styles, and may be compiled for different target platforms. However, different code formats, styles, and languages of software code, as well as other means of configuring code to perform the tasks, conform to the spirit of the present invention and fall within its scope of protection.
Although the present invention has been disclosed above with reference to preferred embodiments, these embodiments are not intended to limit the invention. Those skilled in the art to which the invention pertains may make various modifications and variations without departing from the spirit and scope of the invention. Accordingly, the scope of protection of the present invention shall be defined by the appended claims.

Claims (11)

1. A method of processing reconstructed video using loop filtering in a video decoder, wherein an image region of the reconstructed video is partitioned into multiple blocks and the loop filtering is applied to the multiple blocks, characterized in that the method comprises:
obtaining reconstructed video data including reconstructed blocks;
determining whether a current reconstructed block is a new partition based on a merge flag;
in response to the current reconstructed block being the new partition, receiving loop filter information;
in response to the current reconstructed block not being the new partition, deriving the loop filter information from a target block, wherein the current reconstructed block is merged with the target block, the target block is selected from two candidate blocks corresponding to two neighboring blocks of the current reconstructed block, and the target block is selected from the two neighboring blocks according to a second flag; and
applying the loop filtering to the current reconstructed block using the loop filter information.
2. The method of processing reconstructed video using loop filtering as claimed in claim 1, characterized in that:
in response to multiple neighboring blocks existing, the loop filter information is derived based on the merge flag; and
in response to only one neighboring block existing, the loop filter information is inferred.
3. The method of processing reconstructed video using loop filtering as claimed in claim 1, characterized in that quadtree-based partitioning is applied to the image region, wherein the quadtree-based partitioning recursively divides a region into four sub-regions until a smallest unit is reached.
4. The method of processing reconstructed video using loop filtering as claimed in claim 3, characterized in that the smallest unit comprises a largest coding unit.
5. The method of processing reconstructed video using loop filtering as claimed in claim 3, characterized in that, according to quadtree partition attributes and merge information of the two candidate blocks, at least one of the two candidate blocks is excluded from the merging of the current reconstructed block.
6. the method using sample self adaptation migration processing reconstructing video, in video encoder, it is characterised in that institute The method of stating includes:
Obtain and include luminance component and the reconstructed video data of multiple chromatic component;
If the skew instruction of luma samples self adaptation shows that described sample self adaptation migration processing is applied to described luminance component, depending on Frequently bit stream includes the skew instruction of chroma sample self adaptation;
If the skew instruction of described chroma sample self adaptation shows that described sample self adaptation offset applications is divided in the plurality of colourity Amount, described video bit stream includes chroma sample self adaptation offset information;And
If the skew instruction of described chroma sample self adaptation shows that described sample self adaptation migration processing is applied to the plurality of color Degree component, according to described chroma sample self adaptation offset information, applies described sample self adaptation migration processing to the plurality of color Degree component, wherein said multiple chromatic components share described chroma sample self adaptation offset information.
7. The method of processing reconstructed video using sample adaptive offset processing as claimed in claim 6, characterized in that:
a chroma image region of the reconstructed video is partitioned into multiple chroma blocks, and the chroma sample adaptive offset is applied to the multiple chroma blocks;
if a current reconstructed chroma block corresponding to one of the multiple chroma components is a new partition, the chroma sample adaptive offset information is incorporated in the video bitstream;
if the current reconstructed chroma block is not the new partition, the chroma sample adaptive offset information is derived from a target chroma block; and
the current reconstructed chroma block is merged with the target chroma block, and the target chroma block is selected from one or more candidate chroma blocks corresponding to one or more neighboring chroma blocks of the current reconstructed chroma block.
8. The method of processing reconstructed video using sample adaptive offset processing as claimed in claim 6, characterized in that:
an image region of the reconstructed video is partitioned into multiple blocks;
the luma sample adaptive offset is applied to multiple luma blocks, and the chroma sample adaptive offset is applied to multiple chroma blocks; and
partition information of the multiple chroma components is derived from partition information of the luma component.
9. The method of processing reconstructed video using sample adaptive offset processing as claimed in claim 6, characterized in that:
an image region of the reconstructed video is partitioned into multiple blocks;
the luma sample adaptive offset is applied to multiple luma blocks using luma sample adaptive offset information, and the chroma sample adaptive offset is applied to multiple chroma blocks using the chroma sample adaptive offset information; and
the luma sample adaptive offset information associated with each luma block is coded using an index pointing to a first set of the luma sample adaptive offset information, or the chroma sample adaptive offset information associated with each chroma block is coded using an index pointing to a second set of the chroma sample adaptive offset information.
10. The method of processing reconstructed video using sample adaptive offset processing as claimed in claim 6, characterized in that:
an image region of the reconstructed video is partitioned into multiple blocks;
the luma sample adaptive offset is applied to multiple luma blocks using luma sample adaptive offset information, and the chroma sample adaptive offset is applied to multiple chroma blocks using the chroma sample adaptive offset information; and
the luma sample adaptive offset information of a current block is predicted according to the luma sample adaptive offset information of one or more other blocks, or the chroma sample adaptive offset information of the current block is predicted according to the chroma sample adaptive offset information of one or more other blocks.
11. An apparatus for processing reconstructed video using sample adaptive offset processing in a video encoder, characterized in that the apparatus comprises:
means for obtaining reconstructed video data including a luma component and multiple chroma components;
means for incorporating a chroma sample adaptive offset indication in a video bitstream if a luma sample adaptive offset indication indicates that the sample adaptive offset processing is applied to the luma component;
means for incorporating chroma sample adaptive offset information in the video bitstream if the chroma sample adaptive offset indication indicates that the sample adaptive offset is applied to the multiple chroma components; and
means for applying the sample adaptive offset processing to the multiple chroma components according to the chroma sample adaptive offset information if the chroma sample adaptive offset indication indicates that the sample adaptive offset processing is applied to the multiple chroma components.
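Purely for illustration, and without limiting the claims, the following C++ sketch shows a decoder-side mirror of the signalling recited in claims 6 and 11: the chroma sample adaptive offset indication is only present when luma SAO is enabled, and a single set of chroma SAO information is parsed and reused for both chroma components. All type and function names (SaoInfo, Parser, parseSaoIndications) are hypothetical.

    #include <cstdint>

    struct SaoInfo { int typeIdx = 0; int offset[4] = {0, 0, 0, 0}; };  // illustrative fields

    struct Parser {                           // stand-in for the entropy decoder
        virtual uint32_t readFlag() = 0;
        virtual SaoInfo readSaoInfo() = 0;
        virtual ~Parser() = default;
    };

    // Parse the SAO indications and the shared chroma SAO information: the chroma
    // indication is only signalled when the luma indication enables SAO, and both
    // chroma components (Cb and Cr) reuse the same chroma SAO information.
    void parseSaoIndications(Parser& p, bool& lumaSaoOn, bool& chromaSaoOn,
                             SaoInfo& sharedChromaInfo)
    {
        lumaSaoOn = p.readFlag() != 0;        // luma sample adaptive offset indication
        chromaSaoOn = false;
        if (lumaSaoOn) {
            chromaSaoOn = p.readFlag() != 0;  // chroma sample adaptive offset indication
            if (chromaSaoOn)
                sharedChromaInfo = p.readSaoInfo();   // one set, applied to Cb and Cr
        }
    }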
CN201610409900.0A 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components Active CN106028050B (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
US201161486504P 2011-05-16 2011-05-16
US61/486,504 2011-05-16
US13/158,427 2011-06-12
US13/158,427 US9055305B2 (en) 2011-01-09 2011-06-12 Apparatus and method of sample adaptive offset for video coding
US201161498949P 2011-06-20 2011-06-20
US61/498,949 2011-06-20
US201161503870P 2011-07-01 2011-07-01
US61/503,870 2011-07-01
US13/311,953 2011-12-06
US13/311,953 US20120294353A1 (en) 2011-05-16 2011-12-06 Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components
CN201280022870.8A CN103535035B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201280022870.8A Division CN103535035B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components

Publications (2)

Publication Number Publication Date
CN106028050A true CN106028050A (en) 2016-10-12
CN106028050B CN106028050B (en) 2019-04-26

Family

ID=47176199

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201280022870.8A Active CN103535035B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components
CN201510473630.5A Active CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus for processing reconstructed video using sample adaptive offset
CN201610409900.0A Active CN106028050B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201280022870.8A Active CN103535035B (en) 2011-05-16 2012-02-15 Method and apparatus of sample adaptive offset for luma and chroma components
CN201510473630.5A Active CN105120270B (en) 2011-05-16 2012-02-15 Method and apparatus for processing reconstructed video using sample adaptive offset

Country Status (5)

Country Link
CN (3) CN103535035B (en)
DE (1) DE112012002125T5 (en)
GB (1) GB2500347B (en)
WO (1) WO2012155553A1 (en)
ZA (1) ZA201305528B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018224006A1 (en) * 2017-06-07 2018-12-13 Mediatek Inc. Improved non-local adaptive loop filter processing
CN110199524A (en) * 2017-04-06 2019-09-03 华为技术有限公司 Noise inhibiting wave filter
WO2020083108A1 (en) * 2018-10-23 2020-04-30 Mediatek Inc. Method and apparatus for reduction of in-loop filter buffer
WO2020259538A1 (en) * 2019-06-27 2020-12-30 Mediatek Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
CN114586351A (en) * 2019-08-29 2022-06-03 Lg 电子株式会社 Image compiling apparatus and method based on adaptive loop filtering
CN114586350A (en) * 2019-08-29 2022-06-03 Lg 电子株式会社 Image coding and decoding device and method based on cross component adaptive loop filtering
CN114731399A (en) * 2019-11-22 2022-07-08 韩国电子通信研究院 Adaptive in-loop filtering method and apparatus
WO2023124673A1 (en) * 2021-12-31 2023-07-06 中兴通讯股份有限公司 Method and apparatus for video processing, and storage medium and electronic apparatus

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102349348B1 (en) 2011-06-14 2022-01-10 엘지전자 주식회사 Method for encoding and decoding image information
CN107426579B (en) 2011-06-24 2020-03-10 Lg 电子株式会社 Image information encoding and decoding method
EP2725799B1 (en) * 2011-06-27 2020-04-29 Sun Patent Trust Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
JP5907367B2 (en) 2011-06-28 2016-04-26 ソニー株式会社 Image processing apparatus and method, program, and recording medium
MX338669B (en) 2011-06-28 2016-04-27 Samsung Electronics Co Ltd Video encoding method using offset adjustments according to pixel classification and apparatus therefor, video decoding method and apparatus therefor.
GB201119206D0 (en) * 2011-11-07 2011-12-21 Canon Kk Method and device for providing compensation offsets for a set of reconstructed samples of an image
US9936200B2 (en) 2013-04-12 2018-04-03 Qualcomm Incorporated Rice parameter update for coefficient level coding in video coding process
US10021419B2 (en) 2013-07-12 2018-07-10 Qualcomm Incorported Rice parameter initialization for coefficient level coding in video coding process
US20170295369A1 (en) * 2014-10-06 2017-10-12 Sony Corporation Image processing device and method
JP6094838B2 (en) * 2015-08-31 2017-03-15 ソニー株式会社 Image processing apparatus and method, program, and recording medium
WO2017063168A1 (en) * 2015-10-15 2017-04-20 富士通株式会社 Image coding method and apparatus, and image processing device
US11095922B2 (en) * 2016-08-02 2021-08-17 Qualcomm Incorporated Geometry transformation-based adaptive loop filtering
WO2018054286A1 (en) * 2016-09-20 2018-03-29 Mediatek Inc. Methods and apparatuses of sample adaptive offset processing for video coding
JP6341304B2 (en) * 2017-02-14 2018-06-13 ソニー株式会社 Image processing apparatus and method, program, and recording medium
US10531085B2 (en) * 2017-05-09 2020-01-07 Futurewei Technologies, Inc. Coding chroma samples in video compression
CN110662065A (en) * 2018-06-29 2020-01-07 财团法人工业技术研究院 Image data decoding method, image data decoding device, image data encoding method, and image data encoding device
WO2020094153A1 (en) * 2018-11-09 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Component based loop filter
EP3935860A1 (en) 2019-03-08 2022-01-12 Canon Kabushiki Kaisha An adaptive loop filter
CN113711612B (en) 2019-04-20 2023-05-26 北京字节跳动网络技术有限公司 Signaling of chroma syntax elements in video codecs
CN115567707A (en) * 2019-05-30 2023-01-03 抖音视界有限公司 Adaptive loop filtering of chrominance components
CN114402597B (en) * 2019-07-08 2023-10-31 Lg电子株式会社 Video or image coding using adaptive loop filters
CN114710977B (en) 2019-07-26 2023-11-24 寰发股份有限公司 Method for video encoding and decoding and apparatus thereof
EP3991435A4 (en) * 2019-08-07 2022-08-31 Huawei Technologies Co., Ltd. Method and apparatus of sample adaptive offset in-loop filter with application region size constraint
CN114930816A (en) * 2019-08-29 2022-08-19 Lg 电子株式会社 Apparatus and method for compiling image
CA3152954A1 (en) * 2019-08-29 2021-03-04 Lg Electronics Inc. Apparatus and method for image coding based on filtering
EP4029272A4 (en) * 2019-09-11 2023-10-11 SHARP Kabushiki Kaisha Systems and methods for reducing a reconstruction error in video coding based on a cross-component correlation
US11303914B2 (en) 2020-01-08 2022-04-12 Tencent America LLC Method and apparatus for video coding
CN114007067B (en) * 2020-07-28 2023-05-23 北京达佳互联信息技术有限公司 Method, apparatus and medium for decoding video signal
US11849117B2 (en) 2021-03-14 2023-12-19 Alibaba (China) Co., Ltd. Methods, apparatus, and non-transitory computer readable medium for cross-component sample adaptive offset

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517909A (en) * 2006-09-15 2009-08-26 飞思卡尔半导体公司 Video information processing system with selective chroma deblock filtering

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8149926B2 (en) * 2005-04-11 2012-04-03 Intel Corporation Generating edge masks for a deblocking filter
CN101375593A (en) * 2006-01-12 2009-02-25 Lg电子株式会社 Processing multiview video
EP1944974A1 (en) * 2007-01-09 2008-07-16 Matsushita Electric Industrial Co., Ltd. Position dependent post-filter hints
JP5649105B2 (en) * 2007-01-11 2015-01-07 トムソン ライセンシングThomson Licensing Apparatus and method for encoding and apparatus and method for decoding
US8938009B2 (en) * 2007-10-12 2015-01-20 Qualcomm Incorporated Layered encoded bitstream structure
CN102450009B (en) * 2009-04-20 2015-07-22 杜比实验室特许公司 Filter selection for video pre-processing in video applications
EP2700230A4 (en) * 2011-04-21 2014-08-06 Mediatek Inc Method and apparatus for improved in-loop filtering
US9008170B2 (en) * 2011-05-10 2015-04-14 Qualcomm Incorporated Offset type and coefficients signaling method for sample adaptive offset
PL2725797T3 (en) * 2011-06-23 2019-01-31 Huawei Technologies Co., Ltd. Offset decoding device, offset encoding device, image filter device, and data structure

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517909A (en) * 2006-09-15 2009-08-26 飞思卡尔半导体公司 Video information processing system with selective chroma deblock filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHIH-MING FU et al., "CE13: Sample Adaptive Offset with LCU-Independent Decoding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 and ISO/IEC JTC1/SC29/WG11 *
YU-WEN HUANG et al., "A Technical Description of MediaTek's Proposal to the JCT-VC CfP", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 and ISO/IEC JTC1/SC29/WG11 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110199524A (en) * 2017-04-06 2019-09-03 华为技术有限公司 Noise inhibiting wave filter
US10623738B2 (en) 2017-04-06 2020-04-14 Futurewei Technologies, Inc. Noise suppression filter
WO2018224006A1 (en) * 2017-06-07 2018-12-13 Mediatek Inc. Improved non-local adaptive loop filter processing
US11743458B2 (en) 2018-10-23 2023-08-29 Hfi Innovation Inc. Method and apparatus for reduction of in-loop filter buffer
WO2020083108A1 (en) * 2018-10-23 2020-04-30 Mediatek Inc. Method and apparatus for reduction of in-loop filter buffer
WO2020259538A1 (en) * 2019-06-27 2020-12-30 Mediatek Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
CN114073094A (en) * 2019-06-27 2022-02-18 联发科技股份有限公司 Cross-element adaptive loop filtering method and device for video coding
CN114073094B (en) * 2019-06-27 2023-05-23 寰发股份有限公司 Video encoding and decoding method and device
TWI747339B (en) * 2019-06-27 2021-11-21 聯發科技股份有限公司 Method and apparatus for video coding
US11930169B2 (en) 2019-06-27 2024-03-12 Hfi Innovation Inc. Method and apparatus of cross-component adaptive loop filtering for video coding
CN114586351A (en) * 2019-08-29 2022-06-03 Lg 电子株式会社 Image compiling apparatus and method based on adaptive loop filtering
CN114586350A (en) * 2019-08-29 2022-06-03 Lg 电子株式会社 Image coding and decoding device and method based on cross component adaptive loop filtering
CN114586351B (en) * 2019-08-29 2024-04-16 Lg电子株式会社 Image coding device and method based on adaptive loop filtering
US12010349B2 (en) 2019-08-29 2024-06-11 Lg Electronics Inc. Adaptive loop filtering-based image coding apparatus and method
CN114731399A (en) * 2019-11-22 2022-07-08 韩国电子通信研究院 Adaptive in-loop filtering method and apparatus
US12101476B2 (en) 2019-11-22 2024-09-24 Electronics And Telecommunications Research Institute Adaptive in-loop filtering method and device
WO2023124673A1 (en) * 2021-12-31 2023-07-06 中兴通讯股份有限公司 Method and apparatus for video processing, and storage medium and electronic apparatus

Also Published As

Publication number Publication date
CN105120270A (en) 2015-12-02
WO2012155553A1 (en) 2012-11-22
DE112012002125T5 (en) 2014-02-20
GB2500347B (en) 2018-05-16
CN103535035A (en) 2014-01-22
CN103535035B (en) 2017-03-15
CN106028050B (en) 2019-04-26
CN105120270B (en) 2018-09-04
GB2500347A (en) 2013-09-18
GB201311592D0 (en) 2013-08-14
ZA201305528B (en) 2014-10-29

Similar Documents

Publication Publication Date Title
CN103535035B (en) For the method and apparatus that the sample self adaptation of brightness and chromatic component offsets
RU2694012C1 (en) Improved intra prediction encoding using planar views
CN105120271B (en) Video coding-decoding method and device
US10405004B2 (en) Apparatus and method of sample adaptive offset for luma and chroma components
US10116967B2 (en) Method and apparatus for coding of sample adaptive offset information
CN103796029B (en) Video encoder
CN103959794B (en) Method and the equipment thereof of image is encoded and decodes based on constraint migration and loop filtering
US10819981B2 (en) Method and apparatus for entropy coding of source samples with large alphabet
US8654860B2 (en) Apparatus and method for high efficiency video coding using flexible slice structure
CN104170382B (en) Method for coding and decoding quantization matrix and use its equipment
CN103733627B (en) Method for coding and decoding image information
CN106899849B (en) A kind of electronic equipment and coding/decoding method
KR102227411B1 (en) Distance weighted bi-directional intra prediction
EP2557790A1 (en) Image encoding method and image decoding method
CN105230020A (en) For the method for the sampling self adaptation migration processing of Video coding
WO2013155897A1 (en) Method and apparatus for loop filtering across slice or tile boundaries
NO335667B1 (en) Method of video compression
CN103442229A (en) Bit rate estimation method of SAO mode decision applied to encoder of HEVC standard
CN114270818B (en) Image decoding device, image decoding method, and program
CN103442230A (en) Lagrangian multiplier dereferencing method of SAO mode decision applied to encoder of HEVC standard

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20170224

Address after: Hsinchu County, Taiwan, China

Applicant after: HFI Innovation Inc.

Address before: No. 1, Dusing 1st Road, Hsinchu Science Park, Hsinchu City, Taiwan, China

Applicant before: MediaTek Inc.

GR01 Patent grant
GR01 Patent grant