WO2012176964A1 - Method for Encoding and Decoding Image Information - Google Patents
Method for Encoding and Decoding Image Information
- Publication number
- WO2012176964A1 (PCT/KR2011/009720)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sao
- information
- offset
- band
- chroma
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present invention relates to image information compression technology, and more particularly, to a method of applying a sample adaptive offset (SAO) as an in-loop filter.
- High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
- inter prediction and intra prediction may be used.
- a pixel value of a current picture is predicted by referring to information of another picture
- a pixel value is predicted by using a correlation between pixels in the same picture.
- An object of the present invention is to provide a method for adaptively applying SAO to improve the image reconstruction effect.
- Another technical problem of the present invention is to provide a method of applying SAO in consideration of the frequency of occurrence of pixels for each intensity.
- Another technical problem of the present invention is to provide a method for transferring information from an encoder to a decoder for applying SAO only for an effective band.
- Another object of the present invention is to provide a method of applying a plurality of SAO according to the SAO application unit.
- Another object of the present invention is to provide a method and apparatus for applying SAO to chroma pixels in order to enhance a shape restoration effect.
- Another object of the present invention is to provide a method of applying SAO in consideration of characteristics of chroma pixels.
- An embodiment of the present invention is a video information encoding method comprising: generating a reconstructed block; applying a deblocking filter to the reconstructed block; applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied; and transmitting information about the SAO application.
- the SAO may be adaptively applied according to the SAO application area to which the SAO is applied.
- the band offset may be applied by dividing the intensity range into finer bands over the intensity sections in which pixels occur with high frequency.
- a band offset may be applied to the intensity sections having a high frequency of occurrence, and in the information transmission step, information about the sections to which the band offset is applied may be transmitted.
- an offset may be applied only to bands with a high frequency of occurrence, and in the information transmission step, information about the applied offsets may be transmitted.
- a plurality of different edge offsets may be selectively applied to each pixel of one SAO application area.
- Another embodiment of the present invention is a video information encoding method comprising: generating a reconstructed block; applying a deblocking filter to the reconstructed block; applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied; and transmitting information about the SAO application.
- At least one of region information, segmentation information of the SAO application region, SAO type information, and SAO offset information may be transmitted.
- the SAO application area for chroma can be set separately from the SAO application area for luma.
- the intensities of the chroma pixels may be classified, and a band offset may be applied to bands located in high-frequency sections of the entire intensity range.
- the relationship between the current chroma pixel and its neighboring chroma pixels may be determined, that is, whether the intensity of at least one neighboring chroma pixel is greater than the intensity of the current chroma pixel or whether the intensity of at least one neighboring chroma pixel is less than the intensity of the current chroma pixel, and an edge offset may be applied to the current chroma pixel according to the determination.
- SAO information may be transmitted by distinguishing whether it is for luma or chroma.
- Another embodiment of the present invention is a video information decoding method comprising: receiving information; generating a reconstructed block based on the information; applying a deblocking filter to the reconstructed block; and applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied.
- the SAO may be adaptively applied according to the SAO application area to which the SAO is applied.
- a band offset may be applied by dividing a band in units of more dense intensity for a section of intensity having a high frequency of occurrence.
- a band offset is applied to intensity sections having a high frequency of occurrence, and the sections having a high frequency of occurrence may be determined based on the received information.
- in the SAO application step, offsets may be applied only to those bands, among all bands, that correspond to offsets included in the received information.
- Another embodiment of the present invention is a video information decoding method comprising: receiving information; generating a reconstructed block based on the information; applying a deblocking filter to the reconstructed block; and applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied, wherein the SAO is applied to chroma pixels in the SAO application step, and the information received in the information receiving step may include, together with information on whether the SAO is applied to chroma pixels, at least one of region information, segmentation information of the SAO application region, SAO type information, and SAO offset information.
- the SAO application region for chroma can be set separately from the SAO application region for luma.
- the intensity of the chroma pixels may be classified, and a band offset may be applied to a band located in a high frequency section of all intensity sections.
- the relationship between the current chroma pixel and its neighboring chroma pixels may be determined, that is, whether the intensity of at least one neighboring chroma pixel is greater than the intensity of the current chroma pixel or whether the intensity of at least one neighboring chroma pixel is less than the intensity of the current chroma pixel, and an edge offset may be applied to the current chroma pixel according to the determination, wherein the value of the edge offset may be determined based on the information received in the receiving step.
- the information received in the information receiving step may be information about luma, information about chroma, or information about both chroma and luma.
- the effect of image reconstruction can be enhanced by adaptively applying SAO.
- the effect of image reconstruction can be enhanced by applying SAO in consideration of the frequency of occurrence of pixels for each intensity.
- the amount of information can be reduced by applying SAO only to the effective bands and transferring related information from the encoder to the decoder.
- the effect of image reconstruction can be enhanced by applying a plurality of SAOs according to the SAO application unit.
- an image reconstruction effect can be enhanced by applying SAO to chroma pixels.
- the effect of image reconstruction can be enhanced by considering the characteristics of the chroma pixels.
- FIG. 1 is a block diagram schematically illustrating an image encoding apparatus (encoder) according to an embodiment of the present invention.
- FIG. 2 is a block diagram schematically illustrating an image decoder according to an embodiment of the present invention.
- FIG. 3 is a diagram schematically illustrating a band offset.
- FIG. 4 illustrates an example of a histogram according to the characteristics of a given image.
- FIG. 5 is a diagram schematically illustrating an example of a method of applying a band offset by dividing an intensity of all pixels adaptively.
- FIG. 6 is a diagram schematically illustrating another example of a method of adaptively dividing a band for all pixels and applying a band offset.
- FIG. 7 illustrates, by way of example, representative edge shapes that may appear in each direction within a block.
- FIG. 8 illustrates four representative edge types of the edge offset with respect to the current pixel C.
- FIG. 9 is a view schematically comparing the intensity of a current pixel with those of neighboring pixels and dividing the result into four categories.
- FIG. 10 is a diagram schematically illustrating a SAO application unit as an area to which SAO is applied.
- FIG. 11 illustrates a local distribution of a histogram for the same image.
- FIG. 12 is a diagram schematically illustrating an example in which a band offset is applied to only a part of all bands for a chroma pixel.
- FIG. 13 is a diagram schematically illustrating another example in which a band offset is applied to only a part of all bands for a chroma pixel.
- FIG. 14 is a flowchart schematically illustrating an operation of an encoder in a system to which the present invention is applied.
- FIG. 15 is a flowchart schematically illustrating an operation of a decoder in a system to which the present invention is applied.
- each of the components in the drawings described in the present invention is shown independently for convenience of describing the different characteristic functions of the image encoding/decoding apparatus; this does not mean that each component must be implemented as separate hardware or separate software.
- two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
- Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
- the image encoding apparatus 100 may include a picture splitter 105, a predictor 110, a transformer 115, a quantizer 120, a realigner 125, an entropy encoder 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
- the picture dividing unit 105 may divide the input picture into at least one processing unit.
- the processing unit may be a prediction unit (hereinafter referred to as a PU), a transform unit (hereinafter referred to as a TU), or a coding unit (hereinafter referred to as a CU).
- the predictor 110 includes an inter prediction unit for performing inter prediction and an intra prediction unit for performing intra prediction.
- the prediction unit 110 generates a prediction block by performing prediction on the processing unit of the picture in the picture division unit 105.
- the processing unit of the picture in the prediction unit 110 may be a CU, a TU, or a PU.
- the processing unit in which the prediction is performed may differ from the processing unit in which the prediction method and the details are determined.
- the method of prediction and the prediction mode may be determined in units of PUs, and the prediction may be performed in units of TUs.
- a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
- a prediction block may be generated by performing prediction based on pixel information in a current picture.
- a reference picture may be selected for a PU, and a reference block having the same size as the PU may be selected in integer-pixel samples. A prediction block is then generated such that the residual with the current PU is minimized and the motion vector magnitude is minimized.
- a skip mode, a merge mode, motion vector prediction (MVP), and the like can be used.
- the prediction block may be generated in sub-integer sample units such as 1/2 pixel sample unit and 1/4 pixel sample unit.
- the motion vector may also be expressed in units of integer pixels or less.
- the luminance pixel may be expressed in units of 1/4 pixels
- the chrominance pixel may be expressed in units of 1/8 pixels.
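- The fractional motion-vector units above can be sketched as follows. This is an illustrative sketch, not part of the claimed method; the convention in the chroma helper (reusing the raw 1/4-pel value at 1/8-pel precision on the half-resolution 4:2:0 chroma grid) is an assumption, not stated in this document:

```python
def split_mv_quarter_pel(mv_q4: int):
    """Split a luma motion-vector component stored in 1/4-pel units
    into its integer-pixel part and fractional (0..3) part."""
    return mv_q4 >> 2, mv_q4 & 3

def chroma_mv_eighth_pel(mv_q4: int) -> int:
    """Assumption for 4:2:0 video: the chroma plane is half resolution,
    so the same raw 1/4-pel luma value addresses chroma in 1/8-pel units."""
    return mv_q4  # value reused unchanged, reinterpreted on the chroma grid

# A vector of 9 quarter-pels = 2 full pixels + 1/4 pixel.
print(split_mv_quarter_pel(9))  # (2, 1)
```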
- Information selected through inter prediction, such as the index of the reference picture, the motion vector (e.g., a motion vector predictor), and the residual signal, is entropy coded and transmitted to the decoder.
- a prediction mode may be determined in units of PUs, and prediction may be performed in units of PUs.
- a prediction mode may be determined in units of PUs, and intra prediction may be performed in units of TUs.
- a prediction mode may have 33 directional prediction modes and at least two non-directional modes.
- the non-directional modes may include a DC prediction mode and a planar mode.
- a prediction block may be generated after applying a filter to a reference sample.
- whether to apply the filter to the reference sample may be determined according to the intra prediction mode and / or the size of the current block.
- the PU may have various sizes / shapes, for example, the PU may have a size of 2N ⁇ 2N, 2N ⁇ N, N ⁇ 2N, or N ⁇ N in case of inter-picture prediction.
- the PU may have a size of 2N ⁇ 2N or N ⁇ N (where N is an integer).
- the N ⁇ N size PU may be set to apply only in a specific case.
- the NxN PU may be used only for the minimum size coding unit, or only for intra prediction.
- a PU having a size of N ⁇ mN, mN ⁇ N, 2N ⁇ mN, or mN ⁇ 2N (m ⁇ 1) may be further defined and used.
- the residual value (the residual block or the residual signal) between the generated prediction block and the original block is input to the converter 115.
- prediction mode information and motion vector information used for prediction are encoded by the entropy encoder 130 along with the residual value and transmitted to the decoder.
- the transformer 115 performs transform on the residual block in transform units and generates transform coefficients.
- the transform unit in the converter 115 may be a TU and may have a quad tree structure. In this case, the size of the transform unit may be determined within a range of a predetermined maximum and minimum size.
- the transform unit 115 may convert the residual block using a discrete cosine transform (DCT) and / or a discrete sine transform (DST).
- the quantizer 120 may generate quantization coefficients by quantizing the residual values transformed by the converter 115.
- the value calculated by the quantization unit 120 is provided to the inverse quantization unit 135 and the reordering unit 125.
- the reordering unit 125 rearranges the quantization coefficients provided from the quantization unit 120. By rearranging the quantization coefficients, the efficiency of encoding in the entropy encoder 130 may be increased.
- the reordering unit 125 may rearrange the quantization coefficients in the form of a two-dimensional block into a one-dimensional vector form through a coefficient scanning method.
- the reordering unit 125 may increase the entropy coding efficiency of the entropy encoder 130 by changing the order of coefficient scanning based on probabilistic statistics of coefficients transmitted from the quantization unit.
- the entropy encoder 130 may perform entropy encoding on the quantized coefficients rearranged by the reordering unit 125.
- Entropy encoding may use, for example, an encoding method such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC).
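- As an illustrative sketch of one of the methods named above, zero-order Exponential Golomb coding maps an unsigned value to a run of zeros followed by the binary representation of value + 1 (this sketch is not the patent's own implementation):

```python
def exp_golomb_encode(value: int) -> str:
    """Zero-order Exp-Golomb code for an unsigned integer:
    write (value + 1) in binary, prefixed by (bit-length - 1) zeros."""
    code = bin(value + 1)[2:]          # binary of value+1, without '0b'
    return "0" * (len(code) - 1) + code

# 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100'
print([exp_golomb_encode(v) for v in range(4)])
```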
- the entropy encoder 130 may include quantization coefficient information, block type information, prediction mode information, partition unit information, PU information, transmission unit information, motion vector information, and the like of the CUs received from the reordering unit 125 and the prediction unit 110.
- Various information such as reference picture information, interpolation information of a block, and filtering information may be encoded.
- the entropy encoder 130 may make certain changes to the parameter set or syntax to be transmitted.
- the inverse quantization unit 135 inverse quantizes the quantized values in the quantization unit 120, and the inverse transformer 140 inversely transforms the inverse quantized values in the inverse quantization unit 135.
- the residual values generated by the inverse quantizer 135 and the inverse transformer 140 may be combined with the prediction block predicted by the predictor 110 to generate a reconstructed block.
- the filter unit 145 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
- the deblocking filter may remove block distortion generated at the boundary between blocks in the reconstructed picture.
- the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image, after deblocking filtering, with the original image. ALF may be applied only in the high-efficiency (HE) configuration.
- the SAO restores, on a pixel-by-pixel basis, the offset difference from the original image for the image to which the deblocking filter has been applied, and is applied in the form of a band offset or an edge offset.
- the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
- the memory 150 may store the reconstructed block or the picture calculated by the filter unit 145.
- the reconstructed block or picture stored in the memory 150 may be provided to the predictor 110 that performs inter prediction.
- the image decoder 200 may include an entropy decoder 210, a reordering unit 215, an inverse quantizer 220, an inverse transformer 225, a predictor 230, a filter unit 235, and a memory 240.
- the input bit stream may be decoded according to a procedure in which image information is processed by the image encoder.
- When variable length coding (VLC) is used to perform entropy encoding in the image encoder, the entropy decoder 210 can perform entropy decoding by implementing the same VLC table used in the encoder.
- When CABAC is used to perform entropy encoding in the image encoder, the entropy decoder 210 performs entropy decoding using CABAC correspondingly.
- Information for generating the prediction block among the information decoded by the entropy decoder 210 may be provided to the predictor 230, and residual values entropy-decoded by the entropy decoder may be input to the reordering unit 215.
- the reordering unit 215 may reorder the entropy-decoded bit stream by the entropy decoding unit 210 based on the reordering method in the image encoder.
- the reordering unit 215 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring the coefficients in the form of a two-dimensional block.
- the reordering unit 215 may perform reordering by receiving information related to the coefficient scanning performed by the encoder and reverse-scanning based on the scanning order used by that encoder.
- the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the rearranged block.
- the inverse transform unit 225 may perform inverse DCT and/or inverse DST, corresponding to the DCT and/or DST performed by the transform unit of the encoder, on the quantization result produced by the image encoder.
- the inverse transform may be performed based on a transmission unit determined by the encoder or a division unit of an image.
- the DCT and/or the DST may be selectively performed according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transformer 225 of the decoder may perform the inverse transform based on the transform information used by the transformer of the encoder.
- the prediction unit 230 may generate the prediction block based on the prediction block generation related information provided by the entropy decoder 210 and previously decoded blocks and / or picture information provided by the memory 240.
- the reconstruction block may be generated using the prediction block generated by the predictor 230 and the residual block provided by the inverse transform unit 225.
- intra prediction may be performed to generate a prediction block based on pixel information in the current picture.
- inter prediction for the current PU may be performed based on information included in at least one of a previous picture or a subsequent picture of the current picture.
- motion information required for inter prediction of the current PU, for example a motion vector and a reference picture index, may be derived in response to a skip flag, a merge flag, and the like received from the encoder.
- the reconstructed block and / or picture may be provided to the filter unit 235.
- the filter unit 235 applies deblocking filtering, sample adaptive offset (SAO), and / or adaptive loop filtering to the reconstructed block and / or picture.
- the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block and provide the reconstructed picture to the output unit.
- the filter units of the encoder and the decoder may apply a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) as in-loop filters.
- the deblocking filter removes artifacts between blocks due to block-by-block prediction, transformation, and quantization.
- the deblocking filter is applied to the prediction unit edge or the transform unit edge, and can set a predetermined minimum block size for applying the deblocking filter.
- To apply the deblocking filter, the boundary strength (BS) of the horizontal or vertical filter boundary is first determined. Whether to perform filtering based on the BS is decided in units of blocks. If filtering is to be performed, the filter to apply is then selected.
- the filter to be applied may be selected from a weak filter and a strong filter.
- the filtering unit applies the selected filter to the boundary of the corresponding block.
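- The decision flow just described (determine the BS, decide per block whether to filter, then choose a strong or weak filter) can be sketched as follows; the activity measure and the thresholds `beta` and `tc` are hypothetical stand-ins, not values defined by this document:

```python
def deblock_decision(bs: int, activity: float, beta: float, tc: float) -> str:
    """Illustrative decision flow for one block boundary."""
    if bs == 0:                 # boundary strength 0: no filtering
        return "off"
    if activity >= beta:        # strong local detail: likely a real edge
        return "off"
    # Smooth boundary: strong filter; otherwise fall back to the weak one.
    return "strong" if activity < tc else "weak"

print(deblock_decision(2, 1.0, 8.0, 2.0))  # strong
print(deblock_decision(0, 1.0, 8.0, 2.0))  # off
```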
- SAO is a procedure for restoring the offset difference from the original image on a pixel-by-pixel basis for the deblocking filtering image.
- SAO compensates for a coding error.
- the coding error may be due to quantization.
- As described above, there are two types of SAO: band offset and edge offset.
- FIG. 3 is a diagram schematically illustrating a band offset.
- the pixels in the SAO application unit are classified according to the intensity of each pixel.
- the total intensity range may be divided into a predetermined number of intensity intervals, that is, bands.
- Each band will contain pixels having an intensity within a corresponding intensity interval.
- the offset applied to each band may be determined.
- the entire intensity range is 0 to 2^N − 1 for N-bit pixels.
- an 8-bit pixel has an intensity range of 0 to 255.
- FIG. 3 shows an example of dividing the entire intensity range into 32 bands having the same intensity interval.
- the width of each band is then eight intensity values (256/32).
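- As an illustrative sketch of this uniform split (an 8-bit range of 256 values divided into 32 bands of width 8), with hypothetical offset values:

```python
def apply_band_offset(pixels, offsets):
    """Uniform band offset for 8-bit samples: 32 bands of width 8.
    'offsets' maps band index -> offset; bands without an entry get 0."""
    out = []
    for p in pixels:
        band = p >> 3                      # 256 / 32 = 8 values per band
        q = p + offsets.get(band, 0)
        out.append(min(255, max(0, q)))    # clip back to the 8-bit range
    return out

# Pixels 0..7 fall in band 0, 8..15 in band 1, and so on.
print(apply_band_offset([5, 13, 250], {0: 2, 1: -3}))  # [7, 10, 250]
```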
- the 32 bands can be divided into a central first group and a peripheral second group.
- the first group may consist of 16 bands, and the second group may also consist of 16 bands.
- the offset is applied to each band, and the offset value may be transmitted to the decoder for each band.
- in the decoder, the pixels are grouped and the offset values transmitted for each band are applied, in the same manner as the band offset is applied in the encoder.
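- The edge offset type, whose representative edge shapes and four categories are illustrated in FIGS. 8 and 9, classifies each pixel against its two neighbors along the chosen direction. A minimal sketch, assuming the usual four-category comparison (the category numbering here is illustrative, not taken from this document):

```python
def edge_category(left: int, cur: int, right: int) -> int:
    """Classify the current pixel against its two neighbours along
    the chosen edge direction."""
    if cur < left and cur < right:
        return 1            # local minimum (valley)
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2            # concave corner
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3            # convex corner
    if cur > left and cur > right:
        return 4            # local maximum (peak)
    return 0                # flat or monotonic: no offset applied

print(edge_category(10, 5, 12))   # 1  (valley)
print(edge_category(10, 10, 10))  # 0  (flat)
```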
- ALF compensates for coding errors using a Wiener filter. Unlike SAO, ALF is applied globally within a slice. ALF may be applied after applying SAO, and may be applied only to HE (High Efficiency). Information for applying the ALF (filter coefficient, on / off information, filter shape, etc.) may be delivered to the decoder through a slice header.
- the filter shape used in the ALF can be one of various symmetrical shapes, such as a two-dimensional diamond or a two-dimensional cross.
- FIG. 4 illustrates an example of a histogram according to the characteristics of a given image. Specifically, FIG. 4 shows a histogram according to the luma characteristics of the image.
- the histogram has various distributions according to the characteristics of the image. Accordingly, the band offset may be applied by dividing the pixel range adaptively; that is, a method may be considered in which bands are set adaptively over the intensity range that a pixel may have, and an offset is applied per band.
- for example, the center of the intensity range may be divided into finer bands and the periphery into coarser bands: M bands having a small intensity interval may be set at the center, and L bands having a large intensity interval may be set at the periphery.
- FIG. 5 is a diagram schematically illustrating an example of a method of applying a band offset by dividing an intensity range for all pixels.
- The case where the pixel values are concentrated at the center of the range is described as an example.
- In FIG. 5, the central first group is finely subdivided into 16 bands of 4 pixel values each (i.e., intensity intervals of four), and the peripheral second group is coarsely divided into 12 bands of 16 pixel values each.
- Conversely, in contrast to the example of FIG. 5, the central first group may be divided into 12 bands of 16 pixel values each, and the peripheral second group into 16 bands of 4 pixel values each.
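The FIG. 5-style adaptive split can be sketched as a band-index function. The exact placement of the fine center region is not stated in the text, so the layout below (center 96..159 split into 16 fine bands of 4 values, with 6 coarse bands of 16 values on each side) is an assumption chosen to make the counts match: 16 fine bands plus 12 coarse bands covering all 256 values.

```python
def adaptive_band_index(pixel):
    """Band index for a FIG. 5-style adaptive split (assumed layout):
    center 96..159 -> 16 fine bands of 4 values (indices 0..15),
    periphery      -> 12 coarse bands of 16 values (indices 16..27)."""
    if 96 <= pixel <= 159:
        return (pixel - 96) // 4           # fine center bands: 0..15
    if pixel < 96:
        return 16 + pixel // 16            # low peripheral bands: 16..21
    return 22 + (pixel - 160) // 16        # high peripheral bands: 22..27
```

Swapping the fine and coarse regions gives the converse arrangement described above.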
- a method of classifying the entire intensity range into more band groups according to the SAO application unit may be considered without classifying the entire intensity range into two band groups.
- the effect of image reconstruction can be enhanced. For example, rather than dividing the bands into two groups, it is possible to divide them into N groups so that a finer offset can be applied to some pixel value ranges.
- FIG. 6 is a diagram schematically illustrating an example of a method of applying a band offset by dividing an intensity range for all pixels adaptively.
- the entire intensity range is divided into bands, and the bands are divided into four groups to apply band offsets.
- The local intensity characteristics of the image may be better reflected by dividing the entire intensity range into more than two groups and transmitting information about the offset for each group.
- the encoder may transmit a range of band offsets applied to the current picture. For example, the encoder may transmit information on which bit depth period, that is, which intensity period, is applied to the decoder in the current picture to the decoder.
- When band offsets are used for uniformly spaced bands, information indicating the band at which application of the band offset begins may also be transmitted.
- In this way, transmission of unnecessary offset information, or the performing of unnecessary offsets, can be prevented.
- Suppose, for example, that in the current picture offsets occur mainly in a particular range, and that the pixel value (e.g., intensity) range to which the band offset should be applied is 32 to 160.
- The number of occurrences of pixel values belonging to each band may be counted, so that the offset value of the band offset is transmitted only for bands with a high frequency of pixel occurrence.
- That is, the offset value to be applied as a band offset may be transmitted to the decoder only for such bands, and may not be transmitted for bands with a low frequency of occurrence.
- the encoder may further transmit information about which band an offset value is transmitted to the decoder.
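The encoder-side selection described above can be sketched by counting the pixels per band and keeping only the frequently occupied bands. This is a hedged illustration: the threshold `min_count` and the function names are assumptions, since the text does not specify how "high frequency" is decided.

```python
from collections import Counter

def select_bands_to_signal(pixels, band_width=8, min_count=2):
    """Count how many pixels fall into each band and return, sorted, the
    bands whose pixel count reaches `min_count` (threshold is assumed);
    only these bands would have band-offset values transmitted."""
    counts = Counter(p // band_width for p in pixels)
    return sorted(b for b, c in counts.items() if c >= min_count)
```

The encoder would then transmit, alongside the offsets, which bands they belong to, as the next point notes.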
- As a second type of SAO, there is the edge offset, which takes the per-pixel edge information into consideration.
- the edge offset is applied considering the direction of the edge with respect to the current pixel and the intensity of the current pixel and surrounding pixels.
- FIG. 7 illustrates by way of example a representative shape of an edge that may appear in a direction in a block.
- (a) to (d) of FIG. 7 illustrate edges having a direction of 0 degrees, edges having a direction of 90 degrees, edges having a direction of 135 degrees, and edges having a direction of 45 degrees, respectively.
- four kinds of edge offsets may be used for one filtering unit, that is, an SAO application unit (minimum unit, LCU) according to the angle or direction of the edge.
- These kinds, classified by the angle or direction of the edge, are referred to as the edge types of the edge offset.
- FIG. 8 illustrates four representative edge types of the edge offset with respect to the current pixel C.
- (a) of FIG. 8 represents a one-dimensional 0-degree edge
- (b) represents a one-dimensional 90-degree edge
- (c) represents a one-dimensional 135-degree edge
- (d) represents a one-dimensional 45-degree edge.
- four edge offsets may be used.
- an offset corresponding to one of four edge types may be applied.
- the relationship between the current pixel and the surrounding pixels can be considered to apply the edge offset.
- FIG. 9 is a view schematically comparing the intensity of a current pixel and a neighboring pixel and dividing it into four categories.
- FIGS. 9A to 9D show distributions of a current pixel C and an adjacent pixel for each category.
- the category shown in (a) of FIG. 9 represents the case where the intensity of the current pixel C is smaller than that of both adjacent pixels.
- the category illustrated in (b) of FIG. 9 represents the two cases in which the intensity of the current pixel is smaller than that of one of the two adjacent pixels.
- the category illustrated in (c) of FIG. 9 represents the two cases in which the intensity of the current pixel is larger than that of one of the two adjacent pixels.
- the category shown in (d) of FIG. 9 represents the case where the intensity of the current pixel is larger than that of both adjacent pixels.
- FIGS. 9(a) and 9(d) show the cases where the intensity of the current pixel is smaller or larger than that of both neighboring pixels, that is, a local valley or peak.
- FIGS. 9(b) and 9(c) may appear when the current pixel is located at the boundary of a certain area.
- Table 1 schematically shows the four categories shown in FIG. 9.
- C represents the current pixel.
- Category 1 of Table 1 corresponds to FIG. 9A
- Category 2 of Table 1 corresponds to FIG. 9B
- Category 3 of Table 1 corresponds to FIG. 9C
- Category 4 of Table 1 corresponds to FIG. 9D.
- the encoder transmits an edge offset value for each category.
- the decoder may reconstruct each pixel by adding the edge offset value corresponding to its category and edge type. For example, after determining which of the four edge types of FIG. 8 the current pixel belongs to, one of the categories of Table 1 may be determined, and the offset of that category applied to the current pixel.
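The category decision above can be sketched as a comparison of the current pixel with its two neighbors along the chosen edge direction. The exact conditions of Table 1 are not reproduced in this text, so the block below assumes the conventional HEVC-style conditions (valley, half-edges with one equal neighbor, peak); the function name is illustrative.

```python
def edge_category(left, cur, right):
    """Classify the current pixel against its two neighbors along the
    selected edge direction, assuming Table 1 uses the conventional
    four conditions; returns 0 when no category applies (no offset)."""
    if cur < left and cur < right:
        return 1                       # local valley: smaller than both
    if (cur < left and cur == right) or (cur == left and cur < right):
        return 2                       # smaller than one, equal to the other
    if (cur > left and cur == right) or (cur == left and cur > right):
        return 3                       # larger than one, equal to the other
    if cur > left and cur > right:
        return 4                       # local peak: larger than both
    return 0                           # monotonic or flat: no offset
```

A monotonic run (e.g., neighbors 3 and 7 around a current pixel 5) falls into none of the four categories, so no offset is added.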
- The filtering unit, that is, the SAO application unit, is a unit having a size equal to or larger than the LCU (Largest Coding Unit), and is aligned with LCU boundaries.
- A unit to which SAO is applied is a region obtained by dividing one picture in a quadtree structure, and for each such unit the encoder may determine whether to apply SAO, the offset type, and the offset values, and transmit them to the decoder. Determining the offset type here means determining which of the plurality of band offsets and the plurality of edge offsets to apply.
- FIG. 10 is a diagram schematically illustrating SAO application units. FIG. 10 illustrates the SAO application units obtained by dividing a WQVGA (416x240) image in a quadtree structure. Each SAO application unit is at least as large as the LCU and is divided along LCU boundaries.
- The smallest SAO application unit is the LCU, but for a small image the LCU may be too large a unit over which to apply a single offset.
- the LCU may be a SAO application unit that is too large to reconstruct the original image with only a single offset.
- two or more offsets may be used in one LCU.
- a plurality of edge types from Figs. 8A to 8D can be selected and applied according to the direction of the edges in the region.
- Table 2 schematically shows an example of a sequence parameter set syntax as a syntax structure for applying SAO.
- Table 2 shows an example of information indicating whether SAO is applied to the current sequence. For example, in the syntax of Table 2, a value of sao_used_flag of 0 indicates that SAO is disabled for the current sequence, and a value of sao_used_flag of 1 indicates that SAO can be used for the current sequence.
- Table 3 schematically shows an example of slice header syntax as a syntax structure for applying SAO.
- The SAO parameters for applying SAO may be indicated through the slice header.
- Table 4 schematically shows an example of SAO parameter syntax as a syntax structure for applying SAO.
- the transmitted parameters include sao_split_param for partitioning the SAO application area and sao_offset_param for an offset applied to SAO, as in the example of Table 4.
- a value of sao_flag of 1 indicates that SAO may be enabled for at least a portion of the current picture.
- a value of sao_flag of 0 indicates that SAO is not applied to the current picture. Therefore, when the value of sao_flag is 1, the SAO parameters may be indicated.
- Table 5 schematically shows an example of a sao_split_param syntax regarding splitting among SAO parameters as a syntax structure for applying SAO.
- sao_split_param (x, y, Depth) indicates whether the SAO application unit at the position designated by (x, y) and the depth designated by 'Depth' are further divided through sao_split_flag.
- a value of sao_split_flag of 0 indicates that the current region is a leaf. Thus, the current region is no longer partitioned for SAO application.
- a value of sao_split_flag of 1 indicates that the current region is further divided into four child regions.
- pqao_split_param indicates, for each divided region, whether the SAO application unit is further divided when sao_split_param (x, y, Depth) indicates that the SAO application unit is divided.
- The syntax sao_split_param may be used again for the divided regions instead of the syntax pqao_split_param, with the depth of the indicated region changed accordingly.
- For example, when sao_split_param (x0, y0, saoDepth) indicates that the corresponding region is divided, for each divided region (x0 + 0, y0 + 0), (x0 + 0, y0 + 1), (x0 + 1, y0 + 0) and (x0 + 1, y0 + 1), the depth may be adjusted to 'saoDepth + 1' to indicate whether to divide again.
- Table 6 schematically shows an example of a syntax structure in which sao_split_param is reapplied to the divided region.
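The recursive split signalling can be sketched as follows. The `read_flag` callback stands in for entropy-decoding one sao_split_flag from the bitstream (an assumption), and the child ordering follows the list given in the text.

```python
def parse_sao_split(read_flag, x0=0, y0=0, depth=0, max_depth=2):
    """Recursively evaluate sao_split_flag for each region, visiting the
    children (x0+0, y0+0), (x0+0, y0+1), (x0+1, y0+0), (x0+1, y0+1) at
    depth + 1; returns the leaf regions as (x, y, depth) tuples.
    `read_flag(x, y, depth)` stands in for decoding one flag."""
    if depth < max_depth and read_flag(x0, y0, depth):
        leaves = []
        for dx in (0, 1):
            for dy in (0, 1):
                leaves += parse_sao_split(read_flag, x0 + dx, y0 + dy,
                                          depth + 1, max_depth)
        return leaves
    return [(x0, y0, depth)]
```

With a flag that splits only the root region, this yields four leaf regions at depth 1.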
- NumSaoClass represents the number of SAO categories or SAO offsets.
- Table 7 schematically shows an example of a sao_offset_param syntax regarding an offset among SAO parameters as a syntax structure for applying SAO.
- an offset parameter may be indicated for each divided area.
- sao_type_index indicates an offset type applied to the current region.
- the number of SAO offsets or the number of SAO categories may be determined according to the offset type (sao type, sao_type_idx) applied to the current region, and as an example of syntax information indicating the number of SAO offsets or the number of SAO categories according to the offset type, PqaoOffsetNum [sao_type_idx] of Table 6 may be mentioned.
- start_offset represents the number of the smallest band offset or edge offset used. If start_offset is not available, it may be inferred to have a value of zero. Also, end_offset indicates the number of the largest band offset or edge offset used. If end_offset is not available, its value may be set to the number of SAO categories (the number of offsets), PqaoOffsetNum [sao_type_idx], determined according to the SAO type (sao_type_idx) as described above.
- Table 8 schematically shows an example of the SAO offset type.
- the SAO category (the number of offsets) may be determined according to the offset type.
- the SAO type index may indicate one of an edge offset and a band offset.
- In Table 8, an example of applying the band offset by dividing the entire band range into two groups is shown.
- the SAO type index indicates one of four edge offsets and two band offsets.
- The offset value is set for each category constituting each SAO type. For example, in the case of an edge offset, an offset value may be set for each of the four categories defined according to the intensities of the current pixel and the neighboring pixels.
- Table 9 schematically shows an example of the SAO offset type in the case of applying the band offset by dividing the band group adaptively.
- the number of categories of the center band and the neighboring bands is different.
- In the case of Table 8, the center band and the peripheral band each consist of 16 bands of 8 pixel values, whereas in the case of Table 9 the band offset is applied using a center band consisting of 16 bands of 4 pixel values and a peripheral band consisting of 12 bands of 16 pixel values.
- the offset can be applied more precisely to the center bands.
- Table 10 schematically shows another example of the SAO offset type when the band group is adaptively divided to apply a band offset.
- Table 10 shows an example of applying a band offset by dividing the neighboring band in more detail than in the case of Table 9.
- a band offset is applied using a center band of 12 bands of 16 pixel values and a peripheral band of 16 bands of 4 pixel values.
- the offset can be applied more finely to the surrounding bands.
- Table 11 is an example of a table regarding the SAO type when a band offset is applied by designating more band groups.
- each band group is formed of 8 bands of 8 pixel values.
- As in FIG. 6, the band groups may be formed in order from the left over the entire band range.
- the SAO types to be applied to the current pixel among the SAO types as shown in Tables 8 to 11 may be indicated through the sao_type_idx described above. Referring to Table 7 and Tables 8 to 11, when the value of sao_type_idx is 5 or more, a band offset is applied.
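The interpretation of sao_type_idx stated above can be sketched directly. Treating a value of 0 as "SAO off" is an assumption; the text only states that values of 5 or more indicate a band offset while Tables 8 to 11 list four edge-offset types.

```python
def sao_type_kind(sao_type_idx):
    """Interpret sao_type_idx per the description of Tables 8 to 11:
    1..4 select one of the four edge-offset types, 5 or more select a
    band offset; treating 0 as 'off' is an assumption."""
    if sao_type_idx == 0:
        return "off"
    return "edge" if sao_type_idx < 5 else "band"
```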
- Table 12 schematically shows another example of the sao_offset_param syntax regarding an offset among SAO parameters as a syntax structure for applying SAO.
- Table 12 shows an example of a syntax structure for transmitting only a valid band offset.
- the valid band offset here means an applicable band offset.
- total_offset_num_minus_one indicates the total number of band offsets, minus one.
- offset_idx [i] indicates which category corresponds to the band offset indicated by sao_type_idx.
- sao_offset indicates an offset value for a category indicated by offset_idx [i] at a corresponding position and depth.
- a plurality of edge offsets may be applied to one SAO application unit.
- Table 13 is an example schematically showing a syntax structure when a plurality of edge offsets are applied to one SAO application unit.
- num_edge_offset indicates the total number of offsets of the edge offset.
- edge offset may be applied to a corresponding SAO application area as indicated by num_edge_offset.
- SAO may be applied to chroma in consideration of the difference between luma and chroma.
- FIG. 11 illustrates a local distribution of a histogram for the same image.
- FIG. 11 (b) shows a histogram difference between a luma original image and a reconstructed image.
- FIG. 11 (c) shows the histogram difference between the chroma (Cr) original image and the reconstructed image in areas A and B of FIG. 11 (a).
- FIG. 11 (d) shows a histogram difference between the chroma (Cb) original image and the reconstructed image.
- An offset may be applied to the chroma pixels over a range substantially smaller than the entire bit-depth range (i.e., the pixel value range) used for the luma pixels.
- For example, when the range of the chroma signal, that is, the range of pixel values of the chroma pixels, is 0 to 2^N − 1 (where N is the bit depth), the band offset may be applied over only a part of the entire range, as illustrated in the example of FIG. 12 or the example of FIG. 13.
- FIG. 12 is a diagram schematically illustrating an example in which a band offset is applied to only a part of all bands for a chroma pixel.
- chroma pixels may be allocated to center bands consisting of K center bands among all 2 * K bands, and a band offset may be applied.
- the offset values for the indices (1, 2, ..., K) assigned to each band to which the band offset is applied may be transferred from the encoder to the decoder.
- the offset value for the neighboring bands to which the band offset is not applied may designate a corresponding index as 0 so that the offset may not be indicated for the chroma pixel.
- An index with a value of zero may indicate that no band offset is applied and may indicate that the offset value of the band offset is zero.
- FIG. 13 is a diagram schematically illustrating another example of applying a band offset to only a part of all bands for a chroma pixel.
- chroma pixels may be allocated to the remaining bands, composed of the K peripheral bands among the total 2 * K bands, and a band offset may be applied.
- the offset values for the indices (1, 2, ..., K / 2, K / 2 + 1, ..., K) assigned to each band to which the band offset is applied may be transferred from the encoder to the decoder.
- the offset value for the center band to which the band offset is not applied may be assigned an index of 0 so that the offset may not be indicated for the chroma pixel.
- An index with a value of zero may indicate that no band offset is applied and may indicate that the offset value of the band offset is zero.
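The index-0 convention described above can be sketched as follows. The dictionary representation of the signalled band-to-index mapping and the function names are illustrative, not the patent's syntax; a band width of 8 pixel values is assumed.

```python
def chroma_band_offset(pixel, band_to_index, offsets_by_index, band_width=8):
    """Apply a band offset to a chroma pixel. `band_to_index` maps a band
    number to a signalled index (1..K); bands mapped to index 0 (or not
    mapped at all) receive no offset, matching the convention above."""
    idx = band_to_index.get(pixel // band_width, 0)
    return pixel if idx == 0 else pixel + offsets_by_index[idx]
```

A pixel in a band with index 0 passes through unchanged, which is equivalent to signalling an offset of zero for that band.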
- In the case of luma pixels, a band offset may be applied by dividing the entire pixel value range into 32 bands and dividing these into two groups of 16 central bands and 16 peripheral bands.
- Band offsets can be applied to the chroma pixels with eight center bands and eight peripheral bands.
- The signal for a luma pixel is the pixel value (e.g., intensity) of the luma pixel, and is referred to as the "luma signal" for convenience of description below.
- Table 14 schematically shows an example of a sao_offset_param syntax regarding an offset among SAO parameters as a syntax structure for applying SAO to chroma.
- sao_type_cr_idx indicates an offset type for a chroma (Cr) signal.
- sao_type_cb_idx indicates an offset type for a chroma (Cb) signal.
- sao_cr_offset indicates an offset value for a chroma (Cr) signal.
- sao_cb_offset indicates an offset value for a chroma (Cb) signal.
- a chroma signal has a relatively small and simple edge component when compared to a luma signal.
- In the edge offset table of Table 1, category 1 and category 2 may therefore be merged into one category, and category 3 and category 4 may be merged into another category.
- By merging categories, the number of offset values to be transmitted when the edge offset is applied can be reduced.
- Table 15 schematically shows an example of an edge offset category applied to chroma pixels.
- In Table 15, the case where the intensity of the current pixel C is smaller than that of the two adjacent pixels forming the edge, or smaller than that of one adjacent pixel, is set as one category (category 1).
- The case where the intensity of the current pixel C is larger than that of the two adjacent pixels forming the edge, or larger than that of one adjacent pixel, is set as another category (category 2).
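The merged two-category classification for chroma can be sketched as below. The function name is illustrative; the conditions follow the description of Table 15 (returning 0, meaning no offset, for cases that fall into neither merged category is an assumption consistent with the four-category scheme).

```python
def chroma_edge_category(left, cur, right):
    """Two merged edge-offset categories for chroma, per Table 15:
    category 1 when the current pixel is smaller than both or one of its
    neighbors (and larger than neither), category 2 in the mirrored case,
    0 otherwise (no offset)."""
    smaller = (cur < left) + (cur < right)
    larger = (cur > left) + (cur > right)
    if smaller and not larger:
        return 1
    if larger and not smaller:
        return 2
    return 0
```

A monotonic edge (current pixel between its two neighbors) falls into neither category, so no offset is applied.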
- Table 16 shows an example of the SAO type index table for the case of merging the categories for the edge offset as shown in Table 15 and setting the number of bands to which the band offset is applied as shown in FIG. 12.
- Table 17 shows an example of the SAO type index table for the case of merging the categories for the edge offset as shown in Table 15 and setting the number of bands to which the band offset is applied as shown in FIG. 13.
- the amount of transmission information can be reduced by reducing the number of SAO categories to two in the case of edge offset for chroma pixels, and applying them to eight neighboring bands in case of band offset.
- Table 14 described above is an example of a syntax structure for the case where the same filtering partition is applied between the signal for the luma pixel and the signal for the chroma pixel, that is, the same SAO application unit is used for the luma pixel and the chroma pixel.
- Table 18 schematically shows an example of a syntax structure for the case of using independent partitions for luma pixels and chroma pixels.
- a value of sao_flag of 1 indicates that SAO is used for the luma signal.
- a value of sao_flag of 0 indicates that SAO is not used for the luma signal.
- a value of sao_flag_cb of 1 indicates that SAO is used for the Cb signal. If the value of sao_flag_cb is 0, it indicates that SAO is not used for the Cb signal.
- a value of sao_flag_cr of 1 indicates that SAO is used for the Cr signal. If the value of sao_flag_cr is 0, it indicates that SAO is not used for the Cr signal.
- x1 and x2 specify the location of the area to which the corresponding sao_offset_param applies
- x3 specifies the depth of the area to which the corresponding sao_offset_param applies
- x4 indicates whether sao_offset_param is for luma, Cr, or Cb.
- SAO is applied to luma, Cr, and Cb, respectively, and the required parameters such as sao_split_param, sao_offset_param, and the like are indicated. The SAO parameters may be transmitted as in the examples of Tables 19 and 20 described below.
- Table 19 schematically shows an example of a partitioning parameter as a syntax structure for applying independent partitions for luma pixels and chroma pixels.
- If the value of sao_split_flag is 0, this indicates that the current region is a leaf; thus, the current region is no longer divided. If the value of sao_split_flag is 1, the current region is further divided into four child regions. In sao_split_flag (x, y, Depth, component), (x, y) indicates the position of the region and Depth indicates its depth. In addition, 'component' indicates whether sao_split_flag is for luma, Cr, or Cb.
- If the value of sao_split_flag is 1 and the corresponding region is further divided, sao_split_param for luma, Cr, and/or Cb may be transmitted for each of the four divided regions.
- Table 20 schematically shows an example of an offset parameter as a syntax structure for applying independent partitions for luma pixels and chroma pixels.
- sao_type_idx indicates the offset type applied to the current region.
- the offset type indicated by sao_type_idx may indicate a corresponding offset type on a SAO type table such as Tables 8 to 11, Tables 16 to 17, and the like.
- sao_offset indicates an offset applied to each group when the pixel group, that is, the entire pixel value is classified into a group of bands as described above.
- FIG. 14 is a flowchart schematically illustrating an operation of an encoder in a system to which the present invention is applied.
- the encoder reconstructs a block (S1410).
- the encoder transforms the residual block generated based on the prediction block and the current block, and quantizes the residual block, and then reconstructs the residual block through inverse quantization and inverse transformation.
- the encoder applies a loop filter to the reconstructed block (S1420).
- The loop filter may be applied in the filter unit of FIG. 1, and a deblocking filter, SAO, and ALF may be applied.
- the SAO may be applied to the image to which the deblocking filter is applied in units of pixels, and the ALF may be applied to the image to which the SAO is applied.
- ALF may be applied only in the case of HE (High Efficiency).
- the filter unit may apply an offset in units of pixels.
- the filter unit may adaptively determine the number of offsets (number of bands), the group of bands, etc. to apply the band offset, or may transmit only the offset for the valid band to the decoder.
- the filter unit may be configured to apply a plurality of edge offsets in the SAO application area. Details are as described above.
- the filter unit may apply SAO to chroma pixels. Areas for applying SAO may be defined independently in the case of luma and chroma. In addition, in the case of a band offset for chroma, the number and group of bands may be determined to apply the offset to the chroma pixels. With respect to the edge offset with respect to chroma, the number of categories along the direction of each edge may be adjusted. Details are as described above.
- The encoder may then transmit a bitstream including the image information to which the SAO has been applied and the image information about the SAO to the decoder (S1430).
- FIG. 15 is a flowchart schematically illustrating an operation of a decoder in a system to which the present invention is applied.
- the decoder first receives a bit stream from an encoder (S1510).
- the received bit stream contains not only the video information but also information necessary for recovering the video information.
- the decoder restores the block based on the received information (S1520).
- the decoder generates a reconstruction block based on the prediction block generated by the prediction and the residual block generated through inverse quantization and inverse transformation.
- the decoder applies a loop filter to the reconstruction block (S1530). Loop filtering may be performed in the filter unit of FIG. 2.
- a deblocking filter, SAO, ALF, or the like may be applied.
- SAO may be applied to the image to which the deblocking filter is applied in units of pixels
- ALF may be applied to the image to which the SAO is applied.
- ALF may be applied only in the case of HE (High Efficiency).
- the filter unit may apply an offset in units of pixels.
- the filter unit may derive the SAO parameter based on the syntax element transmitted from the encoder.
- the filter unit may apply the band offset to the current pixel based on the number of offsets (the number of bands), the group of bands, etc. indicated by the SAO application information such as the SAO parameter. In this case, only the offset for the valid band may be transmitted to the decoder.
- the filter unit may apply a plurality of edge offsets in the corresponding SAO application area as indicated by the SAO parameter. Details are as described above.
- the filter unit may apply SAO to chroma pixels.
- the area for applying SAO is defined independently in the case of luma and chroma so that related information can be transmitted from the encoder.
- information about the number and group of bands for applying the band offset to the chroma pixel and information about the number of categories for applying the edge offset to the chroma pixel may also be transmitted from the encoder.
- the decoder may perform SAO on the chroma pixel based on the transmitted information. Details are as described above.
Claims (20)
- An image information encoding method comprising: generating a reconstructed block; applying a deblocking filter to the reconstructed block; applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied; and transmitting information on the application of the SAO, wherein, in the applying of the SAO, the SAO is adaptively applied according to the SAO application region to which the SAO is applied.
- The method of claim 1, wherein, in the applying of the SAO, a band offset is applied by dividing bands in finer intensity units for intensity intervals with a high frequency of occurrence.
- The method of claim 1, wherein, in the applying of the SAO, a band offset is applied to intensity intervals with a high frequency of occurrence, and, in the transmitting of the information, information on the intervals to which the band offset is applied is transmitted.
- The method of claim 1, wherein, in the applying of the SAO, an offset is applied only to bands with a high frequency of occurrence, and, in the transmitting of the information, information on the applied offsets is transmitted.
- The method of claim 1, wherein, in the applying of the SAO, a plurality of different edge offsets are selectively applied to each pixel of one SAO application region.
- An image information encoding method comprising: generating a reconstructed block; applying a deblocking filter to the reconstructed block; applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied; and transmitting information on the application of the SAO, wherein, in the applying of the SAO, the SAO is applied to chroma pixels, and, in the transmitting of the information, at least one of region information, partition information of the SAO application region, SAO type information, and SAO offset information is transmitted together with information on whether the SAO is applied to the chroma pixels.
- The method of claim 6, wherein, in the applying of the SAO, an SAO application region for chroma is set separately from the SAO application region for luma.
- The method of claim 6, wherein, in the applying of the SAO, the intensities of the chroma pixels are classified, and a band offset is applied to bands located in intervals of the entire intensity range with a high frequency of occurrence.
- The method of claim 6, wherein, in the applying of the SAO, it is determined whether the relationship between a current chroma pixel and its neighboring chroma pixels corresponds to the case in which the intensity of at least one of the neighboring chroma pixels is larger than that of the current chroma pixel or the case in which the intensity of at least one of the neighboring chroma pixels is smaller than that of the current chroma pixel, and an edge offset is applied to the current chroma pixel according to the determination.
- The method of claim 6, wherein, in the transmitting of the information, the SAO information is transmitted with an indication of whether it is for luma or for chroma.
- An image information decoding method comprising: receiving information; generating a reconstructed block based on the information; applying a deblocking filter to the reconstructed block; and applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied, wherein, in the applying of the SAO, the SAO is adaptively applied according to the SAO application region to which the SAO is applied.
- The method of claim 11, wherein, in the applying of the SAO, a band offset is applied by dividing bands in finer intensity units for intensity intervals with a high frequency of occurrence.
- The method of claim 11, wherein, in the applying of the SAO, a band offset is applied to intensity intervals with a high frequency of occurrence, and the intervals with a high frequency of occurrence are determined based on the received information.
- The method of claim 11, wherein, in the applying of the SAO, the offset is applied only to the bands, among all bands, that correspond to the offsets included in the received information.
- The method of claim 11, wherein, in the applying of the SAO, a plurality of different edge offsets are selectively applied to each pixel of one SAO application region, and the selectively applied edge offsets are determined based on the received information.
- An image information decoding method comprising: receiving information; generating a reconstructed block based on the information; applying a deblocking filter to the reconstructed block; and applying a sample adaptive offset (SAO) to the reconstructed block to which the deblocking filter has been applied, wherein, in the applying of the SAO, the SAO is applied to chroma pixels, and the information received in the receiving of the information includes at least one of region information, partition information of the SAO application region, SAO type information, and SAO offset information together with information on whether the SAO is applied to the chroma pixels.
- The method of claim 16, wherein, in the applying of the SAO, the SAO application region for chroma is set separately from the SAO application region for luma.
- The method of claim 16, wherein, in the applying of the SAO, the intensities of the chroma pixels are classified, and a band offset is applied to bands located in intervals of the entire intensity range with a high frequency of occurrence.
- The method of claim 16, wherein, in the applying of the SAO, it is determined whether the relationship between a current chroma pixel and its neighboring chroma pixels corresponds to the case in which the intensity of at least one of the neighboring chroma pixels is larger than that of the current chroma pixel or the case in which it is smaller, an edge offset is applied to the current chroma pixel according to the determination, and the value of the edge offset is determined based on the information received in the receiving of the information.
- The method of claim 16, wherein the information received in the receiving of the information indicates whether it is information for luma, information for chroma, or information for both chroma and luma.
Priority Applications (19)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180072897.3A CN103765904B (zh) | 2011-06-24 | 2011-12-16 | 图像信息编码和解码方法 |
US14/129,216 US9294770B2 (en) | 2011-06-24 | 2011-12-16 | Image information encoding and decoding method |
KR1020217007484A KR102338406B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
KR1020187029923A KR102006479B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
MX2014000042A MX2014000042A (es) | 2011-06-24 | 2011-12-16 | Metodo de codificacion y decodificacion de informacion de imagenes. |
KR1020137034682A KR101807810B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
KR1020197022192A KR102104594B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
KR1020177035129A KR101910618B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
CA2840476A CA2840476C (en) | 2011-06-24 | 2011-12-16 | Encoding and decoding video applying independent offset for luma and chroma samples |
KR1020217040181A KR102492009B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
EP11868293.9A EP2725790A4 (en) | 2011-06-24 | 2011-12-16 | PROCESS FOR CODING AND DECODING IMAGE INFORMATION |
KR1020207011364A KR102229157B1 (ko) | 2011-06-24 | 2011-12-16 | 영상 정보 부호화 및 복호화 방법 |
US14/658,895 US9253489B2 (en) | 2011-06-24 | 2015-03-16 | Image information encoding and decoding method |
US14/990,405 US9743083B2 (en) | 2011-06-24 | 2016-01-07 | Image information encoding and decoding method |
US15/648,206 US10091505B2 (en) | 2011-06-24 | 2017-07-12 | Image information encoding and decoding method |
US16/113,730 US10547837B2 (en) | 2011-06-24 | 2018-08-27 | Image information encoding and decoding method |
US16/712,239 US10944968B2 (en) | 2011-06-24 | 2019-12-12 | Image information encoding and decoding method |
US17/158,695 US11303893B2 (en) | 2011-06-24 | 2021-01-26 | Image information encoding and decoding method |
US17/689,842 US11700369B2 (en) | 2011-06-24 | 2022-03-08 | Image information encoding and decoding method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161500617P | 2011-06-24 | 2011-06-24 | |
US61/500,617 | 2011-06-24 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/129,216 A-371-Of-International US9294770B2 (en) | 2011-06-24 | 2011-12-16 | Image information encoding and decoding method |
US14/658,895 Continuation US9253489B2 (en) | 2011-06-24 | 2015-03-16 | Image information encoding and decoding method |
US14/990,405 Continuation US9743083B2 (en) | 2011-06-24 | 2016-01-07 | Image information encoding and decoding method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012176964A1 true WO2012176964A1 (ko) | 2012-12-27 |
Family
ID=47422770
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/009720 WO2012176964A1 (ko) | 2011-06-24 | 2011-12-16 | Image information encoding and decoding method |
Country Status (7)
Country | Link |
---|---|
US (8) | US9294770B2 (ko) |
EP (1) | EP2725790A4 (ko) |
KR (7) | KR102104594B1 (ko) |
CN (5) | CN107426578B (ko) |
CA (5) | CA3203096A1 (ko) |
MX (1) | MX2014000042A (ko) |
WO (1) | WO2012176964A1 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150004292A (ko) * | 2013-07-01 | 2015-01-12 | Samsung Electronics Co., Ltd. | Video encoding and decoding method involving filtering, and apparatus therefor |
WO2015088284A1 (ko) * | 2013-12-13 | 2015-06-18 | Samsung Electronics Co., Ltd. | Pixel processing method and apparatus in video encoding and decoding |
CN105993174A (zh) * | 2013-12-12 | 2016-10-05 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus, for signaling SAO parameters |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4075799B1 (en) | 2011-06-14 | 2024-04-10 | LG Electronics Inc. | Apparatus for encoding and decoding image information |
KR102104594B1 (ko) | 2011-06-24 | 2020-04-24 | LG Electronics Inc. | Image information encoding and decoding method |
GB201119206D0 (en) | 2011-11-07 | 2011-12-21 | Canon Kk | Method and device for providing compensation offsets for a set of reconstructed samples of an image |
US9277194B2 (en) | 2011-11-08 | 2016-03-01 | Texas Instruments Incorporated | Method and apparatus for image and video coding using hierarchical sample adaptive band offset |
KR101674777B1 (ko) * | 2011-11-08 | 2016-11-09 | Google Technology Holdings LLC | Apparatuses and methods for sample adaptive offset coding and/or signaling |
CN106851275A (zh) * | 2012-05-29 | 2017-06-13 | HFI Innovation Inc. | Apparatus and method for processing sample adaptive offset of video data |
US20150036738A1 (en) | 2013-07-30 | 2015-02-05 | Texas Instruments Incorporated | Method and apparatus for real-time sao parameter estimation |
KR102301654B1 (ko) * | 2014-04-11 | 2021-09-13 | Electronics and Telecommunications Research Institute | Method and apparatus for applying an adaptive offset filter |
CN105530519B (zh) * | 2014-09-29 | 2018-09-25 | Actions (Zhuhai) Technology Co., Ltd. | In-loop filtering method and apparatus |
US10432961B2 (en) | 2015-03-10 | 2019-10-01 | Apple Inc. | Video encoding optimization of extended spaces including last stage processes |
US9872026B2 (en) * | 2015-06-12 | 2018-01-16 | Intel Corporation | Sample adaptive offset coding |
US10455228B2 (en) | 2016-03-21 | 2019-10-22 | Qualcomm Incorporated | Determining prediction parameters for non-square blocks in video coding |
KR101981687B1 (ko) * | 2016-05-04 | 2019-05-24 | Korea Aerospace University Industry-Academic Cooperation Foundation | Offset information encoding and decoding method and apparatus |
EP3291553A1 (en) * | 2016-08-30 | 2018-03-07 | Thomson Licensing | Method and apparatus for video coding with sample adaptive offset |
WO2018120230A1 (zh) * | 2016-12-30 | 2018-07-05 | Huawei Technologies Co., Ltd. | Image filtering method, apparatus, and device |
US11811975B2 (en) | 2017-05-31 | 2023-11-07 | Interdigital Madison Patent Holdings, Sas | Method and a device for picture encoding and decoding |
WO2019083243A1 (ko) * | 2017-10-23 | 2019-05-02 | SK Telecom Co., Ltd. | Method and apparatus for SAO filtering |
KR102617469B1 (ko) * | 2017-10-23 | 2023-12-26 | SK Telecom Co., Ltd. | Method and apparatus for SAO filtering |
WO2019135294A1 (ja) * | 2018-01-05 | 2019-07-11 | Socionext Inc. | Encoding method, decoding method, encoding device, decoding device, encoding program, and decoding program |
US11095876B2 (en) | 2018-01-26 | 2021-08-17 | Samsung Electronics Co., Ltd. | Image processing device |
KR102465206B1 (ko) * | 2018-01-26 | 2022-11-09 | Samsung Electronics Co., Ltd. | Image processing device |
CN117896531A (zh) * | 2018-09-05 | 2024-04-16 | Huawei Technologies Co., Ltd. | Chroma block prediction method and device |
KR20210136988A (ko) * | 2019-04-03 | 2021-11-17 | LG Electronics Inc. | Video or image coding method and apparatus therefor |
WO2020204412A1 (ko) * | 2019-04-03 | 2020-10-08 | LG Electronics Inc. | Video or image coding with adaptive loop filter procedure |
WO2020204420A1 (ko) * | 2019-04-03 | 2020-10-08 | LG Electronics Inc. | Filtering-based video or image coding |
US20220277491A1 (en) * | 2019-05-31 | 2022-09-01 | Electronics And Telecommunications Research Institute | Method and device for machine learning-based image compression using global context |
WO2021006633A1 (ko) * | 2019-07-08 | 2021-01-14 | LG Electronics Inc. | In-loop filtering-based video or image coding |
WO2021029720A1 (ko) * | 2019-08-13 | 2021-02-18 | Electronics and Telecommunications Research Institute | Method, apparatus, and recording medium for image encoding/decoding using partitioning |
WO2021054790A1 (ko) * | 2019-09-18 | 2021-03-25 | Electronics and Telecommunications Research Institute | Method, apparatus, and recording medium for image encoding/decoding using partitioning |
JP2021158633A (ja) * | 2020-03-30 | 2021-10-07 | KDDI Corp. | Image decoding device, image decoding method, and program |
CN114007067B (zh) * | 2020-07-28 | 2023-05-23 | Beijing Dajia Internet Information Technology Co., Ltd. | Method, device, and medium for decoding a video signal |
WO2022035687A1 (en) * | 2020-08-13 | 2022-02-17 | Beijing Dajia Internet Information Technology Co., Ltd. | Chroma coding enhancement in cross-component sample adaptive offset |
WO2022178424A1 (en) * | 2021-02-22 | 2022-08-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Coding enhancement cross-component sample adaptive offset |
MX2023010325A (es) * | 2021-03-18 | 2023-09-14 | Beijing Dajia Internet Information Tech Co Ltd | Coding enhancement in cross-component sample adaptive offset. |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007536828A (ja) * | 2004-05-06 | 2007-12-13 | Qualcomm Incorporated | Image enhancement method and apparatus for low-bit-rate video compression |
KR20100030638A (ko) * | 2007-01-12 | 2010-03-18 | Mitsubishi Electric Corp. | Moving picture encoding apparatus and moving picture encoding method |
KR20100081148A (ko) * | 2009-01-05 | 2010-07-14 | SK Telecom Co., Ltd. | Block mode encoding/decoding method and apparatus, and image encoding/decoding method and apparatus using the same |
JP2010245734A (ja) * | 2009-04-03 | 2010-10-28 | Oki Electric Ind Co Ltd | Decoding apparatus for compressed and encoded video data |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7227901B2 (en) * | 2002-11-21 | 2007-06-05 | Ub Video Inc. | Low-complexity deblocking filter |
JP4617644B2 (ja) * | 2003-07-18 | 2011-01-26 | Sony Corp. | Encoding apparatus and method |
KR100657268B1 (ko) * | 2004-07-15 | 2006-12-14 | Daeyang Foundation | Scalable encoding and decoding method and apparatus for color images |
EP1878249B1 (en) * | 2005-04-01 | 2020-03-04 | LG Electronics, Inc. | Method for scalably decoding a video signal |
US7751484B2 (en) * | 2005-04-27 | 2010-07-06 | Lsi Corporation | Method for composite video artifacts reduction |
JP5112300B2 (ja) * | 2005-06-01 | 2013-01-09 | Koninklijke Philips Electronics N.V. | Method and electronic device for determining a characteristic of a content item |
KR100727970B1 (ko) * | 2005-08-30 | 2007-06-13 | Samsung Electronics Co., Ltd. | Apparatus and method for encoding and decoding an image, and recording medium storing a program for performing the method |
CN101507267B (zh) * | 2005-09-07 | 2011-09-14 | Vidyo, Inc. | System and method for scalable and low-delay videoconferencing using scalable video coding |
US9001899B2 (en) * | 2006-09-15 | 2015-04-07 | Freescale Semiconductor, Inc. | Video information processing system with selective chroma deblock filtering |
WO2008133455A1 (en) * | 2007-04-25 | 2008-11-06 | Lg Electronics Inc. | A method and an apparatus for decoding/encoding a video signal |
JP2009004920A (ja) * | 2007-06-19 | 2009-01-08 | Panasonic Corp | Image encoding apparatus and image encoding method |
CN101399991B (zh) * | 2007-09-26 | 2010-11-10 | Huawei Technologies Co., Ltd. | Video decoding method and apparatus |
US20090154567A1 (en) * | 2007-12-13 | 2009-06-18 | Shaw-Min Lei | In-loop fidelity enhancement for video compression |
EP2232874B1 (en) * | 2008-01-08 | 2012-12-05 | Telefonaktiebolaget L M Ericsson (publ) | Adaptive filtering |
US8363734B2 (en) * | 2008-01-12 | 2013-01-29 | Huaya Microelectronics | Multi-directional comb filtering in a digital video decoder |
KR101596829B1 (ko) * | 2008-05-07 | 2016-02-23 | LG Electronics Inc. | Method and apparatus for decoding a video signal |
US9143803B2 (en) * | 2009-01-15 | 2015-09-22 | Qualcomm Incorporated | Filter prediction based on activity metrics in video coding |
US8394561B2 (en) * | 2009-07-20 | 2013-03-12 | Xerox Corporation | Colored toners |
CN101783957B (zh) * | 2010-03-12 | 2012-04-18 | Tsinghua University | Video predictive coding method and apparatus |
US8660174B2 (en) * | 2010-06-15 | 2014-02-25 | Mediatek Inc. | Apparatus and method of adaptive offset for video coding |
CN101895761B (zh) * | 2010-07-29 | 2013-01-23 | Jiangsu University | Fast intra-frame prediction algorithm |
US9819966B2 (en) * | 2010-09-01 | 2017-11-14 | Qualcomm Incorporated | Filter description signaling for multi-filter adaptive filtering |
JP5485851B2 (ja) | 2010-09-30 | 2014-05-07 | Nippon Telegraph and Telephone Corp. | Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, and programs therefor |
US20130177078A1 (en) * | 2010-09-30 | 2013-07-11 | Electronics And Telecommunications Research Institute | Apparatus and method for encoding/decoding video using adaptive prediction block filtering |
US9055305B2 (en) * | 2011-01-09 | 2015-06-09 | Mediatek Inc. | Apparatus and method of sample adaptive offset for video coding |
WO2012063878A1 (ja) * | 2010-11-10 | 2012-05-18 | Sony Corp. | Image processing apparatus and image processing method |
EP2661879B1 (en) * | 2011-01-03 | 2019-07-10 | HFI Innovation Inc. | Method of filter-unit based in-loop filtering |
US9001883B2 (en) * | 2011-02-16 | 2015-04-07 | Mediatek Inc | Method and apparatus for slice common information sharing |
ES2715782T3 (es) * | 2011-04-21 | 2019-06-06 | Hfi Innovation Inc | Method and apparatus for improved in-loop filtering |
KR101567467B1 (ko) * | 2011-05-10 | 2015-11-09 | MediaTek Inc. | Method and apparatus for reducing the in-loop filter buffer |
US9008170B2 (en) * | 2011-05-10 | 2015-04-14 | Qualcomm Incorporated | Offset type and coefficients signaling method for sample adaptive offset |
US20120294353A1 (en) * | 2011-05-16 | 2012-11-22 | Mediatek Inc. | Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components |
CN106028050B (zh) * | 2011-05-16 | 2019-04-26 | HFI Innovation Inc. | Method and apparatus of sample adaptive offset for luma and chroma components |
KR101539312B1 (ko) * | 2011-05-27 | 2015-07-24 | MediaTek Inc. | Method and apparatus for line buffer reduction for video processing |
EP4075799B1 (en) * | 2011-06-14 | 2024-04-10 | LG Electronics Inc. | Apparatus for encoding and decoding image information |
US10038903B2 (en) | 2011-06-22 | 2018-07-31 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation in video coding |
US10484693B2 (en) | 2011-06-22 | 2019-11-19 | Texas Instruments Incorporated | Method and apparatus for sample adaptive offset parameter estimation for image and video coding |
JP5973434B2 (ja) | 2011-06-23 | 2016-08-23 | Huawei Technologies Co., Ltd. | Image filter device, filtering method, and video decoding device |
KR102104594B1 (ko) * | 2011-06-24 | 2020-04-24 | LG Electronics Inc. | Image information encoding and decoding method |
WO2013042884A1 (ko) * | 2011-09-19 | 2013-03-28 | LG Electronics Inc. | Image encoding/decoding method and apparatus therefor |
US20130113880A1 (en) * | 2011-11-08 | 2013-05-09 | Jie Zhao | High Efficiency Video Coding (HEVC) Adaptive Loop Filter |
US9282328B2 (en) * | 2012-02-10 | 2016-03-08 | Broadcom Corporation | Sample adaptive offset (SAO) in accordance with video coding |
US9554149B2 (en) * | 2012-02-29 | 2017-01-24 | Lg Electronics, Inc. | Inter-layer prediction method and apparatus using same |
US9628822B2 (en) * | 2014-01-30 | 2017-04-18 | Qualcomm Incorporated | Low complexity sample adaptive offset encoding |
CN107113437A (zh) * | 2014-10-31 | 2017-08-29 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus applying multi-offset scheme, and video decoding method and apparatus |
2011
- 2011-12-16 KR KR1020197022192A patent/KR102104594B1/ko active IP Right Grant
- 2011-12-16 WO PCT/KR2011/009720 patent/WO2012176964A1/ko active Application Filing
- 2011-12-16 EP EP11868293.9A patent/EP2725790A4/en not_active Withdrawn
- 2011-12-16 CN CN201710159847.8A patent/CN107426578B/zh active Active
- 2011-12-16 US US14/129,216 patent/US9294770B2/en active Active
- 2011-12-16 CN CN201180072897.3A patent/CN103765904B/zh active Active
- 2011-12-16 KR KR1020137034682A patent/KR101807810B1/ko active IP Right Grant
- 2011-12-16 KR KR1020187029923A patent/KR102006479B1/ko active IP Right Grant
- 2011-12-16 KR KR1020207011364A patent/KR102229157B1/ko active IP Right Grant
- 2011-12-16 CA CA3203096A patent/CA3203096A1/en active Pending
- 2011-12-16 CA CA2982695A patent/CA2982695C/en active Active
- 2011-12-16 MX MX2014000042A patent/MX2014000042A/es active IP Right Grant
- 2011-12-16 CA CA2840476A patent/CA2840476C/en active Active
- 2011-12-16 CN CN201710160528.9A patent/CN107105305B/zh active Active
- 2011-12-16 KR KR1020217040181A patent/KR102492009B1/ko active IP Right Grant
- 2011-12-16 KR KR1020177035129A patent/KR101910618B1/ko active IP Right Grant
- 2011-12-16 CA CA3039403A patent/CA3039403C/en active Active
- 2011-12-16 CN CN201710160527.4A patent/CN107105242B/zh active Active
- 2011-12-16 KR KR1020217007484A patent/KR102338406B1/ko active IP Right Grant
- 2011-12-16 CN CN201710159982.2A patent/CN107426579B/zh active Active
- 2011-12-16 CA CA3116207A patent/CA3116207C/en active Active
2015
- 2015-03-16 US US14/658,895 patent/US9253489B2/en active Active
2016
- 2016-01-07 US US14/990,405 patent/US9743083B2/en active Active
2017
- 2017-07-12 US US15/648,206 patent/US10091505B2/en active Active
2018
- 2018-08-27 US US16/113,730 patent/US10547837B2/en active Active
2019
- 2019-12-12 US US16/712,239 patent/US10944968B2/en active Active
2021
- 2021-01-26 US US17/158,695 patent/US11303893B2/en active Active
2022
- 2022-03-08 US US17/689,842 patent/US11700369B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP2725790A4 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20150004292A (ko) * | 2013-07-01 | 2015-01-12 | Samsung Electronics Co., Ltd. | Video encoding and decoding method involving filtering, and apparatus therefor |
KR102233965B1 (ko) * | 2013-07-01 | 2021-03-30 | Samsung Electronics Co., Ltd. | Video encoding and decoding method involving filtering, and apparatus therefor |
CN105993174A (zh) * | 2013-12-12 | 2016-10-05 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus, for signaling SAO parameters |
CN111263150A (zh) * | 2013-12-12 | 2020-06-09 | Samsung Electronics Co., Ltd. | Video encoding apparatus and video decoding apparatus |
CN111263149A (zh) * | 2013-12-12 | 2020-06-09 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus |
US10728547B2 (en) | 2013-12-12 | 2020-07-28 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus, for signaling SAO parameter |
CN111263149B (zh) * | 2013-12-12 | 2021-10-26 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus, and video decoding method and apparatus |
CN111263150B (zh) * | 2013-12-12 | 2021-10-26 | Samsung Electronics Co., Ltd. | Video encoding apparatus and video decoding apparatus |
WO2015088284A1 (ko) * | 2013-12-13 | 2015-06-18 | Samsung Electronics Co., Ltd. | Pixel processing method and apparatus in video encoding and decoding |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11758149B2 (en) | In-loop filtering method and apparatus for same | |
KR102229157B1 (ko) | Image information encoding and decoding method | |
KR102088014B1 (ko) | Image information encoding and decoding method | |
WO2012138032A1 (ko) | Image information encoding method and decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11868293 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2840476 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14129216 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20137034682 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2014/000042 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011868293 Country of ref document: EP |