WO2016204372A1 - Method and apparatus for image filtering using a filter bank in an image coding system - Google Patents
- Publication number
- WO2016204372A1 (PCT/KR2016/001128)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- filter
- information
- filter information
- bank
- filter bank
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Definitions
- the present invention relates to an image coding technique, and more particularly, to an image filtering method and apparatus using a filter bank in an image coding system.
- As the video quality that terminal devices can support and the network environments in use diversify, video of ordinary quality may be used in one environment while higher-quality video is used in another.
- a consumer who purchases video content on a mobile terminal can view the same video content on a larger screen and at a higher resolution through a large display in the home.
- An object of the present invention is to provide a method and apparatus for improving image coding efficiency.
- Another object of the present invention is to provide a method and apparatus for improving the subjective / objective picture quality of a reconstructed picture.
- Another technical problem of the present invention is to provide a method and an apparatus for filtering a reconstructed picture using a filter bank.
- Another technical problem of the present invention is to configure a filter bank in consideration of characteristics of an image.
- Another technical problem of the present invention is to adaptively apply a filter in consideration of characteristics of an image.
- a reconstructed picture filtering method performed by an encoding apparatus.
- the method may include deriving first filter information for a target region of a reconstructed picture, selecting one of the derived first filter information and second filter information included in a filter bank, and performing filtering on the target region in the reconstructed picture based on the selected filter information.
- selecting one of the derived first filter information and the second filter information included in the filter bank may include determining whether second filter information having the same image characteristic as that of the target region, indicated by the derived first filter information, exists in the filter bank and, when the second filter information exists, selecting one of the first filter information and the second filter information based on a rate-distortion (RD) cost.
- each of the first filter information and the second filter information includes at least one of activity, directionality, filter shape, frame number, and filter coefficient information, and the image characteristic of each of the first filter information and the second filter information may be determined based on the activity and the directionality included in the respective filter information.
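- The activity/directionality classification described above can be sketched as follows. This is only an illustration using 1-D Laplacian sums over a block; the function name, the 2x dominance threshold, and the class labels are hypothetical and not taken from the patent:

```python
def classify_block(block):
    """Derive (activity, direction) for a block of reconstructed samples.

    Vertical and horizontal 1-D Laplacians estimate variation in each
    direction; their sum is the activity, and whichever direction clearly
    dominates (here: by more than 2x, an illustrative threshold) gives
    the directionality class.
    """
    n, m = len(block), len(block[0])
    # Vertical variation: second difference down each column (interior rows).
    v = sum(abs(2 * block[i][j] - block[i - 1][j] - block[i + 1][j])
            for i in range(1, n - 1) for j in range(m))
    # Horizontal variation: second difference along each row (interior cols).
    h = sum(abs(2 * block[i][j] - block[i][j - 1] - block[i][j + 1])
            for i in range(n) for j in range(1, m - 1))
    activity = v + h
    if h > 2 * v:
        direction = "horizontal"
    elif v > 2 * h:
        direction = "vertical"
    else:
        direction = "none"
    return activity, direction
```

A flat block yields zero activity and no direction, while a block of alternating rows yields purely vertical variation; the pair (activity class, direction) then serves as the image-characteristic key used to match entries in the filter bank.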
- the method may further include transmitting, to the decoder, a filter index indicating the position of the second filter information in the filter bank.
- the method may further include transmitting a filter bank availability flag to the decoder and, when the value of the filter bank availability flag indicates 1, transmitting a filter type flag; the filter index may be transmitted when the value of the filter type flag indicates 1.
- the method may further include updating the first filter information in the filter bank when the second filter information does not exist or the first filter information is selected.
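- The encoder-side selection and bank-update logic in the claims above can be sketched as follows. The dictionary layout, the `rd_cost` hook, and the return convention are all hypothetical stand-ins, not the patent's actual syntax:

```python
def choose_filter(derived, bank, rd_cost):
    """Select filter information for a target region, as the claims describe.

    1. Look in the bank for an entry whose image characteristic
       (activity class, direction) matches the derived filter info.
    2. If one exists, keep whichever of the two has the lower RD cost;
       reusing a bank entry costs only a small filter index instead of
       full filter coefficients.
    3. Otherwise use the derived info and add it to the bank (update step).

    Returns (selected_info, explicit) where explicit=False means a filter
    index into the bank is signalled, and explicit=True means the new
    coefficients are signalled and the bank is updated.
    """
    key = (derived["activity_class"], derived["direction"])
    match = next((f for f in bank
                  if (f["activity_class"], f["direction"]) == key), None)
    if match is not None and rd_cost(match) <= rd_cost(derived):
        return match, False          # signal: filter index into the bank
    bank.append(derived)             # update the bank with the new filter
    return derived, True             # signal: explicit filter coefficients
```

This mirrors the claim structure: the filter bank is consulted only for entries with the same image characteristic, the RD cost breaks the tie, and the bank is updated exactly when no match exists or the newly derived filter wins.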
- a reconstructed picture filtering method performed by a decoding apparatus includes receiving a filter type flag indicating whether a filter based on a filter bank is applied to a target region of a reconstructed picture, selecting filter information for the target region based on the filter type flag, and performing filtering on the target region in the reconstructed picture based on the selected filter information.
- a decoding apparatus for performing reconstructed picture filtering.
- the decoding apparatus may include a receiver configured to receive a filter type flag indicating whether a filter based on a filter bank is applied to a target region of a reconstructed picture, and a filter unit configured to select filter information for the target region based on the filter type flag and to perform filtering on the target region in the reconstructed picture based on the selected filter information.
- According to the present invention, efficient reconstructed picture filtering can be performed based on a filter bank, which reduces the amount of data allocated to transmission and reception of filter information and consequently improves compression and coding efficiency.
- FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.
- FIG. 3 exemplarily illustrates a method of deriving activity and direction of an image according to an embodiment of the present invention.
- FIG. 4 is a conceptual diagram schematically showing a filter unit according to an embodiment of the present invention.
- FIG. 5 is an example of the operation of an ALF unit and a filter bank according to an embodiment of the present invention.
- FIG. 6 is a flowchart schematically illustrating a filter bank management and reconstruction picture filtering method performed by an encoder.
- FIG. 7 is a flowchart schematically illustrating a filter bank management and reconstruction picture filtering method performed by a decoder.
- the components in the drawings described herein are shown independently for convenience of description of their different characteristic functions in the video encoding/decoding apparatus; this does not mean that each component is implemented as separate hardware or separate software.
- two or more components may be combined into one component, or one component may be divided into a plurality of components.
- embodiments in which components are integrated and/or separated are also included in the present invention without departing from its spirit.
- FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
- the encoding apparatus 100 includes a picture splitter 105, a predictor 110, a transformer 115, a quantizer 120, a reordering unit 125, an entropy encoding unit 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
- the picture division unit 105 may divide the input picture into at least one processing unit block.
- the block as the processing unit may be a prediction unit (PU), a transform unit (TU), or a coding unit (CU).
- a picture may be composed of a plurality of coding tree units (CTUs), and each CTU may be split into CUs in a quad-tree structure.
- a CU may be split in a quad-tree structure into CUs of a lower depth.
- PU and TU may be obtained from a CU.
- a PU may be partitioned from a CU into a symmetrical or asymmetrical square structure.
- the TU may also be divided into quad tree structures from the CU.
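- The quad-tree partitioning described above (CTU into CUs, and likewise CU into TUs) can be sketched as a simple recursion. The `should_split` callback stands in for the encoder's RD split decision and is a hypothetical hook, not part of any standard API:

```python
def split_ctu(x, y, size, min_size, should_split):
    """Recursively split a square block into a quad-tree of leaf CUs.

    Splitting stops when the minimum CU size is reached or when
    should_split(x, y, size) declines to split. Returns the leaf
    CUs as (x, y, size) tuples in raster order of the recursion.
    """
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]
    half = size // 2
    cus = []
    for dy in (0, half):            # four quadrants of equal size
        for dx in (0, half):
            cus.extend(split_ctu(x + dx, y + dy, half, min_size, should_split))
    return cus
```

For example, always splitting a 32x32 CTU down to 8x8 yields sixteen leaf CUs, while never splitting yields the CTU itself as a single CU.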
- the predictor 110 includes an inter predictor for performing inter prediction and an intra predictor for performing intra prediction, as described below.
- the prediction unit 110 performs prediction on the processing unit of the picture in the picture dividing unit 105 to generate a prediction block including a prediction sample (or a prediction sample array).
- the processing unit of the picture in the prediction unit 110 may be a CU, a TU, or a PU.
- the prediction unit 110 may determine whether the prediction performed on the processing unit is inter prediction or intra prediction, and determine specific contents (eg, prediction mode, etc.) of each prediction method.
- the processing unit on which prediction is performed may differ from the processing unit for which the prediction method and its details are determined.
- the method of prediction and the prediction mode may be determined in units of PUs, and the prediction may be performed in units of TUs.
- a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
- a prediction block may be generated by performing prediction based on pixel information in a current picture.
- a skip mode, a merge mode, an advanced motion vector prediction (AMVP), and the like can be used.
- a reference picture may be selected for a PU and a reference block corresponding to the PU may be selected.
- the reference block may be selected in units of integer pixels (or samples) or fractional pixels (or samples).
- a prediction block is generated in which a residual signal with the current PU is minimized and the size of the motion vector is also minimized.
- the prediction block may be generated in integer pixel units, or may be generated in sub-pixel units such as 1/2 pixel unit or 1/4 pixel unit.
- the motion vector may also be expressed in units of integer pixels or less.
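- The sub-pixel prediction described above can be illustrated with a 1-D half-pel interpolation. Real codecs use longer separable filters (e.g. 8-tap luma interpolation filters in HEVC); this rounded bilinear average is only a minimal sketch of how samples at fractional positions are generated:

```python
def half_pel_bilinear(row, frac):
    """Interpolate a 1-D row of samples at half-pel positions.

    frac = 0 returns the integer-position samples; frac = 1 returns the
    half-pel samples between them, as the rounded average of neighbours.
    """
    if frac == 0:
        return list(row[:-1])
    # (a + b + 1) // 2 implements rounding to nearest integer.
    return [(a + b + 1) // 2 for a, b in zip(row, row[1:])]
```

A 2-D prediction block at quarter- or half-pel motion would apply such a filter separably in the horizontal and vertical directions.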
- Information such as an index of the reference picture selected through inter prediction, a motion vector difference (MVD), a motion vector predictor (MVP), and a residual signal may be entropy encoded and transmitted to the decoding apparatus.
- when the skip mode is applied, the prediction block may be used as the reconstructed block, and thus the residual may not be generated, transformed, quantized, or transmitted.
- when performing intra prediction, a prediction mode may be determined in units of PUs and prediction may be performed in units of PUs; alternatively, a prediction mode may be determined in units of PUs and intra prediction may be performed in units of TUs.
- the prediction mode may have, for example, 33 directional prediction modes and at least two non-directional modes.
- the non-directional modes may include a DC prediction mode and a planar mode.
- a prediction block may be generated after applying a filter to a reference sample.
- whether to apply the filter to the reference sample may be determined according to the intra prediction mode and / or the size of the current block.
- the residual value (the residual block or the residual signal) between the generated prediction block and the original block is input to the converter 115.
- the prediction mode information, the motion vector information, etc. used for the prediction are encoded by the entropy encoding unit 130 together with the residual value and transmitted to the decoding apparatus.
- the transform unit 115 performs transform on the residual block in units of transform blocks and generates transform coefficients.
- the transform block is a rectangular block of samples to which the same transform is applied.
- the transform block can be a transform unit (TU) and can have a quad tree structure.
- the transformer 115 may perform the transformation according to the prediction mode applied to the residual block and the size of the block.
- depending on the prediction mode applied to the residual block and its size, the residual block may be transformed using a discrete sine transform (DST); otherwise, it may be transformed using a discrete cosine transform (DCT).
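- The mode/size-dependent DST/DCT choice above can be sketched with naive reference transforms. These are unnormalized floating-point formulas, not the integer-arithmetic transforms a real codec uses, and the `intra_4x4` selection rule is a simplification of the actual condition:

```python
import math

def dct_ii(x):
    """Naive unnormalized DCT-II of a 1-D residual signal."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (2 * k + 1) * u / (2 * n))
                for k in range(n))
            for u in range(n)]

def dst_vii(x):
    """Naive unnormalized DST-VII, the sine transform family typically
    chosen for small intra residual blocks (e.g. 4x4 luma in HEVC)."""
    n = len(x)
    return [sum(x[k] * math.sin(math.pi * (k + 1) * (2 * u + 1) / (2 * n + 1))
                for k in range(n))
            for u in range(n)]

def transform_residual(x, intra_4x4):
    """Pick DST for small intra residuals, DCT otherwise (simplified rule)."""
    return dst_vii(x) if intra_4x4 else dct_ii(x)
```

For a constant residual, the DCT concentrates all energy in the first (DC) coefficient, which is what makes it effective for smooth residual blocks.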
- the transform unit 115 may generate a transform block of transform coefficients by the transform.
- the quantization unit 120 may generate quantized transform coefficients by quantizing the residual values transformed by the transform unit 115, that is, the transform coefficients.
- the value calculated by the quantization unit 120 is provided to the inverse quantization unit 135 and the reordering unit 125.
- the reordering unit 125 rearranges the quantized transform coefficients provided from the quantization unit 120. By rearranging the quantized transform coefficients, the encoding efficiency of the entropy encoding unit 130 may be increased.
- the reordering unit 125 may rearrange the quantized transform coefficients in the form of a 2D block into a 1D vector form through a coefficient scanning method.
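- The 2-D-to-1-D coefficient scanning mentioned above can be sketched with a diagonal scan, one common scanning method (zig-zag and horizontal/vertical scans are alternatives; the exact scan order is codec-dependent):

```python
def diagonal_scan(block):
    """Rearrange a 2-D block of quantized coefficients into a 1-D list
    by walking the anti-diagonals, starting from the DC coefficient.

    Grouping coefficients this way tends to place large low-frequency
    values first and trailing zeros last, which helps entropy coding.
    """
    n = len(block)
    order = []
    for s in range(2 * n - 1):        # each s indexes one anti-diagonal
        for y in range(n):
            x = s - y
            if 0 <= x < n:
                order.append(block[y][x])
    return order
```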
- the entropy encoding unit 130 entropy-codes symbols according to a probability distribution, based on the quantized transform values rearranged by the reordering unit 125 or on encoding parameter values calculated in the coding process, and thereby outputs a bitstream.
- the entropy encoding method receives symbols having various values and expresses them as a decodable binary sequence while removing statistical redundancy.
- the symbol means a syntax element, a coding parameter, a residual signal, or the like that is an encoding / decoding object.
- An encoding parameter is a parameter necessary for encoding and decoding, and means the information needed when encoding or decoding an image; it may include not only information encoded by the encoding apparatus and transmitted to the decoding apparatus, such as a syntax element, but also information that may be inferred during the encoding or decoding process.
- the encoding parameter may include, for example, values or statistics such as an intra/inter prediction mode, a motion vector, a reference picture index, a coding block pattern, presence of a residual signal, transform coefficients, quantized transform coefficients, a quantization parameter, a block size, and block partitioning information.
- the residual signal may mean the difference between the original signal and the prediction signal, a signal in which that difference has been transformed, or a signal in which that difference has been transformed and quantized.
- the residual signal may be referred to as a residual block in block units.
- Encoding methods such as exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC) may be used for entropy encoding.
- the entropy encoding unit 130 may store a table for performing entropy encoding, such as a variable length coding (VLC) table, and perform entropy encoding using the stored table; alternatively, it may derive a binarization method for a target symbol and a probability model for the target symbol/bin, and then perform entropy encoding using the derived binarization method or probability model.
- the entropy encoding unit 130 may also apply certain changes to a parameter set or syntax to be transmitted.
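- Of the entropy coding tools listed above, 0th-order exponential Golomb coding is simple enough to sketch in full. Each non-negative value v maps to the binary form of v+1 prefixed by leading zeros, so 0 becomes "1", 1 becomes "010", 2 becomes "011", and so on:

```python
def exp_golomb_encode(v):
    """Encode a non-negative integer as a 0th-order exp-Golomb codeword."""
    code = bin(v + 1)[2:]                 # binary representation of v + 1
    return "0" * (len(code) - 1) + code   # prefix with (length - 1) zeros

def exp_golomb_decode(bits):
    """Decode one codeword from a bit string; returns (value, bits_used).

    The run of leading zeros tells the decoder the codeword length, so
    codewords are self-delimiting and need no length field.
    """
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    length = 2 * zeros + 1
    return int(bits[zeros:length], 2) - 1, length
```

Short codewords go to small values, which suits the small-magnitude syntax elements these codes are typically used for.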
- the inverse quantizer 135 inversely quantizes the quantized values (quantized transform coefficients) in the quantizer 120, and the inverse transformer 140 inversely transforms the inverse quantized values in the inverse quantizer 135.
- the residual value (or residual sample or residual sample array) generated by the inverse quantizer 135 and the inverse transform unit 140 and the prediction block predicted by the predictor 110 are added together, so that a reconstructed block including a reconstructed sample (or a reconstructed sample array) may be generated.
- a reconstructed block is generated by adding a residual block and a prediction block through an adder.
- the adder may be viewed as a separate unit (restore block generation unit) for generating a reconstruction block.
- the filter unit 145 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
- the deblocking filter may remove distortion generated at the boundary between blocks in the reconstructed picture.
- the adaptive loop filter (ALF) may perform filtering based on a value obtained by comparing the reconstructed image, after the block has been filtered through the deblocking filter, with the original image. The ALF may be applied only when high efficiency is required.
- the SAO compensates, on a pixel-by-pixel basis, for the offset difference from the original image in the block to which the deblocking filter has been applied, and is applied in the form of a band offset or an edge offset.
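- The band-offset form of SAO mentioned above can be sketched as follows. The parameter layout (a list of offsets for four consecutive bands starting at a signalled band) is illustrative of the HEVC-style tool, not the exact syntax:

```python
def sao_band_offset(samples, band_offsets, bit_depth=8, start_band=0):
    """Apply a band-offset SAO sketch to a list of samples.

    The sample range is split into 32 equal bands; samples whose band
    falls in the signalled window [start_band, start_band + len(offsets))
    get the corresponding offset added, then are clipped to the valid range.
    """
    shift = bit_depth - 5                       # 2^bit_depth / 32 bands
    hi = (1 << bit_depth) - 1
    out = []
    for s in samples:
        band = s >> shift                       # which of the 32 bands
        idx = band - start_band
        off = band_offsets[idx] if 0 <= idx < len(band_offsets) else 0
        out.append(min(max(s + off, 0), hi))    # clip to [0, 2^bit_depth - 1]
    return out
```

Samples outside the signalled bands pass through unchanged, so the tool corrects only the intensity range where the encoder measured a systematic offset from the original.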
- the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
- the memory 150 may store the reconstructed block or the picture calculated by the filter unit 145.
- the reconstructed block or picture stored in the memory 150 may be provided to the predictor 110 that performs inter prediction.
- the video decoding apparatus 200 includes an entropy decoding unit 210, a reordering unit 215, an inverse quantization unit 220, an inverse transform unit 225, a prediction unit 230, a filter unit 235, and a memory 240.
- the input bitstream may be decoded according to a procedure in which image information is processed in the video encoding apparatus.
- the entropy decoding unit 210 may entropy decode the input bitstream according to a probability distribution to generate symbols, including symbols in the form of quantized transform coefficients.
- the entropy decoding method is a method of generating each symbol by receiving a binary string.
- the entropy decoding method is similar to the entropy encoding method described above.
- the CABAC entropy decoding method receives a bin corresponding to each syntax element in the bitstream, determines a context model using the syntax element information to be decoded, decoding information of neighboring blocks and of the decoding target block, or information of symbols/bins decoded in a previous step, and generates a symbol corresponding to the value of each syntax element by arithmetically decoding the bin after predicting its probability of occurrence according to the determined context model.
- after determining the context model, the CABAC entropy decoding method may update the context model using the information of the decoded symbol/bin for the context model of the next symbol/bin.
- Among the information decoded by the entropy decoding unit 210, information for generating the prediction block is provided to the predictor 230, and the residual values on which entropy decoding has been performed, that is, the quantized transform coefficients, may be input to the reordering unit 215.
- the reordering unit 215 may reorder the information of the bitstream entropy decoded by the entropy decoding unit 210, that is, the quantized transform coefficients, based on the reordering method in the encoding apparatus.
- the reordering unit 215 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring the coefficients in the form of a two-dimensional block.
- the reordering unit 215 may generate an array of coefficients (quantized transform coefficients) in the form of a 2D block by scanning coefficients based on the prediction mode applied to the current block (transform block) and the size of the transform block.
- the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block.
- the inverse transform unit 225 may perform inverse DCT and/or inverse DST, corresponding to the DCT and/or DST performed by the transform unit of the encoding apparatus, on the quantization result obtained in the video encoding apparatus.
- the inverse transformation may be performed based on a transmission unit determined by the encoding apparatus or a division unit of an image.
- the DCT and/or DST in the transform unit of the encoding apparatus may be selectively performed according to several pieces of information, such as the prediction method and the size and prediction direction of the current block, and the inverse transform unit 225 of the decoding apparatus may perform inverse transformation based on the transform information used by the encoding apparatus.
- the prediction unit 230 may generate a prediction block including prediction samples (or a prediction sample array) based on prediction block generation related information provided by the entropy decoding unit 210 and on previously decoded block and/or picture information provided by the memory 240.
- intra prediction for generating a prediction block based on pixel information in the current picture may be performed.
- inter prediction on the current PU may be performed based on information included in at least one of a previous picture or a subsequent picture of the current picture.
- motion information required for inter prediction of the current PU provided by the video encoding apparatus (for example, a motion vector, a reference picture index, and the like) may be derived by checking a skip flag, a merge flag, and the like received from the encoding apparatus.
- a prediction block may be generated such that a residual signal with the current block is minimized and the size of the motion vector is also minimized.
- the motion information derivation scheme may vary depending on the prediction mode of the current block.
- Prediction modes applied for inter prediction may include an advanced motion vector prediction (AMVP) mode, a merge mode, and the like.
- the encoding apparatus and the decoding apparatus may generate a merge candidate list by using the motion vector of the reconstructed spatial neighboring block and / or the motion vector corresponding to the Col block, which is a temporal neighboring block.
- the motion vector of the candidate block selected from the merge candidate list is used as the motion vector of the current block.
- the encoding apparatus may transmit, to the decoding apparatus, a merge index indicating a candidate block having an optimal motion vector selected from candidate blocks included in the merge candidate list. In this case, the decoding apparatus may derive the motion vector of the current block by using the merge index.
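- The merge candidate list construction and index-based selection above can be sketched as follows. Candidate order, duplicate pruning, and zero-MV padding loosely follow HEVC conventions but simplify the real derivation rules:

```python
def build_merge_list(spatial_mvs, temporal_mv, max_cands=5):
    """Build a merge candidate list from available spatial neighbour MVs
    and the temporal (Col block) MV, pruning duplicates.

    Unavailable candidates are passed as None. If fewer than max_cands
    unique candidates are found, the list is padded with zero MVs.
    """
    cands = []
    sources = spatial_mvs + ([temporal_mv] if temporal_mv is not None else [])
    for mv in sources:
        if mv is not None and mv not in cands:   # prune unavailable/duplicate
            cands.append(mv)
        if len(cands) == max_cands:
            return cands
    while len(cands) < max_cands:                # zero-MV padding
        cands.append((0, 0))
    return cands

def merge_mv(cands, merge_index):
    """Decoder side: the transmitted merge index selects the MV to reuse."""
    return cands[merge_index]
```

Because encoder and decoder build the same list from the same reconstructed neighbours, only the small merge index needs to be transmitted.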
- when the AMVP mode is applied, the encoding apparatus and the decoding apparatus may generate a motion vector predictor candidate list using the motion vector of a reconstructed spatial neighboring block and/or the motion vector corresponding to the Col block, which is a temporal neighboring block. That is, the motion vector of the reconstructed spatial neighboring block and/or the motion vector corresponding to the Col block may be used as motion vector candidates.
- the encoding apparatus may transmit, to the decoding apparatus, a motion vector predictor index indicating the optimal motion vector selected from the motion vector candidates included in the list. The decoding apparatus may then select the motion vector predictor of the current block from the candidates in the motion vector candidate list using the transmitted index.
- the encoding apparatus may obtain a motion vector difference (MVD) between the motion vector (MV) of the current block and the motion vector predictor (MVP), encode it, and transmit the encoded MVD to the decoding apparatus. That is, the MVD may be obtained by subtracting the MVP from the MV of the current block.
- the decoding apparatus may decode the received motion vector difference and derive the motion vector of the current block through the addition of the decoded motion vector difference and the motion vector predictor.
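- The MV/MVP/MVD relation in the two passages above is a simple component-wise difference and sum:

```python
def encode_mvd(mv, mvp):
    """Encoder side: MVD = MV - MVP, transmitted instead of the full MV."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvd, mvp):
    """Decoder side: the MV is recovered as MVP + MVD."""
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])
```

Since the MVP is derived identically on both sides from reconstructed neighbours, only the (typically small) MVD is entropy coded, which is the source of the bit savings.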
- the encoding apparatus may also transmit a reference picture index or the like indicating the reference picture to the decoding apparatus.
- the decoding apparatus may predict the motion vector of the current block using the motion information of the neighboring block, and may derive the motion vector of the current block using the motion vector difference received from the encoding apparatus.
- the decoding apparatus may generate a prediction block for the current block based on the derived motion vector and the reference picture index information received from the encoding apparatus.
- the encoding apparatus and the decoding apparatus may generate the merge candidate list using the motion information of the reconstructed neighboring block and/or the motion information of the Col block. That is, when motion information of the reconstructed neighboring block and/or the Col block exists, the encoding apparatus and the decoding apparatus may use it as a merge candidate for the current block.
- the encoding apparatus may select a merge candidate capable of providing an optimal encoding efficiency among the merge candidates included in the merge candidate list as motion information for the current block.
- a merge index indicating the selected merge candidate may be included in the bitstream and transmitted to the decoding apparatus.
- the decoding apparatus may select one of the merge candidates included in the merge candidate list using the transmitted merge index and determine the selected merge candidate as the motion information of the current block. Therefore, when the merge mode is applied, the motion information of the reconstructed neighboring block and/or the Col block may be used as the motion information of the current block.
- the decoding apparatus may reconstruct the current block by adding the prediction block and the residual transmitted from the encoding apparatus.
- the motion information of the reconstructed neighboring block and/or the motion information of the Col block may be used to derive the motion information of the current block.
- the encoding apparatus does not transmit, to the decoding apparatus, syntax information such as the residual, other than information indicating which block's motion information is to be used as the motion information of the current block.
- the encoding apparatus and the decoding apparatus may generate the prediction block of the current block by performing motion compensation on the current block based on the derived motion information.
- the prediction block may mean a motion compensated block generated as a result of performing motion compensation on the current block.
- the plurality of motion compensated blocks may constitute one motion compensated image.
- the reconstruction block may be generated using the prediction block generated by the predictor 230 and the residual block provided by the inverse transform unit 225.
- the reconstructed block is generated by combining the prediction block and the residual block in the adder.
- the adder may be viewed as a separate unit (restore block generation unit) for generating a reconstruction block.
- as described above, the reconstructed block includes a reconstructed sample (or a reconstructed sample array), the prediction block includes a prediction sample (or a prediction sample array), and the residual block includes a residual sample (or a residual sample array).
- a reconstructed sample (or reconstructed sample array) may be expressed as the sum of the corresponding predictive sample (or predictive sample array) and the residual sample (residual sample array).
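- The sample-wise reconstruction relation above, applied over a block with clipping to the valid sample range, can be written directly:

```python
def reconstruct(pred, resid, bit_depth=8):
    """Reconstructed sample = prediction sample + residual sample,
    clipped to [0, 2^bit_depth - 1], applied element-wise over a block."""
    lo, hi = 0, (1 << bit_depth) - 1
    return [[min(max(p + r, lo), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]
```

Clipping matters at the extremes: a prediction near the top of the range plus a positive residual must not overflow the sample bit depth.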
- the residual is not transmitted for the block to which the skip mode is applied, and the prediction block may be a reconstruction block.
- the reconstructed block and / or picture may be provided to the filter unit 235.
- the filter unit 235 may apply deblocking filtering, sample adaptive offset (SAO), and / or ALF to the reconstructed block and / or picture.
- the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block, and may provide the reconstructed picture to the output unit.
- components directly related to decoding of an image, for example, the entropy decoding unit 210, the reordering unit 215, the inverse quantization unit 220, the inverse transform unit 225, the prediction unit 230, and the filter unit 235, may be distinguished from the other components and referred to collectively as a decoder or a decoding unit.
- the decoding apparatus 200 may further include a parsing unit (not shown) for parsing information related to the encoded image included in the bitstream.
- the parsing unit may include the entropy decoding unit 210 or may be included in the entropy decoding unit 210. Such a parser may also be implemented as one component of the decoder.
- An in-loop filter may be applied to the reconstructed picture to compensate for a difference between an original picture and a reconstructed picture due to an error occurring in a compression coding process such as quantization.
- in-loop filtering may be performed in the filter unit of the encoder and the decoder, and the filter unit may apply a deblocking filter, a sample adaptive offset (SAO), and / or an adaptive loop filter (ALF) to the reconstructed image.
- the ALF may perform filtering based on a value obtained by comparing the reconstructed picture with the original picture after the deblocking filtering and / or SAO process is performed.
- the ALF may adaptively apply a Wiener filter to the reconstructed picture after the deblocking filtering and / or the SAO process is performed. That is, the ALF may compensate for encoding error by using a Wiener filter.
- the encoder and the decoder may perform filtering based on the filter shape and the filter coefficients.
- the encoder can determine the filter shape and / or filter coefficients through a predetermined process. Filtering may be applied to minimize an error occurring in the compression encoding process, and the encoder may determine a filter shape and / or a filter coefficient so as to minimize the error. Information about the determined filter may be transmitted to the decoder, and the decoder may determine the filter shape and / or filter coefficients based on the transmitted information.
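For illustration only, filtering a single reconstructed sample with a given filter shape and coefficient set might look as follows; the plus-shaped tap layout and the coefficient values are hypothetical choices, not taken from the text:

```python
# Hypothetical 5-tap, plus-shaped filter: (dy, dx) offsets of the taps.
DIAMOND_3X3 = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def filter_sample(rec, x, y, offsets, coeffs):
    """Weighted sum of neighboring reconstructed samples.

    rec     : 2-D list of reconstructed samples
    offsets : filter shape given as (dy, dx) taps
    coeffs  : one coefficient per tap (assumed normalized to sum to 1.0)
    """
    h, w = len(rec), len(rec[0])
    acc = 0.0
    for (dy, dx), c in zip(offsets, coeffs):
        # clamp at picture boundaries, as in-loop filters commonly do
        ny = min(max(y + dy, 0), h - 1)
        nx = min(max(x + dx, 0), w - 1)
        acc += c * rec[ny][nx]
    return acc

rec = [[10, 10, 10], [10, 20, 10], [10, 10, 10]]
coeffs = [0.6, 0.1, 0.1, 0.1, 0.1]  # center-weighted, sums to 1.0
print(filter_sample(rec, 1, 1, DIAMOND_3X3, coeffs))  # -> 16.0
```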
- a filter bank may be configured based on characteristics of an image, and an optimal filter may be adaptively applied to a target region based on the filter bank.
- the encoder transmits index information on the filter bank to the decoder, and the decoder can obtain information about the filter applied to the target region based on the index information. The amount of data allocated to transmitting the filter information can thus be reduced, thereby improving coding efficiency.
- Table 1 below shows examples of filter information that may be included in the filter bank according to the present invention.
Category | Description |
---|---|
Activity | Indicates the degree of texture/error |
Direction | Indicates the direction of texture/error |
Filter Shape (Length) | Indicates the shape/size of the filter |
Frame Num. | Frame number to which the filter was applied |
Filter Coefficient | Filter coefficient information used |
- the filter information according to the present invention may include at least one of activity, direction, filter shape, frame number, and filter coefficient. That is, the filter information may include a series of information, and thus may be called a filter information set.
- activity represents the nature of the texture or error in the target region of the picture. In a region with little texture or error variation, the activity value is low; in a region with complex texture or error, the activity value is high.
- the activity value may be set to a value within a specific range through normalization, or may be used as the original value itself.
- for example, values 0 to 10 may be represented by 0 and values 11 to 20 by 1, so that activity values originally in the range 0 to 100 are normalized to the range 0 to 9; alternatively, the original value in the range 0 to 100 can be used as it is.
- the ranges mapped for normalization may be set uniformly or non-uniformly according to importance.
- for example, values that indicate relatively low activity may be normalized to one value over a relatively wide range (e.g., units of 20), and values that indicate relatively high activity may be normalized to one value over a relatively narrow range (e.g., units of 5).
- the activity value may include an original value and a normalized value.
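A minimal sketch of the two normalization alternatives described above, assuming an original activity range of 0 to 100; the bin sizes and bin edges are illustrative choices, not values fixed by the text:

```python
def normalize_uniform(activity, bin_size=10, max_level=9):
    # uniform mapping: 0..10 -> 0, 11..20 -> 1, ..., 91..100 -> 9
    return min(max(activity - 1, 0) // bin_size, max_level)

def normalize_nonuniform(activity, edges=(20, 40, 55, 70, 80, 88, 93, 96, 98)):
    # non-uniform mapping: wide bins for low activity and narrow bins for
    # high activity, so high-activity regions are resolved more finely
    for level, edge in enumerate(edges):
        if activity <= edge:
            return level
    return len(edges)

print(normalize_uniform(15), normalize_uniform(100))       # -> 1 9
print(normalize_nonuniform(15), normalize_nonuniform(95))  # -> 0 7
```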
- Directionality indicates the directional nature of the texture or error in the target area.
- the directionality may be horizontal, vertical, diagonal (right upward / left upward, etc.).
- the activity and directionality may be derived based on sample points in the target region.
- FIG. 3 exemplarily illustrates a method of deriving activity and direction of an image according to an embodiment of the present invention.
- sample point E represents the current sample point
- sample points A, B, C, D, F, G, H and I represent sample points around it.
- the degree of change in the vertical direction at E can be found using the differences between E, B, and H, and the degree of change in the horizontal direction at E can be found using the differences between E, D, and F.
- similarly, the degree of change in the left-upward diagonal direction can be found using the differences between E, A, and I, and the degree of change in the right-upward diagonal direction can be found using the differences between E, C, and G.
- using these degrees of change, the image characteristic at sample point E can be represented; this represents the complexity of the image, and based on it the activity of the image in the target region can be derived.
- the direction of the image with respect to the corresponding area may be obtained by comparing the degree of change in the vertical direction, the degree of change in the horizontal direction, and the degree of change in the diagonal direction.
- in the above, the characteristic of the image is calculated using nine sample points; however, it may also be calculated using more or fewer sample points. For example, when using five sample points, sample points E, B, H, D, and F may be used, or sample points E, A, C, G, and I may be used.
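The derivation above can be sketched as follows, assuming the 3x3 neighborhood is laid out as A B C / D E F / G H I and taking each directional degree of change as a second-difference magnitude (a common choice; the text does not fix the exact formula):

```python
def point_gradients(a, b, c, d, e, f, g, h, i):
    # degree of change at E in each of the four directions
    return {
        "vertical":   abs(b + h - 2 * e),  # E vs. B and H
        "horizontal": abs(d + f - 2 * e),  # E vs. D and F
        "diag_lu":    abs(a + i - 2 * e),  # left-upward diagonal: E vs. A and I
        "diag_ru":    abs(c + g - 2 * e),  # right-upward diagonal: E vs. C and G
    }

def activity_and_direction(grads):
    activity = sum(grads.values())         # overall complexity of the image
    direction = max(grads, key=grads.get)  # dominant direction of change
    return activity, direction

g = point_gradients(10, 50, 10, 12, 11, 12, 10, 50, 10)
act, direc = activity_and_direction(g)
print(act, direc)  # -> 84 vertical
```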
- the filter shape indicates the shape/size of the filter used. That is, one filter shape may be selected for each target block among a plurality of predetermined filter shapes. For example, in the case of ALF, various filter shapes and sizes may be used, such as an n×n star shape, an m×n cross shape, and an m×n diamond shape, where n and m are positive integers and may be equal or different.
- the frame number represents a frame number of a picture to which the selected filter is applied. This may be used as information for managing the filter bank. For example, the frame number may be used to remove related filter information when the temporal distance is greater than or equal to the frame number of the current picture.
- the filter coefficient represents filter coefficient information used for the filter.
- the filter coefficient may include a plurality of coefficients and may be assigned according to a criterion set in the filter tab of the corresponding filter.
- the filter bank according to the present invention represents a set of filter information. That is, a plurality of different filter information entries may exist in the filter bank. For example, if there are three filter shapes, five activities, and three directionalities, there may be at least 45 filter information entries (3 × 5 × 3).
- the encoder and the decoder manage the filter banks in the same manner, and may efficiently filter the reconstructed picture by using one filter information in the filter bank.
- Table 2 below shows an example of the filter bank.
- the filter index represents an index for classifying each filter information in the filter bank.
- the encoder may inform the decoder of specific filter information based on the filter index.
- the CU type indicates whether the CU including the region to be filtered is coded in an intra prediction mode or an inter prediction mode (for example, a CU may be the target region, and a block unit subdivided within the CU may be the region to be filtered).
- various values may be used for the filter coefficients to minimize an error occurring in the compression encoding process, and the expression is omitted.
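A hypothetical in-memory layout for a Table 2 style filter bank, with the filter index given by the entry's position in the list; the field names are illustrative, not defined by the text:

```python
from dataclasses import dataclass
from typing import List, Tuple, Optional

@dataclass
class FilterInfo:
    activity: int
    direction: int
    shape: int                       # e.g., filter length 5 / 7 / 9 / 11
    cu_type: str                     # "Intra" or "Inter"
    coefficients: Tuple[float, ...]  # omitted in Table 2 of the text
    frame_num: int

class FilterBank:
    def __init__(self):
        self.entries: List[FilterInfo] = []

    def lookup(self, activity, direction, shape) -> Optional[tuple]:
        """Return (filter_index, entry) for a matching characteristic, if any."""
        for idx, e in enumerate(self.entries):
            if (e.activity, e.direction, e.shape) == (activity, direction, shape):
                return idx, e
        return None

bank = FilterBank()
bank.entries.append(FilterInfo(1, 0, 5, "Intra", (0.6, 0.1, 0.1, 0.1, 0.1), 0))
print(bank.lookup(1, 0, 5)[0])  # -> 0
```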
- the encoder / decoder may adaptively select a filter to be applied to the target region based on the filter bank.
- the encoder / decoder may update the filter bank for more efficient filtering. At least one of the following managements may be performed to update the filter bank.
- old filter information may be deleted. For example, the oldest filter information among the existing filter information, that is, the filter information whose frame number has the largest difference from the frame number of the current picture, may be deleted.
- when new filter information having the same characteristics as previously stored filter information is generated, the previously stored filter information may be replaced with the new filter information.
- here, having the same characteristics may mean a case in which the activity and directionality are the same, or a case in which the activity, directionality, and filter shape (or size) are the same.
- the filter coefficient in the existing filter information and the filter coefficient in the new filter information may be synthesized.
- the size of the filter bank, that is, the maximum number of filter information entries in the filter bank, may or may not be limited.
- the size of the filter bank may be predefined between encoders / decoders, or may be determined by an encoder and signaled to a decoder.
- the size of the filter bank may be implicitly limited by the number of the characteristics even without limiting the size of the filter bank.
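The management rules above can be sketched as a single update routine (dict-based so it stands alone; the field names, the parameter names, and the size cap of 45 are assumptions, not values mandated by the text):

```python
def update_filter_bank(bank, new_info, current_frame, max_size=45):
    """bank: list of dicts with 'activity', 'direction', 'frame_num', 'coeffs'."""
    # Rule: replace an existing entry that has the same image characteristic.
    for i, info in enumerate(bank):
        if (info["activity"], info["direction"]) == (
                new_info["activity"], new_info["direction"]):
            bank[i] = new_info
            return bank
    # Rule: if the bank is full, delete the oldest entry, i.e. the one whose
    # frame number is furthest from the current picture's frame number.
    if len(bank) >= max_size:
        oldest = max(range(len(bank)),
                     key=lambda i: abs(current_frame - bank[i]["frame_num"]))
        del bank[oldest]
    bank.append(new_info)
    return bank

bank = [{"activity": 1, "direction": 0, "frame_num": 0, "coeffs": (1.0,)}]
update_filter_bank(bank, {"activity": 1, "direction": 0, "frame_num": 7,
                          "coeffs": (0.9,)}, current_frame=7)
print(len(bank), bank[0]["frame_num"])  # -> 1 7  (same-characteristic replace)
```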
- FIG. 4 is a conceptual diagram schematically showing a filter unit according to an embodiment of the present invention.
- the filter unit may be included in the encoding apparatus described above with reference to FIG. 1 or may be included in the decoding apparatus described above with reference to FIG. 2.
- hereinafter, a unit performing the filtering according to the present invention is referred to as an ALF unit.
- the filter unit 400 includes an ALF unit 420 and a filter bank 430 according to the present invention.
- the filter unit 400 may further include a deblocking filtering unit 410.
- the filter unit 400 may further include a SAO unit.
- the filter unit 400 may filter the reconstructed picture in consideration of the difference between the original picture and the reconstructed picture.
- the ALF unit 420 may perform additional filtering on the reconstructed picture (or modified reconstructed picture) for which the filtering procedure by the deblocking filtering unit 410 and/or the SAO unit has been completed.
- the deblocking filtering unit 410 may apply deblocking filtering to the samples around block boundaries.
- the block boundary may be a TU boundary and / or a PU boundary. That is, through deblocking filtering, an error occurring between blocks in the prediction and / or transform procedure may be corrected.
- the boundary strength (bS) of the block boundary may be determined, and based on the bS it may be decided whether to perform deblocking filtering and, if so, whether to apply it not only to the luma component but also to the chroma component.
- the ALF unit 420 may adaptively apply ALF to the (modified) reconstructed picture after the deblocking filtering and / or SAO process is completed. In this case, the ALF unit 420 may perform the ALF in units of a target region of the (modified) reconstructed picture.
- the ALF unit 420 may generate filter information according to the present invention and update the filter bank 430.
- the update may include storing, deleting, and modifying filter information.
- the ALF unit 420 may select (or extract) filter information for the target area from the filter bank 430.
- FIG. 5 is an example of the operation of an ALF unit and a filter bank according to an embodiment of the present invention.
- the ALF unit 520 may insert new filter information into the filter bank 530 for updating, and the ALF unit 520 may also select and extract filter information stored in the filter bank 530.
- the ALF unit 520 of the encoder stage may update the filter bank 530 with new filter information derived based on a rate-distortion (RD) cost for the target region.
- the ALF unit 520 of the decoder stage may receive new filter information from the encoder and update the filter bank.
- the ALF unit 520 of the encoder checks whether the filter information derived for the target region is in the filter bank 530, and selects that filter information when it is present in the filter bank 530.
- the filter index of the filter information in the filter bank 530 may be encoded and transmitted to the decoder.
- the decoder may select (or extract) filter information for the target region from its filter bank 530 based on the filter index, and perform ALF based on that filter information in the ALF unit 520.
- the filter bank 530 may initialize filter information in the filter bank 530 when there is a picture (or frame) having the characteristic of instantaneous decoding refresh (IDR). That is, when the current picture is a picture having IDR characteristics, filter information in the filter bank 530 may be initialized.
- the initialized filter bank (ie, the initial filter bank) may be empty, or may contain predefined filter information.
- FIG. 6 is a flowchart schematically illustrating a filter bank management and reconstruction picture filtering method performed by an encoder.
- the encoder derives filter information (S600).
- the encoder derives filter information for the target region to which the filtering according to the present invention is applied.
- the filter information includes image characteristics and filter coefficients for the target area.
- the image characteristic may include the activity and directionality of the target area.
- the activity and the directionality can be obtained, for example, based on the method described above with reference to FIG. 3.
- the encoder calculates a first RD cost based on the derived filter information (first filter information), and calculates a second RD cost based on the filter information (second filter information) in the filter bank (S630). That is, the encoder calculates the first RD cost for the case of filtering based on the first filter information and the second RD cost for the case of filtering based on the second filter information.
- the encoder determines whether the second RD cost is relatively smaller (S640). That is, the first RD cost is compared with the second RD cost, and it is determined whether the second RD cost is smaller than the first RD cost.
- if the second RD cost is smaller, the encoder selects and uses the filter information (second filter information) in the filter bank, generates filter index information indicating which filter information is selected in the filter bank, and transmits the filter index information to the decoder (S650). In this case, the frame number information in the second filter information may be updated with the frame number of the current picture.
- otherwise, the encoder selects the derived filter information (first filter information) and updates the filter bank (S630).
- the encoder transmits the derived filter information (first filter information) to the decoder (S640).
- the derived filter information may be inserted in place of the filter information (second filter information) stored in the filter bank.
- the encoder may perform filtering on the reconstructed picture based on the first filter information or the second filter information and generate a filtered reconstructed picture.
- the reconstructed picture may refer to the reconstructed picture after the deblocking filtering and / or SAO procedure is completed.
- the filtered reconstructed picture may be stored in a memory (eg, a decoded picture buffer (DPB)) and used as a reference picture in inter prediction.
- although FIG. 6 shows that S630 is performed before S640, this is an example; S640 may be performed before S630 or simultaneously with it.
- the RD cost comparison is performed only on the filter information having the same image characteristics as the derived filter information in the filter bank.
- RD costs may be compared based on all filter information stored in the filter bank, and the best (ie, the lowest RD cost) filter information may be selected and used.
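A compact sketch of this encoder-side decision; `rd_cost` stands in for the encoder's actual rate-distortion measure, and the dict fields are illustrative assumptions:

```python
def choose_filter(derived, bank, rd_cost):
    # compare against bank entries with the same image characteristic
    candidates = [(i, info) for i, info in enumerate(bank)
                  if (info["activity"], info["direction"])
                  == (derived["activity"], derived["direction"])]
    if candidates:
        best_idx, best = min(candidates, key=lambda c: rd_cost(c[1]))
        if rd_cost(best) < rd_cost(derived):
            # reuse a stored filter: only the filter index needs to be signaled
            return {"signal": "index", "filter_index": best_idx}
    # otherwise store the derived filter in the bank and transmit it explicitly
    bank.append(derived)
    return {"signal": "filter_info", "filter_info": derived}

bank = [{"activity": 1, "direction": 0, "coeffs": (0.9,)}]
derived = {"activity": 1, "direction": 0, "coeffs": (0.8,)}
# toy cost: pretend the bank entry is cheaper because no coefficients are sent
cost = lambda info: 10 if info is derived else 4
print(choose_filter(derived, bank, cost))  # -> {'signal': 'index', 'filter_index': 0}
```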
- a filter bank available flag (filter_bank_enabled_flag) and / or a filter type flag (filter_type_flag) may be transmitted from the encoder to the decoder to perform the filtering procedure according to the present invention.
- the filter bank available flag indicates whether the filter bank is available.
- the filter bank available flag is flag information indicating whether the filtering method based on the filter bank according to the present invention can be applied in units of sequences, pictures (or frames) or slices. A value of 1 for the filter bank available flag indicates that the filter bank is available, and a value of 0 indicates that it is not available.
- the filter bank available flag may be signaled at the sequence level, picture (or frame) level or slice level.
- the filter type flag indicates whether the filtering method based on the filter bank according to the present invention is used in the target region.
- the filter type flag may be signaled only when the value of the filter bank available flag is 1.
- a value of 1 for the filter type flag indicates that filtering based on the filter bank is applied to the target region, and 0 indicates that filtering based on the filter bank is not applied to the target region.
- if the value of the filter type flag is 1, it may indicate that filtering is performed based on filter information in the filter bank, and if it is 0, it may indicate that filtering is performed based on received filter information.
- the target area may be an area having a specific size or may be an area according to a coding structure such as a tile, a CTU, a CU, or the like.
- the tile may represent a rectangular region composed of CTUs, and may be a basic unit of parallel processing for an encoder / decoder having a multi-core structure.
- the filter type flag may be transmitted in each target area unit.
- the filter type flag may be signaled in units of the target region or in a coding unit such as a CU including the target region.
- the filter bank available flag, the filter type flag, and the filter index may be transmitted to the decoder through a bitstream.
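The conditional signaling above can be summarized as a small parse routine; `read_bit` and `read_ue` are stand-ins for the decoder's bitstream readers, not real API names:

```python
def parse_filtering_syntax(read_bit, read_ue):
    # filter_bank_enabled_flag is always present (sequence/picture/slice level)
    syntax = {"filter_bank_enabled_flag": read_bit()}
    if syntax["filter_bank_enabled_flag"] == 1:
        # filter_type_flag is sent per target region only when the bank is usable
        syntax["filter_type_flag"] = read_bit()
        if syntax["filter_type_flag"] == 1:
            # filter index into the bank, only when bank-based filtering is used
            syntax["filter_index"] = read_ue()
    return syntax

bits = iter([1, 1, 3])
s = parse_filtering_syntax(lambda: next(bits), lambda: next(bits))
print(s)  # -> {'filter_bank_enabled_flag': 1, 'filter_type_flag': 1, 'filter_index': 3}
```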
- FIG. 7 is a flowchart schematically illustrating a filter bank management and reconstruction picture filtering method performed by a decoder.
- the decoder receives a filter bank available flag through a bitstream and determines whether a value of the filter bank available flag indicates 1 (S700).
- the decoder receives a filter type flag through the bitstream and determines whether the value of the filter type flag is not 0 (S710).
- the decoder receives (or parses) a filter index through the bitstream (S720).
- the filter index indicates an index of specific filter information to be applied to a target region among filter information included in a filter bank.
- the decoder selects specific filter information from the filter bank using the filter index (S730).
- the decoder may perform filtering on the reconstructed picture based on the selected filter information.
- the reconstructed picture may refer to the reconstructed picture after the deblocking filtering and / or SAO procedure is completed.
- the filtered reconstructed picture may be stored in a memory (eg, a decoded picture buffer (DPB)), output through an output device in an output order, or may be used as a reference picture for inter prediction.
- if the value of the filter type flag is 0, the decoder does not use the filter information stored in the filter bank.
- the decoder may not perform filtering on the target region or may perform filtering based on fixed filter information.
- the decoder may receive filter information from the encoder through the bitstream and perform filtering based on the received filter information.
- the decoder may update the filter bank based on the received filter information.
- when filter information having the same image characteristics (i.e., activity and directionality) as those of the received filter information is stored in the filter bank, the decoder may replace the stored filter information with the received filter information.
- efficient reconstructed picture filtering based on the filter bank can thus be performed, reducing the amount of data allocated to transmission and reception of the filter information and, consequently, improving compression and coding efficiency.
- the above-described method may be implemented as a module (process, function, etc.) for performing the above-described function.
- the module may be stored in memory and executed by a processor.
- the memory may be internal or external to the processor and may be coupled to the processor by various well known means.
- the processor may include application-specific integrated circuits (ASICs), other chipsets, logic circuits, and / or data processing devices.
- the memory may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and / or other storage device.
Abstract
Description
Category | Description |
Activity | Indicates the degree of texture/error |
Direction | Indicates the direction of texture/error |
Filter Shape (Length) | Indicates the shape/size of the filter |
Frame Num. | Frame number to which the filter was applied |
Filter Coefficient | Filter coefficient information used |
Filter Index | Activity | Direction | Filter Shape (or Length) | CU type | Filter Coefficient | Frame Num. |
---|---|---|---|---|---|---|
0 | 1 | 0 | 5 | Intra | ... | 0 |
1 | 1 | 1 | 7 | Intra | ... | 0 |
2 | 1 | 2 | 9 | Intra | ... | 0 |
... | ... | ... | ... | ... | ... | ... |
44 | 5 | 2 | 11 | Intra | ... | 3 |
Claims (15)
- A reconstructed picture filtering method performed by an encoding apparatus, the method comprising: deriving first filter information for a target region of a reconstructed picture; selecting one of the derived first filter information and second filter information included in a filter bank; and performing filtering on the target region in the reconstructed picture based on the selected filter information.
- The method of claim 1, wherein selecting one of the derived first filter information and the second filter information included in the filter bank comprises: determining whether second filter information having the same image characteristic as the image characteristic of the target region included in the derived first filter information exists in the filter bank; and, when the second filter information exists, selecting one of the first filter information and the second filter information based on a rate-distortion (RD) cost.
- The method of claim 2, wherein each of the first filter information and the second filter information includes at least one of activity, directionality, filter shape, frame number, and filter coefficients, and wherein the image characteristic of the first filter information and the image characteristic of the second filter information are each determined based on the activity and the directionality included in the first filter information and the second filter information, respectively.
- The method of claim 1, further comprising, when the second filter information is selected, transmitting to a decoder a filter index indicating the index of the second filter information in the filter bank.
- The method of claim 4, further comprising: transmitting a filter bank available flag to a decoder; and transmitting a filter type flag when the value of the filter bank available flag indicates 1, wherein the filter index is transmitted when the value of the filter type flag indicates 1.
- The method of claim 1, further comprising, when the second filter information does not exist, updating the filter bank with the first filter information.
- The method of claim 6, further comprising transmitting the second filter information to a decoder.
- The method of claim 1, further comprising, when the first filter information is selected, updating the filter bank with the first filter information.
- The method of claim 8, wherein, in updating the filter bank with the first filter information, the first filter information replaces the second filter information.
- A reconstructed picture filtering method performed by a decoding apparatus, the method comprising: receiving a filter type flag indicating whether filtering based on a filter bank is applied to a target region of a reconstructed picture; selecting filter information for the target region based on the filter type flag; and performing filtering on the target region in the reconstructed picture based on the selected filter information.
- The method of claim 10, further comprising receiving a filter bank available flag indicating whether the filter bank is available, wherein the filter type flag is received when the value of the filter bank available flag indicates 1.
- The method of claim 10, further comprising, when the value of the filter type flag indicates 1, receiving a filter index indicating the index of one piece of filter information among the filter information in the filter bank, wherein the filter information for the target region is selected based on the filter index.
- The method of claim 10, further comprising: receiving first filter information; and updating the filter bank with the first filter information, wherein the first filter information is selected as the filter information for the target region.
- The method of claim 13, wherein the first filter information is received when the value of the filter type flag indicates 0, and wherein, in updating the filter bank with the first filter information, the first filter information replaces second filter information included in the filter bank.
- The method of claim 13, wherein the reconstructed picture is a reconstructed picture after a deblocking filtering or sample adaptive offset (SAO) procedure has been completed.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020177035609A KR20180019547A (ko) | 2015-06-18 | 2016-02-02 | 영상 코딩 시스템에서 필터 뱅크를 이용한 영상 필터링 방법 및 장치 |
US15/736,144 US10602140B2 (en) | 2015-06-18 | 2016-02-02 | Method and device for filtering image using filter bank in image coding system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562181729P | 2015-06-18 | 2015-06-18 | |
US62/181,729 | 2015-06-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016204372A1 true WO2016204372A1 (ko) | 2016-12-22 |
Family
ID=57545385
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/001128 WO2016204372A1 (ko) | 2015-06-18 | 2016-02-02 | 영상 코딩 시스템에서 필터 뱅크를 이용한 영상 필터링 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10602140B2 (ko) |
KR (1) | KR20180019547A (ko) |
WO (1) | WO2016204372A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019031842A1 (ko) * | 2017-08-08 | 2019-02-14 | 엘지전자 주식회사 | 영상 처리 방법 및 이를 위한 장치 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3454556A1 (en) | 2017-09-08 | 2019-03-13 | Thomson Licensing | Method and apparatus for video encoding and decoding using pattern-based block filtering |
TWI729478B (zh) * | 2018-08-31 | 2021-06-01 | 聯發科技股份有限公司 | 用於虛擬邊界的環內濾波的方法和設備 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100136391A (ko) * | 2009-06-18 | 2010-12-28 | 한국전자통신연구원 | 다수의 필터를 이용한 복원영상 필터링 방법 및 이를 적용한 부호화/복호화 장치 및 방법 |
KR20110001990A (ko) * | 2009-06-30 | 2011-01-06 | 삼성전자주식회사 | 영상 데이터의 인 루프 필터링 장치 및 방법과 이를 이용한 영상 부호화/복호화 장치 |
KR20110070823A (ko) * | 2009-12-18 | 2011-06-24 | 한국전자통신연구원 | 비디오 부호화/복호화 방법 및 장치 |
KR20130095279A (ko) * | 2011-01-03 | 2013-08-27 | 미디어텍 인크. | 필터 유닛 기반 인-루프 필터링 방법 |
US20130266059A1 (en) * | 2012-04-09 | 2013-10-10 | Qualcomm Incorporated | Lcu-based adaptive loop filtering for video coding |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9143803B2 (en) * | 2009-01-15 | 2015-09-22 | Qualcomm Incorporated | Filter prediction based on activity metrics in video coding |
Also Published As
Publication number | Publication date |
---|---|
KR20180019547A (ko) | 2018-02-26 |
US20180176561A1 (en) | 2018-06-21 |
US10602140B2 (en) | 2020-03-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16811785 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20177035609 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15736144 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16811785 Country of ref document: EP Kind code of ref document: A1 |