US20140348227A1 - Method for encoding/decoding a quantization coefficient, and apparatus using same - Google Patents

Method for encoding/decoding a quantization coefficient, and apparatus using same

Info

Publication number
US20140348227A1
US20140348227A1 (application US14/356,218)
Authority
US
United States
Prior art keywords
quantization parameter
information
coding unit
slice
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/356,218
Other languages
English (en)
Inventor
Sun Young Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Meta Platforms Inc
Original Assignee
Pantech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pantech Co Ltd filed Critical Pantech Co Ltd
Assigned to PANTECH CO., LTD. reassignment PANTECH CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LEE, SUN YOUNG
Publication of US20140348227A1 publication Critical patent/US20140348227A1/en
Assigned to PANTECH INC. reassignment PANTECH INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PANTECH CO., LTD.
Assigned to GOLDPEAK INNOVATIONS INC reassignment GOLDPEAK INNOVATIONS INC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PANTECH INC
Assigned to FACEBOOK, INC. reassignment FACEBOOK, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GOLDPEAK INNOVATIONS INC
Assigned to META PLATFORMS, INC. reassignment META PLATFORMS, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: FACEBOOK, INC.

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/463Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/00096
    • H04N19/00139
    • H04N19/00272
    • H04N19/00369
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • the present invention relates to a method of encoding/decoding a quantization parameter and an apparatus using the same, and more particularly, to an encoding/decoding apparatus and an encoding/decoding method.
  • Video compression technology includes various techniques, such as an inter prediction technique of predicting pixel values included in a current picture from previous or subsequent pictures of the current picture, an intra prediction technique of predicting pixel values included in a current picture using pixel information in the current picture, and an entropy encoding technique of assigning a short code to a value with a high appearance frequency and assigning a long code to a value with a low appearance frequency.
  • Video data may be effectively compressed and transferred or stored using such video compression techniques.
  • An aspect of the present invention is to provide a method of decoding a quantization parameter for improving video encoding efficiency.
  • Another aspect of the present invention is to provide an apparatus that performs a method of decoding a quantization parameter for improving video encoding efficiency.
  • An embodiment of the present invention provides a decoding method which includes decoding initial quantization parameter information and quantization parameter range information on a slice, and deriving a quantization parameter limit range applied to a coding unit included in the slice using the initial quantization parameter information and the quantization parameter range information on the slice.
  • the decoding method may further include decoding basic quantization parameter information on the slice.
  • the initial quantization parameter information may be a value obtained by adding changed slice quantization parameter information to basic quantization parameter information or subtracting the changed slice quantization parameter information from the basic quantization parameter information, or a value included in a slice header.
  • the quantization parameter limit range may be from a value obtained by subtracting the quantization parameter range information from the initial quantization parameter information to a value obtained by adding the quantization parameter range information to the initial quantization parameter information.
  • the decoding method may further include determining a variable-length code table based on the quantization parameter limit range of the coding unit and previous quantization parameter information on the coding unit and decoding a quantization parameter of the coding unit using the variable-length code table.
  • the previous quantization parameter information on the coding unit may include at least one of quantization parameter information on a coding unit decoded before the coding unit, quantization parameter information on a left coding unit of the coding unit and the initial quantization parameter information on the slice.
  • the determining the variable-length code table based on the quantization parameter limit range of the coding unit and the previous quantization parameter information on the coding unit and decoding the quantization parameter of the coding unit using the variable-length code table may include selecting a previously stored variable-length code table based on quantization parameter limit range and the previous quantization parameter information on the coding unit or creating a variable-length code table based on the quantization parameter limit range and the previous quantization parameter information on the coding unit.
  • the decoding method may further include determining whether to change a quantization parameter value of the coding unit.
  • the determining whether to change the quantization parameter value of the coding unit may include determining whether to change the quantization parameter value of the coding unit based on information on whether a quantization parameter of the coding unit is changed or depth information on a coding unit where a quantization parameter is changed.
  • Another embodiment of the present invention provides a decoding apparatus which includes an entropy decoding module to decode a quantization parameter variable and a dequantization module to derive a quantization parameter limit range applied to a coding unit included in a slice based on the quantization parameter variable decoded by the entropy decoding module.
  • the dequantization module may derive the quantization parameter limit range applied to the coding unit comprised in the slice using initial quantization parameter information and quantization parameter range information on the slice.
  • the dequantization module may determine a variable-length code table based on the quantization parameter limit range of the coding unit and previous quantization parameter information on the coding unit and perform dequantization based on a quantization parameter of the coding unit derived using the variable-length code table.
  • the variable-length code table may include a previously stored variable-length code table determined based on quantization parameter limit range and the previous quantization parameter information on the coding unit or a variable-length code table created based on the quantization parameter limit range and the previous quantization parameter information on the coding unit.
  • the dequantization module may include a quantization parameter derivation module to derive the quantization parameter limit range of the coding unit and previous quantization parameter information on the coding unit based on the quantization parameter variable provided from the entropy decoding module.
  • the dequantization module may further include a variable-length code table determination module to determine a variable-length code table for decoding a quantization parameter of the coding unit based on the quantization parameter limit range of the coding unit and previous quantization parameter information on the coding unit derived by the quantization parameter derivation module; and a dequantization implementation module to perform dequantization based on the variable-length code table determined by the variable-length code table determination module.
  • a variable-length code table is used in decoding a quantization parameter, thereby improving encoding and decoding efficiency.
  • FIG. 1 is a block diagram illustrating an encoding apparatus according to an exemplary embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a decoding apparatus according to an exemplary embodiment of the present invention.
  • FIG. 3 is a diagram illustrating a method of decoding a quantization parameter of a coding unit according to an exemplary embodiment of the present invention.
  • FIG. 4 is a diagram illustrating a method of deriving a quantization parameter of a coding unit according to an exemplary embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of decoding a quantization parameter according to an exemplary embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a dequantization module according to an exemplary embodiment of the present invention.
  • Some elements are not essential to the substantial functions in the invention and may be optional constituents for merely improving performance.
  • the invention may be embodied by including only constituents essential to the embodiment of the invention, excluding constituents used merely to improve performance.
  • a structure including only the essential constituents, excluding the optional constituents used merely to improve performance, also belongs to the scope of the invention.
  • FIG. 1 is a block diagram illustrating an encoding apparatus according to an exemplary embodiment of the present invention.
  • the video encoding apparatus 100 may include a partition module 100 , a prediction module 110 , an intra prediction module 103 , an inter prediction module 106 , a transform module 115 , a quantization module 120 , a rearrangement module 125 , an entropy encoding module 130 , a dequantization module 135 , an inverse transform module 140 , a filter module 145 and a memory 150 .
  • the encoding apparatus may be realized by a video encoding method to be described in the following exemplary embodiment of the present invention, while operations of some components may not be performed so as to reduce complexity of the encoding apparatus or to achieve quick real-time encoding.
  • the prediction module may select one final intra prediction mode among a limited number of intra prediction modes so as to perform encoding in real time instead of using all intra prediction modes to select an optimal intra encoding method.
  • a shape of a prediction unit may be restricted.
  • a unit of a block processed by the encoding apparatus may be a coding unit for performing coding, a prediction unit for performing prediction or a transform unit for performing transformation.
  • a coding unit may be represented by a CU, a prediction unit by a PU, and a transform unit by a TU.
  • the partition module 100 may divide one picture into a plurality of combinations of a CU, a PU and a TU and select one combination of a CU, a PU and a TU on the basis of a predetermined criterion, for example, a cost function.
  • a recursive tree structure such as a quadtree structure, may be used to partition the picture into CUs.
  • a CU may be used to refer to not only a unit of encoding but also a unit of decoding.
  • a PU may be a unit for intra prediction or a unit for inter prediction.
  • a unit for intra prediction may have a square shape, such as 2N×2N and N×N, or a rectangular shape using short distance intra prediction (SDIP).
  • a unit for inter prediction may include 2N×2N and N×N square units, 2N×N and N×2N units obtained by equally partitioning a square PU in two, or PUs obtained by asymmetric motion partitioning (AMP).
  • the transform module 115 may use a different transform method based on a shape of a PU.
  • the prediction module 110 may include the intra prediction module 103 to perform intra prediction and the inter prediction module 106 to perform inter prediction.
  • the prediction module 110 may determine whether to employ intra prediction or inter prediction for a PU.
  • a processing unit for performing prediction may be different from a processing unit for determining a prediction method and details on the prediction method. For example, in performing intra prediction, a prediction mode may be determined by PU while prediction may be performed by TU.
  • a residual value (residual block) between a generated prediction block and an original block may be input to the transform module 115 . Also, information on a prediction mode and a motion vector used for prediction may be encoded along with the residual value by the entropy encoding module 130 and transferred to the decoding apparatus.
  • the original block may be encoded and transferred to the decoding apparatus as it is without performing prediction by the prediction module 110 .
  • the intra prediction module 103 may generate a PU based on a reference pixel neighboring a current PU. In order to derive an optimal intra prediction mode for the current PU, the intra prediction module 103 may use a plurality of intra prediction modes to generate the current PU and select one of the modes.
  • Intra prediction modes may include a directional prediction mode in which reference pixel information is used according to a prediction direction and a non-directional prediction mode in which directivity information is not used in performing prediction.
  • a mode for predicting luma information and a mode for predicting chroma information may be different from each other.
  • Intra prediction mode information used to obtain luma information or predicted luma signal information may be used to predict chroma information.
  • Information on the one selected intra prediction mode of an intra prediction unit may be encoded using a method of predicting an intra prediction mode of a current PU from information on an intra prediction mode of a neighboring block of the current PU. That is, the intra prediction mode of the current PU may be predicted from an intra prediction mode of a PU neighboring the current PU.
  • when the current PU and the neighboring PU have the same intra prediction mode, information indicating that the current PU and the neighboring PU have the same prediction mode may be transmitted using flag information.
  • otherwise, information on the prediction mode of the current block may be encoded by entropy encoding.
  • a preset intra prediction mode value may be set as a candidate intra prediction mode value to predict the intra prediction mode of the current PU.
  • the intra prediction module 103 may generate a PU on the basis of information on a reference pixel neighboring the current block, that is, information on a pixel within a current picture.
  • when a block neighboring the current PU has been subjected to inter prediction, and thus its reference pixels are pixels having been subjected to inter prediction, those reference pixels may be replaced with reference pixel information on a block having been subjected to intra prediction. That is, when a reference pixel is unavailable, information on the unavailable reference pixel may be replaced with at least one of the available reference pixels.
  • in intra prediction, when a PU and a TU have the same size, intra prediction for the PU may be performed based on left pixels, an upper-left pixel and upper pixels of the PU. On the other hand, when a PU and a TU have different sizes, intra prediction may be performed using reference pixels based on the TU. Further, intra prediction using N×N partitioning may be used only for a minimum CU.
  • an adaptive intra smoothing (AIS) filter may be applied to reference pixels according to the prediction mode to generate a predicted block.
  • AIS filters may be applied to the reference pixels.
  • filtering may be further performed using an additional filter on some rows in the PU after performing intra prediction using the reference pixels. Different types of filtering may be used for filtering the rows in the PU depending on directivity of a prediction mode.
  • the inter prediction module 106 may generate a PU on the basis of information on at least one picture among previous and subsequent pictures of the current picture.
  • the inter prediction module 106 may include a reference picture interpolation module, a motion prediction module, and a motion compensation module.
  • the reference picture interpolation module may be supplied with reference picture information from the memory 150 and generate pixel information on a pixel smaller than an integer pixel from a reference picture.
  • a discrete cosine transform (DCT)-based 8-tap interpolation filter having a varying filter coefficient may be used to generate information on a pixel smaller than an integer pixel in units of 1/4 pixel.
  • a DCT-based 4-tap interpolation filter having a varying filter coefficient may be used to generate information on a pixel smaller than an integer pixel in units of 1/8 pixel.
  • the inter prediction module 106 may perform motion prediction on the basis of the reference picture interpolated by the reference picture interpolation module.
  • Various methods such as a full search-based block matching algorithm (FBMA), a three-step search (TSS) algorithm and a new three-step search (NTS) algorithm, may be used to derive a motion vector.
  • the motion vector may have a motion vector value in units of 1/2 or 1/4 pixel on the basis of an interpolated pixel.
  • the inter prediction module 106 may predict a current PU using different motion prediction methods.
  • Various methods such as skipping, merging, and advanced motion vector prediction (AMVP), may be used as a motion prediction method.
  • a residual block including residual information which is a difference between the predicted PU and the original block of the PU may be generated based on the PU generated by the prediction module 110 .
  • the generated residual block may be input to the transform module 115 .
  • the transform module 115 may transform the residual block including the residual information as the difference between the PU and the original block using a transform method such as Discrete Cosine Transform (DCT) or Discrete Sine Transform (DST).
  • a transform method may be determined between DCT and DST on the basis of intra prediction mode information and size information on the PU used to generate the residual block.
  • the transform module may use different transform methods depending on PU sizes.
  • the quantization module 120 may quantize values transformed to a frequency domain by the transform module 115 .
  • a quantization parameter may change depending on a block or importance of a picture. Values output from the quantization module 120 may be provided to the dequantization module 135 and the rearrangement module 125 .
  • the rearrangement module 125 may rearrange coefficients with respect to a quantized residual value.
  • the rearrangement module 125 may change a two-dimensional (2D) block of coefficients into a one-dimensional (1D) vector of coefficients through coefficient scanning.
  • the rearrangement module 125 may change a 2D block of coefficients into a 1D vector of coefficients by scanning from DC coefficients to coefficients of a high frequency domain using zigzag scanning (zig-zag scan).
  • vertical scanning of scanning a 2D block of coefficients in a vertical direction and horizontal scanning of scanning a 2D block of coefficients in a horizontal direction may be used depending on a TU size and an intra prediction mode, instead of zigzag scanning. That is, a scanning method for use may be selected among zigzag scanning, vertical scanning, and horizontal scanning based on a TU size and an intra prediction mode.
  • the entropy encoding module 130 may perform entropy encoding on the basis of the values obtained by the rearrangement module 125 .
  • Various encoding methods such as exponential Golomb coding, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC), may be used for entropy encoding.
  • the entropy encoding module 130 may entropy-encode diverse types of information, such as residual coefficient information and block type information on a CU, prediction mode information, partitioning unit information, PU information, transfer unit information, motion vector information, reference frame information, block interpolation information and filtering information, provided from the rearrangement module 125 and the prediction module 110 according to a predetermined encoding method. Further, the entropy encoding module 130 may entropy-encode coefficients of a CU input from the rearrangement module 125.
  • the entropy encoding module 130 may store a table for entropy encoding, such as a variable-length code (VLC) table, and perform entropy encoding using the VLC table.
  • allocation of some codewords included in the table to code numbers of corresponding information may be changed by a counter method or direct swapping. For instance, in a table mapping code numbers to codewords in which a limited number of higher code numbers are allocated short-bit codewords, the mapping order of the codewords and the code numbers may be adaptively changed so that a short-length codeword is allocated to the code number that the counter determines to occur most frequently. When the number of counts recorded by the counter reaches a preset threshold, the recorded counts may be divided in half, and counting resumes.
  • a code number in the table that is not counted may be switched with the code number immediately above it by direct swapping, to reduce the number of bits allocated to that code number in entropy encoding.
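  • The counter method and direct swapping described above amount to a simple adaptive reordering of a codeword table. The sketch below is a minimal illustration of that idea; the class name, the halving threshold, and the exact update rules are assumptions made for illustration and are not taken from the patent text.

```python
class AdaptiveVlcTable:
    """Maps symbols to code numbers; code number 0 receives the shortest codeword."""

    def __init__(self, symbols, halving_threshold=64):
        self.order = list(symbols)           # order[i] = symbol assigned code number i
        self.counts = {s: 0 for s in symbols}
        self.halving_threshold = halving_threshold

    def code_number(self, symbol):
        n = self.order.index(symbol)
        self._update(symbol, n)
        return n

    def _update(self, symbol, n):
        self.counts[symbol] += 1
        # Counter method: bubble the most frequently counted symbol toward
        # code number 0, so it is mapped to a shorter codeword.
        while n > 0 and self.counts[self.order[n]] > self.counts[self.order[n - 1]]:
            self.order[n], self.order[n - 1] = self.order[n - 1], self.order[n]
            n -= 1
        # When a count reaches the threshold, halve all counts and keep counting.
        if self.counts[symbol] >= self.halving_threshold:
            self.counts = {s: c // 2 for s, c in self.counts.items()}


def direct_swap(order, symbol):
    """Direct swapping: when a symbol occurs, swap it with the code number
    immediately above it (no counter is kept)."""
    n = order.index(symbol)
    if n > 0:
        order[n], order[n - 1] = order[n - 1], order[n]
    return order
```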
  • the dequantization module 135 and the inverse transform module 140 dequantize the values quantized by the quantization module 120 and inversely transform the values transformed by the transform module 115 .
  • a residual value generated by the dequantization module 135 and the inverse transform module 140 may be merged with a PU, which is predicted through the motion prediction module, the motion compensation module and the intra prediction module of the prediction module 110, thereby generating a reconstructed block.
  • the filter module 145 may include at least one of a deblocking filter, an offset correction module, and an adaptive loop filter (ALF).
  • the deblocking filter may remove block distortion generated on boundaries between blocks in a reconstructed picture. Whether to apply the deblocking filter to a current block may be determined on the basis of pixels included in several rows or columns of the block. When the deblocking filter is applied to a block, a strong filter or a weak filter may be applied depending on a required deblocking filtering strength. When both horizontal filtering and vertical filtering are employed in applying the deblocking filter, horizontal filtering and vertical filtering may be performed in parallel.
  • the offset correction module may correct an offset between the deblocked picture and the original picture in units of pixels.
  • a method of partitioning pixels of the picture into a predetermined number of regions, determining a region to be subjected to offset correction, and applying an offset to the determined region or applying an offset in consideration of edge information on each pixel may be used.
  • the Adaptive Loop Filter (ALF) may perform filtering based on a result of comparing the filtered reconstructed picture and the original picture. Pixels included in the picture may be partitioned into one or more groups, a filter to be applied to each group may be determined, and differential filtering may be performed for each group. Information on whether to apply the ALF to a luma signal may be transferred for each coding unit (CU), and the size and coefficients of the ALF to be applied to each block may vary.
  • the ALF may have various types and a number of coefficients included in a corresponding filter may vary. Filtering information on the ALF, such as filter coefficient information, ALF ON/OFF information, and filter type information, may be included and transferred in a parameter set of a bitstream.
  • the memory 150 may store a reconstructed block or picture output from the filter module 145 , and the stored reconstructed block or picture may be supplied to the prediction module 110 when performing inter prediction.
  • FIG. 2 is a block diagram illustrating a decoding apparatus according to an exemplary embodiment of the present invention.
  • the decoding apparatus may include an entropy decoding module 210, a rearrangement module 215, a dequantization module 220, an inverse transform module 225, a prediction module 230, a filter module 235, and a memory 240.
  • the input bitstream may be decoded according to the process of the encoding apparatus in reverse.
  • the entropy decoding module 210 may perform entropy decoding according to a reverse process of the entropy encoding process by the entropy encoding module of the encoding apparatus.
  • the same VLC table as used for entropy encoding in the encoding apparatus may be also used for the entropy decoding module to perform entropy decoding.
  • information for generating a prediction block may be provided to the prediction module 230 , and a residual value entropy-decoded by the entropy decoding module may be input to the rearrangement module 215 .
  • the entropy decoding module 210 may also change a codeword allocation table using the counter method or direct swapping, like the entropy encoding module, and perform entropy decoding based on the changed codeword allocation table.
  • the entropy decoding module 210 may decode information about intra prediction and inter prediction performed in the encoding apparatus. As described above, when the encoding apparatus has a constraint in performing intra prediction and inter prediction, for example, when a neighboring prediction mode is unavailable, the entropy decoding module may perform entropy decoding in view of such a constraint to obtain information about intra prediction and inter prediction of a current block.
  • the rearrangement module 215 may rearrange the bitstream entropy-decoded by the entropy decoding module 210 on the basis of the rearrangement method of the encoding module.
  • the rearrangement module 215 may reconstruct and rearrange a 1D vector of coefficients into a 2D block of coefficients.
  • the rearrangement module 215 may be supplied with information about coefficient scanning performed by the encoding module and perform rearrangement in reverse scanning order to that of the encoding module.
  • the dequantization module 220 may perform dequantization on the basis of a quantization parameter supplied from the encoding apparatus and the rearranged coefficients of the block.
  • the inverse transform module 225 may perform inverse DCT and inverse DST on a result of quantization performed by the encoding apparatus in response to DCT and DST performed by the transform module of the encoding apparatus. Inverse transform may be performed on the basis of a transfer unit determined by the encoding apparatus.
  • the transform module of the encoding apparatus may selectively perform DCT and DST depending on a plurality of information elements, such as a prediction method, a size of a current block and a prediction direction, and the inverse transform module 225 of the decoding apparatus may perform inverse transform on the basis of information on transform performed by the transform module of the encoding apparatus.
  • Transform may be performed based on a CU, instead of a TU.
  • the prediction module 230 may generate a prediction block based on information on generation of the prediction block provided from the entropy decoding module 210 and information on a previously decoded block or picture provided by the memory 240 .
  • intra prediction for the PU may be performed based on left pixels, an upper-left pixel and upper pixels of the PU.
  • intra prediction may be performed using reference pixels based on the TU.
  • intra prediction using N×N partitioning may be used only for a minimum CU.
  • the prediction module 230 may include a PU determination module, an inter prediction module and an intra prediction module.
  • the PU determination module may receive a variety of information, such as PU information input from the entropy decoding module, prediction mode information on an intra prediction method and motion prediction-related information on an inter prediction method, may distinguish a PU in a current CU, and may determine which of the inter prediction and the intra prediction is performed on the PU.
  • the inter prediction module may perform inter prediction on a current PU on the basis of information included in at least one picture among previous and subsequent pictures of a current picture including the current PU, using information necessary for inter prediction for the current PU supplied from the video encoding apparatus.
  • in order to perform inter prediction, it may be determined whether a motion prediction method for the PU included in the CU is a skip mode, a merge mode or an AMVP mode.
  • the intra prediction module may generate a prediction block on the basis of information on a pixel in a current picture.
  • intra prediction may be performed based on intra prediction mode information on the PU supplied from the video encoding apparatus.
  • the intra prediction module may include an AIS filter, a reference pixel interpolation module, and a DC filter.
  • the AIS filter performs filtering on reference pixels of a current block, and whether to apply the AIS filter may be determined depending on a prediction mode for the current PU.
  • AIS filtering may be performed on the reference pixels of the current block using the prediction mode for the PU and information on the AIS filter supplied from the video encoding apparatus.
  • when the prediction mode of the current block is a mode that does not involve AIS filtering, the AIS filter may not be applied.
  • the prediction block may be additionally filtered along with the reference pixels.
  • the reference pixel interpolation module may generate reference pixels in a pixel unit smaller than an integer pixel unit by interpolating the reference pixels.
  • when the prediction mode of the current PU is a prediction mode of generating a prediction block without interpolating the reference pixels, the reference pixels may not be interpolated.
  • the DC filter may generate a prediction block through filtering when the prediction mode of the current block is the DC mode.
  • the reconstructed block or picture may be provided to the filter module 235 .
  • the filter module 235 includes a deblocking filter, an offset correction module, and an ALF.
  • the deblocking filter may be provided from the encoding apparatus with information on whether the deblocking filter is applied to the block or picture, and information on which of a strong filter and a weak filter is applied if the deblocking filter is used.
  • the deblocking filter of the decoding apparatus may be provided with information on the deblocking filter from the encoding apparatus and may perform deblocking filtering on the block in the decoding apparatus.
  • in the decoding apparatus, vertical deblocking filtering and horizontal deblocking filtering are performed, and at least one of vertical deblocking filtering and horizontal deblocking filtering may be performed first on an overlapping region. Whichever of vertical deblocking filtering and horizontal deblocking filtering has not yet been performed may then be performed on the region in which vertical deblocking filtering and horizontal deblocking filtering overlap. This deblocking filtering process may enable parallel processing of deblocking filtering.
  • the offset correction module may perform offset correction on the reconstructed picture on the basis of an offset correction type and offset value information applied to the picture in encoding.
  • the ALF may perform filtering based on a result of comparing the filtered reconstructed picture and the original picture.
  • the ALF may be applied to a CU on the basis of information on whether the ALF is applied and ALF coefficient information provided from the encoding apparatus.
  • the ALF information may be included and provided in a specific parameter set.
  • the memory 240 may store the reconstructed picture or block to be used as a reference picture or a reference block and may provide the reconstructed picture to an output module.
  • methods of encoding/decoding a quantization parameter to be illustrated in FIGS. 3 to 6 according to exemplary embodiments of the present invention may be realized in accordance with functions of the modules of the encoding apparatus and the decoding apparatus described above in FIGS. 1 and 2, and such realizations fall within the scope of the present invention.
  • syntax elements used in exemplary embodiments of the present invention and definitions of the syntax elements will be illustrated.
  • the syntax elements and the definitions thereof are provided only for illustrative purposes, and other syntax elements and definitions thereof may be used to express equivalent meanings in different manners without departing from the nature of the present invention.
  • a sequence header may be used to refer to header information for decoding a sequence, which includes a sequence parameter set (SPS), and a picture header may be used to refer to header information for decoding a picture, which includes a picture parameter set (PPS).
  • cu_qp_delta_enable_flag is a syntax element determining whether to change a quantization parameter in a CU layer.
  • cu_qp_delta_enable_flag may be termed “CU quantization parameter change enabling information.”
  • pic_init_qp_minus26 is a syntax element which is included in a PPS and includes basic quantization parameter value information of a slice referring to the PPS.
  • a quantization parameter value of each slice, that is, an initial quantization parameter value of a current slice, may be derived by obtaining a basic quantization parameter value from the syntax element pic_init_qp_minus26 and then adding the changed slice quantization parameter value to, or subtracting it from, the basic quantization parameter value, additionally using the quantization parameter difference of each slice, slice_qp_delta.
  • a quantization parameter value may be changed for each CU using a syntax element cu_qp_delta, which will be described.
  • pic_init_qp_minus26 may be termed “basic quantization parameter information.”
  • max_cu_qp_delta_depth is a syntax element specifying the depth of the maximum CU that allows change of a quantization parameter value.
  • Log2MaxCUSize is a syntax element specifying the size of the maximum CU.
  • a variable log2MinCUDQPSize specifies the size of the minimum CU that allows change of a quantization parameter value and is derived by Equation 1 from the max_cu_qp_delta_depth and Log2MaxCUSize values. For example, assuming Log2MaxCUSize is 6 (a 64×64 CU) and max_cu_qp_delta_depth is 2, log2MinCUDQPSize would be 4, that is, quantization parameters could be changed down to 16×16 CUs.
  • max_cu_qp_delta_depth may be termed “maximum quantization parameter changeable depth information.”
  • log2MinCUDQPSize = Log2MaxCUSize - max_cu_qp_delta_depth <Equation 1>
  • slice_qp_delta may be used to set up a quantization parameter value of each slice.
  • An initial quantization parameter value (SliceQPY) of a slice is set up by Equation 2.
  • pic_init_qp_minus26 is a basic quantization parameter value of a slice as described above, and slice_qp_delta may be defined as a variation in quantization parameter value of each slice for deriving an initial quantization parameter value of the slice.
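  • Equation 2 is referenced above but is not reproduced in this text. From the definitions of pic_init_qp_minus26 and slice_qp_delta, and the worked example of FIG. 3 (where a basic value of 26 plus a slice_qp_delta of 1 yields 27), the derivation is presumably SliceQPY = 26 + pic_init_qp_minus26 + slice_qp_delta; this reconstruction is an assumption rather than a quotation from the patent.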
  • An initial quantization parameter value of a slice may be a quantization parameter value that the slice first has.
  • slice_qp_delta may be termed “changed slice quantization parameter information.”
  • a syntax element cu_qp_delta may change a quantization parameter value of a quantization group.
  • a quantization group may be a unit having the same quantization parameter value.
  • when a CU split flag (split_coding_unit_flag) of a current CU is 0 and the size of the CU (log2CUSize) is the same as or larger than the size of the minimum CU (log2MinCUDQPSize) which allows change of the quantization parameter value,
  • a quantization group includes the current CU only.
  • otherwise, a quantization group includes all CUs partitioned from the current CU.
  • a quantization parameter QPY of a current CU may be derived based on a previous quantization parameter QPY, PREV and cu_qp_delta. For instance, the quantization parameter of the current CU may be derived by Equation 3.
  • QPY = ((QPY,PREV + cu_qp_delta + 52 + 2*QpBdOffsetY) % (52 + QpBdOffsetY)) - QpBdOffsetY <Equation 3>
  • the previous quantization parameter QPY, PREV is a variable used to derive the quantization parameter value of the current CU, which may be derived from a left neighboring quantization group of the current CU in a current slice. If the left neighboring quantization group is unavailable, the previous quantization parameter may be a quantization parameter of a group decoded right before.
  • a previous quantization parameter QPY, PREV of a first quantization group in each slice may be an initial quantization parameter value of the slice.
  • Bit depth offset information QpBdOffsetY for deriving a quantization parameter may be derived from a syntax element bit_depth_luma_minus8 included in a sequence header. Bit depth offset information for a quantization parameter may be set by Equation 4.
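  • Equation 4 is referenced above but is not reproduced in this text. The sketch below assumes the conventional relation QpBdOffsetY = 6 * bit_depth_luma_minus8 for Equation 4 (an assumption, not quoted from the patent) and applies Equation 3 to derive the quantization parameter of a current CU; the function names are illustrative.

```python
# Hedged sketch: CU quantization parameter derivation per Equation 3 above.
# The form of Equation 4 (QpBdOffsetY) is an assumed conventional relation,
# not quoted from the patent text.

def qp_bd_offset_y(bit_depth_luma_minus8):
    return 6 * bit_depth_luma_minus8  # assumed form of Equation 4

def derive_cu_qp(qp_y_prev, cu_qp_delta, bit_depth_luma_minus8=0):
    offset = qp_bd_offset_y(bit_depth_luma_minus8)
    # Equation 3: wrap the updated value into the valid range, then remove the offset.
    return ((qp_y_prev + cu_qp_delta + 52 + 2 * offset) % (52 + offset)) - offset

# Example from FIG. 3: previous QP 27 and cu_qp_delta 3 give QP 30 for 8-bit video.
assert derive_cu_qp(27, 3) == 30
```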
  • a term “CU” used to represent a change of quantization parameter value in a CU layer may refer to a CU included in one quantization group sharing one quantization parameter value for convenience.
  • a term “changed CU quantization parameter information” may be also used to express the same meaning.
  • Table 1 illustrates a method of setting up a quantization parameter value according to an exemplary embodiment of the present invention.
  • a sequence header may include information on whether to additionally change a quantization parameter in a CU layer through CU quantization parameter change enabling information (cu_qp_delta_enabled_flag) as a syntax element.
  • a picture header may include a basic quantization parameter (pic_init_qp_minus26) value as a syntax element which is used to derive initial quantization parameter value information on a slice referring to the picture header.
  • the picture header may include information on a depth of a maximum CU that allows change of a quantization parameter value through maximum quantization parameter changeable depth information (max_cu_qp_delta_depth) as a syntax element.
  • a slice header may include a changed slice quantization parameter (slice_qp_delta) value; a basic quantization parameter value is derived from the syntax element pic_init_qp_minus26 transmitted through the picture header, and initial quantization parameter value information applied to a current slice is derived by further using slice_qp_delta.
  • a quantization parameter value for each slice is referred to as a basic quantization parameter value.
  • a quantization parameter value of a current CU may be changed based on a changed CU quantization parameter (cu_qp_delta) value.
  • a quantization parameter value used to quantize a current slice ranges from 0 to 51, without being limited thereto.
  • a method of encoding information on a quantization parameter range available for a current slice may be used to encode quantization parameter information on a CU included in the current slice.
  • the information on the quantization parameter range available for the CU included in the slice may be defined as a syntax element qp_range and be included in the picture header or slice header.
  • the syntax element qp_range may be expressed as an independent syntax element or in combination with another syntax element. In the following embodiments, the syntax element qp_range may be expressed as an independent syntax element for convenience.
  • a basic quantization parameter value to be used for the slice referring to the picture header may be transmitted based on a value of the syntax element pic_init_qp_minus26 included in a corresponding picture.
  • a qp_range value included in the picture header is a value to be subtracted from or added to the initial quantization parameter value of the slice derived by Equation 2 in a slice unit and may define a range of a quantization parameter value that the CU included in the slice has.
  • a quantization parameter limit range for the CU included in the slice referring to the picture header may be set to be from a value obtained by subtracting the qp_range value as quantization parameter range information from the initial quantization parameter of the slice to a value obtained by adding the qp_range value to the initial quantization parameter of the slice.
  • the initial quantization parameter value of the slice may be a previous quantization parameter value of a first CU of the slice in decoding order.
  • a quantization parameter value of a current CU may be derived by subtracting a changed CU quantization parameter value (cu_qp_delta) of the current CU from a previous quantization parameter value or adding the changed CU quantization parameter value to the previous quantization parameter value.
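  • As a concrete illustration of the derivation described above, the sketch below computes the quantization parameter limit range of a slice from its initial quantization parameter and qp_range, and checks that a CU quantization parameter obtained from a previous value and cu_qp_delta stays inside that range; the function and variable names are illustrative assumptions, not syntax from the patent.

```python
def qp_limit_range(slice_init_qp, qp_range):
    """Limit range for CUs in the slice: [initial - qp_range, initial + qp_range]."""
    return slice_init_qp - qp_range, slice_init_qp + qp_range

def cu_qp(prev_qp, cu_qp_delta, limit_range):
    qp = prev_qp + cu_qp_delta
    lo, hi = limit_range
    assert lo <= qp <= hi, "cu_qp_delta must keep the QP inside the limit range"
    return qp

# FIG. 3, first slice: initial QP 27 and qp_range 3, so CUs may use QP 24 to 30.
rng = qp_limit_range(27, 3)   # (24, 30)
qp1 = cu_qp(27, 3, rng)       # first CU: 30
qp2 = cu_qp(qp1, -2, rng)     # second CU: 28
```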
  • a VLC table used for encoding/decoding a cu_qp_delta value may be changed.
  • suppose, for example, that the CU included in the slice may have a quantization parameter value ranging from 23 to 29,
  • and that the current CU has a cu_qp_delta value of -2.
  • a previous quantization parameter of 29 means that, since the current CU included in the slice may have a quantization parameter ranging from 23 to 29, the cu_qp_delta value of the current CU cannot have a positive value.
  • a table for encoding a negative range of cu_qp_delta values only without encoding a positive range of cu_qp_delta values may be used to perform entropy encoding on cu_qp_delta.
  • a VLC table such as Table 2 may be used to perform entropy encoding.
  • a binary encoding method used in the VLC table of the present embodiment is provided for illustrative purposes only, and different binary codes may be used for other binary encoding methods.
  • a method of generating a VLC table using a previous quantization parameter and a quantization parameter limit range of a CU may be performed in the same manner for an operation of the decoding apparatus dequantizing a CU.
  • Entropy encoding that does not allocate a codeword to an unnecessary cu_qp_delta value may reduce unnecessary waste of codewords, resulting in enhancement of encoding/decoding efficiency.
  • VLC tables for entropy encoding may be stored in the encoding apparatus or decoding apparatus, among which a VLC table may be selectively used for encoding a changed CU quantization parameter of a current CU in view of a previous quantization parameter value and a quantization parameter range of the CU.
  • alternatively, a VLC table may be generated whenever a changed CU quantization parameter is encoded or decoded, instead of using a previously stored VLC table, to perform entropy encoding or entropy decoding.
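  • A minimal sketch of how such a table could be generated on the fly is shown below. It assumes an exponential-Golomb-style ordering of cu_qp_delta values (0, +1, -1, +2, -2, ...) restricted to the deltas that keep the CU quantization parameter inside the limit range; the ordering and the resulting mapping are illustrative assumptions consistent with the code numbers decoded in FIG. 3, not the patent's actual Tables 2 to 7.

```python
def allowed_deltas(prev_qp, qp_min, qp_max):
    """cu_qp_delta values that keep prev_qp + delta inside [qp_min, qp_max]."""
    return set(range(qp_min - prev_qp, qp_max - prev_qp + 1))

def build_vlc_table(prev_qp, qp_min, qp_max):
    """Map code numbers to cu_qp_delta values, shortest codeword first.

    Ordering assumption: 0, +1, -1, +2, -2, ... with deltas outside the
    limit range skipped, so no codeword is wasted on an impossible value.
    """
    allowed = allowed_deltas(prev_qp, qp_min, qp_max)
    order = [0]
    for magnitude in range(1, max(abs(d) for d in allowed) + 1):
        order += [magnitude, -magnitude]
    deltas = [d for d in order if d in allowed]
    return {code_number: delta for code_number, delta in enumerate(deltas)}

# First CU of the first slice in FIG. 3: previous QP 27, limit range 24 to 30.
# Code number 5 maps to cu_qp_delta 3, matching the description of Table 3.
table_3 = build_vlc_table(27, 24, 30)
assert table_3[5] == 3

# Second CU: previous QP 30, so only non-positive deltas remain (as for Table 4);
# code number 2 maps to cu_qp_delta -2.
table_4 = build_vlc_table(30, 24, 30)
assert table_4[2] == -2
```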
  • the quantization parameter range information may be transmitted via a slice header as described above. In this case, a different quantization parameter range may be set for each slice.
  • FIG. 3 is a diagram illustrating a method of decoding a quantization parameter of a CU according to an exemplary embodiment of the present invention.
  • FIG. 3 shows a method of decoding a quantization parameter value of a CU included in a slice when a basic quantization parameter is set to 26 and a quantization parameter range is set to 3 in a picture header.
  • the picture header 300 may include basic quantization parameter information (pic_init_qp_minus26) and quantization parameter range information (qp_range).
  • a basic quantization parameter value of the slice to be decoded may be obtained as 26 based on the basic quantization parameter information (pic_init_qp_minus26) in the picture header 300 , and an initial quantization parameter value of a first slice which is 27 may be derived by adding a changed slice quantization parameter information (slice_qp_delta) of 1, derived by decoding a first slice header 310 , to the basic quantization parameter value.
  • a quantization parameter limit range allowed for a CU included in the first slice may be from 24 to 30, obtained by adjusting the initial quantization parameter value of the first slice of 27 additively or subtractively by a quantization parameter range information (qp_range) of 3.
  • a VLC table as illustrated in Table 3 for decoding a changed CU quantization parameter (cu_qp_delta) may be used to decode a quantization parameter of the first CU 320 .
  • the VLC table may have cu_qp_delta values ranging from -3 to 3 so as to represent a quantization parameter limit range of 24 to 30.
  • a code number of 5 may be derived and a cu_qp_delta value of 3 corresponding to the code number of 5 may be derived by decoding based on the VLC table of Table 3.
  • the quantization parameter value of the first CU of 30 may be used as a previous quantization parameter value of the second CU 330 .
  • the previous quantization parameter value is 30, the CU included in the first slice may have a quantization parameter limit range of 24 to 30, and thus the cu_qp_delta value may not have a positive value.
  • a VLC table as illustrated in Table 4 may be used to decode a cu_qp_delta value of the second CU.
  • the second CU is decoded using the VLC table of Table 4, thereby decoding a code number of 2 and decoding a cu_qp_delta value of -2 corresponding to the code number of 2.
  • the quantization parameter value of the second CU 330 of 28 may be used as a previous quantization parameter value of the third CU 340 .
  • a new VLC table may be used to decode a cu_qp_delta value of the third CU 340. Since the CU included in the first slice may have a quantization parameter limit range of 24 to 30, when the quantization parameter value of the second CU 330 is 28, the third CU may not have a cu_qp_delta value of 3 or more. That is, the VLC table may not use code numbers and binary codes for representing cu_qp_delta values of 3 or greater.
  • Table 5 is a VLC table for the third CU 340 .
  • Table 5 does not include an unnecessary range of cu_qp_delta, and thus unnecessary code numbers and unnecessary binary codes may not be generated in the VLC table.
  • the quantization parameter of the third CU 340 may be decoded using Table 5.
  • a quantization parameter of a CU included in a second slice may be decoded in the same manner as used for the first slice.
  • an initial quantization parameter value of the second slice may be derived as 24.
  • a quantization parameter limit range allowed for a CU included in the second slice may be from 21 to 27 based on the initial quantization parameter value of the second slice of 24 and a value of 3 derived by decoding the qp_range value as the quantization parameter range information from the picture header.
  • the initial quantization parameter value of the second slice of 24 may be a previous quantization parameter of a first CU 360. Since the quantization parameter limit range of the CU included in the second slice is from 21 to 27, a VLC table having cu_qp_delta values ranging from -3 to 3 as illustrated in Table 6 may be used to decode a cu_qp_delta value of the first CU.
  • the cu_qp_delta value of the first CU of -3 is decoded and a quantization parameter value of the first CU 360 of 21 may be decoded.
  • a second CU 370 decoded subsequently to the first CU 360 may use the quantization parameter value of the first CU 360 of 21 as a previous quantization parameter value of the second CU 370 .
  • since the CU included in the second slice may have a quantization parameter limit range of 21 to 27, when the quantization parameter value of the first CU 360, used as the previous quantization parameter of the second CU 370, is 21, the second CU may not have a negative cu_qp_delta value.
  • a VLC table including no code number and no binary code for representing a negative range of cu_qp_delta values as illustrated in Table 7 may be used to decode the cu_qp_delta value of the second CU.
  • FIG. 3 illustrates the method of decoding the quantization parameter value only.
  • the same VLC table for encoding cu_qp_delta may be stored in both the encoding apparatus and the decoding apparatus, or a VLC table may be generated in the same manner in the encoding apparatus and the decoding apparatus, and thus the same quantization parameter value as used for encoding may be used to perform decoding.
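  • To tie the pieces together, the trace below reproduces the first slice of FIG. 3 end to end, from the header values to the quantization parameters of the first two CUs. The header values (basic 26, slice_qp_delta 1, qp_range 3) and the code numbers (5 and 2) come from the figure description; the delta ordering inside the table and the form of the slice-level derivation are the same illustrative assumptions used in the earlier sketches.

```python
# Header values taken from the FIG. 3 description.
pic_init_qp_minus26, slice_qp_delta, qp_range = 0, 1, 3

slice_qp = 26 + pic_init_qp_minus26 + slice_qp_delta        # 27 (assumed Equation 2 form)
qp_min, qp_max = slice_qp - qp_range, slice_qp + qp_range   # 24 to 30

def decode_cu_qp(prev_qp, code_number):
    # Assumed ordering 0, +1, -1, +2, -2, ... restricted to the limit range.
    order = [0]
    for magnitude in range(1, qp_max - qp_min + 1):
        order += [magnitude, -magnitude]
    deltas = [d for d in order if qp_min <= prev_qp + d <= qp_max]
    return prev_qp + deltas[code_number]

qp_first = decode_cu_qp(slice_qp, 5)    # code number 5 -> cu_qp_delta 3  -> QP 30
qp_second = decode_cu_qp(qp_first, 2)   # code number 2 -> cu_qp_delta -2 -> QP 28
print(qp_first, qp_second)              # 30 28
```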
  • FIG. 4 is a diagram illustrating a method of deriving a quantization parameter value of a CU according to an exemplary embodiment of the present invention.
  • while FIG. 3 shows a case in which quantization parameter range information (qp_range) is included as a syntax element in a picture header and a quantization parameter value limit range of a CU included in a slice is encoded and decoded accordingly,
  • FIG. 4 illustrates a case in which quantization parameter range information (qp_range) is included as a syntax element in a slice header and a quantization parameter value limit range of a CU included in a slice is encoded and decoded accordingly.
  • a basic quantization parameter value of 26 is decoded from a picture header 400 and a slice_qp_delta value included in a slice header 420 is decoded, thereby deriving an initial quantization parameter value of a current slice.
  • the initial quantization parameter value may be 27.
  • a qp_range value as quantization parameter range information is decoded from the slice header, thereby determining a quantization parameter limit range allowed for a CU included in the current slice.
  • when the decoded qp_range value is 1, the quantization parameter limit range of the CU included in the slice is from 26 to 28.
  • the derived initial quantization parameter value may be used as a previous quantization parameter value of a first CU 440 decoded first in the slice, and a VLC table for decoding cu_qp_delta of the first CU 440 may be determined based on the previous quantization parameter value and a quantization parameter limit range of the first CU 440 .
  • a VLC table having cu_qp_delta values ranging from -1 to 1 may be used for the first CU 440.
  • since a previous quantization parameter value used to decode the quantization parameter of the second CU 460 may be the quantization parameter value of the first CU 440, which is 28, and the quantization parameter limit range of the CU included in the current slice is from 26 to 28, a VLC table having cu_qp_delta values ranging from -2 to 0, without cu_qp_delta values in a positive range, may be used to decode the second CU 460.
  • FIG. 4 shows that the quantization parameter range information (qp_range value) is decoded from the slice header to determine the quantization parameter value of the CU included in the slice. That is, a quantization parameter limit range may be allocated to each slice.
  • exponential Golomb coding and a variety of binary coding methods may be used for binary coding of a VLC table for representing cu_qp_delta of a CU without departing from the scope of the present invention.
  • Tables 8 to 10 illustrate structures of syntax elements including quantization parameter information according to other exemplary embodiments of the present invention.
  • a sequence header may include information on whether to change a quantization parameter in a CU layer through a syntax element cu_qp_delta_enable_flag.
  • a picture header may include depth information on a maximum CU that allows change of a quantization parameter value through a syntax element max_cu_qp_delta_depth if it is determined to change the quantization parameter in the CU layer.
  • a slice header may transmit an initial quantization parameter value used for a current slice through a slice_qp value. That is, the picture header may not include a basic quantization parameter value of the slice, but the slice header may include the initial quantization parameter value of the slice.
  • the CU layer may include a syntax element cu_qp_delta for changing a quantization parameter value of the current CU.
  • Table 9 illustrates a structure of a syntax element including quantization parameter information according to an exemplary embodiment of the present invention.
  • a sequence header may include a new syntax element, cu_qp_delta_depth, coded as ue(v), that is, information on the depth of the CU layer at which a quantization parameter may be changed; a sketch interpreting this depth value follows the depth descriptions below.
  • ue(v) means that the syntax element is binarized using variable-length unsigned exponential Golomb coding.
  • a quantization parameter may be changed only in a slice or picture unit including a CU, not in a CU layer.
  • a quantization parameter may be changed only in a largest coding unit (LCU).
  • a quantization parameter may be changed in a CU that is one of CUs into which an LCU is partitioned in half vertically and horizontally, that is, a CU with a depth level increased by one in a quadtree.
  • a quantization parameter may be changed in a CU that is one of CUs into which an LCU is partitioned in quarter vertically and horizontally, that is, a CU with a depth level increased by two in a quadtree.
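  • A small sketch of how a decoder might interpret the depth value is given below; the 64x64 LCU size and the helper name min_qp_change_cu_size are assumptions made only for illustration, and the slice/picture-only case above would be signalled separately.

```python
# Hedged interpretation of cu_qp_delta_depth: depth 0 keeps the QP fixed within an LCU,
# depth 1 allows changes in its four half-size CUs, depth 2 in quarter-size CUs, and so on.
# The 64x64 LCU size is an assumption for illustration only.

def min_qp_change_cu_size(cu_qp_delta_depth: int, lcu_size: int = 64) -> int:
    """Smallest CU width/height at which cu_qp_delta may still be signalled."""
    return lcu_size >> cu_qp_delta_depth

for depth in range(3):
    print(depth, min_qp_change_cu_size(depth))  # 0 -> 64, 1 -> 32, 2 -> 16
```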
  • a picture header may include only the basic quantization parameter information on a slice, pic_init_qp_minus26, and not max_cu_qp_delta_depth, since the sequence header determines the size of the CU whose quantization parameter value may vary.
  • a slice header may include slice_qp_delta for deriving an initial quantization parameter value of a slice.
  • a CU may include cu_qp_delta for changing a quantization parameter value of the CU.
  • Table 10 illustrates a structure of a syntax element including quantization parameter information according to another exemplary embodiment of the present invention.
  • a sequence header may include a new syntax element, cu_qp_delta_depth, coded as ue(v), that is, information on the depth of the CU layer at which a quantization parameter may be changed.
  • ue(v) means that the syntax element is binarized using variable-length unsigned exponential Golomb coding.
  • a quantization parameter may be changed only in a slice or picture unit including a CU, not in a CU layer.
  • a quantization parameter may be changed only in a largest coding unit (LCU).
  • a quantization parameter may be changed in a CU that is one of CUs into which an LCU is partitioned in half vertically and horizontally, that is, a CU with a depth level increased by one in a quadtree.
  • a quantization parameter may be changed in a CU that is one of CUs into which an LCU is partitioned in quarter vertically and horizontally, that is, a CU with a depth level increased by two in a quadtree.
  • a slice header may directly include an initial quantization parameter value of a slice through slice_qp.
  • a picture header may not include information on a quantization parameter.
  • a CU layer may include cu_qp_delta for changing a quantization parameter value of the CU.
  • Tables 8 to 10 are provided to illustrate various embodiments of the present invention; the headers carrying the represented syntax elements are placed at particular locations in these examples but may be included at different locations without departing from the nature of the present invention, as the simplified signalling sketch below also illustrates.
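  • The sketch below models the Table 8-style signalling hierarchy: the sequence header gates CU-level quantization parameter changes, the picture header carries the maximum depth, the slice header carries the initial quantization parameter, and each CU carries cu_qp_delta. The dataclass layout and the cu_qp helper are illustrative assumptions, not the patent's bitstream syntax.

```python
# Hedged model of the Table 8-style header hierarchy; not the patent's bitstream syntax.

from dataclasses import dataclass

@dataclass
class SequenceHeader:
    cu_qp_delta_enable_flag: bool   # whether QP may change in the CU layer at all

@dataclass
class PictureHeader:
    max_cu_qp_delta_depth: int      # relevant only when the sequence-level flag is set

@dataclass
class SliceHeader:
    slice_qp: int                   # initial QP of the slice (signalled directly here)

@dataclass
class CodingUnit:
    depth: int
    cu_qp_delta: int = 0

def cu_qp(seq: SequenceHeader, pic: PictureHeader, cu: CodingUnit, prev_qp: int) -> int:
    """QP of the current CU: cu_qp_delta applies only where the headers allow it."""
    if seq.cu_qp_delta_enable_flag and cu.depth <= pic.max_cu_qp_delta_depth:
        return prev_qp + cu.cu_qp_delta
    return prev_qp

sl = SliceHeader(slice_qp=27)
print(cu_qp(SequenceHeader(True), PictureHeader(1), CodingUnit(depth=1, cu_qp_delta=2), sl.slice_qp))  # 29
```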
  • a quantization parameter range of a CU included in a picture or slice may be limited through a qp_range value encoded in at least one of a picture header and a slice header, and a quantization parameter value of the current CU may be represented using a VLC table considering a quantization parameter value of a previous CU and a quantization parameter limit range of the CU included in the slice.
  • FIG. 5 is a flowchart illustrating a method of decoding a quantization parameter according to an exemplary embodiment of the present invention.
  • a basic quantization parameter may be derived (S 500 ).
  • the basic quantization parameter may be included in a picture header.
  • a basic quantization parameter value of a slice referring to a current picture header may be derived based on a value of a syntax element, such as pic_init_qp_minus26, included in the picture header. If an initial quantization parameter value is derived directly from a slice header as in Table 10, S 500 may not be performed.
  • An initial quantization parameter may be derived (S 510 ).
  • the initial quantization parameter may be derived by adding a slice_qp_delta value included in a slice header to the basic quantization parameter derived from the picture header.
  • the initial quantization parameter of the slice may be derived by deriving a slice_qp value directly from the slice header.
  • a quantization parameter limit range may be derived (S 520 ).
  • the quantization parameter limit range allowed for a CU included in the current slice may be derived based on the initial quantization parameter of the slice and a value of quantization parameter range information (qp_range) as a syntax element representing a quantization parameter range of the CU included in the slice transmitted via the picture header or the slice header.
  • a VLC table may be determined based on a previous quantization parameter (S 530 ).
  • the previous quantization parameter may be used as a predictive value for deriving a quantization parameter of the current CU.
  • the VLC table for deriving the quantization parameter of the current CU may be selected or created on the basis of a previous quantization parameter value of the CU and the quantization parameter limit range of the CU derived based on the qp_range value as the quantization parameter range information on the CU.
  • the quantization parameter of the current CU may be derived based on the determined VLC table (S 540 ).
  • that is, a code number may be obtained by decoding a binary code, and a cu_qp_delta value corresponding to the code number may be derived from the VLC table obtained in S 530.
  • the method of decoding the quantization parameter may further include determining whether to change the quantization parameter of the CU based on quantization parameter change enabling information or depth information on a CU where a quantization parameter is changed, which are included in a sequence header; an end-to-end sketch of the FIG. 5 flow follows.
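  • The sketch below strings the FIG. 5 steps together under simplifying assumptions: the VLC-table step is reduced to checking that each decoded delta lies in the range a suitable table could express, and the function and parameter names are illustrative. The optional slice_qp argument reflects the Table 10 case in which S 500 is skipped.

```python
# Hedged end-to-end sketch of the FIG. 5 flow (S500-S540); names are illustrative.

def decode_slice_cu_qps(cu_qp_deltas, qp_range,
                        pic_init_qp_minus26=None, slice_qp_delta=None, slice_qp=None):
    if slice_qp is not None:                        # Table 10 style: S500 not performed
        initial_qp = slice_qp                       # S510: initial QP signalled directly
    else:
        basic_qp = 26 + pic_init_qp_minus26         # S500: basic QP from the picture header
        initial_qp = basic_qp + slice_qp_delta      # S510: initial QP of the slice
    qp_min = initial_qp - qp_range                  # S520: QP limit range of CUs in the slice
    qp_max = initial_qp + qp_range
    prev_qp, cu_qps = initial_qp, []
    for delta in cu_qp_deltas:
        # S530: the per-CU VLC table only needs deltas that stay inside [qp_min, qp_max]
        assert qp_min - prev_qp <= delta <= qp_max - prev_qp
        prev_qp += delta                            # S540: QP of the current CU
        cu_qps.append(prev_qp)
    return cu_qps

print(decode_slice_cu_qps([1, -2], qp_range=1, pic_init_qp_minus26=0, slice_qp_delta=1))  # [28, 26]
print(decode_slice_cu_qps([1, -2], qp_range=1, slice_qp=27))                              # [28, 26]
```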
  • FIG. 6 is a diagram illustrating a dequantization module according to an exemplary embodiment of the present invention.
  • the dequantization module may include a quantization parameter derivation module 600 , a VLC table determination module 620 , and a dequantization implementation module 640 .
  • the quantization parameter derivation module 600 may derive a quantization parameter limit range of a CU included in a current slice based on quantization parameter variables provided from the entropy decoding module, such as basic quantization parameter information, initial quantization parameter information and quantization parameter range information. Further, the quantization parameter derivation module 600 may derive a previous quantization parameter value for decoding a quantization parameter of the current CU based on dequantization results of CUs decoded before the current CU.
  • the foregoing deriving operations may be performed by the entropy decoding module, instead of the quantization parameter derivation module 600, without departing from the scope of the present invention.
  • the VLC table determination module 620 may determine a VLC table for decoding the quantization parameter of the current CU based on the quantization parameter range of the CU and the previous quantization parameter value derived by the quantization parameter derivation module 600 .
  • a VLC table may be stored in advance in the VLC table determination module 620 or created by the VLC table determination module 620 .
  • the foregoing determining operation may be performed by the entropy decoding module, instead of the VLC table determination module 620, without departing from the scope of the present invention.
  • the dequantization implementation module 640 may perform dequantization based on the quantization parameter value derived by the VLC table determination module 620, as outlined in the sketch below.
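  • A rough sketch of the dequantization step itself follows: each quantized coefficient is scaled by the quantization step that corresponds to the derived quantization parameter. The step-size formula, under which the step roughly doubles for every increase of 6 in QP, follows common AVC/HEVC practice and is an assumption here rather than a value taken from this description.

```python
# Hedged sketch of the dequantization implementation stage; the step-size formula is an
# AVC/HEVC-style assumption, not a value specified by this description.

def quant_step(qp: int) -> float:
    """Approximate quantization step size; doubles for every increase of 6 in QP."""
    return 0.625 * (2.0 ** (qp / 6.0))

def dequantize(levels, qp):
    """Scale quantized coefficient levels back using the QP derived for the current CU."""
    step = quant_step(qp)
    return [level * step for level in levels]

print(dequantize([4, -2, 1, 0], qp=28))  # reconstructed coefficient magnitudes for QP 28
```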

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2011-0114687 2011-11-04
KR1020110114687A KR101965388B1 (ko) 2011-11-04 2011-11-04 Method for encoding/decoding a quantization coefficient, and apparatus using same
PCT/KR2012/008999 WO2013066026A1 (ko) 2011-11-04 2012-10-30 Method for encoding/decoding a quantization coefficient, and apparatus using same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/008999 A-371-Of-International WO2013066026A1 (ko) 2011-11-04 2012-10-30 Method for encoding/decoding a quantization coefficient, and apparatus using same

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/224,032 Continuation US11202091B2 (en) 2011-11-04 2018-12-18 Method for encoding/decoding a quantization coefficient, and apparatus using same

Publications (1)

Publication Number Publication Date
US20140348227A1 true US20140348227A1 (en) 2014-11-27

Family

ID=48192325

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/356,218 Abandoned US20140348227A1 (en) 2011-11-04 2012-10-30 Method for encoding/decoding a quantization coefficient, and apparatus using same
US16/224,032 Active US11202091B2 (en) 2011-11-04 2018-12-18 Method for encoding/decoding a quantization coefficient, and apparatus using same

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/224,032 Active US11202091B2 (en) 2011-11-04 2018-12-18 Method for encoding/decoding a quantization coefficient, and apparatus using same

Country Status (3)

Country Link
US (2) US20140348227A1 (ko)
KR (1) KR101965388B1 (ko)
WO (1) WO2013066026A1 (ko)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180058224A (ko) * 2015-10-22 2018-05-31 LG Electronics Inc. Method and apparatus for modeling-based image decoding in an image coding system
CN113115036A (zh) * 2015-11-24 2021-07-13 Samsung Electronics Co., Ltd. Video decoding method and device, and encoding method and device therefor

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6301393B1 (en) * 2000-01-21 2001-10-09 Eastman Kodak Company Using a residual image formed from a clipped limited color gamut digital image to represent an extended color gamut digital image
US20100086028A1 (en) * 2007-04-16 2010-04-08 Kabushiki Kaisha Toshiba Video encoding and decoding method and apparatus
US20110200115A1 (en) * 2008-06-10 2011-08-18 Yoshiteru Hayashi Image decoding apparatus and image coding apparatus

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6606418B2 (en) 2001-01-16 2003-08-12 International Business Machines Corporation Enhanced compression of documents
BRPI0506846B1 (pt) * 2004-01-30 2018-12-11 Thomson Licensing codificador de vídeo e método para codificar quadros de imagens divisíveis em macro-blocos
KR100930485B1 (ko) * 2007-11-15 2009-12-09 C&S Technology Co., Ltd. Adaptive bit-rate control method for a video encoder using the amount of generated bits
KR100963322B1 (ko) * 2008-07-02 2010-06-11 Seoul National University Industry-Academic Cooperation Foundation Adaptive frame bit-rate control method for real-time H.264
KR20110071231A (ko) 2009-12-21 2011-06-29 Mtekvision Co., Ltd. Encoding method, decoding method, and apparatus
KR20120016980A (ko) * 2010-08-17 2012-02-27 Electronics and Telecommunications Research Institute Video encoding method and apparatus, and decoding method and apparatus
KR101965388B1 (ko) 2011-11-04 2019-04-04 Goldpeak Innovations Inc. Method for encoding/decoding a quantization coefficient, and apparatus using same

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11523122B2 (en) * 2011-03-03 2022-12-06 Sun Patent Trust Method of encoding an image into a coded image, method of decoding a coded image, and apparatuses thereof
US20210195209A1 (en) * 2011-03-03 2021-06-24 Sun Patent Trust Method of encoding an image into a coded image, method of decoding a coded image, and apparatuses thereof
US11202091B2 (en) 2011-11-04 2021-12-14 Facebook, Inc. Method for encoding/decoding a quantization coefficient, and apparatus using same
US20150350646A1 (en) * 2014-05-28 2015-12-03 Apple Inc. Adaptive syntax grouping and compression in video data
US10715833B2 (en) * 2014-05-28 2020-07-14 Apple Inc. Adaptive syntax grouping and compression in video data using a default value and an exception value
US20160119619A1 (en) * 2014-10-27 2016-04-28 Ati Technologies Ulc Method and apparatus for encoding instantaneous decoder refresh units
US11218714B2 (en) * 2014-11-28 2022-01-04 Canon Kabushiki Kaisha Image coding apparatus and image decoding apparatus for coding and decoding a moving image by replacing pixels with a limited number of colors on a palette table
US20170332091A1 (en) * 2014-11-28 2017-11-16 Canon Kabushiki Kaisha Image coding apparatus, image coding method, storage medium, image decoding apparatus, image decoding method, and storage medium
US11310514B2 (en) * 2014-12-22 2022-04-19 Samsung Electronics Co., Ltd. Encoding method and apparatus using non-encoding region, block-based encoding region, and pixel-based encoding region
US10448056B2 (en) * 2016-07-15 2019-10-15 Qualcomm Incorporated Signaling of quantization information in non-quadtree-only partitioned video coding
US10681351B2 (en) * 2016-07-28 2020-06-09 Mediatek Inc. Methods and apparatuses of reference quantization parameter derivation in video processing system
US11272175B2 (en) * 2016-10-05 2022-03-08 Telefonaktiebolaget Lm Ericsson (Publ) Deringing filter for video coding
CN109792541A (zh) * 2016-10-05 2019-05-21 Telefonaktiebolaget LM Ericsson (publ) Deringing filter for video coding
JP2022502950A (ja) 2018-09-30 2022-01-11 Tencent America LLC Method and apparatus for video encoding and decoding, and computer program
CN112514385A (zh) * 2018-09-30 2021-03-16 Tencent America LLC Method and apparatus for video coding
US11350094B2 (en) 2018-09-30 2022-05-31 Tencent America LLC Method and apparatus for video coding
JP7222080B2 (ja) 2018-09-30 2023-02-14 Tencent America LLC Method and apparatus for video encoding and decoding, and computer program
US11700374B2 (en) 2018-09-30 2023-07-11 Tencent America LLC Method and apparatus for video coding
JP7427814B2 (ja) 2018-09-30 2024-02-05 Tencent America LLC Method and apparatus for video encoding and decoding, and computer program
US11968365B2 (en) 2018-09-30 2024-04-23 Tencent America LLC Adjusting a slice level quantization parameter (QP) based on a maximum QP value
US20220182630A1 (en) * 2019-08-24 2022-06-09 Beijing Bytedance Network Technology Co., Ltd. Residual coefficients coding
WO2022135508A1 (en) * 2020-12-23 2022-06-30 Beijing Bytedance Network Technology Co., Ltd. Video decoder initialization information constraints

Also Published As

Publication number Publication date
KR101965388B1 (ko) 2019-04-04
KR20130049587A (ko) 2013-05-14
WO2013066026A1 (ko) 2013-05-10
US11202091B2 (en) 2021-12-14
US20190149835A1 (en) 2019-05-16

Similar Documents

Publication Publication Date Title
US11202091B2 (en) Method for encoding/decoding a quantization coefficient, and apparatus using same
US10419759B2 (en) Adaptive transform method based on in-screen prediction and apparatus using the method
US10863173B2 (en) Intra prediction mode mapping method and device using the method
US11743488B2 (en) Method for setting motion vector list and apparatus using same
US11516461B2 (en) Method for encoding/decoding an intra-picture prediction mode using two intra-prediction mode candidate, and apparatus using such a method
US20180048911A1 (en) Intra prediction method of chrominance block using luminance sample, and apparatus using same
US10863196B2 (en) Method for partitioning block and decoding device
US11985312B2 (en) Image encoding/decoding method and device using intra prediction
US20230095972A1 (en) Method and apparatus for processing video signal

Legal Events

Date Code Title Description
AS Assignment

Owner name: PANTECH INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PANTECH CO., LTD.;REEL/FRAME:039428/0268

Effective date: 20160706

AS Assignment

Owner name: GOLDPEAK INNOVATIONS INC, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PANTECH INC;REEL/FRAME:042098/0451

Effective date: 20161031

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: FACEBOOK, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOLDPEAK INNOVATIONS INC;REEL/FRAME:052472/0835

Effective date: 20200203

AS Assignment

Owner name: META PLATFORMS, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:FACEBOOK, INC.;REEL/FRAME:058962/0497

Effective date: 20211028