WO2012138032A1 - Method for encoding and decoding image information - Google Patents


Publication number
WO2012138032A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
block
mode
residual
current
Prior art date
Application number
PCT/KR2011/009075
Other languages
English (en)
Korean (ko)
Inventor
박준영
박승욱
임재현
김정선
최영희
전병문
전용준
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사
Priority to KR1020137020513A priority Critical patent/KR20140018873A/ko
Publication of WO2012138032A1 publication Critical patent/WO2012138032A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes

Definitions

  • the present invention relates to a technique for compressing video information, and more particularly, to a method for reducing the amount of transmitted information by considering the redundancy of information transmission for modes applied in inter prediction.
  • High-efficiency image compression technology can be used to effectively transmit, store, and reproduce high-resolution, high-quality video information.
  • inter prediction and intra prediction may be used.
  • a pixel value of a current picture is predicted by referring to information of another picture
  • a pixel value is predicted by using a correlation between pixels in the same picture.
  • CABAC Context-based Adaptive Binary Arithmetic Coding
  • CAVLC Context-based Adaptive Variable Length Coding
  • CABAC selects a probability model for each syntax element according to the context, changes the probability of the probability model through internal statistics, and performs compression using arithmetic coding.
  • when CAVLC is used as the entropy coding mode, encoding is performed on each syntax element by using a predetermined VLC (Variable Length Coding) table.
  • An object of the present invention is to provide a method and apparatus for reducing the amount of transmitted information by considering the redundancy of information transmission for each mode of inter prediction.
  • Another technical problem of the present invention is to provide a syntax structure that can reduce redundantly transmitted information.
  • Another technical problem of the present invention is to provide a method and apparatus for performing entropy encoding and entropy decoding in consideration of duplication of information without changing a syntax structure.
  • Another technical problem of the present invention is to provide a method and apparatus for using an index mapping table and an inverse index mapping table in consideration of duplication of information.
  • An embodiment of the present invention provides an image information encoding method, comprising: performing prediction on a current prediction unit, entropy encoding information on the current prediction unit, and transmitting the entropy encoded information.
  • when the current prediction unit is a 2N×2N block having a depth of 0 and a merge mode is applied to the current prediction unit, it may be assumed that a residual signal for the current prediction unit exists.
  • residual information for the current prediction unit may be transmitted only when the current prediction unit is not a 2N×2N block having a depth of 0, or when the prediction mode applied to the current prediction unit is not the merge mode.
  • the index mapping table assigns a code number to a pattern formed by the information on the existence of residual information for the luma transform block and the information on the existence of residual information for the chroma transform block.
  • the assigned code number may be mapped to a codeword corresponding to the assigned code number on a variable length coding (VLC) table.
  • VLC variable length coding
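As a concrete illustration of the encoder-side mapping described above, the sketch below assigns a code number to each pattern of residual-existence flags via an index mapping table, then maps the code number to a codeword. The table contents are illustrative assumptions (frequent patterns get small code numbers), and 0th-order Exp-Golomb codes stand in for the VLC table; the patent does not specify these exact values.

```python
# Hypothetical index mapping table:
# (cbf_luma, cbf_cb, cbf_cr) pattern -> code number (frequent patterns first).
INDEX_MAPPING = {
    (1, 1, 1): 0,
    (1, 0, 0): 1,
    (0, 1, 1): 2,
    (1, 1, 0): 3,
    (1, 0, 1): 4,
    (0, 1, 0): 5,
    (0, 0, 1): 6,
    (0, 0, 0): 7,
}

def exp_golomb(code_number: int) -> str:
    """Map a code number to a codeword (0th-order Exp-Golomb as a stand-in VLC)."""
    value = code_number + 1
    bits = bin(value)[2:]
    return "0" * (len(bits) - 1) + bits  # zero prefix, then the binary value

def encode_cbf_pattern(cbf_luma: int, cbf_cb: int, cbf_cr: int) -> str:
    """Pattern -> code number (index mapping table) -> codeword (VLC table)."""
    code_number = INDEX_MAPPING[(cbf_luma, cbf_cb, cbf_cr)]
    return exp_golomb(code_number)
```

With this illustrative table, the all-ones pattern gets the shortest codeword, which is the point of the index mapping: adapt codeword length to pattern frequency.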
  • Another embodiment of the present invention is a method of decoding image information, comprising: receiving encoded information, entropy decoding the received information, and restoring an image based on the entropy decoded information.
  • a current block is a 2N ⁇ 2N block having a depth of 0, and a merge mode is applied to the current block, it may be estimated that a residual signal for the current block exists.
  • the mode of entropy coding applied to the current block is CABAC (Context-Adaptive Binary Arithmetic Coding)
  • the current block is a 2Nx2N block having a depth of 0
  • the prediction mode applied to the current block is a merge mode.
  • in the entropy decoding step, the existence of residual information for the current block may be inferred.
  • the inverse index mapping table may map the code number corresponding to the received codeword to the pattern formed by the information about the existence of residual information for the luma transform block and the information about the existence of residual information for the chroma transform block.
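The decoder-side counterpart can be sketched the same way: parse the received codeword back to a code number, then look up the flag pattern in an inverse index mapping table. As above, the table values and the use of 0th-order Exp-Golomb are illustrative assumptions.

```python
# Hypothetical inverse index mapping table:
# code number -> (cbf_luma, cbf_cb, cbf_cr) pattern.
INVERSE_INDEX_MAPPING = {
    0: (1, 1, 1), 1: (1, 0, 0), 2: (0, 1, 1), 3: (1, 1, 0),
    4: (1, 0, 1), 5: (0, 1, 0), 6: (0, 0, 1), 7: (0, 0, 0),
}

def decode_exp_golomb(codeword: str) -> int:
    """Parse a 0th-order Exp-Golomb codeword back to its code number."""
    leading_zeros = len(codeword) - len(codeword.lstrip("0"))
    return int(codeword[leading_zeros:], 2) - 1

def decode_cbf_pattern(codeword: str):
    """Codeword -> code number -> flag pattern via the inverse mapping table."""
    return INVERSE_INDEX_MAPPING[decode_exp_golomb(codeword)]
```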
  • the amount of transmission information can be reduced by considering the redundancy of information transmission for each mode of inter prediction.
  • the amount of information transmission can be reduced by using a syntax in consideration of the redundancy of information transmission for each mode of inter-screen prediction.
  • the amount of transmission information can be reduced by considering the redundancy of information when performing entropy encoding and entropy decoding.
  • FIG. 1 is a block diagram schematically illustrating an image encoding apparatus (encoder) according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically illustrating an image decoder according to an embodiment of the present invention.
  • FIG. 3 schematically illustrates an example of candidate blocks to be merged with a current block in merge mode.
  • FIG. 4 schematically illustrates another example of candidate blocks to be merged with a current block in merge mode.
  • FIG. 5 schematically illustrates an example of an entropy encoding method using a VLC table.
  • FIG. 6 schematically illustrates an example of an entropy decoding method using a VLC table.
  • FIG. 7 is a flowchart schematically illustrating an operation of an encoder in a system to which the present invention is applied.
  • FIG. 8 is a flowchart schematically illustrating an operation of a decoder in a system to which the present invention is applied.
  • each component in the drawings described in the present invention is shown independently for convenience of describing its distinct function in the image encoding / decoding apparatus; this does not mean that each component must be implemented as separate hardware or separate software.
  • two or more of each configuration may be combined to form one configuration, or one configuration may be divided into a plurality of configurations.
  • Embodiments in which each configuration is integrated and / or separated are also included in the scope of the present invention without departing from the spirit of the present invention.
  • the image encoding apparatus 100 may include a picture splitter 105, a predictor 110, a transformer 115, a quantizer 120, a realigner 125, an entropy encoder 130, an inverse quantization unit 135, an inverse transform unit 140, a filter unit 145, and a memory 150.
  • the picture dividing unit 105 may divide the input picture into at least one processing unit.
  • the processing unit may be a prediction unit (hereinafter referred to as a PU), a transform unit (hereinafter referred to as a TU), or a coding unit (hereinafter referred to as a CU).
  • the predictor 110 includes an inter prediction unit for performing inter prediction and an intra prediction unit for performing intra prediction.
  • the prediction unit 110 generates a prediction block by performing prediction on the processing unit of the picture in the picture division unit 105.
  • the processing unit of the picture in the prediction unit 110 may be a CU, a TU, or a PU.
  • the processing unit in which the prediction is performed may differ from the processing unit in which the prediction method and the details are determined.
  • the method of prediction and the prediction mode may be determined in units of PUs, and the prediction may be performed in units of TUs.
  • a prediction block may be generated by performing prediction based on information of at least one picture of a previous picture and / or a subsequent picture of the current picture.
  • a prediction block may be generated by performing prediction based on pixel information in a current picture.
  • a reference picture may be selected for a PU, and a reference block having the same size as the PU may be selected in integer pixel samples. Subsequently, a prediction block is generated that minimizes the residual with respect to the current PU and minimizes the magnitude of the motion vector.
  • a skip mode, a merge mode, a motion vector prediction (MVP), and the like can be used.
  • the prediction block may be generated in sub-integer sample units such as 1/2 pixel sample unit and 1/4 pixel sample unit.
  • the motion vector may also be expressed in units of integer pixels or less.
  • the luminance pixel may be expressed in units of 1/4 pixels
  • the chrominance pixel may be expressed in units of 1/8 pixels.
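The sub-pel precisions above (quarter-pel for luma, eighth-pel for chroma) mean that a stored motion-vector component mixes an integer-pel part with a fractional part. The helper below is an illustrative sketch of that split; the function name and calling convention are assumptions, not part of the patent.

```python
def split_mv(mv: int, frac_bits: int):
    """Split a motion-vector component stored in sub-pel units into
    (integer-pel part, fractional part).

    frac_bits = 2 for quarter-pel luma MVs, 3 for eighth-pel chroma MVs.
    """
    frac_mask = (1 << frac_bits) - 1
    return mv >> frac_bits, mv & frac_mask

# e.g. a luma MV of 9 quarter-pel units is 2 full pels + 1/4 pel.
```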
  • Information such as an index of a reference picture, a motion vector (e.g., a motion vector predictor), and a residual signal selected through inter prediction is entropy coded and transmitted to the decoder.
  • a prediction mode may be determined in units of PUs, and prediction may be performed in units of PUs.
  • a prediction mode may be determined in units of PUs, and intra prediction may be performed in units of TUs.
  • for intra prediction, there may be 33 directional prediction modes and at least two non-directional modes.
  • the non-directional modes may include a DC prediction mode and a planar mode.
  • a prediction block may be generated after applying an adaptive intra smoothing (AIS) filter to a reference pixel according to a prediction mode.
  • AIS adaptive intra smoothing
  • the type of AIS filter applied to the reference pixel may be different.
  • the prediction may be performed by interpolating the reference pixel in units of 1/8 pixels according to the prediction mode of the current block.
  • the PU may have various sizes / shapes, for example, the PU may have a size of 2N ⁇ 2N, 2N ⁇ N, N ⁇ 2N, or N ⁇ N in case of inter-picture prediction.
  • the PU may have a size of 2N ⁇ 2N or N ⁇ N (where N is an integer).
  • the N ⁇ N size PU may be set to apply only in a specific case.
  • the NxN PU may be used only for the minimum size coding unit, or only for intra prediction.
  • a PU having a size of N ⁇ mN, mN ⁇ N, 2N ⁇ mN, or mN ⁇ 2N (m ⁇ 1) may be further defined and used.
  • the residual value (the residual block or the residual signal) between the generated prediction block and the original block is input to the converter 115.
  • prediction mode information and motion vector information used for prediction are encoded by the entropy encoder 130 along with the residual value and transmitted to the decoder.
  • the transformer 115 performs transform on the residual block in transform units and generates transform coefficients.
  • the transform unit in the converter 115 may be a TU and may have a quad tree structure. In this case, the size of the transform unit may be determined within a range of a predetermined maximum and minimum size.
  • the transform unit 115 may convert the residual block using a discrete cosine transform (DCT) and / or a discrete sine transform (DST).
  • DCT discrete cosine transform
  • DST discrete sine transform
  • the quantizer 120 may generate quantization coefficients by quantizing the residual values transformed by the converter 115.
  • the value calculated by the quantization unit 120 is provided to the inverse quantization unit 135 and the reordering unit 125.
  • the reordering unit 125 rearranges the quantization coefficients provided from the quantization unit 120. By rearranging the quantization coefficients, the efficiency of encoding in the entropy encoder 130 may be increased.
  • the reordering unit 125 may rearrange the quantization coefficients in the form of a two-dimensional block into a one-dimensional vector form through a coefficient scanning method.
  • the reordering unit 125 may increase the entropy coding efficiency of the entropy encoder 130 by changing the order of coefficient scanning based on probabilistic statistics of coefficients transmitted from the quantization unit.
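The 2-D-to-1-D coefficient scan performed by the reordering unit can be sketched as below. A zig-zag scan along anti-diagonals is one common scanning order; as the text notes, the actual order may be changed adaptively based on coefficient statistics, so this fixed order is only an illustrative assumption.

```python
def zigzag_scan(block):
    """Flatten an NxN block of quantized coefficients into a 1-D list,
    walking the anti-diagonals and alternating direction per diagonal."""
    n = len(block)
    out = []
    for s in range(2 * n - 1):  # s indexes each anti-diagonal (i + j == s)
        coords = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:          # alternate traversal direction per diagonal
            coords.reverse()
        out.extend(block[i][j] for i, j in coords)
    return out
```

Scanning this way tends to group the large low-frequency coefficients first and the (mostly zero) high-frequency coefficients last, which is what makes the subsequent entropy coding more efficient.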
  • the entropy encoder 130 may perform entropy encoding on the quantized coefficients rearranged by the reordering unit 125.
  • Entropy encoding may use, for example, an encoding method such as Exponential Golomb, Context-Adaptive Variable Length Coding (CAVLC), or Context-Adaptive Binary Arithmetic Coding (CABAC).
  • CABAC Context-Adaptive Binary Arithmetic Coding
  • the entropy encoder 130 may encode various information received from the reordering unit 125 and the prediction unit 110, such as quantization coefficient information of the CUs, block type information, prediction mode information, partition unit information, PU information, transmission unit information, motion vector information, reference picture information, interpolation information of a block, and filtering information.
  • the entropy encoder 130 may apply a constant change to a transmitted parameter set or syntax.
  • the inverse quantization unit 135 inverse quantizes the quantized values in the quantization unit 120, and the inverse transformer 140 inversely transforms the inverse quantized values in the inverse quantization unit 135.
  • the residual values generated by the inverse quantizer 135 and the inverse transformer 140 may be combined with the prediction block predicted by the predictor 110 to generate a reconstructed block.
  • the filter unit 145 may apply a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) to the reconstructed picture.
  • ALF adaptive loop filter
  • SAO sample adaptive offset
  • the deblocking filter may remove block distortion generated at the boundary between blocks in the reconstructed picture.
  • the adaptive loop filter may perform filtering based on a value obtained by comparing the reconstructed image with the original image after the block is filtered through the deblocking filter. ALF may be performed only when high efficiency is applied.
  • the SAO restores the offset difference from the original image on a pixel-by-pixel basis for the residual block to which the deblocking filter is applied, and is applied in the form of a band offset and an edge offset.
  • the filter unit 145 may not apply filtering to the reconstructed block used for inter prediction.
  • the memory 150 may store the reconstructed block or the picture calculated by the filter unit 145.
  • the reconstructed block or picture stored in the memory 150 may be provided to the predictor 110 that performs inter prediction.
  • the image decoder 200 may include an entropy decoder 210, a reordering unit 215, an inverse quantizer 220, an inverse transformer 225, a predictor 230, a filter 235, and a memory 240.
  • the input bit stream may be decoded according to a procedure corresponding to that by which the image information was processed in the image encoder.
  • VLC variable length coding
  • when VLC is used in the encoder, the entropy decoder 210 may perform entropy decoding by implementing the same VLC table as used in the encoder.
  • when CABAC is used to perform entropy encoding in the image encoder, the entropy decoder 210 may correspondingly perform entropy decoding using CABAC.
  • among the information decoded by the entropy decoder 210, information for generating the prediction block may be provided to the predictor 230, and the residual values entropy-decoded by the entropy decoder may be input to the reordering unit 215.
  • the reordering unit 215 may reorder the entropy-decoded bit stream by the entropy decoding unit 210 based on the reordering method in the image encoder.
  • the reordering unit 215 may reorder the coefficients expressed in the form of a one-dimensional vector by restoring the coefficients in the form of a two-dimensional block.
  • the reordering unit 215 may perform realignment by receiving information related to the coefficient scanning performed by the encoder and reverse-scanning based on the scanning order used by the encoder.
  • the inverse quantization unit 220 may perform inverse quantization based on the quantization parameter provided by the encoder and the coefficient values of the rearranged block.
  • the inverse transform unit 225 may perform inverse DCT and / or inverse DST, corresponding to the DCT and / or DST performed by the transform unit of the encoder, on the quantization result produced by the image encoder.
  • the inverse transform may be performed based on a transmission unit determined by the encoder or a division unit of an image.
  • the DCT and / or the DST may be performed selectively according to a plurality of pieces of information, such as the prediction method, the size of the current block, and the prediction direction, and the inverse transformer 225 of the decoder may perform inverse transformation based on the transform information used by the transformer of the encoder.
  • the prediction unit 230 may generate the prediction block based on the prediction block generation related information provided by the entropy decoder 210 and previously decoded blocks and / or picture information provided by the memory 240.
  • the reconstruction block may be generated using the prediction block generated by the predictor 230 and the residual block provided by the inverse transform unit 225.
  • intra prediction may be performed to generate a prediction block based on pixel information in the current picture.
  • inter prediction for the current PU may be performed based on information included in at least one of a previous picture or a subsequent picture of the current picture.
  • motion information required for inter prediction of the current PU provided by the image encoder, for example, a motion vector, a reference picture index, and the like, may be derived in response to a skip flag, a merge flag, and the like received from the encoder.
  • the reconstructed block and / or picture may be provided to the filter unit 235.
  • the filter unit 235 applies deblocking filtering, sample adaptive offset (SAO), and / or adaptive loop filtering to the reconstructed block and / or picture.
  • the memory 240 may store the reconstructed picture or block to use as a reference picture or reference block and provide the reconstructed picture to the output unit.
  • a prediction mode such as a merge mode, a skip mode, or a direct mode may be used to reduce the amount of transmission information according to the prediction.
  • merging means that prediction information is obtained from inter-screen prediction information of adjacent blocks in inter-screen prediction of the current block.
  • the information about the merge of the current block may include information indicating whether the current block is merged, for example a merge flag (merge_flag), and information indicating which neighboring block the current block merges with, for example the left adjacent block or the upper adjacent block of the current block, which may be expressed using a merge index (merge_index).
  • merge_flag merge flag
  • merge_index merge index
  • inter-screen prediction information (A, D) of adjacent blocks in the left region of the current block 310, inter-screen prediction information (B, C) of adjacent blocks in the upper region, and inter-picture prediction information T of a co-located temporal neighboring block may be merge candidates of the current block.
  • inter-prediction information A, B, C, and D of adjacent blocks represents inter-prediction information of an area already reconstructed in the current picture.
  • inter-prediction information T represents inter-prediction information at a specific position, for example, a co-located block, of an already reconstructed picture.
  • inter prediction information between the left side of the current block 410 and the upper region adjacent block may be used for the current block.
  • inter-screen prediction information (E0, E1) of adjacent blocks in the left region of the current block 410, inter-screen prediction information (F0, F1, F2) of adjacent blocks in the upper region, and inter-picture prediction information T′ of a co-located temporal neighboring block may be merge candidates of the current block 410.
  • which neighboring block the current block 410 merges with, that is, which inter prediction information to use for the current block, may be indicated by using a merge index (merge_index).
  • in the skip mode, syntax information other than motion vector information for the current block is not transmitted.
  • the motion vector information may be transmitted in a manner of designating any one block among blocks adjacent to the current block to use the motion vector information of the block for the current block.
  • in the direct mode, motion information itself is not transmitted.
  • the same method as that used in the merge mode may be used to obtain inter prediction information of the current block.
  • when adjacent blocks of the current block are used as merge candidates as shown in FIG. 3, the inter prediction information of the block indicated by the merge index (merge_index) can be used as-is as the inter-screen prediction information of the current block even in the skip mode.
  • likewise, when the merge candidates are as shown in FIG. 4, the inter prediction information of the block indicated by the merge index (merge_index) may be used as the inter prediction information of the current block in the skip mode.
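The candidate-list mechanics described above (gather neighbours' inter-prediction information, then let merge_index pick one entry) can be sketched as follows. The candidate ordering and function names are illustrative assumptions; the patent only specifies which neighbours (left, upper, co-located temporal) are eligible.

```python
def build_merge_candidates(left, above, temporal):
    """Collect available neighbours' inter-prediction info into an ordered
    candidate list (left-region blocks, then upper-region blocks, then the
    co-located temporal block); unavailable entries (None) are skipped."""
    return [c for c in (*left, *above, temporal) if c is not None]

def apply_merge(candidates, merge_flag, merge_index, own_info=None):
    """If merge_flag is set, the current block reuses candidates[merge_index]
    (motion vector, reference index, ...) as-is; otherwise it keeps its own
    signalled inter-prediction information."""
    return candidates[merge_index] if merge_flag else own_info
```

In skip mode the same selection applies, with the extra property that no residual or other syntax is transmitted alongside the index.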
  • suppose that, for a certain CU, the skip mode is applied as a typical inter mode with a 2Nx2N partition.
  • alternatively, a merge mode may be applied to the corresponding 2N×2N block while there is no residual information. That is, the case where the skip mode is applied to the 2N×2N block and the case where the merge mode is applied but there is no residual information use different representations; yet when the merge indexes are the same, the same decoding result is obtained. As a result, redundancy may exist in the syntax transmitted from the encoder.
  • therefore, for a block having a 2N×2N partition and merge mode as a general inter mode, it may be determined that residual information exists, and this may be inferred without transmitting the associated syntax.
  • in other words, syntax values may be estimated as indicating that residual information exists, without transmitting the syntax regarding the existence of residual information.
  • this may be handled differently depending on whether the entropy coding mode is CABAC (Context-based Adaptive Binary Arithmetic Coding) or CAVLC (Context-based Adaptive Variable Length Coding).
  • CABAC Context-based Adaptive Binary Arithmetic Coding
  • CAVLC Context-based Adaptive Variable Length Coding
  • CABAC selects a probability model for each syntax element according to the context, updates the probability of the probability model through internal statistics, and performs compression using arithmetic coding. If the entropy coding mode is CABAC for a block that has a 2Nx2N partition and merge mode (not skip mode), decoding may proceed on the assumption that residual information exists, without transmitting information indicating that there is no residual information. For example, the value of no_residual_data_flag may be inferred to be 0 without transmitting the flag (no_residual_data_flag) indicating that there is no residual information.
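The CABAC-side inference just described can be sketched as a parsing rule: for a 2Nx2N, depth-0 block coded in merge mode (general inter, not skip), no_residual_data_flag is never read from the bitstream but inferred to be 0, i.e. residual data is assumed to exist. The function name and parameter names below are illustrative, not the patent's syntax.

```python
def parse_no_residual_data_flag(read_flag, *, part_is_2Nx2N, depth,
                                merge_flag, skip_flag):
    """Return no_residual_data_flag, reading it from the bitstream (via the
    supplied read_flag callable) only when it cannot be inferred."""
    if part_is_2Nx2N and depth == 0 and merge_flag and not skip_flag:
        return 0           # inferred: residual information is assumed to exist
    return read_flag()     # otherwise, parse the flag normally
```

This removes the redundant bit: the "merge with no residual" case never needs to be signalled, because it is already expressible as skip mode with the same merge index.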
  • when CAVLC is used, encoding is performed on each syntax element by using a predetermined VLC (Variable Length Coding) table.
  • VLC Variable Length Coding
  • the symbol values are obtained by mapping through the table indices of a corresponding sorting table.
  • the table indices of the sorting table can be swapped to reflect the context, adaptively adjusting the codeword of each symbol.
  • in CAVLC, a separate syntax element is not transmitted to indicate that there is no residual information. Therefore, when the entropy coding mode is CAVLC for a block having a 2Nx2N partition and merge mode as a general inter mode, the following method may be used to remove redundancy in the information indicating the existence of residual information.
  • a coded block flag cbf, hereinafter 'cbf'
  • Y luma
  • U, V chroma
  • a split transform flag (transform_split_flag or split_transform_flag) indicating whether the current block is split into smaller blocks for transform encoding.
  • the cbf value for luma may be expressed as cbf_luma [x0] [y0] [trafoDepth].
  • a value of cbf_luma of 1 indicates that the luma transform block has one or more non-zero transform coefficient levels.
  • the array indices x0 and y0 may specify the positions of the top-left luma samples of the transformed transform block with respect to the top-left luma samples of the picture.
  • the array index trafoDepth may specify the current subdivision level of the CU divided into blocks for transform coding. For example, a block having a value of trafoDepth of 0 corresponds to a CU.
  • the cbf value for Cb among the cbf values for chroma may be represented as cbf_cb [x0] [y0] [trafoDepth].
  • a value of cbf_cb of 1 indicates that the Cb transform block has one or more non-zero transform coefficient levels.
  • the array indices x0 and y0 may specify the positions of the top-left samples of the transformed transform block with respect to the top-left luma samples of the picture.
  • the array index trafoDepth may specify the current subdivision level of the CU divided into blocks for transform coding. For example, a block having a value of trafoDepth of 0 corresponds to a CU.
  • the cbf value for Cr among the cbf values for chroma may be expressed as cbf_cr [x0] [y0] [trafoDepth].
  • a value of cbf_cr of 1 indicates that the Cr transform block has one or more non-zero transform coefficient levels.
  • the array indices x0 and y0 may specify the positions of the top-left samples of the transformed transform block with respect to the top-left luma samples of the picture.
  • the array index trafoDepth may specify the current subdivision level of the CU divided into blocks for transform coding. For example, a block having a value of trafoDepth of 0 corresponds to a CU.
  • the split transform flag may be represented by split_transform_flag [x0] [y0] [trafoDepth] or transform_split_flag [x0] [y0] [trafoDepth].
  • the split transform flag indicates whether the block is split into four blocks having a smaller horizontal or vertical size for transform coding.
  • the array indices x0 and y0 may specify the positions of the top-left samples of the transformed transform block with respect to the top-left luma samples of the picture.
  • the array index trafoDepth may specify the current subdivision level of the CU divided into blocks for transform coding. For example, a block having a value of trafoDepth of 0 corresponds to a CU.
  • the cbf_luma, cbf_cb, cbf_cr and the split transform flag may be jointly coded as described above and expressed as one piece of information.
  • the joint coded information indicates a coded pattern of whether a non-zero transform coefficient level exists in the luma block and the chroma block, and whether the block is divided.
  • the joint coded information may be expressed as cbp_and_split_transform.
  • the cbp_and_split_transform may include a plurality of coded block flags and / or split transform flags.
  • cbp_and_split_transform may be defined as shown in Table 1.
  • Luma, Cb, Cr, and Split represent the values of cbf_luma[x0][y0][trafoDepth], cbf_cb[x0][y0][trafoDepth], cbf_cr[x0][y0][trafoDepth], and split_transform_flag[x0][y0][trafoDepth], respectively.
  • Luma1, Luma2, and Luma3 represent cbf_luma[x1][y1][trafoDepth], cbf_luma[x2][y2][trafoDepth], and cbf_luma[x3][y3][trafoDepth].
  • x0, y0, x1, y1, x2, y2, x3, y3 give the position of the upper left sample in the divided four blocks.
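The joint coding of cbf_luma, cbf_cb, cbf_cr, and the split transform flag into one piece of information (cbp_and_split_transform) can be sketched as packing the four one-bit flags into a single pattern. The bit layout below is an illustrative assumption; the actual pattern-to-code mapping is the one given in Table 1.

```python
def pack_cbp_and_split_transform(cbf_luma, cbf_cb, cbf_cr, split_flag):
    """Pack the four one-bit flags into one 4-bit pattern
    (hypothetical layout: Luma | Cb | Cr | Split, MSB first)."""
    return (cbf_luma << 3) | (cbf_cb << 2) | (cbf_cr << 1) | split_flag

def unpack_cbp_and_split_transform(pattern):
    """Recover (cbf_luma, cbf_cb, cbf_cr, split_transform_flag)."""
    return ((pattern >> 3) & 1, (pattern >> 2) & 1,
            (pattern >> 1) & 1, pattern & 1)
```

A single VLC codeword can then be assigned per pattern, so correlated flag combinations cost fewer bits than coding each flag independently.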
  • when the width and height of the block before division are represented by log2TrafoWidth and log2TrafoHeight, respectively, and the block is divided into four blocks each having half the horizontal size and half the vertical size of the original block, x1, y1, x2, y2, x3, and y3 are given by Equation 1.
  • in Equation 2, D may be defined as a value that is 0 when the divided block has the same horizontal size as the original block and 1/4 of its vertical size, and 1 when the divided block has the same vertical size as the original block and 1/4 of its horizontal size.
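For the quad split case, the positions referenced by Equation 1 follow the usual quad-split geometry: the three additional upper-left sample positions are offset from (x0, y0) by half the original width and/or half the original height. The exact formula in the patent text is not reproduced here, so the sketch below is a hedged reconstruction under that assumption.

```python
def quad_split_positions(x0, y0, log2_trafo_width, log2_trafo_height):
    """Upper-left sample positions (x1,y1), (x2,y2), (x3,y3) of the other
    three quadrants when the block at (x0, y0) with size
    2**log2_trafo_width x 2**log2_trafo_height splits into four
    half-width, half-height blocks."""
    half_w = 1 << (log2_trafo_width - 1)
    half_h = 1 << (log2_trafo_height - 1)
    x1, y1 = x0 + half_w, y0            # top-right quadrant
    x2, y2 = x0, y0 + half_h            # bottom-left quadrant
    x3, y3 = x0 + half_w, y0 + half_h   # bottom-right quadrant
    return (x1, y1), (x2, y2), (x3, y3)
```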
  • the flag value may be specified from the associated cbf_luma, cbf_cb, cbf_cr, and split transform flag (split_transform_flag) included in the syntax element cbp_and_split_transform.
  • a flag pattern can be obtained from codeLuma, codeCb, codeCr, and codeSplitTrans specified from cbf_luma, cbf_cb, cbf_cr, and split transform flag (split_transform_flag) as shown in Equation (3).
  • intraSplitFlag indicates the shape of the corresponding block in intra prediction mode (intra mode). For example, a value of 0 for intraSplitFlag may indicate a 2Nx2N block, and a value of 1 may indicate an NxN block.
  • consider next the case where the entropy coding mode is CAVLC and the block is in general inter mode (not skip mode) with a 2Nx2N partition.
  • when merge mode is applied, a method of inferring the values of cbf for Y, U and V and independently transmitting the split transform flag may be used.
  • Table 2 shows an example of a syntax structure that, for a block having a 2Nx2N partition to which merge mode is applied, infers the value of no_residual_data_flag as 0 without transmitting the flag (no_residual_data_flag) indicating that there is no residual information when the entropy coding mode is CABAC.
  • when the entropy coding mode is CAVLC, the same syntax structure infers all the values of cbf for luma (Y) and chroma (U, V) as 1 and independently transmits the split transform flag.
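The inference behavior described for Table 2 can be sketched as follows; the function and the returned dictionary layout are illustrative assumptions, not syntax from the source.

```python
def infer_residual_info(entropy_mode):
    """For a 2Nx2N-partition block coded in merge mode, infer the
    residual-related flags that are not transmitted (sketch of the
    behavior described for Table 2; names are illustrative).
    """
    if entropy_mode == "CABAC":
        # no_residual_data_flag is not sent; it is inferred as 0,
        # i.e. residual data is assumed to exist.
        return {"no_residual_data_flag": 0}
    elif entropy_mode == "CAVLC":
        # cbf for luma (Y) and chroma (U, V) are not sent; all are
        # inferred as 1, while the split transform flag is still
        # transmitted independently.
        return {"cbf_luma": 1, "cbf_cb": 1, "cbf_cr": 1}
    raise ValueError("unknown entropy coding mode: " + entropy_mode)
```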
  • Table 3 shows another example of a syntax structure that, for a block having a 2Nx2N partition to which merge mode is applied, infers the value of no_residual_data_flag as 0 without transmitting the flag indicating that there is no residual information when the entropy coding mode is CABAC, and that also covers the case where the entropy coding mode is CAVLC.
  • Table 4 schematically illustrates an example of applying the example of Table 2 to the transform tree syntax.
  • Table 5 schematically shows an example of applying the example of Table 3 to the transform tree syntax.
  • alternatively, a method of inferring at least one of cbf_luma, cbf_cb and cbf_cr and transmitting the remaining values may be used.
  • Table 6 shows an example of a syntax structure that, for a block having a 2Nx2N partition to which merge mode is applied, infers the value of no_residual_data_flag as 0 without transmitting the flag indicating that there is no residual information when the entropy coding mode is CABAC.
  • Table 7 shows another example of a syntax structure that, for a block having a 2Nx2N partition to which merge mode is applied, infers the value of no_residual_data_flag as 0 without transmitting the flag (no_residual_data_flag) indicating that there is no residual information when the entropy coding mode is CABAC.
  • Table 8 schematically shows an example of applying the example of Table 6 to the transform tree syntax structure.
  • Table 9 schematically shows an example of applying the example of Table 7 to the transform tree syntax.
  • Table 10 schematically shows an example of a syntax structure that, for a block having a 2Nx2N partition to which merge mode is applied, infers the value of no_residual_data_flag as 0 without transmitting the flag indicating that there is no residual information when the entropy coding mode is CABAC, and that, when the entropy coding mode is CAVLC, first transmits the cbf for chroma (U, V) and infers the cbf value of luma (Y) as 1 when both chroma cbf values (cbf_cr, cbf_cb) are 0.
  • Table 11 schematically shows another example of such a syntax structure: for a block having a 2Nx2N partition to which merge mode is applied, the value of no_residual_data_flag is inferred as 0 without transmitting the flag indicating that there is no residual information when the entropy coding mode is CABAC; when the entropy coding mode is CAVLC, the cbf for chroma (U, V) is transmitted first, and when the values of both chroma cbf (cbf_cr, cbf_cb) are 0, the cbf value of luma (Y) is inferred as 1.
  • Table 12 schematically shows an example of applying the example of Table 10 to the transform tree syntax structure.
  • Table 13 schematically shows an example of applying the example of Table 11 to the transform tree syntax structure.
  • alternatively, redundancy between the merge mode and the skip mode of the 2Nx2N block may be considered in the parsing process, without applying the above-described exception to the merge mode of the 2Nx2N block. That is, redundancy between the merge mode and the skip mode of the 2Nx2N block may be considered in the entropy encoding/decoding process using a variable length coding (VLC) table.
  • Tables 14 and 15 show examples of transform tree syntax that can be configured when redundancy between the merge mode and the skip mode of the 2Nx2N block is not considered at the syntax level.
  • in this case, the amount of transmitted bits may be reduced by removing the case in which no residual signal exists from the possible syntax values.
  • whether skip mode is applied is determined at the coding unit level, that is, when trafoDepth is 0; thus, the case in which redundancy with merge mode may be considered can be limited to the case where trafoDepth is 0.
  • for convenience of description, the case where merge mode is applied to a 2Nx2N block whose trafoDepth is 0 will be referred to as 2Nx2N_Merge mode.
  • when the binarization method of cbp_and_split_transform is unary VLC coding, the maximum value may be reduced by one.
  • on the index mapping table (also referred to as a sorting table), the entry for which all cbf values are 0 may be treated as the case of having no residual information.
  • FIG. 5 schematically illustrates an example of an entropy encoding method using a VLC table.
  • the encoder allocates a codeNum on the index mapping table 510 to the syntax element value to be transmitted. For example, if a syntax element value S_n is input, codeNum_n is assigned to S_n.
  • for the codeNum allocated on the index mapping table 510, the encoder allocates a codeword to be transmitted via the VLC table 520. For example, when codeNum_n is assigned on the index mapping table 510, codeword C_n may be assigned to codeNum_n through the VLC table 520.
  • the index mapping table 510 may be updated such that codeNum corresponding to a shorter codeword on the VLC table 520 is assigned to frequently generated syntax element values according to the frequency of each syntax element value.
  • when the index mapping table 510 on the encoder side is updated, the inverse index mapping table on the decoder side is also updated in the same manner.
  • in FIG. 5, the syntax element value is directly input to the index mapping table; however, the syntax element value may first pass through a table that maps it to a separate index used by the index mapping table.
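The encoder-side flow of FIG. 5 can be sketched as follows. The unary codeword table and the swap-toward-front update rule are illustrative assumptions; the source states only that the index mapping table is updated according to the occurrence frequency of each syntax element value.

```python
class VlcEncoder:
    """Sketch of FIG. 5: syntax element -> codeNum (index mapping
    table) -> codeword (VLC table). Table contents and the adaptation
    policy are illustrative, not taken from the source."""

    def __init__(self, symbols, vlc_table):
        self.index_map = list(symbols)   # position in the list == codeNum
        self.vlc_table = vlc_table       # codeNum -> codeword

    def encode(self, symbol):
        code_num = self.index_map.index(symbol)      # index mapping table
        codeword = self.vlc_table[code_num]          # VLC table lookup
        # Adapt: swap the symbol one step toward codeNum 0 so that
        # frequently occurring symbols drift toward shorter codewords.
        if code_num > 0:
            self.index_map[code_num - 1], self.index_map[code_num] = (
                self.index_map[code_num], self.index_map[code_num - 1])
        return codeword
```

For example, with unary codewords `["1", "01", "001"]`, repeatedly encoding the same symbol yields progressively shorter codewords as the table adapts.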
  • FIG. 6 schematically illustrates an example of an entropy decoding method using a VLC table.
  • the decoder allocates codeNum corresponding to the received codeword using an inverse VLC table. For example, when codeword C_m is input, codeNum_m corresponding to C_m may be allocated through an inverse VLC table.
  • subsequently, the decoder assigns a syntax element value via the inverse index mapping table 620. For example, a syntax element value S_m may be assigned to codeNum_m through the inverse index mapping table 620.
  • the inverse index mapping table 620 of the decoder may be updated together with the index mapping table of the encoder.
  • in FIG. 6, the syntax element value is directly assigned through the inverse index mapping table; however, a table that maps a predetermined index allocated in the inverse index mapping table to the syntax element value may additionally be used.
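The decoder-side flow of FIG. 6 can be sketched in the same illustrative terms; for the decoder to stay synchronized with the encoder, it must apply the same table update after each decoded symbol (the swap-toward-front rule here is the same assumption as on the encoder side, not a rule stated in the source).

```python
class VlcDecoder:
    """Sketch of FIG. 6: codeword -> codeNum (inverse VLC table) ->
    syntax element (inverse index mapping table)."""

    def __init__(self, symbols, vlc_table):
        self.index_map = list(symbols)                     # position == codeNum
        self.inv_vlc = {cw: i for i, cw in enumerate(vlc_table)}

    def decode(self, codeword):
        code_num = self.inv_vlc[codeword]      # inverse VLC table
        symbol = self.index_map[code_num]      # inverse index mapping table
        # Mirror the encoder's adaptation so both tables stay in sync.
        if code_num > 0:
            self.index_map[code_num - 1], self.index_map[code_num] = (
                self.index_map[code_num], self.index_map[code_num - 1])
        return symbol
```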
  • when the code number (hereinafter 'codeNum') allocated in the index mapping table (sorting table) for the syntax value to be transmitted is greater than the codeNum allocated for the case where all cbf values (cbf_luma, cbf_cb, cbf_cr) are 0, a codeword corresponding to a codeNum smaller by 1 than the allocated codeNum may be transmitted.
  • according to the flag pattern of Table 1, the case where all cbf values are 0 may be divided into two types: the case where information about the division is transmitted, as in flag patterns 11, 13 and 15 of Table 1, and the case where information about the division is not transmitted, as in flag pattern 14 of Table 1.
  • when the three types of cbf values (cbf_luma, cbf_cb, cbf_cr) are transmitted as shown in Table 1, there is a case in which no residual information exists, that is, in which all cbf values are 0.
  • accordingly, the codeNum for the case where all cbf values become 0 on the index mapping table can be set to a non-available (NA) state to reduce the amount of transmitted bits.
  • let the codeNum values for the above two cases be a and b (a < b), and let the codeNum value of the syntax element value to be transmitted be c. If c is smaller than a, c is used as the codeNum value for the syntax element to be transmitted.
  • the encoder may transmit a codeword corresponding to the case where codeNum is c on the VLC table. If c is greater than a and less than b, c-1 is used as the codeNum value for the syntax element to be transmitted. The encoder may transmit a codeword corresponding to the case where codeNum is c-1 on the VLC table. If c is greater than b, c-2 is used as the codeNum value for the syntax element to be transmitted. The encoder may transmit a codeword corresponding to the case where codeNum is c-2 on the VLC table.
  • on the decoder side, if the parsed codeNum value c is smaller than a, c is used as the codeNum value.
  • the decoder may obtain a syntax element value corresponding to the c value on the inverse index mapping table. If the value of codeNum c parsed is greater than a and less than b, c + 1 is used as the original codeNum value.
  • the decoder may obtain a syntax element value corresponding to the c + 1 value on the inverse index mapping table. If the value of codeNum c parsed is greater than b, c + 2 is used as the original codeNum value.
  • the decoder may obtain a syntax element value corresponding to the c + 2 value on the inverse index mapping table.
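The encoder- and decoder-side adjustments above (with the two not-available positions a and b, a < b) can be sketched as follows. The boundary comparisons are written so that the two functions are exact inverses, which is an interpretation of the prose; the prose states the conditions in terms of the original codeNum rather than the parsed one.

```python
def adjust_code_num_for_tx(c, a, b):
    """Encoder side: map the nominal codeNum c past the two
    not-available positions a and b (a < b), so no codeword is
    spent on them."""
    assert a < b and c != a and c != b
    if c < a:
        return c
    elif c < b:
        return c - 1   # one NA position (a) lies below c
    else:
        return c - 2   # both NA positions (a and b) lie below c

def restore_code_num_from_rx(c, a, b):
    """Decoder side: invert the adjustment on the parsed codeNum
    (boundaries chosen to be the exact inverse of the encoder map)."""
    assert a < b
    if c < a:
        return c
    elif c < b - 1:
        return c + 1
    else:
        return c + 2
```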
  • the case where all cbf values are 0 and the split transform flag is 1 can always be treated as N/A in inter mode, regardless of whether 2Nx2N_Merge mode is used, because when all cbf values are zero there is no need to split and transform. Therefore, even when the flag pattern is 15, it may be determined whether 2Nx2N_Merge mode is used, and only one syntax element value need be excluded in 2Nx2N_Merge mode.
  • let a be the codeNum value for the case where all cbf values are 0 and the split transform flag value is 0, and let b be the codeNum value for the syntax element value to be transmitted. If b is smaller than a, b is used as the codeNum value for the syntax element to be transmitted.
  • the encoder may transmit a codeword corresponding to the case where codeNum is b on the VLC table.
  • if b is greater than a, b - 1 is used as the codeNum value for the syntax element to be transmitted.
  • the encoder may transmit a codeword corresponding to the case where codeNum is b-1 on the VLC table.
  • on the decoder side, if the parsed codeNum value b is smaller than a, b is used as the codeNum value.
  • the decoder may obtain a syntax element value corresponding to the b value on the inverse index mapping table. If the value of parsed codeNum b is larger than a, b + 1 is used as the original codeNum value.
  • the decoder may obtain a syntax element value corresponding to the b + 1 value on the inverse index mapping table.
  • the cbp_and_split_transform syntax for the case where the entropy coding mode is CAVLC may be adjusted. That is, when merge mode is applied to a 2Nx2N block whose coding unit trafoDepth is 0, the entry for which all cbf values are 0 may be temporarily disabled.
  • Table 16 shows an example of a table for jointly coding the cbp and the split transform flag for luma (Y) and chroma (U, V) when the flag pattern is 11, 13 or 15 and the coding unit is inter-coded.
  • Table 16 is an index mapping table (ranking table) on the encoder side or an inverse index mapping table (inverse ranking table) on the decoder side for cbp_yuv_split_trans or cbp_and_split_transform.
  • in the normal case, that is, when not in 2Nx2N_Merge mode, the codeword index of the normal case is assigned as the codeNum corresponding to a syntax element value composed of a combination of the cbf values for luma and chroma (CbCr) and the split flag. In the case of 2Nx2N_Merge mode, the codeword index of 2Nx2N_Merge mode is assigned as the codeNum corresponding to the syntax element value.
  • the codeword index of the normal case is converted into the codeword index of the 2Nx2N_Merge mode.
  • in other words, codeNums (codeword indexes) having a value greater than the codeNum (codeword index) for the case where all cbf values are 0 in the normal case have their codeNum (codeword index) value reduced by 1 in the case of 2Nx2N_Merge mode.
  • the codeword index assigned as codeNum is mapped to a predetermined codeword through the VLC table corresponding to Table 16.
  • the codeword index 4 for the case where all cbf values are 0 in the normal case is switched to NA in the 2Nx2N_Merge mode.
  • the codeword indexes 5 and 6, which are larger than codeword index 4 for the case where all cbf values are 0, are each decreased by 1 in 2Nx2N_Merge mode for the same syntax elements (becoming 4 and 5).
  • let A be the codeword index (codeNum) value for cbp_and_split_transform or cbp_yuv_split_trans, and let B be the codeword index (codeNum) for the case where all cbf values are 0 in the normal case. If A is less than B, A is used as the codeword index (codeNum) value. If A is greater than or equal to B, A + 1 is used as the codeword index (codeNum) value.
  • that is, a 2Nx2N_Merge-mode codeword index (codeNum) that is greater than or equal to codeword index 4, the index for the case where all cbf values are 0 in the normal case (i.e., index 4 or 5), is restored to the normal-case codeword index 5 or 6 and mapped to the syntax element values (cbf_Luma, cbf_CbCr, split flag value).
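Using the concrete indices described for Table 16 (normal-case index 4 is the all-cbf-zero entry, which becomes N/A in 2Nx2N_Merge mode, while indices 5 and 6 shift down to 4 and 5), the index conversion and its inverse can be sketched as follows; the function names are illustrative.

```python
ALL_CBF_ZERO_IDX = 4  # normal-case codeword index where all cbf == 0 (Table 16)

def to_merge_index(normal_idx):
    """Encoder side: convert a normal-case codeword index to the
    2Nx2N_Merge-mode index. The all-zero entry disappears and every
    larger index drops by 1."""
    assert normal_idx != ALL_CBF_ZERO_IDX, "N/A in 2Nx2N_Merge mode"
    return normal_idx if normal_idx < ALL_CBF_ZERO_IDX else normal_idx - 1

def to_normal_index(merge_idx):
    """Decoder side: invert the conversion (A < B -> A, else A + 1)."""
    return merge_idx if merge_idx < ALL_CBF_ZERO_IDX else merge_idx + 1
```

The same shape applies to Table 17, with the all-cbf-zero entry at index 3 instead of 4.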
  • Table 17 is an example of a table for jointly coding the cbp for luma (Y) and chroma (U, V) when the flag pattern is 14, the coding unit is inter-coded and trafoDepth is 0.
  • Table 17 is an index mapping table (ranking table) on the encoder side or an inverse index mapping table (inverse ranking table) on the decoder side for cbp_yuv_split_trans or cbp_and_split_transform.
  • in the normal case, the codeword index of the normal case, corresponding to the syntax element value composed of the combination of the cbf values for luma and chroma (CbCr) and the split flag, is assigned as the codeNum.
  • the codeword index of 2Nx2N_Merge mode is assigned as codeNum corresponding to the syntax element value.
  • the codeword index of the normal case is converted into the codeword index of the 2Nx2N_Merge mode.
  • in other words, codeNums (codeword indexes) having a value greater than the codeNum (codeword index) for the case where all cbf values are 0 in the normal case have their codeNum (codeword index) value reduced by 1 in the case of 2Nx2N_Merge mode.
  • the codeword index assigned as codeNum is mapped to a predetermined codeword through the VLC table corresponding to Table 17.
  • the codeword index 3 for the case where all cbf values are 0 in the normal case is switched to NA in 2Nx2N_Merge mode.
  • the codeword indexes (4, 5, 6, 7) having values larger than codeword index 3 for the case where all cbf values are 0 are each reduced by 1 in 2Nx2N_Merge mode for the same syntax elements (becoming 3, 4, 5, 6).
  • let A be the codeword index (codeNum) value for cbp_and_split_transform or cbp_yuv_split_trans, and let B be the codeword index (codeNum) for the case where all cbf values are 0 in the normal case. If A is less than B, A is used as the codeword index (codeNum) value. If A is greater than or equal to B, A + 1 is used as the codeword index (codeNum) value.
  • that is, a 2Nx2N_Merge-mode codeword index (codeNum) that is equal to or greater than codeword index 3, the index for the case where all cbf values are 0 in the normal case (i.e., 3, 4, 5 or 6), is restored to the normal-case codeword index 4, 5, 6 or 7 and mapped to the syntax element values (cbf_Luma, cbf_CbCr, split flag value).
  • the parsing step described above is initiated to parse the syntax element (cbp_and_split_transform) regarding whether residual information exists when the entropy coding mode is CAVLC.
  • the parsing step may be performed by the entropy decoder of the decoder, and the operation of configuring the transmission information may be performed by the entropy encoder of the encoder.
  • bits from the slice data, the variables cbpVlcNumIdx and trafoDepth, the variable array cbpSplitTransTable, and the like are used as input information of the parsing process.
  • cbpVlcNumIdx may be mapped with vlcNum indicating a VLC table to be used for parsing through a predetermined table such as cbpVlcNumTable.
  • cbpSplitTransTable is a table that maps the codeNum corresponding to the received symbol to a predetermined value representing the cbp and split information of the current block.
  • the predetermined value that is mapped may be a syntax element value or an index that maps to a syntax element value.
  • the syntax element cbp_and_split_transform, in which residual information and split information are jointly coded, and cbpVlcNumIdx and cbpSplitTransTable with updated values are output.
  • cbpVlcNumIdx and cbpSplitTransTable may be updated according to the frequency of occurrence of corresponding information.
  • vlcNum is determined (for example, through cbpVlcNumTable, as described above), and the cMax value can be obtained through Table 19 below according to the flag pattern and the n value. codeNum is then derived based on vlcNum and cMax; codeNum may be obtained by applying a predetermined loop, which uses cMax as an upper limit, to the initial bits corresponding to vlcNum.
  • cbp_and_split_transform is set to cbpSplitTransTable[k][n][codeNum] using cbpSplitTransTable.
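The "predetermined loop that uses cMax as an upper limit" is not spelled out in this excerpt. One common realization consistent with the description is a truncated unary read, sketched below purely as an assumption; the function name and the bit convention (count leading 1-bits, stop at a 0-bit or at the cMax - 1 ceiling) are illustrative, not from the source.

```python
def parse_truncated_code_num(read_bit, c_max):
    """Hypothetical sketch: count leading 1-bits until a terminating
    0-bit is read or the count reaches c_max - 1 (the upper limit),
    yielding codeNum. The exact bit pattern is an assumption."""
    code_num = 0
    while code_num < c_max - 1 and read_bit() == 1:
        code_num += 1
    return code_num
```

With this shape, the value at the upper bound needs no terminating bit, which is where the "cMax as an upper limit" saving would come from.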
  • for 2Nx2N_Merge mode, the parsing process can be modified as follows. In this case, too, the parsing process is started when the entropy coding mode is CAVLC, and bits from the slice data, the variables cbpVlcNumIdx and trafoDepth, the variable array cbpSplitTransTable, and the like are used as input information. In addition, the luma position (xP, yP), which specifies the top-left luma sample of the current prediction unit relative to the top-left sample of the current picture, is used as input information.
  • as output, the syntax element cbp_and_split_transform is obtained, and cbpVlcNumIdx and cbpSplitTransTable are updated.
  • cbpVlcNumIdx and cbpSplitTransTable may be updated according to the frequency of occurrence of corresponding information.
  • vlcNum is determined in the same manner, and the cMax value can be obtained through Table 20 according to the flag pattern and the n value. Unlike Table 19, Table 20 may assign a different cMax value even for the same flag pattern, according to 2Nx2N_Merge_ZeroTrafoDepth, which indicates whether the mode is 2Nx2N_Merge mode.
  • codeNum is then derived based on vlcNum and cMax; codeNum may be obtained by applying a predetermined loop, which uses cMax as an upper limit, to the initial bits corresponding to vlcNum.
  • when the prediction mode (PredMode) is inter mode (MODE_INTER), the partition mode is 2Nx2N (PART_2Nx2N), the value of the merge flag (merge_flag[xP][yP]) indicating whether the current prediction unit is in merge mode is 1, and the value of trafoDepth is 0, the value of 2Nx2N_Merge_ZeroTrafoDepth is set to 1.
  • codeNumCbfZero represents a codeNum value corresponding to the case where all cbf values become zero in the index mapping table (inverse index mapping table) as described above.
  • the value of codeNumCbfZero can be obtained by setting codeNumCbfZero to 0 and increasing it until the value of cbpSplitTransTable[k][n][codeNumCbfZero] becomes 0.
  • when 2Nx2N_Merge_ZeroTrafoDepth is 1 and the parsed codeNum is greater than or equal to codeNumCbfZero, codeNum is incremented (codeNum++), since the entry for which all cbf values are 0 was skipped on the encoder side.
  • cbp_and_split_transform is set to cbpSplitTransTable[k][n][codeNum] using cbpSplitTransTable.
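The codeNumCbfZero derivation and the codeNum adjustment above can be sketched as follows, representing the (k, n) slice of cbpSplitTransTable as a plain list (an illustrative simplification). The `>=` comparison is chosen so that the mapping exactly inverts the encoder-side skipping of the all-cbf-zero entry; this boundary choice is an interpretation of the prose.

```python
def find_code_num_cbf_zero(table_kn):
    """Scan the (k, n) slice of cbpSplitTransTable until the entry whose
    mapped value is 0 (all cbf zero); its position is codeNumCbfZero."""
    code_num_cbf_zero = 0
    while table_kn[code_num_cbf_zero] != 0:
        code_num_cbf_zero += 1
    return code_num_cbf_zero

def restore_cbp_and_split_transform(parsed_code_num, table_kn):
    """In 2Nx2N_Merge mode the all-cbf-zero entry is skipped at the
    encoder, so a parsed codeNum at or past that position is
    incremented before indexing the table."""
    code_num_cbf_zero = find_code_num_cbf_zero(table_kn)
    if parsed_code_num >= code_num_cbf_zero:
        parsed_code_num += 1
    return table_kn[parsed_code_num]
```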
  • FIG. 7 is a flowchart schematically illustrating an operation of an encoder in a system to which the present invention is applied.
  • the encoder performs prediction on a current prediction unit (S710). Subsequently, the encoder entropy encodes the information to be transmitted (S720), and transmits a bit stream including the encoded information to the decoder (S730). Entropy encoding may be performed in the entropy encoder shown in FIG. 1.
  • when the entropy coding mode is CAVLC and 2Nx2N_Merge mode is applied in the prediction step (S710), some or all of the information indicating the existence of the residual signal may not be transmitted.
  • for example, the corresponding values may be inferred without transmitting some or all of the cbf for luma (Y) and chroma (U, V). Accordingly, the inferred information is not transmitted in the transmission step (S730). Details are as described above.
  • alternatively, the syntax element value corresponding to the case where there is no residual signal, among the information (syntax element values) indicating the presence of the residual signal, may be excluded from the entropy encoding step (S720).
  • in this case, the codeNum values matched to the syntax elements on the index mapping table are adjusted in consideration of the case where there is no residual signal, thus reducing the amount of transmitted information. Details are as described above.
  • FIG. 8 is a flowchart schematically illustrating an operation of a decoder in a system to which the present invention is applied.
  • the decoder receives a bitstream including encoded information (S810) and entropy decodes the received information (S820).
  • the decoder restores image information based on the entropy decoded information and the predicted information (S830).
  • when information indicating the existence of the residual signal has not been transmitted, the decoder infers that the residual signal exists and performs the entropy decoding step (S820). Details are as described above.
  • the decoder may perform an entropy decoding step (S820) by estimating a value of the corresponding information with respect to information that is not transmitted.
  • for example, the decoder may infer that the values of cbf_luma, cbf_cb and cbf_cr are all 1 (that the residual signal exists). Details are as described above.
  • also, the syntax element value corresponding to the case where there is no residual signal, among the information (syntax element values) indicating the existence of the residual signal, may be excluded in the entropy decoding step (S820). The codeNum values matched to syntax elements on the inverse index mapping table may be adjusted in consideration of the case where there is no residual signal. Details are as described above.


Abstract

A method for encoding and decoding image information is disclosed. The method for encoding image information comprises the steps of: performing prediction on a current prediction unit; entropy-encoding the information on the current prediction unit; and transmitting the entropy-encoded information, wherein, in the prediction step, when the current prediction unit is a 2Nx2N block having a depth of 0 and merge mode is applied to the current prediction unit, the existence of a residual signal for the current prediction unit can be inferred. According to the invention, the amount of transmitted information can be reduced by taking into account the transmission redundancy of the information on the respective modes in inter-picture prediction.
PCT/KR2011/009075 2011-04-07 2011-11-25 Method for encoding and decoding image information WO2012138032A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020137020513A KR20140018873A (ko) 2011-04-07 2011-11-25 영상 정보 부호화 방법 및 복호화 방법

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201161472633P 2011-04-07 2011-04-07
US61/472,633 2011-04-07
US201161489257P 2011-05-24 2011-05-24
US61/489,257 2011-05-24
US201161491877P 2011-05-31 2011-05-31
US61/491,877 2011-05-31
US201161504234P 2011-07-03 2011-07-03
US61/504,234 2011-07-03
US201161506157P 2011-07-10 2011-07-10
US61/506,157 2011-07-10

Publications (1)

Publication Number Publication Date
WO2012138032A1 true WO2012138032A1 (fr) 2012-10-11

Family

ID=46969394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/009075 WO2012138032A1 (fr) Method for encoding and decoding image information

Country Status (2)

Country Link
KR (1) KR20140018873A (fr)
WO (1) WO2012138032A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20240049658A (ko) * 2018-10-09 2024-04-16 삼성전자주식회사 비디오 복호화 방법 및 장치, 비디오 부호화 방법 및 장치

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020077630A (ko) * 2001-04-02 2002-10-12 엘지전자 주식회사 동영상에서 b픽쳐의 신축적인 다이렉트 모드 코딩 방법
KR20090099234A (ko) * 2008-03-17 2009-09-22 삼성전자주식회사 영상의 부호화, 복호화 방법 및 장치
KR20100029837A (ko) * 2007-06-15 2010-03-17 콸콤 인코포레이티드 인트라 예측 모드에 따른 레지듀얼 블록들의 적응형 변환


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015057036A1 (fr) * 2013-10-18 2015-04-23 엘지전자 주식회사 Procédé et appareil de décodage de vidéo multivue
US10321157B2 (en) 2013-10-18 2019-06-11 Lg Electronics Inc. Video decoding method and apparatus for decoding multi-view video
WO2015057032A1 (fr) * 2013-10-18 2015-04-23 엘지전자 주식회사 Procédé et appareil de codage/décodage de vidéo multivue
CN111131824A (zh) * 2015-01-15 2020-05-08 株式会社Kt 对视频信号进行解码的方法和对视频信号进行编码的方法
CN111131824B (zh) * 2015-01-15 2023-10-03 株式会社Kt 对视频信号进行解码的方法和对视频信号进行编码的方法
CN107040740A (zh) * 2017-04-26 2017-08-11 中国人民解放军国防科学技术大学 基于信息散度的视频大数据冗余删除方法
CN107040740B (zh) * 2017-04-26 2019-05-14 中国人民解放军国防科学技术大学 基于信息散度的视频大数据冗余删除方法
CN115243041A (zh) * 2018-05-03 2022-10-25 Lg电子株式会社 图像编码和解码方法及解码装置、存储介质和发送方法
CN115243041B (zh) * 2018-05-03 2024-06-04 Lg电子株式会社 图像编码和解码方法及解码装置、存储介质和发送方法
CN113396582B (zh) * 2019-02-01 2024-03-08 北京字节跳动网络技术有限公司 环路整形和调色板模式之间的相互作用
US12096021B2 (en) 2019-02-01 2024-09-17 Beijing Bytedance Network Technology Co., Ltd. Signaling of in-loop reshaping information using parameter sets
CN113396582A (zh) * 2019-02-01 2021-09-14 北京字节跳动网络技术有限公司 环路整形和调色板模式之间的相互作用
CN113545059A (zh) * 2019-02-28 2021-10-22 三星电子株式会社 一种用于预测色度分量的视频编码和解码的方法及其装置
US12022060B2 (en) 2019-02-28 2024-06-25 Samsung Electronics Co., Ltd. Video encoding and decoding method for predicting chroma component, and video encoding and decoding device for predicting chroma component
US12063362B2 (en) 2019-03-23 2024-08-13 Beijing Bytedance Network Technology Co., Ltd Restrictions on adaptive-loop filtering parameter sets
CN114128272A (zh) * 2019-06-20 2022-03-01 Lg电子株式会社 基于亮度样本的映射和色度样本的缩放的视频或图像编码
CN114128272B (zh) * 2019-06-20 2024-03-26 Lg电子株式会社 基于亮度样本的映射和色度样本的缩放的视频或图像编码
US11902527B2 (en) 2019-06-20 2024-02-13 Lg Electronics Inc. Video or image coding based on mapping of luma samples and scaling of chroma samples
CN114009047A (zh) * 2019-06-23 2022-02-01 Lg 电子株式会社 视频/图像编译系统中用于合并数据语法的信令方法和装置
CN114467301A (zh) * 2019-08-31 2022-05-10 Lg 电子株式会社 图像解码方法及其设备
CN114365495A (zh) * 2019-09-09 2022-04-15 北京字节跳动网络技术有限公司 帧内块复制编码与解码
US12069309B2 (en) 2019-09-09 2024-08-20 Beijing Bytedance Network Technology Co., Ltd Intra block copy coding and decoding
CN114424574A (zh) * 2019-09-20 2022-04-29 北京字节跳动网络技术有限公司 编解码块的缩放过程
CN114731436B (zh) * 2019-10-04 2023-06-16 Lg电子株式会社 基于变换的图像编码方法及其设备
CN114731436A (zh) * 2019-10-04 2022-07-08 Lg电子株式会社 基于变换的图像编码方法及其设备

Also Published As

Publication number Publication date
KR20140018873A (ko) 2014-02-13

Similar Documents

Publication Publication Date Title
US11616989B2 (en) Entropy decoding method, and decoding apparatus using same
US11539973B2 (en) Method and device for processing video signal using multiple transform kernels
WO2012138032A1 (fr) Method for encoding and decoding image information
KR102492009B1 (ko) Method for encoding and decoding image information
KR102555352B1 (ko) Intra prediction method, and encoding apparatus and decoding apparatus using same
KR101425772B1 (ko) Method for encoding and decoding images, and apparatus using same
KR102350988B1 (ko) Intra prediction method, and encoder and decoder using same
KR101356439B1 (ko) Image decoding apparatus using quad tree
KR20160093564A (ko) Method and apparatus for processing video signal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11862899

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20137020513

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11862899

Country of ref document: EP

Kind code of ref document: A1