KR20120058384A - Intra prediction process - Google Patents

Intra prediction process

Info

Publication number
KR20120058384A
KR20120058384A (application number KR1020110064301A)
Authority
KR
South Korea
Prior art keywords
prediction
mode
block
unit
pixels
Prior art date
Application number
KR1020110064301A
Other languages
Korean (ko)
Inventor
오수미
Original Assignee
오수미
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 오수미
Published as KR20120058384A.

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/124 Quantisation
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/174 Adaptive coding characterised by the coding unit, the unit being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock

Abstract

The present invention relates to an intra prediction method, and has the effect of further improving compression efficiency in video encoding and decoding.

Description

Intra prediction process

The present invention relates to an intra prediction method.

An intra prediction method is used to compress and reproduce images with high efficiency.

An object of the present invention is to provide a method for performing high-efficiency intra prediction.

The intra prediction method of the present invention is characterized in that different types of intra prediction modes can be applied according to the size of a prediction block.

The present invention has the effect of providing high compression and reproduction efficiency when encoding/decoding video.

FIG. 1 is a diagram for explaining an embodiment of the present invention.
FIG. 2 is a diagram for explaining an embodiment of the present invention.

Test Model under Consideration

I. Picture Split

1) A picture consists of a plurality of slices, and a slice consists of a plurality of largest coding units (LCUs). The location of an LCU is determined by its address (lcuAddr).

2) An LCU can be divided into four sub-CUs. When a further split is not allowed, a CU is regarded as a prediction unit (PU), and the position of the PU is determined by a PU index (puIdx) relative to the upper-left sample of the LCU.

3) A PU may have a plurality of partitions, whose positions are given by a PU partition index (puPartIdx) relative to the upper-left sample of the PU. A PU may also have a plurality of transform units (TUs), and a TU may be divided into four smaller TUs.

4) Inverse coding unit scanning process (p. 6): decoding process

First, the largest coding unit address is read by parsing the bitstream. The size of the largest coding unit is also read. This may be a value previously determined by the encoder and the decoder or may be included in a bitstream (sequence header or picture header) transmitted by the encoder to the decoder. Using the above values, the position of the upper-left sample of the largest coding unit is output.

The lower coding units within the largest coding unit are raster scanned. A coding unit can be divided into four smaller coding units, which are in turn raster scanned within it. Accordingly, coding units are recursively raster scanned, as shown in the drawing and in the sketch below.
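
A minimal sketch (in Python, not the patent's normative pseudo-code) of this recursive raster scan. The function names and the split-decision callback are hypothetical; in practice the split decision would come from the parsed split_coding_unit_flag:

    def scan_coding_unit(x, y, size, is_split, process, min_size=8):
        """Recursively raster-scan a coding unit.

        x, y     -- position of the CU's upper-left sample
        size     -- CU size in samples (assumed square)
        is_split -- callable (x, y, size) -> bool, e.g. from split_coding_unit_flag
        process  -- callable invoked for each leaf CU
        min_size -- smallest CU size, below which no split flag is coded
        """
        if size > min_size and is_split(x, y, size):
            half = size // 2
            # Raster order inside the CU: left-to-right, then top-to-bottom.
            for dy in (0, half):
                for dx in (0, half):
                    scan_coding_unit(x + dx, y + dy, half, is_split, process, min_size)
        else:
            process(x, y, size)

    # Example: scan one 64x64 LCU, splitting everything down to 32x32.
    scan_coding_unit(0, 0, 64,
                     is_split=lambda x, y, s: s > 32,
                     process=lambda x, y, s: print(f"CU at ({x},{y}) size {s}"))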

1) Inverse prediction unit, partition, and transform unit scanning process (p. 8)

If the coding unit is no longer split, the prediction unit is scanned. The position of the prediction unit is specified by the prediction unit index. A prediction unit can itself be divided. Accordingly, the upper-left sample position of the prediction unit is obtained from the prediction unit index, and the upper-left sample position of a prediction unit partition is obtained from the prediction unit partition index.

Likewise, the upper-left sample position of the transform unit can be obtained through the transform unit index.

II. Syntax

1. Sequence parameter (header)

(1) The sequence parameter set may include information indicating the smallest unit size of a CU, PU, or TU, information representing an arbitrary shape region within a PU (prediction unit partition), and information indicating the maximum number of divisions allowed from an LCU (maximum hierarchy depth). These can change from sequence to sequence. The largest CU size is 128. The largest TU size must be less than or equal to the largest CU size.

2. Slice header

1) The slice header contains a slice type. If the slice type is P or B, information indicating the method used to obtain the inter prediction sample (mc_interpolation_idc) and information indicating whether MV competition is used are included. When MV competition is used, the slice header contains information indicating whether a temporal MV is used. A temporal MV refers to a motion vector of a PU existing at or around the same position in an already coded frame. (When a plurality of temporal MVs exist at the same position of the previously coded frame, the motion vector of the upper-left PU may be selected, or their median may be used. Also, if a PU exists at the same position of the previous frame, its motion vector may be referred to as the temporal MV.) When a temporal MV is used, information indicating whether the reference picture belongs to reference picture list 0 or reference picture list 1 may be included. If this information is not included, a default value is assumed (list 0 is used).

2) The slice header also includes mv_competition_temporal_flag (specifying whether the temporally collocated motion vector is used in the motion competition process). The slice header further includes edge_based_prediction_flag, indicating whether the edge-based prediction process shall be invoked during intra prediction. If the value is 0, the edge-based prediction process is not applied. If the value is 1, one of the plurality of intra prediction modes is an edge-based prediction mode. In this case, the edge-based prediction mode may replace the DC prediction mode. This is decided through a predetermined determination process, and information (threshold_edge) used for this determination is included in the slice header.

3) When the slice type is P or B, the slice header includes information on memory management for storing reference pictures. The information is ~

Not only when the slice type is P or B but also when it is I and the like, the slice header includes information indicating whether adaptive loop filtering is applied to the current slice. When the information indicates that adaptive loop filtering is applied, the slice header additionally includes information indicating the filter lengths in the horizontal and vertical directions of the luma component used in the adaptive loop filtering process. The adaptive loop filter may be applied to the reconstructed image, to the prediction signal, or to the quantized prediction error signal. Accordingly, the slice header may include all, or at least one, of the filter information applicable to each. Filter information may be quantized and transmitted; in this case, the slice header includes information indicating the quantization step size. That is, in encoding, the filter coefficients are quantized, entropy encoded, and transmitted. In decoding, adaptive loop filtering is performed using the filter coefficients obtained by inverse quantization after entropy decoding.

The slice header may carry information indicating the number of filter sets used. In this case, if the number of filters is two or more, the filter coefficients may be predictively coded. Therefore, the slice header may include information indicating whether prediction is used; if prediction is used, the predicted filter coefficients are encoded and transmitted, and otherwise the unpredicted filter coefficients are encoded and transmitted.

Meanwhile, the chroma components as well as luma may be adaptively filtered. Therefore, the slice header may include information indicating whether each of the chroma components is filtered. In this case, the information indicating whether to filter Cr and Cb may be jointly coded (multiplexed) to reduce the number of bits. Since the chroma components are most often not filtered, it is preferable to use an entropy code that allocates the least information to that case. In addition, when at least one of the chroma components is filtered, the luma filter may be applied to chroma as it is, so the slice header may include information indicating whether the luma filter is reused or a separate chroma filter is used. When a chroma filter is adaptively used, the adaptive loop filter may be applied to the reconstructed image, to the prediction signal, or to the quantized prediction error signal. When a separate chroma filter is used, the filter information may be quantized and transmitted. In this case, the slice header includes information indicating the quantization step size; in encoding, the filter coefficients are quantized, entropy encoded, and transmitted, and in decoding, adaptive loop filtering is performed using the filter coefficients obtained by inverse quantization after entropy decoding. The horizontal and/or vertical filter lengths, quantization coefficients, and so on of the chroma filter may be the same as for luma or determined separately from luma.

In addition, the slice header may include information indicating whether the loop filtering process for the luma component is applied to all luma samples in the slice. If applied, the adaptive loop filter process is applied to all luma components in the slice. Otherwise, information indicating whether filtering is performed for each CU should be included in the CU header. When applied per CU, the slice header preferably includes information indicating the applicable CU size of the loop filter. The applicable CU size may be expressed as depth information indicating the smallest CU size, counted down from the largest CU, to which the filter applies.

2) Meanwhile, when the slice type is P or B, the slice header may include filter information for motion compensation applicable to each slice (switched filter with offset information). The filter information may include information indicating whether it is coded using prediction. When prediction is used, information indicating whether the currently transmitted filter information is the same as previously transmitted filter information may be included. If this information indicates equality, the coefficients of the filter indicated by the previous filter information are used as the current filter coefficients (in this case, only a filter index may be used). If not equal, a difference between the previous and current filter values, or the current filter value itself (or filter information corresponding thereto), may be transmitted. Information indicating an offset value may be adaptively included together with the filter information (if not included, all offset values may be regarded as 0). The information may indicate whether all offset values equal a specific reference value (for example, 0), or may include the offset values to be applied when they differ from the reference value. Offset values may include values to be applied to both P and B pictures, and values to be applied only to B pictures.

3. Coding unit syntax

A slice includes a slice header and a plurality of CUs. A CU may include information (split_coding_unit_flag) indicating whether the CU is split into smaller sizes. If the current CU is the smallest CU, this information is not included. Alternatively, instead of omitting the information, the encoder and the decoder may determine whether the current CU is the smallest CU and, if it is not the smallest CU, agree that it is split into smaller CUs, or agree that it is not. Therefore, the decoder can determine whether the current CU is divided by checking whether the information exists.

In addition, the CU may include information (alf_flag) indicating whether to apply the adaptive loop filter; if it does not exist, the filter is considered not to apply. Therefore, the decoder checks whether the information exists: if it does not exist, the loop filter is not applied, and if it exists, the loop filter is applied adaptively according to the value.

In addition, when the information (split_coding_unit_flag) indicates that the CU is split into smaller sizes, the CU has a plurality of lower coding units, and these may recursively contain lower coding units of their own. When split_coding_unit_flag indicates that the CU is not divided, a prediction unit is included.

In addition, a transform unit may be included when the prediction mode is neither the skip mode nor the planar prediction of intra prediction.

4. Prediction unit syntax

A prediction unit exists when the slice is not an I slice.

The prediction unit header may include information (skip_flag) indicating whether the current coding unit is skipped, if the slice type is not I. When the information indicates skip, only the motion vector prediction index of list 0 and/or list 1 may be included. If an index does not exist, its value may be regarded as zero.

However, if the information does not indicate skip, information (pred_mode) indicating a prediction mode may be included. The information indicates the mode of the current prediction unit, which may be intra, inter, direct, skip, and the like. A prediction unit of a P or B slice may have any of the above modes.

When the information pred_mode indicates intra prediction, information indicating whether planar prediction is to be applied (planar flag) may be included. When this information indicates that planar prediction is applied, information necessary for applying planar prediction may be included (planar_qdelta_indicator). This information may be a value for reconstructing the information (for example, the bottom-right pixel value) used in the planar prediction by the encoder. Using this value, the decoder extracts a value equal or similar to the value used for planar prediction in the encoder, and generates the prediction block to be added to the residual signal. The prediction unit may also include information indicating whether planar prediction has been performed on chroma as well as luma, and, if so, values for restoring the information used for that prediction. In addition, the prediction unit may include information (intra_split_flag) indicating whether the prediction unit is divided into smaller units. The information may simply indicate whether the current prediction unit is divided into four lower prediction units, or may indicate a non-square prediction unit. A non-square unit may be a partition divided along rectangular or straight lines.

In addition, the intra mode information of luma used for intra prediction is included. The mode information may be encoded by an intra mode encoding method. Therefore, for intra mode decoding, information (prev_intra_luma_pred_flag) indicating whether the current mode is equal to one of the intra prediction modes of the adjacent previous blocks (the smaller of the two values) is parsed first. If it is 1, the intra prediction mode of the current block is obtained from the available modes of the adjacent left or upper block; if it is 0, it is obtained using information (rem_intra_luma_pred_mode) indicating the remaining intra mode. Here, rem_intra_luma_pred_mode indicates the number of the current mode among the remaining intra modes, excluding the smaller of the left and upper mode numbers.

Similarly, intra mode information of chroma may be included. The chroma mode information may include a mode determined using the intra luma mode information (p. 51). In addition, for the same mode number, another prediction method may be used according to the prediction block size (for example, diagonal prediction for 8x8 size and plane prediction for other sizes).

The chroma mode information may be encoded in the same manner as luma, or the intra prediction mode value may be transmitted as it is. Whether to code in the same manner as luma or to transmit the intra prediction mode value as it is may be determined according to the prediction unit size, or according to the number of prediction modes. The number of modes may be determined differently according to the prediction unit size. The chroma mode may also be encoded based on the luma mode.

In addition, information indicating whether the combined intra prediction process is to be applied for the intra prediction sample may be included. (The specific process can be indicated using the table on p. 51.) 1) Load the intra prediction modes at predetermined positions adjacent to the current block. 2-1) When both intra prediction modes are available, compare the intra prediction mode number of the current block with the smaller of the two numbers. 2-2) When only one of the two intra prediction modes is available, compare the intra prediction mode number of the current block with that value. 3) If they are equal in step 2-1) or 2-2), transmit information indicating equality to the decoder. 4) If they are not equal in step 2-1) or 2-2), determine which of the remaining modes, excluding the smaller value (or the one available value), corresponds to the current mode. 5) Transmit information indicating inequality, together with the reordered mode number, to the decoder. This may be applied even when the block sizes are not the same. In this case, among a plurality of left blocks, the intra prediction mode of the uppermost block may be used, and among a plurality of upper blocks, that of the rightmost (or leftmost) block; and for the same mode number, another prediction method may be used according to the prediction block size (for example, diagonal prediction for 8x8 size and plane prediction for other sizes).

When the information pred_mode indicates inter prediction, type information (inter_partitioning_idc) for dividing the current coding unit for inter prediction may be included. The information may be a value determined according to the number of partitions and the horizontal and vertical lengths of the partitions. In the case of an arbitrary shape, it may be information specifying that shape.

In addition, the prediction unit may include information (merge_flag) indicating whether the parameters for inter prediction of a partition or prediction block are inferred from an adjacent (left or upper) inter-predicted partition or prediction block, and, if so, information (merge_left_flag) indicating whether they are derived from the left or the upper partition or prediction block. If not derived, the reference picture index for inter prediction, motion vector resolution information, the motion vector difference, and the reference motion vector information used for mv_competition are included.

Alternatively, it may include information indicating whether some of the parameters for inter prediction of the partition or prediction block are derived from an adjacent (left or upper) inter-predicted partition or prediction block. This information consists of 1) information (merge_left_flag) indicating whether the left or the upper partition or prediction block is used, and 2) information indicating which of the reference picture index for inter prediction, the motion vector resolution information, the motion vector difference, and the reference motion vector information used for mv_competition are derived from that partition or prediction block.

If the information pred_mode indicates a direct mode

5. Transform unit syntax

The VLC for the residual block differs from the existing method.

A new v2v scheme is used for residual blocks. Since one of these two coding types is selected, information indicating which one is selected is required, and if there are specific selection conditions, these are also needed. The selection is set per transform block, so it must be constrained accordingly in the entropy coding part.

III. Intra prediction process

1. Intra prediction process for luma samples

This process is applied when the prediction mode included in the bitstream is intra mode and planar prediction is not applied (planar flag = 0). That is, for intra prediction, it is first determined whether the intra prediction mode of the current prediction block or partition is the planar mode. This is because in the planar mode, extra information (DC difference information?) must be transmitted to the decoder. If the planar mode is determined, the planar mode process described below is performed. If it is not the planar mode, the valid intra prediction modes are input from adjacent prediction blocks. (Of course, separate information indicating the planar mode need not exist, and the planar mode may belong to this process as one of the prediction modes.) According to the information (intra_split_flag) indicating whether the current prediction block is divided into smaller prediction blocks, the luma component of the prediction block is composed of one or more (usually four) partitions. For example, if the information indicates 0 or does not exist, the block has one partition; if the information indicates 1, it has four partitions.

Different types of intra prediction modes may be applied according to the size of the prediction block. That is, prediction unit sizes can be divided into those larger than 8x8 and the other cases (of course, they may be divided into three or more classes, and the threshold is not limited to 8x8).

First, the case where the size of the prediction unit is larger than 8x8 will be described. In this case, intra prediction modes are added to the existing H.264 modes (vertical, horizontal, DC, plane): intra_angle modes and other additional modes (for example, a bilinear mode, a planar mode, etc.). That is, additional intra prediction directions may be represented by (dx, dy).

As the number of intra modes increases, an effective method of encoding a prediction mode is required.

(1) The first method utilizes the intra prediction modes of two adjacent blocks or partitions (located on the upper side and the left side). This method is effective when the number of prediction modes is (2^n + 1). The intra prediction mode (or mode number) of the current block is compared with the smaller of the available intra prediction modes (or mode numbers) of the upper and left adjacent blocks or partitions. If they are the same, information indicating this is transmitted to the decoder. If they are different, the mode numbers of the remaining modes, excluding the smaller value, are reordered, and the number corresponding to the intra prediction mode of the current block is sent to the decoder as an n-bit value, together with the information indicating the mismatch (see the sketch below). However, the larger the number of prediction modes, the less effective this method becomes, so a more effective method is required.
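
A minimal sketch, under assumptions, of method (1): the intra mode is predicted from the smaller of the available left/upper neighbour modes. With 2^n + 1 modes, removing the predictor leaves exactly 2^n candidates, so the remaining index fits in n bits. The DC fallback when no neighbour is valid is an assumption:

    def encode_intra_mode(cur_mode, left_mode, above_mode, dc_mode=2):
        # Fall back to a default (here DC, an assumption) when no neighbour is valid.
        candidates = [m for m in (left_mode, above_mode) if m is not None]
        pred = min(candidates) if candidates else dc_mode
        if cur_mode == pred:
            return (1, None)             # flag only: "same as predictor"
        # Reorder the remaining modes by skipping the predictor.
        rem = cur_mode if cur_mode < pred else cur_mode - 1
        return (0, rem)                  # flag + n-bit remaining index

    def decode_intra_mode(flag, rem, left_mode, above_mode, dc_mode=2):
        candidates = [m for m in (left_mode, above_mode) if m is not None]
        pred = min(candidates) if candidates else dc_mode
        if flag == 1:
            return pred
        return rem if rem < pred else rem + 1

    # Round trip: 9 modes (2^3 + 1), current mode 5, neighbour modes 1 and 7.
    flag, rem = encode_intra_mode(5, 1, 7)
    assert decode_intra_mode(flag, rem, 1, 7) == 5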

(2) Another method is to group the plurality of intra prediction modes. That is, the plurality of intra prediction modes are grouped into a plurality of prediction mode groups, and a group intra mode number is assigned to each group. Each group includes one or more intra prediction modes, of which one is selected as the most probable prediction mode (these may be predetermined values). Next, the intra prediction modes of the prediction blocks or partitions adjacent to the current prediction block or partition, or their group intra mode numbers, are read, and the smaller value is determined. The adjacent prediction blocks or partitions may be located above and to the left. Next, this smaller group intra mode number is compared with the group intra mode number of the group to which the intra prediction mode of the current block or partition belongs. If the two values are the same, the intra prediction mode of the current block is predictively encoded within the group. Specifically, it is determined whether the intra prediction mode of the current block is the most probable prediction mode of the group having the smaller group intra mode number. If yes, information indicating this is transmitted to the decoder; if no, the remaining prediction modes belonging to the group are reordered, and the corresponding prediction number is transmitted to the decoder together with the information indicating the mismatch.

However, if the two values are different, it is preferable to reorder the prediction modes belonging to the group and transmit the corresponding prediction number to the decoder.

Another method is as follows. First, the intra prediction mode of the current prediction block is determined. Next, one prediction mode is selected from the prediction modes of the adjacent blocks. In the selection method, (i) one of the intra prediction modes of the upper and left blocks may be selected, preferably the one with the smaller number. If only one of the two is available, it is selected; if neither is valid, a predetermined one is chosen (for example, one of the vertical, horizontal, or DC modes). (ii) Alternatively, among the available modes of the upper, left, upper-left, and upper-right blocks, the mode whose group contains the most of the indicated directions may be selected. Next, it is determined whether the prediction direction of the selected prediction mode and the prediction direction of the intra prediction mode of the current block are within a predetermined range (when the prediction mode numbers are ordered to match the prediction directions, only the mode numbers need be compared). It is preferable to determine whether the prediction mode of the current block belongs to a prediction mode group (containing a total of five prediction modes) consisting of the selected prediction mode and the two adjacent prediction modes on each side of it. The adjacent prediction modes may be determined using the prediction direction. If the prediction mode of the current intra prediction block falls within the predetermined range, the intra mode of the current block is encoded using the selected prediction mode and the prediction modes adjacent to it. The prediction method at this point is preferably the same as method (1) above; in this case, information that the mode belongs to the predetermined range is transmitted together with the information transmitted in (1).

However, if the prediction mode of the current intra prediction block does not belong to the predetermined range, the modes other than those belonging to the predetermined range are reordered, and the mode number corresponding to the intra prediction direction of the current block's coding mode is sent to the decoder.

Meanwhile, as various prediction modes are added for intra prediction, the number of reference pixels required increases accordingly. However, since the number of valid reference pixels adjacent to the current prediction block is limited, reference pixels must be generated from previously coded and reconstructed pixels in order to predict the various directions. For example, when the boundary pixels of the upper block adjacent to the current prediction block are valid, but the pixels at the bottom of the upper-right block (hereinafter, boundary pixels) are not valid, the boundary pixels of the upper-right block are generated from one or more of the valid boundary pixels. Preferably, the rightmost boundary pixel of the upper block is copied, but the current pixel (a right boundary pixel) may also be generated using two or more pixels to its left. (In this case, a generated upper-right boundary pixel is reused to generate the next boundary pixel.)

Similarly, when the boundary pixels of the left block adjacent to the left side of the current prediction block are valid, but the bottom pixels of the lower-left block (hereinafter, boundary pixels) are not valid, the boundary pixels of the lower-left block are generated from one or more of the valid boundary pixels. This can be done in the same manner as the generation of the upper-right boundary pixels, as sketched below. On the other hand, any other invalid reference pixels may be set to a constant value, preferably 2^(n-1), where n is the number of bits required to represent the luma information. For example, when the luma sample has 256 gray levels, n may be 8.
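
A minimal sketch, assuming 8-bit luma, of this reference sample generation: the nearest valid sample is copied into a missing range (so each generated sample is reused for the next one), and 2^(n-1) is used when no sample is valid. The function name and list representation are hypothetical:

    def pad_reference_samples(above, left, bit_depth=8):
        """above/left: lists of ints (valid) or None (unavailable)."""
        default = 1 << (bit_depth - 1)   # 2^(n-1), e.g. 128 for 8-bit luma

        def pad(samples):
            if all(s is None for s in samples):
                return [default] * len(samples)
            out = list(samples)
            # Forward pass: each missing sample copies its preceding neighbour,
            # so a generated sample is reused to generate the next one.
            for i in range(len(out)):
                if out[i] is None:
                    out[i] = out[i - 1] if i > 0 else next(s for s in out if s is not None)
            return out

        return pad(above), pad(left)

    above, left = pad_reference_samples([100, 102, None, None], [90, None, None, None])
    print(above)   # [100, 102, 102, 102] -- upper-right range copied from rightmost valid
    print(left)    # [90, 90, 90, 90]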

Meanwhile, the existing or generated reference pixels may be adaptively filtered according to the intra prediction mode and the position of the reference pixel. For example, when the prediction mode is vertical, horizontal, DC, or plane, no filtering is applied, and the reference pixels may be filtered for the other modes. Also, depending on position, the leftmost and rightmost reference pixels above the current block are not filtered, and the remaining pixels are filtered using their two neighboring pixels. Similarly, the top and bottom reference pixels on the left side of the current block are not filtered, and the pixels in between are filtered using their two neighboring pixels. The lower-right pixel of the upper-left block adjacent to the current block may be filtered using the leftmost reference pixel of the upper block and the uppermost pixel of the left block. Alternatively, the leftmost reference pixel of the upper block and the uppermost pixel of the left block may be filtered using the lower-right pixel of the upper-left block.
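
A minimal sketch of this position-dependent reference filtering: the two end samples of a reference row or column are left unfiltered, and each interior sample is smoothed with its two neighbours. The [1, 2, 1]/4 kernel is an assumption; the text says only that two neighboring pixels are used:

    def filter_reference_row(samples):
        out = list(samples)              # end samples stay unfiltered
        for i in range(1, len(samples) - 1):
            out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
        return out

    print(filter_reference_row([10, 40, 10, 40, 10]))  # [10, 25, 25, 25, 10]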

On the other hand, directional intra prediction may produce a non-smooth pattern for which the transform is not effective (as in the case of ADI). Therefore, a smoothing filter may be used to solve this problem. That is, it is preferable to use a 4-point filter (the existing filter is an NxN symmetric filter) so as to maintain the directional pattern. In this case, pred[x, y] is filtered as follows:

pred'[x, y] = (pred[x, y] + pred[x-1, y] + pred[x, y-1] + pred[x, y+1] + 2) >> 2
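
A minimal sketch applying this 4-point filter to a prediction block. Samples in the first row, first column, and last row lack an upper, left, or lower neighbour and are left unchanged here; the text does not specify the border handling, so that part is an assumption:

    def smooth_prediction(pred):
        """pred: 2D list of prediction samples; returns the filtered block."""
        h, w = len(pred), len(pred[0])
        out = [row[:] for row in pred]   # border samples stay unfiltered
        for y in range(1, h - 1):
            for x in range(1, w):
                # Left, upper, and lower neighbours, per the formula above.
                out[y][x] = (pred[y][x] + pred[y][x - 1] +
                             pred[y - 1][x] + pred[y + 1][x] + 2) >> 2
        return out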

Planar prediction

For planar prediction, the bottom-right pixel value of the current prediction block, or a value encoded corresponding to it, must be included in the bitstream. However, since transmitting the value as it is consumes bits, it is preferable to set a reference value and transmit the difference from that value.

The reference value may be the average of the valid reference pixels adjacent to the current prediction block. Accordingly, when the boundary pixels of both the upper and left blocks are valid, their average is used; when only the boundary pixels of the upper block are valid, their average; and when only the boundary pixels of the left block are valid, their average. When there are no valid boundary pixels in the upper and left blocks, the value is preferably 2^(n-1), where n is the number of bits required to represent the luma information. Alternatively, the reference value may be either the bottom boundary pixel of the left block or the rightmost boundary pixel of the upper block. In this case, if only the boundary pixels of the left block are valid, the bottom pixel of the left block is used; if only the boundary pixels of the upper block are valid, the rightmost pixel of the upper block is used; and if both are valid, the bottom-right pixel value may be compared with each of the two, and either the one with the smaller difference or their average may be chosen.

Meanwhile, a method of generating the pixels of the prediction block using the reference value is as follows. 1) First, the reference value is linearly combined with the bottom boundary pixel of the left block to generate the bottom prediction pixels of the current block. Similarly, the reference value and the rightmost boundary pixel of the upper block are linearly combined to generate the prediction pixels of the right column of the current block. The remaining pixels are then surrounded by the boundary pixels of the left block, the boundary pixels of the upper block, the bottom prediction pixels, and the right prediction pixels of the current block. Thus, each remaining pixel is a prediction pixel generated by a linear combination of the corresponding pixels in these four directions. When the prediction pixels are generated in this way, the differences between the original pixel values of the prediction block and the generated prediction pixel values are obtained, a difference block is generated, and this block is encoded and transmitted.
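
A minimal sketch, under assumptions, of this planar prediction for an N x N block: the signalled bottom-right value and the left/upper boundary samples define the bottom row and right column by linear interpolation, and every interior sample is interpolated from the four surrounding reference values. The exact weights are not given in the text; simple bilinear weights are assumed:

    def planar_predict(top, left, bottom_right):
        n = len(top)
        # Bottom row: interpolate between the bottom-left boundary sample and BR.
        bottom = [((n - 1 - x) * left[n - 1] + (x + 1) * bottom_right) // n
                  for x in range(n)]
        # Right column: interpolate between the top-right boundary sample and BR.
        right = [((n - 1 - y) * top[n - 1] + (y + 1) * bottom_right) // n
                 for y in range(n)]
        pred = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                # Linear combination of the four surrounding reference values.
                horiz = (n - 1 - x) * left[y] + (x + 1) * right[y]
                vert = (n - 1 - y) * top[x] + (y + 1) * bottom[x]
                pred[y][x] = (horiz + vert) // (2 * n)
        return pred

    p = planar_predict(top=[100, 104, 108, 112], left=[100, 96, 92, 88],
                       bottom_right=120)
    print(p[3][3])   # 120: the prediction reaches the signalled bottom-right value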

In this case, the boundary pixels of the left block may be valid while the boundary pixels of the upper block are invalid. Then, the boundary pixels of the upper block may be generated through a linear combination of the reference value and a value obtained from the boundary pixels of the left block, preferably the uppermost left boundary pixel or the average of the left boundary pixels. Similarly, when the boundary pixels of the upper block are valid but those of the left block are not, the boundary pixels of the left block may be generated through a linear combination of the reference value and a value obtained from the boundary pixels of the upper block (especially, the leftmost value). If the boundary pixels of both the left and upper blocks are invalid, the value is preferably 2^(n-1), where n is the number of bits required to represent the luma information.

Edge-based DC mode

On the other hand, if an edge exists in the block subject to intra prediction, the compression efficiency of existing intra prediction is reduced, and this phenomenon worsens as the size of the prediction block increases. Therefore, an effective prediction mode is needed for blocks having edges. In this case, since a decision value for edge determination must additionally be transmitted to the decoder, the mode may not be effective for small prediction blocks. Therefore, the size of the prediction block to which the mode is applied is preferably larger than, for example, 8x8, and the encoder and decoder may agree not to apply it at sizes of 8x8 or smaller. Accordingly, the encoder can enable the edge determination according to the block size.

The encoder transmits information (edge_based_prediction_flag) indicating whether the edge-based prediction mode is on or off when the block size is greater than or equal to a specific size (e.g., 8x8). The mode may be switched with the DC mode or any other mode, or may be included as a separate mode.

If the information is on, the encoder performs an edge-detection process. Edge detection is applied to pixel values that have already been encoded and decoded.

The decoded pixels used for edge detection consist of an upper region of (2 x the horizontal length of the prediction block) x M pixels, a left region of M x (2 x the vertical length of the prediction block) pixels, and an upper-left M x M region. The value of M may vary depending on the size of the prediction block; for example, it may be 4 for a 16x16 prediction block and 8 for a 32x32 prediction block. It may also be fixed to 4. When some of the decoded pixels are invalid, edge detection is performed only on the valid pixel values.

In order to detect edges using the decoded pixel values, gradient values (gradx and grady) are obtained in the x and y directions in units of 3x3 pixel groups, and the existence of an edge is determined using the sum of their squares. If an edge is determined to exist, a prediction pixel is obtained by linearly combining the surrounding samples using gradx and grady.
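
A minimal sketch of this edge decision per 3x3 pixel group. The Sobel operator and the thresholding against a value such as threshold_edge from the slice header are assumptions; the text says only that x/y gradients are computed and the sum of their squares is tested:

    def has_edge(pixels, y, x, threshold):
        """pixels: 2D list of decoded samples; (y, x): centre of a 3x3 group."""
        p = pixels
        # Sobel-style gradients in the x and y directions (assumed kernels).
        grad_x = (p[y-1][x+1] + 2*p[y][x+1] + p[y+1][x+1]
                  - p[y-1][x-1] - 2*p[y][x-1] - p[y+1][x-1])
        grad_y = (p[y+1][x-1] + 2*p[y+1][x] + p[y+1][x+1]
                  - p[y-1][x-1] - 2*p[y-1][x] - p[y-1][x+1])
        return grad_x * grad_x + grad_y * grad_y > threshold

    flat = [[128] * 3 for _ in range(3)]
    step = [[0, 0, 255] for _ in range(3)]
    print(has_edge(flat, 1, 1, 1000))   # False: no edge in a flat area
    print(has_edge(step, 1, 1, 1000))   # True: strong vertical edge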

On the other hand, if it is determined that the edge does not exist, intra prediction using the DC mode (that is, the average value of valid reference pixels is set as the prediction pixel value) is performed.

2. Intra prediction process for chroma samples

The two chrominance blocks or partitions preferably use the same prediction mode. When the prediction mode for the chrominance block or partition is intra mode and planar prediction is applied (planar flag = 1), prediction pixels are generated in the same manner as the planar prediction for luma described above.

Prediction modes for the chrominance block or partition may include vertical (mode 0), horizontal (mode 1), DC (mode 2), plane (mode 3), and luma (mode 4). In the luma mode, a chroma reference pixel is generated using the distribution of luma pixels. This is described below.

When the prediction mode for the chrominance block or partition is intra mode and planar prediction is not applied (planar flag = 0), and the mode is 0 to 3, a reference pixel is generated by the conventional H.264 method. In the DC mode, the edge-based prediction mode may be applied as for luma. In the luma mode, the reconstructed luma samples are down-sampled using a linear filter, and a segmentation map is created by applying a mean-value method to the samples. The map indicates which reference pixels are used to predict each region. Next, prediction is performed by applying the luma segmentation map to the chroma samples as it is, and the predicted values are filtered to obtain the final prediction pixels. For this filtering, a 3x3 average smoothing filter is preferably used.


FIG. 3 is a block diagram illustrating a video encoding apparatus according to the present invention.

Referring to FIG. 3, the video encoding apparatus 100 according to the present invention includes a picture splitter 110, a transformer 120, a quantizer 130, a scanning unit 131, an entropy encoder 140, an intra predictor 150, an inter predictor 160, an inverse quantizer 135, an inverse transformer 125, a post-processor 170, a picture storage unit 180, a subtractor 190, and an adder 195.

The picture splitter 110 analyzes the input video signal, divides each largest coding unit of a picture into coding units of a predetermined size, determines the prediction mode, and determines the size of the prediction unit for each coding unit. The picture splitter 110 sends the prediction unit to be encoded to the intra predictor 150 or the inter predictor 160 according to the prediction mode. The picture splitter 110 also sends the prediction unit to be encoded to the subtractor 190.

The transformer 120 transforms the residual block, which is the residual signal between the original block of the input prediction unit and the prediction block generated by the intra predictor 150 or the inter predictor 160. The residual block is composed of coding units, and is divided into optimal transform units and transformed. Different transform matrices may be determined according to the prediction mode (intra or inter). In addition, since the residual signal of intra prediction has directionality according to the intra prediction mode, the transform matrix may be adaptively determined according to the intra prediction mode. The transform unit may be transformed by two (horizontal and vertical) one-dimensional transform matrices. For example, in the case of inter prediction, one predetermined transform matrix is determined. In the case of intra prediction, when the intra prediction mode is horizontal, the residual block is likely to have vertical directionality, so a DCT-based integer matrix is applied in the vertical direction and a DST-based or KLT-based integer matrix in the horizontal direction. When the intra prediction mode is vertical, a DST-based or KLT-based integer matrix is applied in the vertical direction and a DCT-based integer matrix in the horizontal direction. In the DC mode, a DCT-based integer matrix is applied in both directions. Furthermore, in the case of intra prediction, the transform matrix may be adaptively determined depending on the size of the transform unit.

The quantizer 130 determines, for each coding unit, the quantization step size for quantizing the coefficients of the residual block transformed by the transform matrix. The coefficients of the transform block are quantized using a quantization matrix determined according to the determined quantization step size and the prediction mode. The quantizer 130 uses the quantization step size of a coding unit adjacent to the current coding unit as the quantization step size predictor of the current coding unit. The quantizer 130 searches the left coding unit, the upper coding unit, and the upper-left coding unit of the current coding unit in that order, determines the quantization step size of the first valid coding unit as the quantization step size predictor of the current coding unit, and transmits the difference value to the entropy encoder 140.

On the other hand, when a slice is divided into coding units, it is possible that none of the left, upper, or upper-left coding units of the current coding unit exists, while a coding unit that precedes the current one in coding order within the largest coding unit may exist. Accordingly, the coding units adjacent to the current coding unit and the immediately preceding coding unit in coding order within the largest coding unit may serve as candidates. In this case, the priority may be: 1) the left coding unit of the current coding unit, 2) the upper coding unit, 3) the upper-left coding unit, and 4) the immediately preceding coding unit in coding order. The order may be changed, and the upper-left coding unit may be omitted.
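
A minimal sketch of this predictor search: scan the candidates in the stated priority order and take the first valid one. The function name and the None encoding of "unavailable" are hypothetical:

    def predict_quant_step(left, above, above_left, prev_in_order):
        """Each argument is a quantization step size or None if unavailable.
        Priority: left, upper, upper-left, then previous CU in coding order."""
        for candidate in (left, above, above_left, prev_in_order):
            if candidate is not None:
                return candidate
        return None  # no predictor available (e.g. first CU of the slice)

    # The encoder then sends only the difference from the predictor:
    current_step = 28
    pred = predict_quant_step(left=None, above=26, above_left=24, prev_in_order=30)
    delta = current_step - pred          # 2 is sent to the entropy encoder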

The quantized transform block is provided to the inverse quantization unit 135 and the scanning unit 131.

The scanning unit 131 scans the coefficients of the quantized transform block and converts them into one-dimensional quantized coefficients. Since the coefficient distribution of the transform block after quantization may depend on the intra prediction mode, the scanning scheme is determined according to the intra prediction mode. In addition, the coefficient scanning scheme may be determined differently according to the size of the transform unit.
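
A minimal sketch of mode-dependent coefficient scanning: a zigzag scan by default, switching to a column or row scan for certain intra modes. Which modes map to which scan is an assumption; the text states only that the scan depends on the intra prediction mode and the transform unit size:

    def scan_coefficients(block, mode):
        n = len(block)
        if mode == "vertical":           # residual varies mostly horizontally
            order = [(y, x) for x in range(n) for y in range(n)]   # column scan
        elif mode == "horizontal":
            order = [(y, x) for y in range(n) for x in range(n)]   # row scan
        else:                            # zigzag over anti-diagonals
            order = sorted(((y, x) for y in range(n) for x in range(n)),
                           key=lambda p: (p[0] + p[1], p[0]))
        return [block[y][x] for (y, x) in order]

    blk = [[9, 3], [2, 0]]
    print(scan_coefficients(blk, "dc"))  # [9, 3, 2, 0]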

The inverse quantization unit 135 inverse quantizes the quantized coefficients. The inverse transform unit 125 restores the inverse quantized transform coefficients to a residual block in the spatial domain. The adder combines the residual block reconstructed by the inverse transform unit with the prediction block from the intra predictor 150 or the inter predictor 160 to generate a reconstructed block.

The post-processing unit 160 performs a deblocking filtering process to remove blocking artifacts occurring in the reconstructed picture, an adaptive offset application process to compensate for the difference from the original image in pixel units, and an adaptive loop filter process to compensate for the difference from the original image in coding units.

The deblocking filtering process is preferably applied to boundaries of prediction units and transform units having at least a predetermined size. The size may be 8x8. The deblocking filtering process includes determining a boundary to filter, determining the boundary filtering strength to be applied to the boundary, determining whether to apply a deblocking filter, and, if it is determined to apply the deblocking filter, selecting a filter to apply to the boundary.

Whether the deblocking filter is applied is determined by i) whether the boundary filtering strength is greater than 0, and ii) whether a value indicating the degree of change of pixel values at the boundary of the two blocks (P block and Q block) adjacent to the boundary to be filtered is smaller than a first reference value determined by the quantization parameter.

Preferably, at least two filters are available. When the absolute value of the difference between two pixels located at the block boundary is greater than or equal to a second reference value, a filter that performs relatively weak filtering is selected. The second reference value is determined by the quantization parameter and the boundary filtering strength.
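
The decision logic above can be summarized in a short sketch; the threshold parameters stand in for the QP-derived first and second reference values, and all names are assumptions.

```python
def deblock_decision(boundary_strength, variation, first_ref, pixel_gap, second_ref):
    """Illustrative deblocking decision: conditions i) and ii) gate the
    filter, and the pixel gap against the second reference value picks
    between a weak and a strong filter."""
    if not (boundary_strength > 0 and variation < first_ref):
        return "no filtering"
    if abs(pixel_gap) >= second_ref:
        return "weak filter"       # large gap across the boundary: filter weakly
    return "strong filter"
```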

The adaptive loop filter process may perform filtering based on a value obtained by comparing the original image with the reconstructed image that has undergone the deblocking filtering process or the adaptive offset application process. The adaptive loop filter is determined from one Laplacian activity value computed on a 4x4 block basis. The determined ALF may be applied to all pixels included in a 4x4 or 8x8 block. Whether to apply the adaptive loop filter may be determined for each coding unit. The size and coefficients of the loop filter may vary per coding unit. Information indicating whether the adaptive loop filter is applied to each coding unit, filter coefficient information, and filter type information may be included in each slice header and transmitted to the decoder. In the case of a chrominance signal, whether to apply the adaptive loop filter may be determined on a picture basis, and unlike the luminance filter, the loop filter for chrominance may have a rectangular shape.
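
The text does not give the exact activity formula, so the block below is only an assumed variant of a 4x4 Laplacian activity measure, offered as a hedged sketch of the classification step:

```python
import numpy as np

def laplacian_activity(block4x4):
    """One Laplacian activity value for a 4x4 block (assumed formula:
    sum of absolute second differences in both directions)."""
    b = np.asarray(block4x4, dtype=np.int64)
    horiz = np.abs(2 * b[:, 1:-1] - b[:, :-2] - b[:, 2:]).sum()
    vert = np.abs(2 * b[1:-1, :] - b[:-2, :] - b[2:, :]).sum()
    return int(horiz + vert)

print(laplacian_activity([[0, 8, 0, 8]] * 4))  # highly active block: 128
```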

The picture storage unit 180 receives the post-processed image data from the post processor 160 and restores and stores the image in picture units. The picture may be an image in a frame unit or an image in a field unit. The picture storage unit 180 includes a buffer (not shown) that can store a plurality of pictures.

The inter prediction unit 160 performs motion estimation using at least one reference picture stored in the picture storage unit 180, and determines a reference picture index identifying the reference picture and a motion vector. According to the determined reference picture index and motion vector, the prediction block corresponding to the prediction unit to be coded is extracted from the reference picture used for motion estimation among the plurality of reference pictures stored in the picture storage unit 180.

The intra predictor 150 performs intra prediction encoding using reconstructed pixel values inside the picture containing the current prediction unit. The intra predictor 150 receives the current prediction unit to be predictively encoded and selects one of a preset number of intra prediction modes, determined by the size of the current block, to perform intra prediction. The intra predictor adaptively filters the reference pixels to generate the intra prediction block. If some reference pixels are not available, they can be generated from the available reference pixels.

The entropy encoder 140 entropy encodes the quantized coefficients output by the quantizer 130, the intra prediction information received from the intra predictor 150, the motion information received from the inter predictor 160, and the like.

FIG. 4 is a flowchart illustrating a scanning operation of the scanning unit 131 of FIG. 3.

First, it is determined whether to divide the current quantized coefficient block into a plurality of subsets (S110). This determination depends on the size of the current transform unit: when the size of the transform unit is larger than a first reference size, the quantized coefficients are divided into a plurality of subsets. Preferably, the first reference size is 4x4 or 8x8. The first reference size information may be transmitted to the decoder through a picture header or a slice header.

If it is determined that the quantized coefficient block is not divided into a plurality of subsets, the scan pattern to be applied to the quantized coefficient block is determined (S120). The coefficients of the quantized coefficient block are then scanned according to the determined scan pattern (S130). The scan pattern may be adaptively determined according to the prediction mode and the intra prediction mode.

In the case of the inter prediction mode, only one predetermined scan pattern (e.g., a zigzag scan) may be applied. Alternatively, any one of a plurality of predetermined scan patterns may be selected for scanning, in which case the scan pattern information may be transmitted to the decoder.

In the case of intra prediction, a predetermined scan pattern can be applied according to the intra prediction mode. For example, a horizontal-priority scan is applied to the vertical intra prediction mode and a predetermined number of intra prediction modes with adjacent prediction directions, and a vertical scan is applied to the horizontal intra prediction mode and a predetermined number of intra prediction modes with adjacent prediction directions. The predetermined number depends on the number of intra prediction modes (or the number of directional intra prediction modes) allowed for each prediction unit, or on the size of the prediction unit. For example, when the number of directional intra prediction modes allowed for the current prediction unit is 16, it is preferable to include two modes on each side of the horizontal and vertical intra prediction modes. When the number of directional intra prediction modes allowed for the current prediction unit is 33, it is preferable to include four modes on each side of the horizontal and vertical intra prediction modes.

Meanwhile, the zigzag scan is applied to the non-directional modes. The non-directional mode may be a DC mode or a planar mode.
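
Putting these scan-pattern rules together, a hedged sketch could look like this; the mode labels and the adjacency sets are placeholders whose contents would follow from the allowed number of directional modes.

```python
def choose_scan(prediction_type, intra_mode=None,
                near_vertical=frozenset(), near_horizontal=frozenset()):
    """Pick a scan pattern per the rules above (illustrative sketch).
    near_vertical / near_horizontal hold the predetermined number of
    modes directionally adjacent to the vertical / horizontal modes."""
    if prediction_type == "inter":
        return "zigzag"                       # one predetermined pattern
    if intra_mode == "vertical" or intra_mode in near_vertical:
        return "horizontal"                   # horizontal-priority scan
    if intra_mode == "horizontal" or intra_mode in near_horizontal:
        return "vertical"
    return "zigzag"                           # DC / planar: non-directional
```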

If it is determined that the current quantized coefficient block is to be divided into a plurality of subsets, the quantized coefficient block is divided accordingly (S140). The plurality of subsets consists of one main subset and at least one remaining subset. The main subset is located at the upper left and contains the DC coefficient, and the remaining subsets cover the area other than the main subset.

Next, the scan pattern to be applied to the subsets is determined (S150). The same scan pattern is applied to all subsets. The scan pattern may be adaptively determined according to the prediction mode and the intra prediction mode. When the size of the current quantized coefficient block (i.e., the size of the transform block) is larger than a second reference size (e.g., 8x8), only the zigzag scan may be applied. Therefore, this step is performed only when the first reference size is smaller than the second reference size.

In the case of the inter prediction mode, the scan pattern within a subset may be one predetermined scan pattern (e.g., a zigzag scan). In the case of intra prediction, the scan pattern is determined in the same manner as in step S120.

Next, the quantization coefficients are scanned according to the scan pattern for each subset (S160). The quantization coefficients in each subset are scanned in the reverse direction. That is, entropy coding may be performed by scanning the nonzero quantization coefficients in the reverse direction, starting from the last nonzero quantization coefficient in the subset according to the scan pattern.

In addition, the zigzag scan may be applied between subsets. It is preferable to scan from the main subset to the remaining subsets in the forward direction, but the reverse is also possible. Alternatively, the scan pattern between subsets may be the same as the scan pattern applied within the subsets.

Meanwhile, the encoder transmits to the decoder information indicating the position of the last nonzero quantization coefficient in the transform unit. Information indicating the position of the last nonzero quantization coefficient within each subset is also transmitted to the decoder.
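
A compact sketch of the subset split and reverse scanning might look as follows; for brevity the subsets themselves are visited in raster order rather than zigzag order, and all names are illustrative.

```python
import numpy as np

def zigzag_order(n):
    """Coordinate list for an n x n zigzag scan."""
    coords = [(r, c) for r in range(n) for c in range(n)]
    key = lambda rc: (rc[0] + rc[1], rc[0] if (rc[0] + rc[1]) % 2 else rc[1])
    return sorted(coords, key=key)

def scan_subsets(block, sub=4):
    """Split a quantized block into sub x sub subsets and scan each one in
    reverse zigzag order from its last nonzero coefficient."""
    block = np.asarray(block)
    order = zigzag_order(sub)
    scanned = []
    for r0 in range(0, block.shape[0], sub):
        for c0 in range(0, block.shape[1], sub):
            coeffs = [block[r0 + r, c0 + c] for r, c in order]
            last = max((i for i, v in enumerate(coeffs) if v != 0), default=-1)
            scanned.append(list(reversed(coeffs[:last + 1])))  # reverse scan
    return scanned
```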

FIG. 5 is a block diagram illustrating a video decoding apparatus according to an embodiment of the present invention.

Referring to FIG. 5, the video decoding apparatus according to the present invention includes an entropy decoding unit 210, an inverse scanning unit 220, an inverse quantization unit 230, an inverse transform unit 240, an intra predictor 250, an inter predictor 260, a post processor 270, a picture storage 280, an adder 290, and an intra/inter switch 295.

The entropy decoding unit 210 decodes the received encoded bit stream and separates it into intra prediction information, inter prediction information, quantization coefficient information, and the like. The entropy decoding unit 210 supplies the decoded inter prediction information to the inter predictor 260, the intra prediction information to the intra predictor 250 and the inverse transform unit 240, and the quantization coefficient information to the inverse scanning unit 220.

The inverse scanning unit 220 converts the quantization coefficient information into a two-dimensional inverse quantization block. One of a plurality of inverse scanning patterns is selected for the conversion; the inverse scanning pattern is determined based on at least one of the prediction mode and the intra prediction mode. The operation of the inverse scanning unit 220 is the inverse of the operation of the scanning unit 131 described above.

The inverse quantization unit 230 determines the quantization step size predictor of the current coding unit. The process of determining the predictor is the same as that of the quantization unit 130 of FIG. 3. The inverse quantization unit 230 adds the determined quantization step size predictor and the received residual quantization step size to obtain the quantization step size applied to the current inverse quantization block. The inverse quantization unit 230 restores the inverse quantized coefficients using the quantization matrix to which the quantization step size is applied. Different quantization matrices are applied according to the size of the current block to be reconstructed, and even for blocks of the same size, the quantization matrix is selected based on at least one of the prediction mode and the intra prediction mode of the current block.

The inverse transform unit 240 inversely transforms the inverse quantization block to restore the residual block. That is, the residual block is reconstructed by inversely transforming the restored quantization coefficients. The inverse transform matrix to be applied to the inverse quantization block may be adaptively determined according to the prediction mode (intra or inter) and the intra prediction mode. Since the inverse transform matrix is the inverse of the transform matrix applied by the transform unit 120 of FIG. 3, a detailed description is omitted.

 The adder 290 reconstructs the image block by adding the residual block reconstructed by the inverse transform unit 240 and the prediction block generated by the intra predictor 250 or the inter predictor 260.

The intra predictor 250 restores the intra prediction mode of the current block based on the intra prediction information received from the entropy decoder 210. The prediction block is generated according to the reconstructed intra prediction mode.

The inter prediction unit 260 reconstructs the reference picture index and the motion vector based on the inter prediction information received from the entropy decoding unit 210. The prediction block for the current block is generated using the reference picture index and the motion vector. When motion compensation with fractional-pel precision is applied, the prediction block is generated by applying the selected interpolation filter.

Since the operation of the post processor 270 is the same as that of the post processor 160 of FIG. 3, a description thereof is omitted.

The picture storage unit 280 stores the decoded image post-processed by the post processor 270 in picture units.

FIG. 6 is a block diagram illustrating the intra prediction unit 150 of the encoding apparatus 100 according to the present invention.

Referring to FIG. 6, the intra predictor 150 includes a reference pixel generator 151, a reference pixel filter 152, a prediction mode determiner 153, a prediction block generator 154, a prediction block filter 155, and a prediction mode encoder 156.

The reference pixel generator 151 determines whether reference pixels for intra prediction need to be generated, and generates them when necessary.

FIG. 7 is a diagram illustrating the positions of the reference pixels used for intra prediction of the current prediction unit. As shown in FIG. 7, the upper reference pixels of the current prediction unit are the pixels (regions C and D) spanning twice the horizontal length of the current prediction unit, and the left reference pixels are the pixels (regions A and B) spanning twice the vertical length of the current prediction unit.

The reference pixel generator 151 determines whether the reference pixels of all the positions are available. If some of the reference pixels are not available, the reference pixels at positions not available are generated using the available reference pixels.

First, a case in which all of the reference pixels of either the upper or left region of the current prediction unit to be encoded are not available will be described.

For example, when the current prediction unit is located at the upper boundary of a picture or slice, there are no reference pixels (regions C and D) above the current prediction unit. Similarly, when the current prediction unit is located at the left boundary of a picture or slice, there are no reference pixels on the left side (regions A and B). As described above, when the reference pixels on one side are not available, they may be generated by copying the nearest available reference pixel. In the former case, the nearest available reference pixel is the uppermost left reference pixel (i.e., the top reference pixel of region A). In the latter case, the nearest available reference pixel is the leftmost upper reference pixel (i.e., the leftmost reference pixel of region C). This scheme may be applied by default, or, if necessary, adaptively per sequence, picture, or slice.
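
A one-dimensional sketch of this copying rule, treating the left and upper reference pixels as a single array with an availability mask, might read as follows (all names are illustrative):

```python
def pad_reference_pixels(ref, available):
    """Fill each unavailable reference pixel with the nearest available one
    (illustrative sketch over a flattened left-then-upper pixel array)."""
    out = list(ref)
    n = len(out)
    for i in range(n):
        if not available[i]:
            for d in range(1, n):              # search outward for a donor
                if i - d >= 0 and available[i - d]:
                    out[i] = out[i - d]; break
                if i + d < n and available[i + d]:
                    out[i] = out[i + d]; break
    return out

print(pad_reference_pixels([0, 0, 7, 9], [False, False, True, True]))
# -> [7, 7, 7, 9]
```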

Next, the case in which some of the upper or left reference pixels of the current prediction unit to be encoded are not available will be described. There are two cases: 1) reference pixels are available on only one side of the unavailable reference pixels, and 2) reference pixels are available on both sides of the unavailable reference pixels.

Case 1) will be described first.

For example, when the current prediction unit is located at the right boundary of the picture or slice, or at the right boundary of the maximum coding unit, reference pixels of the area D are not available. Similarly, when the current prediction unit is located at the bottom boundary of the picture or slice or at the bottom boundary of the maximum coding unit, reference pixels of the region B are not available.

In this case, the reference pixels may be generated by copying the nearest available reference pixel. Alternatively, the reference pixels may be generated from a plurality of available reference pixels nearest to the unavailable positions.

Case 2) will be described next.

For example, if the current prediction unit is located at the upper boundary of a slice and its upper right prediction unit is available, the reference pixels corresponding to region C of the current prediction unit are not available, but the reference pixels located in regions A and D are available. In this case, with reference pixels available on both sides, one available reference pixel nearest in each direction is selected (that is, the uppermost reference pixel of region A and the leftmost reference pixel of region D), and these are used to generate the reference pixels at the unavailable positions.

A rounded average of the two reference pixels (the pixels nearest in each direction) may be used as the generated reference pixel value. However, when the unavailable reference pixel area is large, there is a high possibility of a step between the available pixels and the generated pixels, so it is more useful to generate the reference pixels using linear interpolation. In detail, the value at each unavailable position may be generated taking into account its distance to the two available reference pixels.
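
A small sketch of the position-weighted interpolation, with illustrative names:

```python
def fill_gap_by_interpolation(left_value, right_value, gap_length):
    """Generate the unavailable reference pixels between two available
    pixels by rounded, distance-weighted linear interpolation."""
    filled = []
    for k in range(1, gap_length + 1):
        w = k / (gap_length + 1)               # weight grows toward the right
        filled.append(round((1 - w) * left_value + w * right_value))
    return filled

print(fill_gap_by_interpolation(10, 50, 3))    # [20, 30, 40]
```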

Next, the case in which neither the upper nor the left reference pixels of the current prediction unit to be encoded are available will be described. For example, no reference pixels are available when the current prediction unit is adjacent to the upper left boundary of a picture or slice.

In this case, some or all reference pixels may be generated using two or more pixels present in the current prediction unit. The number of pixels existing in the current prediction unit used to generate the reference pixel may be 2 or 3.

FIG. 8 is a diagram illustrating a method of generating reference pixels according to the present invention.

Referring to FIG. 8, the case where two pixels are used will be described first. In this case, the upper left pixel (○) of the current prediction unit and one other pixel are used. The other pixel may be the upper right pixel (□), the lower left pixel (△), or the lower right pixel (▽). When the upper left pixel (○) and the upper right pixel (□) of the current prediction unit are used, each is copied to the corresponding upper reference position, and the reference pixels of the region between them (region C) are generated using the two copied pixels. The generated reference pixels may take the rounded average of the two copied pixels or values generated by linear interpolation. The reference pixels of region D are generated by copying the upper right pixel (□) or by using the plurality of generated upper reference pixels. The same method applies when the upper left pixel (○) and the lower left pixel (△) of the current prediction unit are used. When the upper left pixel (○) and the lower right pixel (▽) are used, the lower right pixel (▽) is copied to the reference pixel positions at the same horizontal and vertical coordinates. The subsequent reference pixel generation is the same as above.
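
The two-pixel case with the upper left and upper right pixels could be sketched as below; the linear-interpolation choice for region C and the copy rule for region D follow the text, while the function name and integer rounding are assumptions.

```python
def build_top_reference(top_left, top_right, width):
    """Sketch of the two-pixel case: interpolate region C between the two
    copied pixels and fill region D by repeating the upper right pixel."""
    row = []
    for x in range(width):                     # region C: interpolate
        w = x / (width - 1)
        row.append(round((1 - w) * top_left + w * top_right))
    row += [top_right] * width                 # region D: copy
    return row

print(build_top_reference(8, 24, 4))  # [8, 13, 19, 24, 24, 24, 24, 24]
```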

Next, the case of using three pixels will be described. The upper left pixel (○), the upper right pixel (□), and the lower left pixel (△) of the current prediction unit may be used. In this case, each pixel is copied to the corresponding reference pixel position, and the remaining reference pixels are then generated from them in the same manner as in the two-pixel case above.

When this scheme is used, the pixels in the current prediction unit used to generate the reference pixels must be transmitted to the decoder. In this case, to reduce the amount of transmitted information, the pixels other than the upper left pixel (○) are transmitted as difference values from the upper left pixel (○). The upper left pixel value used for reference pixel generation may be a quantized value, or may be entropy encoded before transmission.

The method of generating reference pixels using two or more pixels existing in the current prediction unit is effective when the slice type is intra (I).

Another method of generating a reference pixel when neither the upper and left reference pixels of the current prediction unit to be encoded is available will be described. This method is effective when the slice type is not intra (I).

First, it is determined whether pixels exist in a previously encoded reference picture of the current block at the same positions as the reference pixels of the current prediction unit. If they exist, those pixels in the reference picture are copied to generate the reference pixels of the current prediction unit.

If such pixels do not exist, it is determined whether there are pixels in a previously encoded reference picture nearest to the positions of the reference pixels of the current prediction unit (e.g., one pixel away); if so, those pixels are copied and used as the reference pixels of the current prediction unit.

The reference pixel filtering unit 152 adaptively filters the reference pixels of the current prediction unit. A low-pass filter is applied to smooth the variation of pixel values between reference pixels. The low-pass filter may be the 3-tap filter [1, 2, 1] or the 5-tap filter [1, 2, 4, 2, 1].
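
A sketch of the 3-tap [1, 2, 1] variant follows; leaving the two end pixels unfiltered is an assumption made here for simplicity.

```python
def smooth_reference_pixels(ref):
    """Apply the 3-tap [1, 2, 1] low-pass filter with rounding, keeping
    the first and last reference pixels unfiltered (an assumption)."""
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) >> 2
    return out

print(smooth_reference_pixels([10, 10, 40, 10, 10]))  # [10, 18, 25, 18, 10]
```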

In addition, the filter may be adaptively applied according to the size of the current prediction unit. For example, the filter may not be applied when the current prediction unit is larger than or equal to a predetermined size. In detail, the filter is not applied when the size of the current prediction unit is 64x64.

In addition, the low-pass filter may be adaptively applied according to the size of the current prediction unit and the intra prediction mode.

When the intra prediction mode is horizontal or vertical, only one reference pixel is used to generate each pixel of the prediction block, so the filter is not applied. Likewise, when the intra prediction mode is DC, the average of the reference pixels is used and steps between reference pixels have no effect, so the filter is not applied in the DC mode either.

On the other hand, for the modes in which the intra prediction direction is inclined at 45° with respect to the horizontal or vertical direction, the filter is applied regardless of the size of the current prediction unit. In this case, a first filter may be used for prediction units smaller than a predetermined size, and a second filter with a stronger smoothing effect may be used for prediction units larger than or equal to the predetermined size. The predetermined size may be 16x16.

In the remaining modes, which use at least two reference pixels to generate each pixel of the prediction block, filtering may be applied adaptively according to the size of the current prediction unit and the intra prediction mode. However, in the planar mode, filtering of the reference pixels is not performed.

In addition, a filter may not be applied to the reference pixels generated by the linear combination.

The prediction block generator 154 generates a prediction block corresponding to the intra prediction mode. The prediction block is generated, according to the intra prediction mode, from the reference pixels or a linear combination of the reference pixels. The reference pixels used to generate the prediction block may be the reference pixels filtered by the reference pixel filter 152.

The prediction block filtering unit 155 adaptively filters the prediction block generated by the prediction block generator 154 according to the intra prediction mode used. This operation minimizes the residual signal between the generated prediction block and the current prediction unit to be encoded. That is, the step between the reference pixels and the adjacent pixels in the prediction block varies with the intra prediction mode, so the residual can be reduced by filtering the pixels of prediction blocks generated by intra prediction modes that produce large steps.

In the DC mode, the prediction block consists of the average of the reference pixels, so a step may occur between the reference pixels and the adjacent pixels in the prediction block. Therefore, the pixels of the upper line and the left line in the prediction block, which are adjacent to the reference pixels, are filtered using the reference pixels. Specifically, since the pixel at the upper left corner of the prediction block has two adjacent reference pixels (an upper reference pixel and a left reference pixel), the corner pixel is filtered (smoothed) using a 3-tap filter. The other pixels of the upper line and the left line in the prediction block have one adjacent reference pixel each and are filtered using a 2-tap filter.
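
The tap counts above admit many weightings; the sketch below uses a common (ref + 3·pred + 2) >> 2 form for the 2-tap case and (top + 2·pred + left + 2) >> 2 for the corner, both of which are assumptions since the text fixes only the number of taps.

```python
import numpy as np

def filter_dc_prediction(pred, top_ref, left_ref):
    """Smooth the first row and column of a DC prediction block toward the
    reference pixels: 3-tap at the corner, 2-tap elsewhere (sketch)."""
    p = np.asarray(pred, dtype=np.int64).copy()
    n = p.shape[0]
    p[0, 0] = (top_ref[0] + 2 * p[0, 0] + left_ref[0] + 2) >> 2   # 3-tap corner
    for j in range(1, n):                                         # top row
        p[0, j] = (top_ref[j] + 3 * p[0, j] + 2) >> 2
    for i in range(1, n):                                         # left column
        p[i, 0] = (left_ref[i] + 3 * p[i, 0] + 2) >> 2
    return p
```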

Further, for intra prediction modes 0 and 6 and the modes with directionality between them (mode numbers 22, 12, 23, 5, 24, 13, and 25), the prediction block is generated using only the upper reference pixels of the current prediction unit. Therefore, the step between the reference pixels and the pixels of the left line of the generated prediction block adjacent to them may increase toward the bottom.

Similarly, for intra prediction modes 1 and 9 and the modes with directionality between them (mode numbers 30, 16, 31, 8, 32, 17, and 33), the prediction block is generated using only the left reference pixels of the current prediction unit. Therefore, the step between the reference pixels and the pixels of the upper line of the generated prediction block adjacent to them may increase toward the right.

Accordingly, some pixels of the prediction block are adaptively filtered in the directional modes other than DC to compensate for the step.

If the mode number is 6, all or some of the pixels of the left line in the generated prediction block adjacent to the left reference pixels are filtered. The filtered portion of the left line may be its lower part, for example, N/2 pixels, where N is the height of the current prediction unit.

Similarly, if the mode number is 9, all or some of the pixels of the upper line in the generated prediction block adjacent to the upper reference pixels are filtered. The filtered portion of the upper line may be its right part, for example, M/2 pixels, where M is the width of the current prediction unit.

In addition, filtering may be performed in the same manner as for mode 6 on a predetermined number of modes whose direction is close to that of mode 6, among the modes with directionality between modes 0 and 6. In this case, the farther a mode is from mode 6, the same or smaller the number of filtered pixels may be.

In addition, the same method may be applied to the modes having the directionality between the mode numbers 1 and 9.

Meanwhile, the filtering may be applied adaptively according to the size of the current prediction unit. For example, for each intra prediction mode, the filter may not be applied to prediction units smaller than or equal to a predetermined size.

The prediction mode determiner 153 determines the intra prediction mode of the current prediction unit using the reference pixels. The prediction mode determiner 153 may select, as the intra prediction mode of the current prediction unit, the mode for which the coding amount of the residual block is minimal. To obtain the residual block, a prediction block is generated for each intra prediction mode. The prediction block may use pixels filtered by the reference pixel filtering unit, or may be a prediction block filtered by the prediction block filtering unit 155, depending on predetermined conditions for each intra prediction mode.

The prediction block transmitter 157 transmits the prediction block generated according to the intra prediction mode determined by the prediction mode determiner 153 to the subtractor 190.

The prediction mode encoder 156 encodes the intra prediction mode of the current prediction unit determined by the prediction mode determiner 153. The prediction mode encoder 156 may be included in and executed by the intra predictor 150, or its function may be performed by the entropy encoder 140.

The prediction mode encoder 156 encodes the intra prediction mode of the current prediction unit by using the upper intra prediction mode of the current prediction unit and the left intra prediction mode of the current prediction unit.

First, the upper and left intra prediction modes of the current prediction unit are derived. When there are a plurality of upper prediction units of the current prediction unit, the intra prediction mode of the first valid prediction unit encountered while scanning in a predetermined direction (for example, right to left) is set as the upper intra prediction mode. Likewise, when there are a plurality of left prediction units of the current prediction unit, the intra prediction mode of the first valid prediction unit encountered while scanning in a predetermined direction (e.g., bottom to top) may be set as the left intra prediction mode. Alternatively, the smallest mode number among the mode numbers of the plurality of valid prediction units may be set as the upper intra prediction mode.

If the upper intra prediction mode or the left intra prediction mode is not valid, the DC mode (mode number 2) may be set as the upper or left intra prediction mode. The upper or left intra prediction mode is treated as invalid when the corresponding prediction unit does not exist.

Next, if the upper or left intra prediction mode number is greater than or equal to the number of intra prediction modes allowed in the current prediction unit, the upper or left intra prediction mode is converted into one of a predetermined number of intra prediction modes. The predetermined number depends on the size of the current prediction unit. For example, if the size of the current prediction unit is 4x4, the mode is mapped to one of nine modes (modes 0 to 8); if the size of the current prediction unit is 64x64, it is mapped to one of three modes (modes 0 to 2). Alternatively, the mode may be converted into one of the intra prediction modes allowed in the current prediction unit.

Next, if the intra prediction mode number of the current prediction unit is the same as one of the left and upper intra prediction mode numbers, a flag indicating the match and a flag indicating which of the upper and left intra prediction modes it matches are transmitted. If the left and upper intra prediction modes are the same, only the flag indicating the match may be transmitted. Likewise, if only one of the upper and left intra prediction modes is valid and it is the same as the intra prediction mode of the current block, only the flag indicating the match with the neighboring block's mode may be transmitted.

However, when the intra prediction mode of the current prediction unit differs from both the left and upper intra prediction modes, the intra prediction mode number of the current prediction unit is compared with the left and upper intra prediction mode numbers. The number of cases in which the left or upper intra prediction mode number is not greater than the intra prediction mode number of the current prediction unit is counted, and the intra prediction mode number of the current prediction unit, reduced by that count, is determined as the final intra prediction mode to be transmitted. Here, if the left and upper intra prediction mode numbers are the same, they are counted as one.
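
The matching-flag and mode-number-reduction rules can be condensed into a short sketch; the dictionary-style output and function names are illustrative.

```python
def encode_intra_mode(current_mode, left_mode, upper_mode):
    """Illustrative encoder-side mode signalling: a flag plus candidate
    index on a match, otherwise a reduced remaining mode number."""
    candidates = sorted({m for m in (left_mode, upper_mode) if m is not None})
    if current_mode in candidates:
        return {"pred_flag": 1, "candidate_index": candidates.index(current_mode)}
    reduction = sum(1 for m in candidates if m <= current_mode)
    return {"pred_flag": 0, "remaining_mode": current_mode - reduction}

print(encode_intra_mode(7, left_mode=3, upper_mode=3))
# {'pred_flag': 0, 'remaining_mode': 6}  (equal neighbours counted once)
```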

Depending on whether the upper and left intra prediction modes are the same, a table for entropy encoding the determined final intra prediction mode is determined.

FIG. 9 is a block diagram illustrating the intra predictor 250 of the decoding apparatus 200 according to the present invention.

The intra predictor 250 according to the present invention includes a prediction mode decoder 251, a reference pixel generator 252, a reference pixel filter 253, a prediction block generator 254, a prediction block filter 255, and a prediction block transmitter 256.

The prediction mode decoder 251 restores the intra prediction mode of the current prediction unit through the following process.

First, the side information for generating a prediction block, contained in the received coding unit, is parsed. The side information includes a predictable flag and remaining prediction mode information. The predictable flag indicates whether the intra prediction mode of the current prediction unit is the same as that of one of its neighboring prediction units. The content of the remaining prediction mode information depends on the predictable flag. If the predictable flag value is 1, the remaining prediction mode information may include a prediction mode candidate index, which designates an intra prediction mode candidate. If the predictable flag value is 0, the remaining prediction mode information includes a remaining intra prediction mode number.

Next, the intra prediction mode candidates of the current prediction unit are derived from the intra prediction modes of the neighboring prediction units. Here, for convenience, the intra prediction mode candidates of the current prediction unit are limited to the upper and left intra prediction modes. When there are a plurality of upper or left prediction units of the current prediction unit, the upper and left intra prediction modes are derived in the same manner as in the prediction mode encoder 156 of the encoding apparatus 100 described above. Also, if the upper or left intra prediction mode number is greater than or equal to the number of intra prediction modes allowed in the current prediction unit, it is converted in the same manner as in the prediction mode encoder 156.

Next, if the received predictable flag indicates a match and the prediction mode candidate index exists, the prediction mode indicated by the prediction mode candidate index is determined as the intra prediction mode of the current prediction unit.

If the received predictable flag indicates a match with the intra prediction mode of a neighboring prediction unit but the prediction mode candidate index does not exist, and there is exactly one valid intra prediction mode among the neighboring prediction units, that mode is restored as the intra prediction mode of the current prediction unit.

If the received predictable flag indicates no match with the intra prediction modes of the adjacent prediction units, the received remaining intra prediction mode number is compared with the mode numbers of the valid intra prediction mode candidates to restore the intra prediction mode of the current prediction unit.
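
Mirroring the encoder-side sketch given earlier, the decoder-side restoration could be sketched as follows; the payload keys match that illustrative encoder and are assumptions.

```python
def decode_intra_mode(pred_flag, payload, left_mode, upper_mode):
    """Restore the intra prediction mode from the predictable flag plus
    either a candidate index or a remaining mode number (sketch)."""
    candidates = sorted({m for m in (left_mode, upper_mode) if m is not None})
    if pred_flag == 1:
        if "candidate_index" in payload:
            return candidates[payload["candidate_index"]]
        return candidates[0]            # single valid neighbour, no index sent
    mode = payload["remaining_mode"]
    for m in candidates:                # undo the encoder's reduction
        if m <= mode:
            mode += 1
    return mode

print(decode_intra_mode(0, {"remaining_mode": 6}, 3, 3))  # restores mode 7
```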

The reference pixel generator 252 generates reference pixels in the same manner as the reference pixel generator 151 of the encoding apparatus 100. However, it differs from the reference pixel generator 151 in that it generates reference pixels adaptively according to the intra prediction mode reconstructed by the prediction mode decoder 251. That is, the reference pixel generator 252 generates reference pixels only when the reference pixels needed to generate the prediction block according to the reconstructed intra prediction mode are not valid.

The reference pixel filter 253 adaptively filters the reference pixels based on the intra prediction mode reconstructed by the prediction mode decoder 251 and on the size information of the current prediction unit. The filtering conditions and the filter are the same as those of the reference pixel filtering unit 152 of the encoding apparatus 100.

The prediction block generator 254 generates a prediction block using the reference pixels according to the intra prediction mode reconstructed by the prediction mode decoder 251.

The prediction block filtering unit 255 adaptively filters the prediction block according to the intra prediction mode reconstructed by the prediction mode decoder 251. Its operation is the same as that of the prediction block filtering unit 155 of the encoding apparatus 100.

The prediction block transmitter 256 transmits the prediction block received from the prediction block generator 254 or the prediction block filtering unit 255 to the adder 290.

Claims (1)

An intra prediction method characterized in that different types of intra prediction modes can be applied according to the size of the prediction block.