WO2019103543A1 - Method and device for entropy encoding and decoding of a video signal - Google Patents
- Publication number
- WO2019103543A1 (Application PCT/KR2018/014560)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- prediction
- mpm
- mode group
- context table
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to a method and apparatus for entropy encoding and decoding video signals. More particularly, the present invention relates to a method and apparatus for designing a Context-based Adaptive Binary Arithmetic Coding (CABAC) context model for a syntax element.
- Entropy coding is a process of generating a raw byte sequence payload (RBSP) by losslessly compressing the syntax elements determined through the encoding process. Using the statistics of the syntax elements, entropy coding assigns short codewords to frequently occurring syntax values and long codewords to infrequent ones, thereby expressing the syntax elements as concise data.
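The short-code/long-code principle above can be illustrated with a small sketch. The symbol names and occurrence counts below are hypothetical, chosen only for illustration: the ideal code length of a syntax value is the negative base-2 log of its relative frequency, so a value that occurs twice as often ideally costs one bit less.

```python
import math

def ideal_code_lengths(counts):
    """Ideal (information-theoretic) code length in bits for each
    syntax value, given its occurrence count: -log2(probability).
    Frequently occurring values receive shorter codes."""
    total = sum(counts.values())
    return {sym: -math.log2(c / total) for sym, c in counts.items()}

# Hypothetical statistics: "A" occurs twice as often as "B" or "C".
lengths = ideal_code_lengths({"A": 8, "B": 4, "C": 4})
# "A" ideally gets a 1-bit code, "B" and "C" 2-bit codes.
```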
- CABAC uses a probability model that is adaptively updated, based on the context of a syntax element and the previously generated symbols, in the process of performing binary arithmetic coding.
- However, CABAC has high computational complexity and a sequential structure, which makes parallel processing difficult.
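A minimal sketch of the adaptive-update idea follows. This is not the actual CABAC state machine (which uses a 64-state table with integer transitions); the floating-point model and the update factor `alpha` are simplifications for illustration. Observing the most probable symbol (MPS) makes the least probable symbol (LPS) less likely, observing the LPS makes it more likely, and the two swap roles if the estimate crosses 0.5.

```python
def update_lps_prob(p_lps, bin_val, mps, alpha=0.95):
    """One adaptive probability update in the spirit of CABAC:
    p_lps is the current LPS probability estimate, mps the current
    most probable symbol value (0 or 1)."""
    if bin_val == mps:
        p_lps *= alpha                       # MPS observed: LPS shrinks
    else:
        p_lps = alpha * p_lps + (1 - alpha)  # LPS observed: LPS grows
    if p_lps > 0.5:                          # the symbols swap roles
        p_lps, mps = 1.0 - p_lps, 1 - mps
    return p_lps, mps

p1, m1 = update_lps_prob(0.4, 0, 0)  # MPS observed: estimate drops
p2, m2 = update_lps_prob(0.4, 1, 0)  # LPS observed: estimate rises
```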
- a method of performing entropy decoding on a video signal according to the present invention comprises the steps of: entropy decoding an MPM flag indicating whether a current block is encoded using an MPM (Most Probable Mode); generating an MPM candidate list using an intra prediction mode of a block neighboring the current block when the current block is encoded using the MPM; selecting a context table of an MPM index indicating an intra prediction mode of the current block based on a candidate intra prediction mode included in the MPM candidate list; and entropy decoding the MPM index based on the context table, wherein the context table of the MPM index may be selected as a table containing contexts mapped to the prediction mode group, among predetermined prediction mode groups, that includes the intra prediction mode of the candidate.
- the selecting of the context table of the MPM index may include selecting, for each bin of the MPM index, a context table mapped to the candidate order of the MPM candidate list.
- the selecting of the context table of the MPM index may include selecting a context table of the first bin of the MPM index using the intra prediction mode of the first candidate of the MPM candidate list, selecting a context table of the second bin of the MPM index using the intra prediction mode of the second candidate of the MPM candidate list, and selecting a context table of the third bin of the MPM index using the intra prediction mode of the third candidate of the MPM candidate list.
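A sketch of the per-bin selection just described, under assumed mode numbering (0 = planar, 1 = DC, 18 = horizontal, 50 = vertical in a 67-mode scheme). The group layout and function names are illustrative, not the patent's exact tables: the n-th bin of the MPM index takes its context from the mode group of the n-th MPM candidate.

```python
def mode_group(mode):
    """Two-group example: group 0 holds the frequently selected modes
    (planar, DC, horizontal, vertical), group 1 holds all others."""
    return 0 if mode in (0, 1, 18, 50) else 1

def select_bin_contexts(mpm_candidates):
    """Context index for each of the first three MPM-index bins,
    chosen from the mode group of the corresponding candidate."""
    return [mode_group(m) for m in mpm_candidates[:3]]

# First candidate planar, second vertical, third angular mode 34.
contexts = select_bin_contexts([0, 50, 34])
```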
- the predetermined prediction mode groups include a first mode group and a second mode group
- the first mode group includes a planar mode, a DC mode, a horizontal mode and a vertical mode
- the second mode group may include prediction modes other than the prediction modes included in the first mode group.
- the predetermined prediction mode groups include a first mode group, a second mode group and a third mode group
- the first mode group includes non-directional modes
- the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- the predetermined prediction mode groups include a first mode group, a second mode group and a third mode group; the first mode group includes non-directional modes; the second mode group includes a horizontal mode, a vertical mode, six prediction modes adjacent to the prediction direction of the horizontal mode, and six prediction modes adjacent to the prediction direction of the vertical mode; and the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
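The three-group variant can be sketched as follows. The mode numbers (planar = 0, DC = 1, horizontal = 18, vertical = 50 in a 67-mode scheme) and the reading of "six adjacent modes" as three on each side of the direction are assumptions for illustration, not the patent's normative tables.

```python
def mode_group3(mode, horizontal=18, vertical=50):
    """Three predetermined mode groups:
    group 0: non-directional modes (planar = 0, DC = 1)
    group 1: horizontal/vertical plus the six modes adjacent to each
    group 2: all remaining directional modes"""
    if mode in (0, 1):
        return 0
    if abs(mode - horizontal) <= 3 or abs(mode - vertical) <= 3:
        return 1
    return 2
```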
- an apparatus for performing entropy decoding on a video signal according to the present invention comprises: an MPM flag decoding unit entropy decoding an MPM flag indicating whether a current block is encoded using an MPM (Most Probable Mode); an MPM candidate list generation unit generating an MPM candidate list using an intra prediction mode of a block neighboring the current block when the current block is encoded using the MPM; a context table selection unit selecting a context table of an MPM index indicating an intra prediction mode of the current block based on a candidate intra prediction mode included in the MPM candidate list; and an MPM index decoding unit entropy decoding the MPM index based on the context table, wherein the context table of the MPM index may be selected as a table containing contexts mapped to the prediction mode group, among predetermined prediction mode groups, that includes the intra prediction mode of the candidate.
- the context table selection unit may select, for each bin of the MPM index, a context table mapped to the candidate order of the MPM candidate list.
- the context table selection unit may select the context table of the first bin of the MPM index using the intra prediction mode of the first candidate of the MPM candidate list, the context table of the second bin of the MPM index using the intra prediction mode of the second candidate of the MPM candidate list, and the context table of the third bin of the MPM index using the intra prediction mode of the third candidate of the MPM candidate list.
- the predetermined prediction mode groups include a first mode group and a second mode group
- the first mode group includes a planar mode, a DC mode, a horizontal mode and a vertical mode
- the second mode group may include prediction modes other than the prediction modes included in the first mode group.
- the predetermined prediction mode groups include a first mode group, a second mode group and a third mode group
- the first mode group includes non-directional modes
- the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- the predetermined prediction mode groups include a first mode group, a second mode group and a third mode group; the first mode group includes non-directional modes; the second mode group includes a horizontal mode, a vertical mode, six prediction modes adjacent to the prediction direction of the horizontal mode, and six prediction modes adjacent to the prediction direction of the vertical mode; and the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- by separately grouping the prediction modes that are selected relatively frequently when determining the context for the MPM (Most Probable Mode) index, context models that effectively reflect the statistics of the symbol occurrence probability can be obtained, and thus the entropy encoding/decoding performance can be improved.
- FIG. 1 is a schematic block diagram of an encoder in which still image or moving picture signal encoding is performed according to an embodiment of the present invention.
- FIG. 2 is a schematic block diagram of a decoder in which still image or moving picture signal decoding is performed according to an embodiment of the present invention.
- FIG. 3 is a diagram for explaining a block division structure of a QT (QuadTree, hereinafter referred to as 'QT') to which the present invention can be applied.
- FIG. 4 is a diagram for explaining a BT (Binary Tree, hereinafter referred to as 'BT') block division structure to which the present invention can be applied.
- FIG. 5 is a diagram for explaining a block division structure of a TT (Ternary Tree) block according to an embodiment of the present invention.
- FIG. 6 is a diagram for explaining an AT (Asymmetric Tree) block partitioning structure to which the present invention can be applied.
- FIG. 7 is a diagram illustrating an intra prediction method according to an embodiment to which the present invention is applied.
- FIG. 8 illustrates a prediction direction according to an intra prediction mode.
- FIG. 9 is a diagram illustrating a prediction direction according to an intra prediction mode, to which the present invention is applied.
- FIG. 10 is a diagram illustrating a method of interpolating a reference sample to generate a prediction sample, to which the present invention is applied.
- FIG. 11 is a diagram for explaining a method of constructing an MPM (Most Probable Mode) using a prediction mode of a neighboring block, to which the present invention is applied.
- FIG. 12 is a schematic block diagram of an entropy encoding unit to which CABAC (Context-based Adaptive Binary Arithmetic Coding) is applied according to an embodiment of the present invention.
- FIG. 13 is a schematic block diagram of an entropy decoding unit to which CABAC (Context-based Adaptive Binary Arithmetic Coding) is applied according to an embodiment of the present invention.
- FIG. 14 shows an encoding flow chart performed according to CABAC (Context-based Adaptive Binary Arithmetic Coding) according to an embodiment to which the present invention is applied.
- FIG. 15 is a flowchart illustrating a decoding process performed according to CABAC (Context-based Adaptive Binary Arithmetic Coding) according to an embodiment of the present invention.
- FIG. 16 is a block diagram illustrating a method for selecting a context model based on neighboring blocks according to an embodiment to which the present invention is applied.
- FIG. 17 is a flowchart illustrating a method of selecting a context model using a left block and an upper block according to an embodiment to which the present invention is applied.
- FIG. 18 is a view for explaining an arithmetic coding method according to an embodiment to which the present invention is applied.
- FIG. 19 is a flowchart illustrating a context modeling method of an MPM index according to an embodiment to which the present invention is applied.
- FIG. 20 is a diagram illustrating an entropy decoding apparatus according to an embodiment to which the present invention is applied.
- FIG. 21 is a diagram for explaining a method of selecting a context model using a left block and an upper block as an embodiment to which the present invention is applied.
- FIG. 22 is a diagram for explaining a method of selecting a context model using a left block and an upper block, according to an embodiment to which the present invention is applied.
- FIG. 23 shows a video coding system to which the present invention is applied.
- FIG. 24 shows a structure of a content streaming system as an embodiment to which the present invention is applied.
- in this specification, 'processing unit' means a unit on which encoding/decoding processing such as prediction, transform and/or quantization is performed.
- the processing unit may be referred to as a "processing block" or a "block".
- the processing unit may be interpreted to include a unit for the luma component and a unit for the chroma component.
- the processing unit may correspond to a coding tree unit (CTU), a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
- the processing unit can be interpreted as a unit for a luminance (luma) component or as a unit for a chroma component.
- the processing unit may correspond to a Coding Tree Block (CTB), a Coding Block (CB), a Prediction Block (PB), or a Transform Block (TB).
- the processing unit may be interpreted to include a unit for the luma component and a unit for the chroma component.
- the processing unit is not necessarily limited to a square block and may be configured as a polygonal shape having three or more vertices.
- in this specification, a pixel, a pel, or the like is collectively referred to as a sample.
- using a sample may mean using a pixel value or the like.
- FIG. 1 is a schematic block diagram of an encoder in which still image or moving picture signal encoding is performed according to an embodiment of the present invention.
- an encoder 100 includes an image divider 110, a subtractor 115, a transformer 120, a quantizer 130, an inverse quantizer 140, an inverse transformer 150, a filtering unit 160, a decoded picture buffer (DPB) 170, a prediction unit 180, and an entropy encoding unit 190.
- the prediction unit 180 may include an inter prediction unit 181 and an intra prediction unit 182.
- the image divider 110 divides an input video signal (or a picture, a frame) input to the encoder 100 into one or more processing units.
- the subtractor 115 subtracts a prediction signal (or a prediction block) output from the prediction unit 180 (i.e., the inter prediction unit 181 or the intra prediction unit 182) from the input video signal and generates a residual signal (or difference block).
- the generated difference signal (or difference block) is transmitted to the transform unit 120.
- the transform unit 120 transforms the difference signal (or difference block) using a transform technique (for example, DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), GBT (Graph-Based Transform), KLT (Karhunen-Loève Transform), etc.) to generate transform coefficients.
- the transform unit 120 may generate transform coefficients by performing transform using a transform technique determined according to a prediction mode applied to a difference block and a size of a difference block.
- the quantization unit 130 quantizes the transform coefficients and transmits the quantized transform coefficients to the entropy encoding unit 190.
- the entropy encoding unit 190 entropy-codes the quantized signals and outputs them as a bitstream.
- the quantized signal output from the quantization unit 130 may be used to generate a prediction signal.
- the quantized signal can be reconstructed by applying inverse quantization and inverse transformation through the inverse quantization unit 140 and the inverse transform unit 150 in the loop.
- a reconstructed signal can be generated by adding the reconstructed difference signal to a prediction signal output from the inter prediction unit 181 or the intra prediction unit 182.
- the filtering unit 160 applies filtering to the restored signal and outputs the restored signal to the playback apparatus or the decoded picture buffer 170.
- the filtered signal transmitted to the decoded picture buffer 170 may be used as a reference picture in the inter prediction unit 181. As described above, not only the picture quality but also the coding efficiency can be improved by using the filtered picture as a reference picture in the inter-picture prediction mode.
- the decoded picture buffer 170 may store the filtered picture for use as a reference picture in the inter-prediction unit 181.
- the inter-prediction unit 181 performs temporal prediction and / or spatial prediction to remove temporal redundancy and / or spatial redundancy with reference to a reconstructed picture.
- since the reference picture used for prediction is a transformed signal that underwent quantization and inverse quantization in units of blocks during earlier encoding/decoding, blocking artifacts or ringing artifacts may exist.
- the inter-prediction unit 181 can interpolate signals between pixels by sub-pixel by applying a low-pass filter in order to solve the performance degradation due to discontinuity or quantization of such signals.
- a subpixel means a virtual pixel generated by applying an interpolation filter
- an integer pixel means an actual pixel existing in a reconstructed picture.
- as the interpolation method, linear interpolation, bi-linear interpolation, a Wiener filter, and the like can be applied.
- the interpolation filter may be applied to a reconstructed picture to improve the accuracy of the prediction.
- the inter prediction unit 181 generates interpolated pixels by applying an interpolation filter to integer pixels, and can perform prediction using an interpolated block composed of interpolated pixels as a prediction block.
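The sub-pixel idea can be sketched with the simplest possible interpolation filter: a 2-tap bilinear average. Real codecs use longer 6- or 8-tap low-pass filters, so this is illustrative only.

```python
def half_pel(row):
    """Insert a virtual half-sample position between every pair of
    integer pixels as their rounded average (a 2-tap low-pass filter)."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b + 1) // 2)  # virtual pixel at the half position
    out.append(row[-1])
    return out

# Integer pixels 10, 20, 30 gain half-pel samples 15 and 25.
```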
- the intra predictor 182 predicts a current block by referring to samples in the vicinity of a block to be currently encoded.
- the intra prediction unit 182 may perform the following procedure to perform intra prediction. First, reference samples necessary for generating a prediction signal can be prepared. Then, a prediction signal can be generated using the prepared reference samples. Thereafter, the prediction mode is encoded. At this time, the reference samples can be prepared through reference sample padding and/or reference sample filtering. Since the reference samples have undergone prediction and reconstruction processes, quantization errors may exist. Therefore, a reference sample filtering process can be performed for each prediction mode used for intra prediction to reduce such errors.
- a prediction signal (or prediction block) generated through the inter prediction unit 181 or the intra prediction unit 182 is used to generate a reconstruction signal (or reconstruction block) or a difference signal (or difference block).
- FIG. 2 is a schematic block diagram of a decoder in which still image or moving picture signal decoding is performed according to an embodiment of the present invention.
- the decoder 200 includes an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an adder 235, a filtering unit 240, a decoded picture buffer (DPB) unit 250, and a prediction unit 260.
- the prediction unit 260 may include an inter prediction unit 261 and an intra prediction unit 262.
- the reconstructed video signal output through the decoder 200 may be reproduced through a reproducing apparatus.
- the decoder 200 receives a signal (i.e., a bit stream) output from the encoder 100 of FIG. 1, and the received signal is entropy-decoded through the entropy decoding unit 210.
- the inverse quantization unit 220 obtains a transform coefficient from the entropy-decoded signal using the quantization step size information.
- the inverse transform unit 230 obtains a residual signal (or a difference block) by inverse transforming the transform coefficient by applying an inverse transform technique.
- the adder 235 adds the obtained difference signal (or difference block) to the prediction signal output from the prediction unit 260 (i.e., the inter prediction unit 261 or the intra prediction unit 262) to generate a reconstructed signal (or reconstructed block).
- the filtering unit 240 applies filtering to a reconstructed signal (or a reconstructed block) and outputs it to a reproducing apparatus or transmits the reconstructed signal to a decoding picture buffer unit 250.
- the filtered signal transmitted to the decoding picture buffer unit 250 may be used as a reference picture in the inter prediction unit 261.
- the embodiments described for the filtering unit 160, the inter prediction unit 181, and the intra prediction unit 182 of the encoder 100 can be applied in the same way to the filtering unit 240, the inter prediction unit 261, and the intra prediction unit 262 of the decoder, respectively.
- FIG. 3 is a diagram for explaining a block division structure of a QT (QuadTree, hereinafter referred to as 'QT') to which the present invention can be applied.
- One block in video coding can be segmented based on QT (QuadTree).
- one sub-block divided by QT can be further recursively partitioned using QT.
- a leaf block that is not QT-divided can be divided by at least one of BT (Binary Tree), TT (Ternary Tree), or AT (Asymmetric Tree).
- BT can have two types of segmentation: horizontal BT (2NxN, 2NxN) and vertical BT (Nx2N, Nx2N).
- TT can have two types of segmentation: horizontal TT (2Nx1/2N, 2NxN, 2Nx1/2N) and vertical TT (1/2Nx2N, Nx2N, 1/2Nx2N).
- AT can have four types of segmentation: horizontal-up AT (2Nx1/2N, 2Nx3/2N), horizontal-down AT (2Nx3/2N, 2Nx1/2N), vertical-left AT (1/2Nx2N, 3/2Nx2N), and vertical-right AT (3/2Nx2N, 1/2Nx2N).
- Each BT, TT, and AT can be recursively further partitioned using BT, TT, and AT.
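The sub-block shapes listed above can be summarized in a small helper. The split labels (`"HOR_BT"`, `"VER_LEFT_AT"`, etc.) are invented names for illustration; the sizes follow the 2Nx2N notation of the text, with the AT cases shown for the up/left variants (the down/right variants mirror them).

```python
def split_sizes(w, h, split):
    """(width, height) of each sub-block produced by splitting a
    w x h block, per the QT/BT/TT/AT shapes described above."""
    table = {
        "QT":          [(w // 2, h // 2)] * 4,
        "HOR_BT":      [(w, h // 2)] * 2,
        "VER_BT":      [(w // 2, h)] * 2,
        "HOR_TT":      [(w, h // 4), (w, h // 2), (w, h // 4)],
        "VER_TT":      [(w // 4, h), (w // 2, h), (w // 4, h)],
        "HOR_UP_AT":   [(w, h // 4), (w, 3 * h // 4)],
        "VER_LEFT_AT": [(w // 4, h), (3 * w // 4, h)],
    }
    return table[split]
```

For every split type, the sub-block areas sum to the area of the parent block, i.e. each split tiles the block exactly.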
- FIG. 3 shows an example of QT division.
- the block A can be divided into four sub-blocks (A0, A1, A2, A3) by QT.
- the sub-block A1 can be further divided into four sub-blocks (B0, B1, B2, B3) by QT.
- FIG. 4 is a diagram for explaining a BT (Binary Tree, hereinafter referred to as 'BT') block division structure to which the present invention can be applied.
- FIG. 4 shows an example of BT division.
- Block B3, which is no longer partitioned by QT, can be divided into a vertical BT (C0, C1) or a horizontal BT (D0, D1).
- each sub-block can be further recursively partitioned, such as in the form of horizontal BT (E0, E1) or vertical BT (F0, F1).
- FIG. 5 is a diagram for explaining a block division structure of a TT (Ternary Tree) block according to an embodiment of the present invention.
- FIG. 5 shows an example of TT division.
- Block B3, which is no longer partitioned by QT, may be divided into a vertical TT (C0, C1, C2) or a horizontal TT (D0, D1, D2).
- each sub-block can be further recursively divided into a horizontal TT (E0, E1, E2) or a vertical TT (F0, F1, F2).
- FIG. 6 is a diagram for explaining an AT (Asymmetric Tree) block partitioning structure to which the present invention can be applied.
- Block B3, which is no longer partitioned by QT, may be partitioned into a vertical AT (C0, C1) or a horizontal AT (D0, D1).
- each sub-block can be further recursively partitioned, such as in the form of a horizontal AT (E0, E1) or a vertical AT (F0, F1).
- BT, TT, and AT segmentation can be used together.
- a subblock divided by BT can be divided by TT or AT.
- subblocks divided by TT can be divided by BT or AT.
- a subblock divided by AT can be divided by BT or TT.
- for example, after a horizontal BT partition, each sub-block may be partitioned by a vertical BT; or after a vertical BT partition, each sub-block may be partitioned by a horizontal BT.
- although the division order differs, the two division methods produce the same final division shape.
- block searching is performed from left to right and from top to bottom. Searching a block means the procedure for determining whether each divided sub-block is further divided, the coding order of each sub-block when a block is not further divided, or the search order when a sub-block refers to information of another neighboring block.
- the decoded portion of the current picture, or of other pictures containing the current processing unit, may be used to reconstruct the current processing unit on which decoding is performed.
- a picture (slice) that uses only the current picture for reconstruction, that is, performs only intra-picture prediction, is referred to as an intra picture or an I picture (slice); a picture (slice) that uses at most one motion vector and one reference index to predict each unit is referred to as a predictive picture or a P picture (slice); and a picture (slice) that uses at most two motion vectors and reference indices is referred to as a bi-predictive picture or a B picture (slice).
- Intra prediction refers to a prediction method that derives the current processing block from a data element (e.g., a sample value, etc.) of the same decoded picture (or slice). That is, it means a method of predicting the pixel value of the current processing block by referring to the reconstructed areas in the current picture.
- Inter prediction refers to a prediction method of deriving a current processing block based on a data element (e.g., a sample value or a motion vector) of a picture other than the current picture. That is, this means a method of predicting pixel values of a current processing block by referring to reconstructed areas in other reconstructed pictures other than the current picture.
- hereinafter, intra prediction (or intra-picture prediction) will be described in more detail.
- FIG. 7 is a diagram illustrating an intra prediction method according to an embodiment to which the present invention is applied.
- the decoder derives an intra prediction mode of the current processing block (S701).
- in intra prediction, a prediction mode may have a prediction direction with respect to the position of the reference samples used for prediction.
- an intra prediction mode having a prediction direction is referred to as an intra-angular prediction mode (Intra_Angular prediction mode).
- as intra prediction modes having no prediction direction, there are an intra-planar (INTRA_PLANAR) prediction mode and an intra-DC (INTRA_DC) prediction mode.
- Table 1 illustrates the intra-prediction mode and related names
- FIG. 8 illustrates the prediction direction according to the intra-prediction mode.
- in intra prediction, prediction is performed on the current processing block based on the derived prediction mode. Since the reference samples used for prediction and the concrete prediction method differ according to the prediction mode, when the current block is encoded in an intra prediction mode, the decoder derives the prediction mode of the current block in order to perform prediction.
- the decoder checks whether neighboring samples of the current processing block can be used for prediction, and constructs reference samples to be used for prediction (S702).
- the neighboring samples of the current processing block of size nS x nS include a sample adjacent to the left boundary and a total of 2 x nS samples neighboring the bottom-left, a sample adjacent to the top boundary and a total of 2 x nS samples neighboring the top-right, and one sample neighboring the top-left of the current processing block.
- the decoder may substitute samples that are not available with the available samples to construct reference samples for use in prediction.
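A minimal sketch of this substitution (padding) step, assuming unavailable reference samples are marked `None` and are replaced by the nearest available neighbour. The actual padding order in a codec is specified per standard, so this is illustrative only.

```python
def pad_reference_samples(samples):
    """Replace unavailable samples (None) with the nearest available
    value: a forward pass copies the last seen sample, then a backward
    pass fills any remaining gap at the start of the array."""
    out = list(samples)
    last = None
    for i, s in enumerate(out):          # forward: propagate last value
        if s is None:
            out[i] = last
        else:
            last = s
    for i in range(len(out) - 2, -1, -1):  # backward: fill a leading gap
        if out[i] is None:
            out[i] = out[i + 1]
    return out
```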
- the decoder may perform filtering of the reference samples based on the intra prediction mode (S703).
- Whether or not the filtering of the reference sample is performed can be determined based on the size of the current processing block.
- the filtering method of the reference sample may be determined by a filtering flag transmitted from the encoder.
- the decoder generates a prediction block for the current processing block based on the intra prediction mode and the reference samples (S704). That is, the decoder generates the prediction block (i.e., generates prediction samples) for the current processing block based on the intra prediction mode derived in the intra prediction mode deriving step S701 and the reference samples obtained through the reference sample construction step S702 and the reference sample filtering step S703.
- in the INTRA_DC mode, filtering may be applied to the left boundary samples of the prediction block (i.e., the samples in the prediction block adjacent to the left boundary) and to the upper boundary samples (i.e., the samples in the prediction block adjacent to the upper boundary).
- similarly, for the vertical mode and the horizontal mode among the intra directional prediction modes, filtering may be applied to the left boundary samples or the upper boundary samples.
- the value of a predicted sample can be derived based on a reference sample located in a prediction direction.
- the boundary sample which is not located in the prediction direction may be adjacent to the reference sample which is not used for prediction. That is, the distance from the reference sample that is not used for prediction may be much closer than the distance from the reference sample used for prediction.
- the decoder may adaptively apply filtering to the left boundary samples or the upper boundary samples according to whether the intra-prediction direction is vertical or horizontal. That is, when the intra prediction direction is vertical, filtering is applied to the left boundary samples, and filtering is applied to the upper boundary samples when the intra prediction direction is the horizontal direction.
- FIG. 9 is a diagram illustrating a prediction direction according to an intra prediction mode, to which the present invention is applied.
- among the 67 prediction modes, the remaining 65 directional prediction modes, excluding the non-directional planar mode and DC mode, can have prediction directions as shown in FIG. 9, and the encoder / decoder can perform intra prediction by copying a reference sample determined according to the direction.
- the prediction mode numbers from 2 to 66 can be sequentially allocated from the lower left prediction direction to the upper right prediction direction, respectively.
- the method proposed by the present invention mainly describes intra prediction using the 65 recently discussed prediction modes, but can also be applied in the same manner to intra prediction using the 35 conventional prediction modes.
- FIG. 10 is a diagram illustrating a method of interpolating a reference sample to generate a prediction sample, to which the present invention is applied.
- the encoder / decoder can generate a prediction sample by copying a reference sample determined according to the prediction direction of the intra prediction mode. If the reference sample determined according to the prediction direction is not at an integer pixel position, the encoder / decoder can interpolate adjacent integer-pixel reference samples to compute a reference sample at the fractional pixel location and copy it to generate a prediction sample.
- the encoder / decoder can compute an interpolated reference sample using the reference samples at the two corresponding integer pixel positions and the distance ratio obtained from the angle of the prediction mode, as shown in FIG. 10.
- the encoder / decoder can then generate a predicted sample by copying the computed interpolated reference sample.
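As a rough sketch of this interpolation-and-copy step, assuming 1/32-sample accuracy and a simple two-tap linear filter (actual designs may use the longer filters described below; names are illustrative):

```python
def interpolate_reference(ref, x, frac32):
    """Two-tap linear interpolation between the integer reference samples
    ref[x] and ref[x + 1]; frac32 is the fractional position in 1/32 units,
    with rounding offset 16 before the shift by 5 (division by 32)."""
    return ((32 - frac32) * ref[x] + frac32 * ref[x + 1] + 16) >> 5
```

The interpolated value is then copied into the prediction block as the prediction sample.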
- a tan value for the angle θ of the prediction mode may be defined to calculate the position of the sub-pixel (i.e., the fractional pixel). Also, in order to reduce the complexity of the operation, it can be defined by scaling in integer units, and tan θ can be determined for each of the 67 prediction modes using Table 2 below.
- the tan⁻¹ θ value for some prediction modes can be determined using Table 3 below.
- the encoder / decoder may apply an interpolation filter on the integer pixel reference samples.
- the interpolation filter can be selectively determined according to the size of the current processing block.
- the encoder / decoder performs interpolation on reference samples at integer pixel locations; when the width or height of the current processing block is less than or equal to 8, a cubic filter is used as the interpolation filter. If the width or height of the current processing block is greater than 8, the encoder / decoder may use a Gaussian filter as the interpolation filter.
- the directional prediction modes may be classified into the vertical direction prediction modes when the prediction mode is greater than or equal to prediction mode 34, and the horizontal direction prediction modes when the prediction mode is smaller than prediction mode 34. In the vertical direction prediction mode, the encoder / decoder selects the interpolation filter based on the width of the current processing block; in the horizontal direction prediction mode, the interpolation filter can be selected based on the height of the current processing block.
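The filter-selection rule just described can be sketched as follows (the mode numbering follows FIG. 9 and the threshold of 8 follows the text; the function name is illustrative):

```python
def select_interpolation_filter(mode, width, height, threshold=8):
    """Modes >= 34 are treated as vertical and use the block width; modes
    < 34 as horizontal and use the block height. Blocks whose relevant
    dimension is <= threshold use a cubic filter, larger ones a Gaussian."""
    size = width if mode >= 34 else height
    return "cubic" if size <= threshold else "gaussian"
```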
- the DC mode indicates a prediction method of constructing a prediction block using an average value of reference samples located around the current block.
- An effective prediction can be expected when the pixels in the current processing block are homogeneous.
- when the values of the reference samples are not uniform, a discontinuity may occur between the prediction block and the reference samples.
- a planar prediction method has been devised.
- the planar prediction method constructs a prediction block by performing horizontal linear prediction and vertical linear prediction using surrounding reference samples and then averaging them.
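A minimal sketch of this averaging of horizontal and vertical linear predictions follows; an HEVC-style weighting is assumed, and the names are illustrative:

```python
def planar_predict(left, top, top_right, bottom_left, nS):
    """left/top: lists of nS reconstructed neighbor samples. The horizontal
    prediction interpolates each row between left[y] and the top-right
    sample, the vertical prediction interpolates each column between top[x]
    and the bottom-left sample, and the two are averaged with rounding."""
    pred = [[0] * nS for _ in range(nS)]
    for y in range(nS):
        for x in range(nS):
            h = (nS - 1 - x) * left[y] + (x + 1) * top_right
            v = (nS - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y][x] = (h + v + nS) // (2 * nS)
    return pred
```

With uniform neighbors the prediction reproduces the neighbor value, which is the homogeneous case where averaging-style prediction works well.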
- the encoder / decoder may perform post-processing filtering to alleviate the discontinuity between the reference samples and the prediction block boundary for blocks predicted in the horizontal direction, the vertical direction, and the DC mode. Thereafter, the decoder can restore the block encoded by intra prediction by summing the prediction block and the residual signal inverse-transformed into the pixel domain.
- the prediction mode information is transmitted to the decoder, where the MPM (Most Probable Mode) can be used for efficient encoding of the prediction mode.
- the MPM starts from the assumption that the intra prediction mode of the current processing block will be the same as or similar to the prediction mode of a previously intra-predicted neighboring block. This will be described with reference to the following drawings.
- FIG. 11 is a diagram for explaining a method of constructing an MPM (Most Probable Mode) using a prediction mode of a neighboring block, to which the present invention is applied.
- the encoder / decoder may use a prediction mode of a neighboring block to construct an MPM candidate list.
- the maximum number of MPM candidates constituting the MPM candidate list is 6.
- the present invention is not limited to this.
- the number of MPM candidates applied to the proposed method may be 3, 4, or 5, or may be 7 or more.
- the encoder / decoder may likewise construct an MPM candidate list of six candidates (which may be referred to as an MPM list, MPM candidates, an MPM candidate group, etc. in the present invention) using the prediction modes of the neighboring blocks.
- the encoder / decoder can construct an MPM candidate list in the order of Left (L), Above (A), Planar, DC, Below left (BL), Above right (AR) and Above left (AL).
- when the MPM candidate list is not filled with the maximum number of candidates, the prediction modes defined as the default modes are added to the MPM list.
- the default mode represents a prediction mode (or a prediction mode group) that is preferentially considered, and may include statistically selected prediction modes.
- the default mode may be configured to include a total of six prediction modes: 50, 18, 2, 34, 60, and 65 prediction modes.
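Under these assumptions, the list construction with duplicate removal and default-mode filling could be sketched as follows (the candidate order and the default set follow the text; `None` marks an unavailable neighbor, and the function name is hypothetical):

```python
def build_mpm_list(neighbor_modes, default_modes=(50, 18, 2, 34, 60, 65), size=6):
    """neighbor_modes: candidate modes in the order L, A, Planar, DC, BL,
    AR, AL (None for unavailable neighbors). Duplicates are skipped; if
    fewer than `size` candidates remain, default modes fill the list."""
    mpm = []
    for m in neighbor_modes:
        if m is not None and m not in mpm:
            mpm.append(m)
        if len(mpm) == size:
            return mpm
    for m in default_modes:
        if m not in mpm:
            mpm.append(m)
        if len(mpm) == size:
            break
    return mpm
```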
- if the current prediction mode is a prediction mode existing in the MPM candidate list, index information indicating the prediction mode to be applied to the current intra prediction within the MPM list is transmitted. Therefore, compared with signaling one of all 67 prediction modes directly (in which case a total of 7 bits is required), the number of bits for prediction mode signaling can be saved.
- the index of the MPM may be binarized in a truncated unary manner.
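For six MPM candidates, truncated unary binarization assigns shorter bin strings to lower indices; a sketch:

```python
def truncated_unary(idx, max_idx=5):
    """Truncated unary binarization of an MPM index: idx ones followed by a
    terminating zero, except that the largest index omits the zero."""
    bits = "1" * idx
    return bits if idx == max_idx else bits + "0"
```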
- encoding / decoding can be performed by separating the context into three context tables (horizontal direction, vertical direction, and non-directional mode) according to the direction of the MPM mode.
- if the current prediction mode does not exist in the MPM candidate list, one of the modes excluding the six MPM candidates is applied to the current block, so the prediction mode information can be accurately transmitted by signaling among only a total of 61 prediction modes.
- when the current block to be decoded is coded in the intra mode, the decoder decodes the residual signal from the video signal transmitted from the encoder. At this time, the decoder performs entropy decoding on the symbol-coded signal based on the probability, and then performs inverse quantization and inverse transform to restore the residual signal of the pixel domain. Meanwhile, the intra prediction unit 262 of the decoder generates a prediction block using the prediction mode transmitted from the encoder and the already reconstructed neighboring reference samples. Thereafter, the prediction signal and the decoded residual signal are summed to reconstruct the intra-predicted block.
- FIG. 12 is a schematic block diagram of an entropy encoding unit to which CABAC (Context-based Adaptive Binary Arithmetic Coding) is applied according to an embodiment of the present invention.
- the entropy encoding unit 1200 to which the present invention is applied includes a binarization unit 1210, a context modeling unit 1220, a binary arithmetic encoding unit 1230, and a memory 1260, and the binary arithmetic encoding unit 1230 includes a regular binary encoding unit 1240 and a bypass binary encoding unit 1250.
- the regular binary encoding unit 1240 and the bypass binary encoding unit 1250 may be referred to as a regular coding engine and a bypass coding engine, respectively.
- the binarization unit 1210 may receive a sequence of data symbols and perform binarization to output a binary symbol string composed of binary values of 0 or 1.
- the binarization unit 1210 may map syntax elements to binary symbols. Different binarization processes, such as unary (U), truncated unary (TU), k-th order Exp-Golomb (EGk), and fixed-length (FL) binarization, may be used, and the binarization process can be selected based on the type of syntax element.
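As one concrete example of these processes, a CABAC-style k-th order Exp-Golomb binarization can be sketched as follows; this is a generic formulation, not necessarily bit-exact to any particular standard:

```python
def exp_golomb_k(value, k):
    """CABAC-style k-th order Exp-Golomb: a run of ones counts escape
    steps (each doubling the group size), then a zero, then k suffix bits."""
    bits = []
    while value >= (1 << k):
        bits.append("1")
        value -= 1 << k
        k += 1
    bits.append("0")
    if k:
        bits.append(format(value, "0{}b".format(k)))
    return "".join(bits)
```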
- the outputted binary symbol string is transmitted to the context modeling unit 1220.
- the context modeling unit 1220 selects probability information necessary for coding the current block from the memory, and transmits the probability information to the binary arithmetic encoding unit 1230.
- the context memory may be selected based on the syntax element to be coded, and the probability information required for coding the current syntax element may be selected via the bin index binIdx.
- context refers to information on the probability of occurrence of a symbol
- context modeling refers to a process of estimating the probability of a bin necessary for binary arithmetic coding with bin as a binarization result.
- the context modeling unit 1220 can provide an accurate probability estimate required to achieve high coding efficiency. Thus, different context models may be used for different binary symbols and the probability of this context model may be updated based on the values of the previously coded binary symbols. At this time, the values of the previously coded binary symbols are stored in the memory 1260, and the context modeling unit 1220 can use the values of the previously coded binary symbols.
- Binary symbols with similar distributions may share the same context model.
- the context model for each of these binary symbols may use, for probability estimation, at least one of the syntax information of the bin, the bin index (binIdx) indicating the position of the bin in the bin string, and the probability of the bin included in a block neighboring the block containing the bin.
- the binary arithmetic encoding unit 1230 may perform binary arithmetic encoding based on the context model.
- the binary arithmetic encoding unit 1230 includes a regular binary encoding unit 1240 and a bypass binary encoding unit 1250.
- the binary arithmetic encoding unit 1230 performs entropy encoding on the output bin string and outputs the compressed data bits.
- the regular binary encoding unit 1240 performs arithmetic coding based on a recursive interval division.
- an interval (or range) having an initial value of 0 to 1 is divided into two sub-intervals based on the probability of the binary symbol.
- when converted to a binary fraction, the encoded bits provide an offset that selects one of the two sub-intervals, representing the value of the decoded binary symbol.
- the interval may be updated to equal the selected sub-interval, and the interval division process itself is repeated.
- the intervals and offsets have limited bit precision, so renormalization may be required to prevent overflow whenever the interval falls below a certain value. Renormalization may occur after each binary symbol is encoded or decoded.
- the bypass binary encoding unit 1250 performs encoding without a context model, performing coding with the probability of the currently coded bin fixed to 0.5. This can be used when it is difficult to determine the probability of a syntax element or when high-speed coding is desired.
- FIG. 13 is a schematic block diagram of an entropy decoding unit to which CABAC (Context-based Adaptive Binary Arithmetic Coding) is applied according to an embodiment of the present invention.
- the entropy decoding unit 1300 to which the present invention is applied includes a context modeling unit 1310, a binary arithmetic decoding unit 1320, a memory 1350, and an inverse binarization unit 1360.
- the binary arithmetic decoding unit 1320 includes a regular binary decoding unit 1330 and a bypass binary decoding unit 1340.
- the entropy decoding unit 1300 can receive a bitstream and identify a bypass flag from the bitstream.
- the bypass flag indicates whether the bypass mode is used; the bypass mode means that coding is performed without a context model, with the probability of the currently coded bin fixed to 0.5.
- when the bypass mode is not indicated by the bypass flag, the regular binary decoding unit 1330 performs binary arithmetic decoding according to the regular mode.
- the context modeling unit 1310 selects probability information necessary for decoding the current bitstream from the memory 1350, and transmits the probability information to the regular binary decoding unit 1330.
- the binary arithmetic decoding unit 1320 may perform binary arithmetic decoding based on the context model.
- the bypass binary decoding unit 1340 performs binary arithmetic decoding according to the bypass mode.
- the inverse binarization unit 1360 receives the bins decoded by the binary arithmetic decoding unit 1320 and converts them into an integer syntax element value.
- FIG. 14 shows an encoding flow chart performed according to CABAC (Context-based Adaptive Binary Arithmetic Coding) according to an embodiment to which the present invention is applied.
- the encoder may perform binarization on the syntax element (S1410).
- the encoder may determine whether to perform binary arithmetic coding according to the regular mode or according to the bypass mode (S1420). For example, the encoder can determine whether the mode is the regular mode or the bypass mode based on a bypass flag; a bypass flag value of 1 indicates the bypass mode, and a value of 0 indicates the regular mode.
- the encoder may select a probability model (S1430) and perform binary arithmetic encoding based on the probability model (S1440).
- the encoder may update the probability model (S1450), and may select a suitable probability model again based on the updated probability model in operation S1430.
- the encoder may perform binary arithmetic encoding based on probability 0.5 (S1460).
- FIG. 15 is a flowchart illustrating a decoding process performed according to CABAC (Context-based Adaptive Binary Arithmetic Coding) according to an embodiment of the present invention.
- the decoder can receive the bit stream (S1510).
- the decoder extracts a bypass flag from the bitstream to check whether it is in the normal mode or the bypass mode (S1520).
- the bypass flag may be determined in advance according to the type of the syntax.
- the decoder can select a probability model (S1530) and perform binary arithmetic decoding based on the probability model (S1540).
- the decoder may update the probability model (S1550), and may then select an appropriate probability model based on the updated probability model in step S1530.
- the decoder can perform binary arithmetic decoding based on the probability 0.5 (S1560).
- the decoder may perform inverse binarization on the decoded bin string (S1570). For example, it is possible to receive a decoded binary-type bin and convert it to an integer type syntax element value.
- FIG. 16 is a diagram illustrating a method of selecting a context model based on neighboring blocks according to an embodiment to which the present invention is applied.
- CABAC's context model can be considered variously depending on statistical characteristics. For example, when using one context model, one context model can be used without considering special conditions. However, when three context models are used, the context model can be designed based on conditions, that is, based on the syntax element of the neighboring block.
- the current block is denoted by C
- the left block adjacent to the current block is denoted by L
- the upper block is denoted by A.
- the context model for the syntax of the current block C can be determined using at least one of the left block L and the upper block A which are neighboring blocks.
- the following Equation 1 shows a method of selecting a context model using the left block and the upper block.
- availableL and availableA indicate whether the left block and the upper block exist, respectively, and condL and condA indicate the values of corresponding syntaxes for the left block and the upper block, respectively.
- three context models may be used according to the syntax values of neighboring blocks.
- the context model may be determined according to a syntax value of a neighboring block regardless of a size of a current block or a size of a neighboring block.
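Based on the variables described above, the neighbor-based derivation of Equation 1 is commonly of the form ctxIdx = (condL && availableL) + (condA && availableA); a sketch under that assumption:

```python
def select_ctx_idx(avail_l, cond_l, avail_a, cond_a):
    """Each available neighbor whose corresponding syntax value is set
    contributes 1 to the context index, yielding one of three context
    models (0, 1, or 2)."""
    return int(avail_l and cond_l) + int(avail_a and cond_a)
```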
- FIG. 17 is a flowchart illustrating a method of selecting a context model using a left block and an upper block according to an embodiment to which the present invention is applied.
- a method for selecting a context model according to the present invention can be applied to both an encoder and a decoder, and will be described with reference to a decoder for convenience of explanation.
- the decoder can derive a left block and an upper block adjacent to the current block (S1710). That is, it can be confirmed whether the left block and the upper block adjacent to the current block are available.
- the decoder may derive a syntax value from at least one of the left block and the upper block (S1720).
- the decoder can determine the context model by deriving the context index value based on the syntax value of at least one of the left block and the upper block (S1730).
- the decoder may perform binary arithmetic decoding based on the context model (S1740).
- Entropy coding in video coding is a process of losslessly compressing the syntax elements determined in the encoding process and expressing them as concise data. Lossless compression of information can be expected by using the statistics of the syntax to express a frequently appearing syntax value with a short bit string and a rarely appearing syntax value with a long bit string.
- Entropy coding is largely divided into variable length coding and arithmetic coding.
- Variable-length coding can be expected to compress information by allocating a small number of bits to frequently occurring information and a large number of bits to infrequently occurring information, as described above.
- however, variable-length coding has to allocate at least one bit even to symbols having a high probability. Therefore, in recent video coding, an arithmetic coding method, which expresses a plurality of symbols with a single real number value, is mainly used.
- FIG. 18 is a view for explaining an arithmetic coding method according to an embodiment to which the present invention is applied.
- the initial value of the range is [0, 1], and the interval between 0 and 1 is divided into several sub-intervals according to the probability of occurrence of each symbol. Then, the sub-interval allocated to the current symbol to be encoded is selected. In the next step, the selected sub-interval becomes the next coding interval. Therefore, if both the encoder and the decoder know the initial probability information about each symbol, the symbols can be completely restored.
- binary arithmetic coding is for binary symbols having only the two values 0 or 1. Therefore, there is only the probability p(0) or p(1) for each symbol, and the interval can be divided into only two sub-intervals according to the probability.
- the encoder obtains a probability interval for the bit stream 0010 with respect to a binary model in which the probability of occurrence of 0 is 0.6, and transmits to the decoder the number in the interval that can be expressed with the fewest bits (0.25). Then, the decoder can restore the bit stream 0010 with the same probability model, operating in the reverse of the encoder.
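The example above can be checked numerically: with p(0) = 0.6 the bit string 0010 maps to the interval [0.216, 0.3024), which contains 0.25, and the decoder recovers the bits by re-dividing the interval in the same way. This is a floating-point sketch, not a fixed-precision coding engine:

```python
def arith_interval(bits, p0):
    """Final probability interval for a bit string under a static binary
    model: '0' takes the lower p0 fraction of the current interval."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p0
        if b == "0":
            high = split
        else:
            low = split
    return low, high

def arith_decode(value, n, p0):
    """Recover n bits from a value lying inside the final interval."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * p0
        if value < split:
            out.append("0")
            high = split
        else:
            out.append("1")
            low = split
    return "".join(out)
```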
- CABAC can include symbolic binarization, context modeling, and binary arithmetic coding as key elements (or core processes).
- the syntax element is represented by bins (or symbols) of 0 and 1 through the binarization unit (FIGS. 12, 1210), and each bin is input to either the regular binary encoding unit (FIG. 12, 1240) (or regular arithmetic encoding unit) or the bypass binary encoding unit (FIG. 12, 1250) (or bypass arithmetic encoding unit).
- the bin is input to the context modeling unit (FIG. 12, 1220) (or the probability prediction and assignment unit) to allocate the probability of the corresponding bin, and then the regular arithmetic coding unit performs encoding.
- the currently encoded bin can be encoded with a probability updated according to the occurrence probabilities of previously input bins.
- a process of updating the probability model with respect to the currently input bin may be included.
- the most probable symbol (MPS) can be 0 or 1 and the least probable symbol (LPS) can be specified as the opposite.
- if the probability of the MPS is n, the probability of the LPS can be estimated as 1 - n.
- the bins input to the bypass arithmetic coding unit are encoded with the probabilities of 0 and 1 each fixed to 0.5, and the context model update for the probability is not performed.
- the encoded bits can then be combined and transmitted to the decoder.
- in CABAC decoding, the above-described operations performed in the encoder are performed in the reverse order.
- CABAC entropy coding can continuously perform probability update using information of a bin that has been previously encoded in order to predict the closest value to the changing probability of 0 or 1.
- FIG. 19 is a flowchart illustrating a context modeling method of an MPM index according to an embodiment to which the present invention is applied.
- the MPM candidate list is constructed in a predetermined order (e.g., Left (L), Above (A), Planar, DC, Below left (BL), Above right (AR), Above left (AL)) using the prediction modes of the neighboring blocks.
- This predetermined order follows from the assumption that the already-reconstructed modes around the current block will be similar to the mode of the current block, based on the spatial redundancy that is the starting point of intra prediction.
- the binarization of the MPM index can be performed by a truncated unary binarization method as shown in Table 4 below. In this method, it is possible to allocate a small number of bins to the index having a high probability of being selected, and to compress the information by allocating a large number of bins as the index number increases.
- the context table (or probability table) of symbols (or bins) can be determined using the type of the binarized syntax and the information of the already encoded neighboring blocks.
- the encoder / decoder can select from two or more context tables based on the syntax element, the decoding of the corresponding syntax element of the neighboring block, the size of the block, mode information, and the like.
- a method of performing entropy coding on an MPM index by selecting a context table based on a candidate mode of an MPM list is proposed.
- Referring to FIG. 19, a method of entropy coding an MPM index according to an embodiment of the present invention will be described mainly with reference to a decoder, but it can be equally applied to an encoder.
- the decoder entropy-decodes the MPM flag indicating whether or not the current block is encoded using MPM (Most Probable Mode) (S1901).
- if the current block is encoded using the MPM, the decoder generates (or configures) the MPM candidate list using the intra prediction modes of the blocks neighboring the current block (S1902).
- the decoder constructs the MPM list in the same manner as the encoder, and then parses the MPM index to determine which candidate mode in the MPM candidate list is finally selected as the prediction mode applied to the intra prediction of the current block.
- the decoder selects a context table of the MPM index indicating the intra prediction mode of the current block based on the candidate intra prediction mode included in the MPM candidate list (S1903).
- the decoder may select the context table of the bin of the MPM index that is mapped to the candidate order of the MPM candidate list.
- the decoder entropy decodes the MPM index based on the context table selected in step S1903 (S1904).
- the context table of the MPM index may be selected as a context table mapped to a prediction mode group including the intra prediction mode of the candidate among the predetermined prediction mode groups.
- the decoder selects a context table based on the prediction mode of the 0th MPM candidate and decodes the first bin to check whether the 0th candidate is the final intra prediction mode (i.e., the prediction mode applied to the intra prediction of the current block). If the MPM index is binarized as shown in Table 4 and the parsed symbol is 0, the final intra prediction mode for the current block is the 0th MPM candidate. If the symbol is 1, the decoder can select a context table based on the prediction mode of the 1st MPM candidate and decode the next bin. Thereafter, whether the above-described process is repeated can be determined depending on whether the parsed symbol is 0 or 1.
- in other words, the decoder selects the context table of the first bin of the MPM index using the intra prediction mode of the first candidate of the MPM candidate list, selects the context table of the second bin of the MPM index using the intra prediction mode of the second candidate of the MPM candidate list, and selects the context table of the third bin of the MPM index using the intra prediction mode of the third candidate of the MPM candidate list.
- the decoder may select and decode the context table by applying the method described above to all bins of the MPM index.
- the decoder may perform regular arithmetic coding by applying the above-described method to only a part of the bins of the MPM index (e.g., the first three bins of the MPM index), and the remaining bins may be bypass-coded.
- the encoder / decoder can select (or determine) a context table based on the prediction mode of the MPM candidate by the various methods described below.
- the encoder / decoder may define a context table for each intra prediction mode. For example, when the intra prediction mode is 67, a total of 67 context tables can be defined.
- the encoder / decoder may group the intra prediction modes into a directional mode and a non-directional mode and select between two context tables. For example, the encoder / decoder uses (or sets) the same context table for the planar mode and the DC mode, which are the non-directional modes, and uses the same context table for the remaining prediction modes, selecting between the two context tables.
- the encoder / decoder may group the intra prediction modes into a directional mode and a non-directional mode, and group the directional modes into a horizontal direction prediction mode group and a vertical direction prediction mode group. If 67 intra prediction modes are used as described above with reference to FIG. 9, the horizontal direction prediction mode group may include prediction modes 2 to 34, and the vertical direction prediction mode group may include the remaining directional prediction modes.
- the encoder / decoder uses the same context table for the planar mode and the DC mode, which are the non-directional modes, classifies prediction modes 2 to 34 among the directional modes into the horizontal prediction mode group and uses the same context table for them, and classifies the remaining modes into the vertical prediction mode group and uses the same context table for them, so that one of a total of three context tables can be selected.
- the encoder / decoder can select the context table by grouping the intra prediction modes into frequently selected modes and the other modes. For example, if the planar mode, the DC mode, and directional modes 18 (horizontal mode) and 50 (vertical mode) are mainly selected among the intra prediction modes, the encoder / decoder may use the same context table for the planar, DC, 18, and 50 modes and another context table for the other modes, selecting between a total of two context tables.
- the predetermined prediction mode groups in step S1904 may include a first mode group and a second mode group, wherein the first mode group includes the planar mode, the DC mode, the horizontal mode, and the vertical mode, and the second mode group may include prediction modes other than the prediction modes included in the first mode group.
- the encoder / decoder is not limited to only the horizontal mode and the vertical mode among the directional modes included in the first mode group, but may also include prediction modes adjacent to the prediction direction of the horizontal mode and prediction modes adjacent to the prediction direction of the vertical mode.
- the encoder / decoder can group the neighboring modes 18 ± N and 50 ± N into the first mode group.
- N may be set to various integer values, preferably 2, 3, 4, or 5.
- the encoder / decoder may group the neighboring modes 10 ± N and 26 ± N into the first mode group.
- N may be set to various integer values, preferably 1, 2, or 3.
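The two-group selection with ±N neighborhoods around the horizontal (18) and vertical (50) modes can be sketched as follows; the mode numbers assume the 67-mode scheme, N = 4 is just one of the preferred values, and the function name is illustrative:

```python
def mpm_bin_ctx(candidate_mode, n=4):
    """Return context 0 for the first mode group (planar 0, DC 1, and modes
    within +/- n of horizontal 18 or vertical 50) and context 1 for the
    second mode group (all remaining modes)."""
    if candidate_mode in (0, 1):
        return 0
    if abs(candidate_mode - 18) <= n or abs(candidate_mode - 50) <= n:
        return 0
    return 1
```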
- the encoder / decoder may separate the frequently selected modes of the intra prediction modes from the others and further group them into the non-directional mode and the directional mode to select the context table.
- one table is allocated (or mapped) for the planar and DC modes, which are the non-directional modes among the intra prediction modes, another table is allocated (or mapped) for directional modes 18 (horizontal mode) and 50 (vertical mode), and the remaining table is allocated to the remaining modes, so that one of a total of three context tables can be selected.
- the predetermined prediction mode groups in step S1904 include a first mode group, a second mode group, and a third mode group, wherein the first mode group includes the non-directional modes, the second mode group includes the horizontal mode and the vertical mode, and the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- the encoder / decoder is not limited to only the horizontal mode and the vertical mode among the directional modes included in the second mode group, but may also include prediction modes adjacent to the prediction direction of the horizontal mode and prediction modes adjacent to the prediction direction of the vertical mode.
- the encoder / decoder can group the peripheral modes 18 ± N and 50 ± N into the second mode group.
- N may be set to various integer values, preferably 2, 3, 4, or 5.
- the encoder / decoder may group the peripheral modes 10 ± N and 26 ± N into the second mode group.
- N may be set to various integer values, preferably 1, 2, or 3.
- predetermined prediction mode groups in step S1904 include a first mode group, a second mode group, and a third mode group
- the first mode group includes the non-directional modes; the second mode group includes a horizontal mode, a vertical mode, six prediction modes adjacent to the prediction direction of the horizontal mode, and six prediction modes adjacent to the prediction direction of the vertical mode
- the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
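The three-group variant above can be sketched as a hypothetical helper. One plausible reading of "six prediction modes adjacent" is three modes on each side of the horizontal (18) and vertical (50) directions; that assumption is made here:

```python
def select_context_three_groups(mode: int) -> int:
    """0: non-directional (planar, DC); 1: horizontal/vertical plus the
    six adjacent modes around each (read here as +-3 per direction);
    2: all other directional modes.  67-mode numbering is assumed."""
    if mode in (0, 1):
        return 0
    near_horizontal = range(18 - 3, 18 + 4)   # modes 15..21
    near_vertical = range(50 - 3, 50 + 4)     # modes 47..53
    if mode in near_horizontal or mode in near_vertical:
        return 1
    return 2
```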
- FIG. 20 is a diagram illustrating an entropy decoding apparatus according to an embodiment to which the present invention is applied.
- although the entropy decoding unit is shown as one block in FIG. 20 for the sake of convenience, the entropy decoding unit may be implemented in an encoder and / or a decoder.
- the entropy decoding unit implements the functions, processes and / or methods proposed in FIGS. 7 to 19 above.
- the entropy decoding unit may include an MPM flag decoding unit 2001, an MPM candidate list generating unit 2002, a context table selecting unit 2003, and an MPM index decoding unit 2004.
- the MPM flag decoding unit 2001 entropy-decodes the MPM flag indicating whether the current block is encoded using the MPM (Most Probable Mode).
- when the current block is coded using the MPM, the MPM candidate list generation unit 2002 generates (or configures) the MPM candidate list using the intra-prediction mode of the block neighboring the current block.
- the decoder constructs the MPM candidate list in the same manner as the encoder, and then parses the MPM index to determine which candidate mode in the MPM candidate list is finally selected as the prediction mode applied to the intra prediction of the current block.
- the context table selection unit 2003 selects a context table of an MPM index indicating an intra prediction mode of a current block based on a candidate intra prediction mode included in the MPM candidate list.
- the decoder may select the context table of the bin of the MPM index that is mapped to the candidate order of the MPM candidate list.
- the MPM index decoding unit 2004 entropy-decodes the MPM index based on the selected context table.
- the context table of the MPM index may be selected as a context table mapped to a prediction mode group including the intra prediction mode of the candidate among the predetermined prediction mode groups.
- the decoder selects a context table based on the prediction mode of the MPM 0th candidate and decodes the corresponding bin to check whether the 0th candidate is the final intra prediction mode (i.e., the prediction mode applied to the intra prediction of the current block). If the MPM index is binarized as shown in Table 4 and the parsed symbol is 0, the final intra prediction mode for the current block is the MPM 0th candidate. If the symbol is 1, the decoder can select the context table based on the prediction mode of the MPM 1st candidate and decode the next bin. Thereafter, whether the above-described process is repeated can be determined depending on whether the parsed symbol is 0 or 1.
- the decoder may select the context table of the first bin of the MPM index using the intra-prediction mode of the first candidate of the MPM candidate list, select the context table of the second bin of the MPM index using the intra-prediction mode of the second candidate of the MPM candidate list, and select the context table of the third bin of the MPM index using the intra-prediction mode of the third candidate of the MPM candidate list.
- the decoder may select and decode the context table by applying the method described above to all bins of the MPM index.
- the decoder may perform regular (context-based) arithmetic coding by applying the above-described method only to a part of the bins of the MPM index (e.g., the first three bins of the MPM index), and the remaining bins may be bypass-coded.
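The per-bin procedure above can be sketched as follows. This is a hypothetical simplification: the arithmetic decoding engine is abstracted away as a pre-parsed list of bin values, and `mode_to_ctx` stands for any of the mode-group mappings discussed in this document. The first few bins conceptually use a context chosen from the corresponding MPM candidate's mode group; later bins are marked as bypass-coded (no context).

```python
def decode_mpm_index(bins, candidates, mode_to_ctx, num_ctx_bins=3):
    """Walk a truncated-unary MPM index bin string: at position k, a
    0-bin means candidate k is the final intra mode.  Bins beyond
    num_ctx_bins carry context None, i.e. they are bypass-coded.
    Returns (selected candidate index, list of contexts used)."""
    contexts_used = []
    for k in range(len(candidates) - 1):
        ctx = mode_to_ctx(candidates[k]) if k < num_ctx_bins else None
        contexts_used.append(ctx)       # None marks a bypass-coded bin
        if bins[k] == 0:                # 0-bin terminates the index
            return k, contexts_used
    return len(candidates) - 1, contexts_used  # all bins were 1
```

For a six-candidate list, at most five bins are read; if all of them are 1, the last candidate is selected without an explicit terminating bin.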
- the encoder / decoder can select (or determine) a context using a prediction mode of the MPM candidate using various methods described below.
- the encoder / decoder may define a context table for each intra prediction mode. For example, when 67 intra prediction modes are used, a total of 67 context tables can be defined.
- the encoder / decoder may group the intra-prediction modes into a directional mode group and a non-directional mode group and select from two context tables. For example, the encoder / decoder uses (or sets) the same context table for the planar mode and the DC mode, which are the non-directional modes, and uses another context table for the remaining (directional) prediction modes, so that it can choose from two context tables.
- the encoder / decoder may group the intra-prediction modes into a directional mode group and a non-directional mode group, and further group the directional modes into a horizontal directional prediction mode group and a vertical directional prediction mode group. If 67 intra prediction modes are used as described above with reference to FIG. 9, the horizontal direction prediction mode group may include prediction modes 2 to 34, and the vertical direction prediction mode group may include the remaining directional prediction modes.
- the encoder / decoder uses the same context table for the planar mode and the DC mode, which are the non-directional modes; classifies prediction modes 2 to 34 among the directional modes into the horizontal prediction mode group and uses a second context table for them; and classifies the remaining modes into the vertical prediction mode group and uses a third context table, so that a total of three context tables can be selected.
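A minimal sketch of this three-table selection, assuming the 67-mode numbering described above (0 = planar, 1 = DC, 2 to 34 horizontal directional, 35 to 66 vertical directional); the function name is hypothetical:

```python
def select_context_67(mode: int) -> int:
    """Map a 67-intra-mode index to one of three context tables:
    0 for the non-directional planar/DC modes, 1 for the horizontal
    directional modes (2..34), 2 for the vertical ones (35..66)."""
    if mode in (0, 1):
        return 0
    return 1 if mode <= 34 else 2
```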
- the encoder / decoder can select the context table by grouping the intra-prediction modes into frequently-selected modes and the remaining modes. For example, if the planar and DC modes, which are non-directional modes, and directional modes 18 (horizontal mode) and 50 (vertical mode) are mainly selected among the intra-prediction modes, the same context table can be used for the planar, DC, 18, and 50 modes, and another context table for the remaining modes, so that one of a total of two context tables is selected.
- the predetermined prediction mode groups may include a first mode group and a second mode group, wherein the first mode group includes a planar mode, a DC mode, a horizontal mode, and a vertical mode, and the second mode group may include prediction modes other than the prediction modes included in the first mode group.
- the encoder / decoder is not limited to only the horizontal mode and the vertical mode among the directional modes included in the first mode group, but may also include prediction modes adjacent to the prediction direction of the horizontal mode and prediction modes adjacent to the prediction direction of the vertical mode.
- the encoder / decoder can group the peripheral modes 18 ± N and 50 ± N into the first mode group.
- N may be set to various integer values, preferably 2, 3, 4, or 5.
- the encoder / decoder may group the peripheral modes 10 ± N and 26 ± N into the first mode group.
- N may be set to various integer values, preferably 1, 2, or 3.
- the encoder / decoder may group the intra-prediction modes into frequently-selected modes and the remaining modes, and further group them into the non-directional modes and the directional modes, to select the context table.
- one context table is allocated (or mapped) for the planar and DC modes, which are non-directional modes, among the intra-prediction modes; another table is allocated (or mapped) for directional modes 18 (horizontal mode) and 50 (vertical mode); and the remaining table is allocated to the remaining modes, so that it is possible to select among three context tables in total.
- the predetermined prediction mode groups include a first mode group, a second mode group, and a third mode group; the first mode group includes the non-directional modes, the second mode group includes a horizontal mode and a vertical mode, and the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- the encoder / decoder is not limited to only the horizontal mode and the vertical mode among the directional modes included in the second mode group, but may also include prediction modes adjacent to the prediction direction of the horizontal mode and prediction modes adjacent to the prediction direction of the vertical mode.
- the encoder / decoder can group the peripheral modes 18 ± N and 50 ± N into the second mode group.
- N may be set to various integer values, preferably 2, 3, 4, or 5.
- the encoder / decoder may group the peripheral modes 10 ± N and 26 ± N into the second mode group.
- N may be set to various integer values, preferably 1, 2, or 3.
- the predetermined prediction mode groups include a first mode group, a second mode group, and a third mode group; the first mode group includes the non-directional modes; the second mode group includes a horizontal mode, a vertical mode, six prediction modes adjacent to the prediction direction of the horizontal mode, and six prediction modes adjacent to the prediction direction of the vertical mode; and the third mode group may include prediction modes other than the prediction modes included in the first mode group and the second mode group.
- FIG. 21 is a diagram for explaining a method of selecting a context model using a left block and an upper block, as an embodiment to which the present invention is applied.
- the context table (or probability table) of symbols (or bins) can be determined using the type of the binarized syntax element and the information of already-encoded neighboring blocks. For example, the encoder / decoder can select from two or more context tables based on the syntax element, the decoded value of the corresponding syntax element in the neighboring block, the block size, mode information, and the like.
- the encoder / decoder can derive the syntax information of L (left block) and A (upper block) positions and utilize it when selecting the current context table.
- when the neighboring blocks are divided into smaller blocks, the information available for decoding the syntax element may be more varied than otherwise. Even in such a case, if the context model is determined using only the information at the limited positions as shown in FIG. 21, the entropy coding efficiency may be degraded. This will be described with reference to the following drawing.
- FIG. 22 is a diagram for explaining a method of selecting a context model using a left block and an upper block, according to an embodiment to which the present invention is applied.
- conventionally, the encoder / decoder can derive the information related to the prediction mode only from the block at the predetermined position (e.g., the L position).
- assume that the context table is composed of three tables, i.e., one for the horizontal prediction mode group, one for the vertical prediction mode group, and one for the non-directional modes
- if the prediction mode of the L position is a horizontal prediction mode
- the context table for the horizontal prediction mode is selected instead of the vertical-prediction context table corresponding to the actual mode of the current block.
- if most of the other divided blocks on the left side are predicted in the vertical prediction mode as shown in FIG. 22, more accurate information can be utilized by collecting all the available information in the vicinity.
- that is, instead of referring only to the information related to the syntax element at the limited positions of FIG. 21, all the coding information of all the minimum coding units divided in the vicinity of the current block can be referred to.
- the present invention is not limited to the syntax element related to the prediction mode described above, and can be applied to the decoding of any syntax element for which context table selection is performed.
- the encoder / decoder may select a context table for a syntax element represented by a flag indicating whether or not a certain mode is applied (for example, an MPM flag indicating whether the MPM mode is applied)
- for such a flag, information is collected from the minimum-unit blocks located in the neighboring blocks, and the number of blocks decoded with the corresponding syntax enabled or disabled is summed and compared with a preset threshold value
- based on this comparison, the context table for the syntax element can be selected.
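A hedged sketch of this flag-counting rule (hypothetical helper; the threshold value itself is not fixed by the text): count how many neighboring minimum-unit blocks were decoded with the flag enabled and compare against a preset threshold to pick one of two context tables.

```python
def select_flag_context(neighbor_flags, threshold: int) -> int:
    """neighbor_flags: flag values (0/1 or bool) collected from the
    minimum-unit blocks located in the neighboring blocks.
    Returns 1 if at least `threshold` of them were enabled, else 0."""
    enabled = sum(1 for f in neighbor_flags if f)
    return 1 if enabled >= threshold else 0
```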
- the encoder / decoder can select a context table for the currently processed syntax element by distinguishing the case in which at least one of the minimum-unit blocks located in a neighboring block is encoded with the corresponding syntax enabled from the case in which none is.
- the encoder / decoder can check whether the neighboring blocks (i.e., the left and / or upper blocks) are available. The encoder / decoder can also check whether a neighboring block is divided into smaller blocks relative to the current block.
- the encoder / decoder may determine the context table based on the majority value of the corresponding syntax element among the neighboring minimum-unit blocks. For example, in FIG. 22, it can be confirmed that the majority of the intra prediction modes of the left blocks are the vertical direction mode, and in the entropy coding of the syntax element related to the current intra prediction mode, the encoder / decoder can use the context table allocated (or mapped) to the vertical direction mode.
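The majority rule of the FIG. 22 example can be sketched as follows (hypothetical helper; `mode_to_group` stands for any of the mode-grouping functions discussed above):

```python
from collections import Counter

def majority_context(neighbor_modes, mode_to_group):
    """Pick the context table index of the most common mode group among
    all neighboring minimum-unit blocks, instead of consulting only one
    fixed reference position."""
    groups = Counter(mode_to_group(m) for m in neighbor_modes)
    return groups.most_common(1)[0][0]
```

For instance, with three vertical-mode left sub-blocks and one horizontal-mode sub-block, the vertical group wins and its context table is used.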
- FIG. 23 shows a video coding system to which the present invention is applied.
- the video coding system may include a source device and a receiving device.
- the source device may deliver the encoded video / image information or data in the form of a file or stream to a receiving device via a digital storage medium or network.
- the source device may include a video source, an encoding apparatus, and a transmitter.
- the receiving device may include a receiver, a decoding apparatus, and a renderer.
- the encoding apparatus may be referred to as a video / image encoding apparatus, and the decoding apparatus may be referred to as a video / image decoding apparatus.
- the transmitter may be included in the encoding device.
- the receiver may be included in the decoding apparatus.
- the renderer may include a display unit, and the display unit may be composed of a separate device or an external component.
- a video source can acquire video / image through capturing, compositing, or generating a video / image.
- the video source may include a video / video capture device and / or a video / video generation device.
- the video / image capture device may include, for example, one or more cameras, video / image archives including previously captured videos / images, and the like.
- the video / image generation device may include, for example, a computer, a tablet, a smart phone, and the like, and may (electronically) generate a video / image.
- a virtual video / image may be generated through a computer or the like; in this case, the process of generating the related data may replace the video / image capturing process.
- the encoding device may encode the input video / image.
- the encoding apparatus can perform a series of procedures such as prediction, transform, and quantization for compression and coding efficiency.
- the encoded data (encoded video / image information) can be output in the form of a bitstream.
- the transmitting unit may transmit the encoded video / image information or data output in the form of a bit stream to a receiving unit of the receiving device through a digital storage medium or a network in the form of a file or a stream.
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD and the like.
- the transmission unit may include an element for generating a media file through a predetermined file format, and may include an element for transmission over a broadcast / communication network.
- the receiving unit may extract the bitstream and transmit it to the decoding apparatus.
- the decoding apparatus may perform a series of procedures such as inverse quantization, inverse transformation, and prediction corresponding to the operation of the encoding apparatus to decode the video / image.
- the renderer may render the decoded video / image.
- the rendered video / image can be displayed through the display unit.
- FIG. 24 shows a structure of a contents streaming system as an embodiment to which the present invention is applied.
- the content streaming system to which the present invention is applied may include an encoding server, a streaming server, a web server, a media repository, a user device, and a multimedia input device.
- the encoding server compresses content input from multimedia input devices such as a smart phone, a camera, and a camcorder into digital data to generate a bit stream and transmit the bit stream to the streaming server.
- when multimedia input devices such as a smart phone, a camera, or a camcorder directly generate a bitstream, the encoding server may be omitted.
- the bitstream may be generated by an encoding method or a bitstream generating method to which the present invention is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
- the streaming server transmits multimedia data to a user device based on a user request through the web server, and the web server serves as a medium for informing the user of what services are available.
- when the user requests a desired service, the web server delivers the request to the streaming server, and the streaming server transmits the multimedia data to the user.
- the content streaming system may include a separate control server. In this case, the control server controls commands / responses among the devices in the content streaming system.
- the streaming server may receive content from a media repository and / or an encoding server. For example, when receiving the content from the encoding server, the content can be received in real time. In this case, in order to provide a smooth streaming service, the streaming server can store the bit stream for a predetermined time.
- examples of the user device include a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, a head mounted display (HMD)), a digital TV, a desktop computer, and digital signage.
- Each of the servers in the content streaming system can be operated as a distributed server. In this case, data received at each server can be distributed.
- the embodiments described in the present invention can be implemented and executed on a processor, a microprocessor, a controller, or a chip.
- the functional units depicted in the figures may be implemented and implemented on a computer, processor, microprocessor, controller, or chip.
- the decoder and encoder to which the present invention is applied can be included in multimedia broadcasting transmitting and receiving devices, mobile communication terminals, home cinema video devices, digital cinema video devices, surveillance cameras, video chatting devices, storage media, camcorders, video on demand (VoD) service provision devices, over-the-top (OTT) video devices, three-dimensional (3D) video devices, video telephony devices, medical video devices, and the like, and may be used to process video signals or data signals.
- the OTT (over-the-top) video device may include a game console, a Blu-ray player, an Internet access TV, a home theater system, a smart phone, a tablet PC, a DVR (digital video recorder), and the like.
- the processing method to which the present invention is applied may be produced in the form of a computer-executed program, and may be stored in a computer-readable recording medium.
- the multimedia data having the data structure according to the present invention can also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all kinds of storage devices and distributed storage devices in which computer-readable data is stored.
- the computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB) storage device, a ROM, a PROM, an EPROM, an EEPROM, a RAM, a CD-ROM, and other optical data storage devices.
- the computer-readable recording medium includes media implemented in the form of a carrier wave (for example, transmission over the Internet).
- the bit stream generated by the encoding method can be stored in a computer-readable recording medium or transmitted over a wired or wireless communication network.
- an embodiment of the present invention may be embodied as a computer program product by program code, and the program code may be executed in a computer according to an embodiment of the present invention.
- the program code may be stored on a carrier readable by a computer.
- Embodiments in accordance with the present invention may be implemented by various means, for example, hardware, firmware, software, or a combination thereof.
- an embodiment of the present invention may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
- an embodiment of the present invention may be implemented in the form of a module, a procedure, a function, or the like which performs the functions or operations described above.
- the software code can be stored in memory and driven by the processor.
- the memory is located inside or outside the processor and can exchange data with the processor by various means already known.
Abstract
The invention relates to a method for performing entropy decoding on a video signal and a device therefor. In particular, a method for performing entropy decoding on a video signal comprises: entropy-decoding a most probable mode (MPM) flag indicating whether a current block has been encoded using an MPM; generating an MPM candidate list using an intra-prediction mode of a block neighboring the current block, if the current block has been encoded using the MPM; selecting a context table of an MPM index indicating an intra-prediction mode of the current block, on the basis of an intra-prediction mode of a candidate included in the MPM candidate list; and entropy-decoding the MPM index on the basis of the context table, wherein the context table of the MPM index may be selected as the context table mapped to a prediction mode group including the intra-prediction mode of the candidate, from among predetermined prediction mode groups.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762590343P | 2017-11-23 | 2017-11-23 | |
US62/590,343 | 2017-11-23 | ||
US201762593243P | 2017-11-30 | 2017-11-30 | |
US62/593,243 | 2017-11-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019103543A1 true WO2019103543A1 (fr) | 2019-05-31 |
Family
ID=66632056
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2018/014560 WO2019103543A1 (fr) | 2017-11-23 | 2018-11-23 | Procédé et dispositif de codage et décodage entropiques de signal vidéo |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019103543A1 (fr) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120070536A (ko) * | 2010-12-21 | 2012-06-29 | 한국전자통신연구원 | 인트라 예측 모드 부호화/복호화 방법 및 그 장치 |
WO2013115568A1 (fr) * | 2012-01-30 | 2013-08-08 | 한국전자통신연구원 | Procédé et dispositif de codage/décodage en mode de prédiction intra |
US20160373782A1 (en) * | 2015-06-18 | 2016-12-22 | Qualcomm Incorporated | Intra prediction and intra mode coding |
- 2018-11-23: WO PCT/KR2018/014560 patent/WO2019103543A1/fr — active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120070536A (ko) * | 2010-12-21 | 2012-06-29 | 한국전자통신연구원 | 인트라 예측 모드 부호화/복호화 방법 및 그 장치 |
WO2013115568A1 (fr) * | 2012-01-30 | 2013-08-08 | 한국전자통신연구원 | Procédé et dispositif de codage/décodage en mode de prédiction intra |
US20160373782A1 (en) * | 2015-06-18 | 2016-12-22 | Qualcomm Incorporated | Intra prediction and intra mode coding |
Non-Patent Citations (2)
Title |
---|
JIANLE CHEN: "Algorithm Description of Joint Exploration Test Model 7 (JEM 7)", JOINT VIDEO EXPLORATION TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JVET-G1001, 21 July 2017 (2017-07-21), Torino, IT, XP055576095 * |
YE-KUI WANG: "High Efficiency Video Coding (HEVC) Defect Report", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JCTVC-N1003, 2 August 2013 (2013-08-02), Vienna, AT, XP030114947 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020009556A1 (fr) | Procédé et dispositif de codage d'image à base de transformée | |
WO2017086765A2 (fr) | Procédé et appareil de codage et de décodage entropique d'un signal vidéo | |
WO2020256391A1 (fr) | Procédé et appareil de décodage d'image | |
WO2019050385A2 (fr) | Procédé et appareil de codage et de décodage entropiques de signal vidéo | |
WO2013062389A1 (fr) | Procédé et dispositif de prédiction intra de vidéo | |
WO2020231140A1 (fr) | Codage de vidéo ou d'image basé sur un filtre à boucle adaptatif | |
WO2020213944A1 (fr) | Transformation pour une intra-prédiction basée sur une matrice dans un codage d'image | |
WO2020213946A1 (fr) | Codage d'image utilisant un indice de transformée | |
WO2020204413A1 (fr) | Codage vidéo ou d'image pour corriger une image de restauration | |
WO2020076064A1 (fr) | Procédé de codage d'image basé sur une prédiction intra en utilisant une liste mpm, et dispositif associé | |
WO2019235822A1 (fr) | Procédé et dispositif de traitement de signal vidéo à l'aide de prédiction de mouvement affine | |
WO2021040319A1 (fr) | Procédé et appareil pour dériver un paramètre rice dans un système de codage vidéo/image | |
WO2020231139A1 (fr) | Codage vidéo ou d'image basé sur une cartographie de luminance et une mise à l'échelle chromatique | |
WO2020204419A1 (fr) | Codage vidéo ou d'image basé sur un filtre à boucle adaptatif | |
WO2020180143A1 (fr) | Codage vidéo ou d'image basé sur un mappage de luminance avec mise à l'échelle de chrominance | |
WO2020213976A1 (fr) | Dispositif et procédé de codage/décodage vidéo utilisant une bdpcm, et procédé de train de bits de transmission | |
WO2020171673A1 (fr) | Procédé et appareil de traitement de signal vidéo pour prédiction intra | |
WO2021054807A1 (fr) | Procédé et dispositif de codage/décodage d'image faisant appel au filtrage d'échantillon de référence, et procédé de transmission de flux binaire | |
WO2021040398A1 (fr) | Codage d'image ou de vidéo s'appuyant sur un codage d'échappement de palette | |
WO2020197274A1 (fr) | Procédé de codage d'image basé sur des transformations et dispositif associé | |
WO2021162494A1 (fr) | Procédé et dispositif de codage/décodage d'images permettant de signaler de manière sélective des informations de disponibilité de filtre, et procédé de transmission de flux binaire | |
WO2021066618A1 (fr) | Codage d'image ou de vidéo basé sur la signalisation d'informations liées au saut de transformée et au codage de palette | |
WO2020184966A1 (fr) | Procédé et dispositif de codage/décodage d'image, et procédé permettant de transmettre un flux binaire | |
WO2020197207A1 (fr) | Codage d'image ou vidéo sur la base d'un filtrage comprenant un mappage | |
WO2020185039A1 (fr) | Procédé de codage résiduel et dispositif |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18880975 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18880975 Country of ref document: EP Kind code of ref document: A1 |