US11863754B2 - Method for reconstructing chroma block and video decoding apparatus
- Publication number
- US11863754B2 (application US 17/611,466)
- Authority
- US
- United States
- Prior art keywords
- residual samples
- residual
- chroma
- block
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, including:
- H04N19/503 — predictive coding involving temporal prediction
- H04N19/132 — sampling, masking or truncation of coding units, e.g. adaptive resampling
- H04N19/103 — selection of coding mode or of prediction mode
- H04N19/159 — prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176 — adaptive coding where the coding unit is an image region being a block, e.g. a macroblock
- H04N19/186 — adaptive coding where the coding unit is a colour or a chrominance component
- H04N19/593 — predictive coding involving spatial prediction techniques
- H04N19/70 — syntax aspects related to video coding, e.g. related to compression standards
- H04N19/107 — selection between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/109 — selection among a plurality of temporal predictive coding modes
- H04N19/11 — selection among a plurality of spatial predictive coding modes
- H04N19/119 — adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/61 — transform coding in combination with predictive coding
- H04N19/96 — tree coding, e.g. quad-tree coding
Definitions
- the present invention relates to encoding and decoding of a video, more particularly, to a method for reconstructing a chroma block and a video decoding apparatus, which improve encoding and decoding efficiency by efficiently predicting residual samples of a chroma component.
- Since video data has a large data volume compared to audio data or still image data, it requires a lot of hardware resources, including memory, to store or transmit in its raw form.
- Accordingly, storing or transmitting video data typically involves compressing it with an encoder before a decoder receives, decompresses, and reproduces the compressed video data.
- Existing video compression technologies include H.264/AVC and High Efficiency Video Coding (HEVC), which improves the encoding efficiency of H.264/AVC by about 40%.
- the present invention is directed to providing an improved video encoding and decoding technique.
- An aspect of the present invention relates to a technique for improving encoding and decoding efficiency by deriving the residual of one of the Cb and Cr chroma components from the residual of the other.
- A method for reconstructing a chroma block of a target block to be reconstructed includes decoding correlation information between first residual samples and second residual samples, the first residual samples, and prediction information of the chroma block from a bitstream, wherein the first residual samples are residual samples of a first chroma component and the second residual samples are residual samples of a second chroma component.
- The method further includes generating predicted samples of the first chroma component and predicted samples of the second chroma component on the basis of the prediction information, and deriving the second residual samples by applying the correlation information to the first residual samples.
- the method further includes reconstructing a chroma block of the first chroma component by adding the first residual samples and the predicted samples of the first chroma component and reconstructing a chroma block of the second chroma component by adding the second residual samples and predicted samples of the second chroma component.
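The decoding steps above can be sketched as plain Python. This is an illustrative model, not the normative process: the correlation information is modeled here as an assumed scale-and-offset pair applied sample-by-sample to the first component's residuals.

```python
# Hypothetical sketch of the method above: derive the second chroma
# component's residuals from the first component's residuals using
# signaled correlation information (modeled as scale/offset, an
# assumption for illustration), then reconstruct both chroma blocks.

def derive_second_residual(first_residual, scale, offset):
    """Derive second-component (e.g., Cr) residuals from first-component
    (e.g., Cb) residuals by applying the correlation information."""
    return [[scale * r + offset for r in row] for row in first_residual]

def reconstruct_chroma(pred, residual):
    """Reconstruct a chroma block by adding residual samples to predicted samples."""
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

cb_residual = [[4, -2], [0, 6]]                       # decoded from the bitstream
cr_residual = derive_second_residual(cb_residual, scale=-1, offset=0)  # e.g., Cr ~ -Cb
cb_pred = [[128, 130], [126, 129]]
cr_pred = [[127, 127], [128, 128]]
cb_block = reconstruct_chroma(cb_pred, cb_residual)
cr_block = reconstruct_chroma(cr_pred, cr_residual)
```

Only the first component's residuals are carried in the bitstream; the second component's residuals are derived, which is the source of the claimed coding-efficiency gain.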
- Another aspect of the present invention provides a video decoding apparatus for reconstructing a chroma block of a target block to be reconstructed.
- the video decoding apparatus comprises a decoding unit configured to decode correlation information between first residual samples and second residual samples, the first residual samples, and prediction information of the chroma block from a bitstream, wherein the first residual samples are residual samples of a first chroma component and the second residual samples are residual samples of a second chroma component.
- The video decoding apparatus further comprises a prediction unit configured to generate predicted samples of the first chroma component and predicted samples of the second chroma component on the basis of the prediction information, and a chroma component reconstruction unit configured to derive the second residual samples by applying the correlation information to the first residual samples.
- the video decoding apparatus further comprises an adder configured to reconstruct a chroma block of the first chroma component by adding the first residual samples and the predicted samples of the first chroma component and reconstruct a chroma block of the second chroma component by adding the second residual samples and predicted samples of the second chroma component.
- FIG. 1 is a block diagram illustrating a video encoding apparatus that can implement the techniques of the present disclosure.
- FIG. 2 is a diagram for explaining a method of splitting a block by using a QTBTTT structure.
- FIG. 3 A is a diagram illustrating a plurality of intra-prediction modes.
- FIG. 3 B is a diagram illustrating a plurality of intra-prediction modes including wide-angle intra-prediction modes.
- FIG. 4 is a block diagram illustrating a video decoding apparatus that can implement the techniques of the present disclosure.
- FIG. 5 is an exemplary block diagram of a video encoding apparatus that can implement an example of a residual block reconstruction method for chroma components.
- FIG. 6 is a flowchart illustrating an example of a residual block reconstruction method for chroma components implemented in the video encoding apparatus of FIG. 5 .
- FIG. 7 is a block diagram illustrating a video decoding apparatus that can implement an example of a residual block reconstruction method for chroma components.
- FIG. 8 is a flowchart illustrating an example of a residual block reconstruction method for chroma components implemented in the video decoding apparatus of FIG. 7 .
- FIG. 9 is an exemplary block diagram of a video encoding apparatus capable of implementing another example of a residual block reconstruction method for chroma components.
- FIG. 10 is a flowchart illustrating an example of a residual block reconstruction method for chroma components implemented in the video encoding apparatus of FIG. 9 .
- FIG. 11 is a block diagram illustrating a video decoding apparatus that can implement another example of a residual block reconstruction method for chroma components.
- FIG. 12 is a flowchart illustrating an example of a residual block reconstruction method implemented in the video decoding apparatus of FIG. 11 .
- FIGS. 13 and 14 are flowcharts illustrating other examples of a residual block reconstruction method for chroma components.
- FIG. 15 is a flowchart illustrating an example of performance conditions of a residual reconstruction method for chroma components.
- FIG. 1 is a block diagram illustrating a video encoding apparatus that can implement the techniques of the present disclosure.
- a video encoding apparatus and elements of the apparatus will be described with reference to FIG. 1 .
- the video encoding apparatus includes a picture splitter 110 , a predictor 120 , a subtractor 130 , a transformer 140 , a quantizer 145 , a rearrangement unit 150 , an entropy encoder 155 , an inverse quantizer 160 , an inverse transformer 165 , an adder 170 , a filter unit 180 , and a memory 190 .
- Each element of the video encoding apparatus may be implemented in hardware or software, or a combination of hardware and software.
- The functions of the respective elements may be implemented as software, and a microprocessor may be configured to execute the software functions corresponding to the respective elements.
- One video includes a plurality of pictures. Each picture is split into a plurality of regions, and encoding is performed on each region. For example, one picture is split into one or more tiles or/and slices. Here, the one or more tiles may be defined as a tile group. Each tile or slice is split into one or more coding tree units (CTUs). Each CTU is split into one or more coding units (CUs) by a tree structure. Information applied to each CU is encoded as a syntax of the CU, and information applied to CUs included in one CTU in common is encoded as a syntax of the CTU.
- information applied to all blocks in one slice in common is encoded as a syntax of a slice header, and information applied to all blocks constituting a picture is encoded in a picture parameter set (PPS) or a picture header.
- information which a plurality of pictures refers to in common is encoded in a sequence parameter set (SPS).
- Information that one or more SPSs refer to in common may be encoded in a video parameter set (VPS).
- Information applied to one tile or tile group in common may be encoded as a syntax of a tile or tile group header.
- the picture splitter 110 determines the size of a coding tree unit (CTU).
- Information about the size of the CTU (CTU size) is encoded as a syntax of the SPS or PPS and is transmitted to the video decoding apparatus.
- the picture splitter 110 splits each picture constituting the video into a plurality of CTUs having a predetermined size, and then recursively splits the CTUs using a tree structure.
- a leaf node serves as a coding unit (CU), which is a basic unit of coding.
- the tree structure may be a QuadTree (QT), in which a node (or parent node) is split into four sub-nodes (or child nodes) of the same size, a BinaryTree (BT), in which a node is split into two sub-nodes, a TernaryTree (TT), in which a node is split into three sub-nodes at a ratio of 1:2:1, or a structure formed by a combination of two or more of the QT structure, the BT structure, and the TT structure.
- When the BT structure is used together with the QT structure, the tree is referred to as a QuadTree plus BinaryTree (QTBT); when the TT structure is additionally used, it is referred to as a QuadTree plus BinaryTree TernaryTree (QTBTTT). The BT and TT may be collectively referred to as a multiple-type tree (MTT).
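The recursive QT splitting described here can be sketched compactly. This is an illustrative model only; `min_qt_size` mirrors the MinQTSize parameter named below, and the `should_split` decision callback stands in for the encoder's rate-distortion decision, which the patent does not specify at this point.

```python
# Illustrative sketch (not the normative process): recursively split a
# CTU in the QT structure, where each split produces four sub-blocks of
# the same size, until a leaf decision or the minimum QT size is reached.

def quadtree_split(x, y, size, min_qt_size, should_split):
    """Return the leaf blocks (x, y, size) of a quadtree over a CTU."""
    if size <= min_qt_size or not should_split(x, y, size):
        return [(x, y, size)]               # leaf node: a coding unit (CU)
    half = size // 2
    leaves = []
    for dy in (0, half):                    # four child nodes of equal size
        for dx in (0, half):
            leaves += quadtree_split(x + dx, y + dy, half,
                                     min_qt_size, should_split)
    return leaves

# Split a 128x128 CTU down to 32x32 leaves everywhere.
leaves = quadtree_split(0, 0, 128, 32, lambda x, y, s: True)
```

A real QTBTTT tree would additionally allow binary and ternary splits below the QT leaves; the QT stage alone is shown for brevity.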
- FIG. 2 exemplarily shows a QTBTTT splitting tree structure.
- a CTU may be initially split in the QT structure.
- The QT splitting may be repeated until the size of the split block reaches the minimum block size (MinQTSize) of a leaf node allowed in the QT.
- a first flag (QT_split_flag) indicating whether each node of the QT structure is split into four nodes of a lower layer is encoded by the entropy encoder 155 and signaled to the video decoding apparatus.
- If the leaf node of the QT is not larger than the maximum block size (MaxBTSize) of the root node allowed in the BT, it may be further split in one or more of the BT structure or the TT structure.
- the BT structure and/or the TT structure may have a plurality of splitting directions. For example, there may be two directions, namely, a direction in which a block of a node is horizontally split and a direction in which the block is vertically split.
- When MTT splitting is used, a second flag indicating whether nodes are split, a flag indicating the splitting direction (vertical or horizontal), and/or a flag indicating the splitting type (Binary or Ternary) may be encoded by the entropy encoder 155 and signaled to the video decoding apparatus.
- a CU splitting flag (split_cu_flag) indicating whether the node is split may be encoded.
- If the node is not split, the block of the node becomes a leaf node in the splitting tree structure and serves as a coding unit (CU), which is a basic unit of encoding.
- the video encoding apparatus starts encoding the flags in the manner described above, starting with the first flag.
- There may be two splitting types: horizontally splitting a block into two blocks of the same size (i.e., symmetric horizontal splitting) and vertically splitting a block into two blocks of the same size (i.e., symmetric vertical splitting).
- A split flag (split_flag) indicating whether each node of the BT structure is split into blocks of a lower layer and splitting type information indicating the splitting type are encoded by the entropy encoder 155 and transmitted to the video decoding apparatus.
- CUs may have various sizes according to QTBT or QTBTTT splitting of a CTU.
- A block corresponding to a CU (i.e., a leaf node of the QTBTTT) to be encoded or decoded is referred to as a "current block." The shape of the current block may be square or rectangular.
- the predictor 120 predicts the current block to generate a prediction block.
- the predictor 120 includes an intra-predictor 122 and an inter-predictor 124 .
- each of the current blocks in a picture may be predictively coded.
- prediction of a current block is performed using an intra-prediction technique (using data from a picture containing the current block) or an inter-prediction technique (using data from a picture coded before a picture containing the current block).
- the inter-prediction includes both unidirectional prediction and bi-directional prediction.
- The intra-predictor 122 predicts pixels in the current block using pixels (reference pixels) positioned around the current block in the current picture including the current block.
- the plurality of intra-prediction modes may include two non-directional modes, which include a planar mode and a DC mode, and 65 directional modes. Neighboring pixels and an equation to be used are defined differently for each prediction mode.
- For efficient directional prediction of a rectangular current block, directional modes (intra-prediction modes 67 to 80 and −1 to −14) indicated by dotted arrows in FIG. 3B may be additionally used. These modes may be referred to as "wide-angle intra-prediction modes."
- In FIG. 3B, arrows indicate the corresponding reference samples used for prediction, not the prediction directions; the prediction direction is opposite to the direction indicated by an arrow.
- a wide-angle intra prediction mode is a mode in which prediction is performed in a direction opposite to a specific directional mode without additional bit transmission when the current block has a rectangular shape.
- Some wide-angle intra-prediction modes available for the current block may be determined based on the ratio of the width to the height of the rectangular current block. For example, wide-angle intra-prediction modes with an angle less than 45 degrees (intra-prediction modes 67 to 80) may be used when the current block has a rectangular shape with a height less than its width. Wide-angle intra-prediction modes with an angle greater than −135 degrees (intra-prediction modes −1 to −14) may be used when the current block has a rectangular shape with a height greater than its width.
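The aspect-ratio rule above can be expressed as a small helper. This is a hedged sketch that only mirrors the mode ranges named in the text; the function name and the simple greater-than tests are assumptions, and a real codec applies a more detailed mode-remapping procedure.

```python
# Illustrative selection of wide-angle intra-prediction mode ranges based
# on the width/height ratio of a rectangular block, as described above.
# Mode ranges 67..80 and -14..-1 are taken from the text; everything else
# here is an assumption for illustration.

def available_wide_angle_modes(width, height):
    """Return the additional wide-angle mode range available, if any."""
    if width > height:                 # wide block: angles below 45 degrees
        return list(range(67, 81))     # modes 67..80
    if height > width:                 # tall block: angles beyond -135 degrees
        return list(range(-14, 0))     # modes -14..-1
    return []                          # square block: no wide-angle extension
```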
- the intra-predictor 122 may determine an intra-prediction mode to be used in encoding the current block.
- the intra-predictor 122 may encode the current block using several intra-prediction modes and select an appropriate intra-prediction mode to use from the tested modes.
- the intra-predictor 122 may calculate rate distortion values using rate-distortion analysis of several tested intra-prediction modes, and may select an intra-prediction mode that has the best rate distortion characteristics among the tested modes.
- the intra-predictor 122 selects one intra-prediction mode from among the plurality of intra-prediction modes, and predicts the current block using neighboring pixels (reference pixels) and an equation determined according to the selected intra-prediction mode.
- Information about the selected intra-prediction mode is encoded by the entropy encoder 155 and transmitted to the video decoding apparatus.
- the inter-predictor 124 generates a prediction block for the current block through motion compensation.
- the inter-predictor 124 searches for a block most similar to the current block in a reference picture which has been encoded and decoded earlier than the current picture, and generates a prediction block for the current block using the searched block. Then, the inter-predictor generates a motion vector corresponding to a displacement between the current block in the current picture and the prediction block in the reference picture.
- Typically, motion estimation is performed on the luma component, and a motion vector calculated based on the luma component is used for both the luma and chroma components.
- the motion information including information about the reference picture and information about the motion vector used to predict the current block is encoded by the entropy encoder 155 and transmitted to the video decoding apparatus.
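The inter-prediction search described above can be sketched as a brute-force block match. This is an illustrative model, not the encoder's actual algorithm: the sum-of-absolute-differences (SAD) cost and the small full-search window are assumptions chosen for clarity.

```python
# Simplified full-search motion estimation: find the reference block most
# similar to the current block (minimum SAD) within a search window, and
# report the displacement as the motion vector, as described above.

def sad(ref, cur, rx, ry, bs):
    """Sum of absolute differences between cur and the bs x bs ref block at (rx, ry)."""
    return sum(abs(ref[ry + j][rx + i] - cur[j][i])
               for j in range(bs) for i in range(bs))

def motion_search(ref, cur_block, cx, cy, bs, search_range):
    """Return the motion vector (dx, dy) minimizing SAD around (cx, cy)."""
    best_mv, best_cost = None, float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx and 0 <= ry and rx + bs <= len(ref[0]) and ry + bs <= len(ref):
                cost = sad(ref, cur_block, rx, ry, bs)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv

# A toy 8x8 reference frame; the current 2x2 block matches ref at (x=3, y=2).
ref = [[8 * y + x for x in range(8)] for y in range(8)]
cur = [row[3:5] for row in ref[2:4]]
mv = motion_search(ref, cur, cx=2, cy=1, bs=2, search_range=2)
```

Real encoders use fast search patterns and sub-pixel refinement instead of an exhaustive scan, but the output, a displacement to the best-matching reference block, is the same kind of motion vector the text describes.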
- the subtractor 130 subtracts the prediction block generated by the intra-predictor 122 or the inter-predictor 124 from the current block to generate a residual block.
- the transformer 140 partitions the residual block into one or more transform blocks, performs a transform on the transform blocks, and transforms the residual values of the transform blocks from a pixel domain into a frequency domain.
- the transform blocks are referred to as coefficient blocks containing one or more transform coefficient values.
- a two-dimensional (2D) transform kernel may be used for the transform, and a one-dimensional (1D) transform kernel may be used for each of horizontal transform and vertical transform.
- the transform kernels may be based on a discrete cosine transform (DCT), a discrete sine transform (DST), or the like.
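The separable transform described above (a 1D kernel applied horizontally, then vertically) can be illustrated with a floating-point DCT-II. Note this is a sketch only: actual codecs use scaled integer approximations of these kernels, not the orthonormal form shown here.

```python
# Separable 2D transform: apply a 1D DCT-II kernel to each row, then to
# each column of the result, illustrating how horizontal and vertical
# 1D transforms combine into the 2D transform mentioned above.

import math

def dct_1d(v):
    """Orthonormal 1D DCT-II of a list of samples."""
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """Horizontal 1D transform of rows, then vertical 1D transform of columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A flat residual block concentrates all energy in the DC coefficient.
coeffs = dct_2d([[10] * 4 for _ in range(4)])
```

The flat-block example shows why the transform helps compression: a smooth residual produces one large coefficient and many (near-)zeros, which quantization and entropy coding exploit.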
- The transformer 140 may transform the residual signals in a residual block by using the entire size of the residual block as a transform unit. Also, the transformer 140 may partition the residual block into two sub-blocks in a horizontal or vertical direction and may perform the transform on only one of the two sub-blocks. Accordingly, the size of the transform block may be different from the size of the residual block (and thus the size of the prediction block). Non-zero residual sample values may be absent or very sparse in the untransformed sub-block. The residual samples of the untransformed sub-block may not be signaled and may all be regarded as "0" by the video decoding apparatus. Several partition types may be present depending on the partitioning direction and the partitioning ratio.
- the transformer 140 may provide information on a coding mode (or a transform mode) of the residual block (e.g., the information on the coding mode includes information indicating whether the residual block is transformed or the sub-block of the residual block is transformed, information indicating a partition type selected to partition the residual block into the sub-blocks, information for identifying the sub-block to be transformed, etc.) to the entropy encoder 155 .
- the entropy encoder 155 may encode the information on a coding mode (or a transform mode) of a residual block.
- the quantizer 145 quantizes transform coefficients output from the transformer 140 and outputs quantized transform coefficients to the entropy encoder 155 .
- the quantizer 145 may directly quantize a related residual block for a certain block or frame without transform.
- The rearrangement unit 150 may rearrange the quantized transform coefficient values.
- the rearrangement unit 150 may use coefficient scanning for changing the two-dimensional coefficient array into a one-dimensional coefficient sequence. For example, the rearrangement unit 150 may scan coefficients from a DC coefficient toward coefficients in a high-frequency region through a zig-zag scan or a diagonal scan to output a one-dimensional coefficient sequence.
- Depending on the size of the transform unit, the zig-zag scan may be replaced by a vertical scan, which scans the two-dimensional coefficient array in a column direction, or a horizontal scan, which scans the coefficients in a row direction.
- a scanning method to be used may be determined among a zig-zag scan, a diagonal scan, a vertical scan, and a horizontal scan according to the size of the transform unit and the intra-prediction mode.
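The scan orders above can be sketched as follows. This is a minimal illustration: the diagonal order shown is one plausible anti-diagonal traversal starting from the DC coefficient, not the normative order of any particular codec.

```python
def scan_coefficients(coeffs, order="diagonal"):
    """Turn a 2-D coefficient array into a 1-D coefficient sequence."""
    h, w = len(coeffs), len(coeffs[0])
    if order == "horizontal":            # row by row (row direction)
        return [c for row in coeffs for c in row]
    if order == "vertical":              # column by column (column direction)
        return [coeffs[y][x] for x in range(w) for y in range(h)]
    # diagonal: anti-diagonals starting from the DC coefficient at (0, 0)
    out = []
    for d in range(h + w - 1):
        for y in range(h):
            x = d - y
            if 0 <= x < w:
                out.append(coeffs[y][x])
    return out
```

The decoder-side rearrangement unit would apply the same order in reverse to rebuild the 2-D array.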
- the entropy encoder 155 encodes a sequence of the one-dimensional quantized transform coefficients outputted from the rearrangement unit 150 by using various encoding methods such as Context-based Adaptive Binary Arithmetic Code (CABAC), Exponential Golomb, and the like, to generate a bitstream.
- the entropy encoder 155 encodes information such as a CTU size, a CU split flag, a QT split flag, an MTT splitting type, and an MTT splitting direction, which are associated with block splitting, such that the video decoding apparatus may split the block in the same manner as in the video encoding apparatus.
- the entropy encoder 155 encodes information about a prediction type indicating whether the current block is encoded by intra-prediction or inter-prediction, and encodes intra-prediction information (i.e., information about an intra-prediction mode) or inter-prediction information (information about a reference picture index and a motion vector) according to the prediction type.
- the inverse quantizer 160 inversely quantizes the quantized transform coefficients output from the quantizer 145 to generate transform coefficients.
- the inverse transformer 165 transforms the transform coefficients output from the inverse quantizer 160 from the frequency domain to the spatial domain and reconstructs the residual block.
- the adder 170 adds the reconstructed residual block to the prediction block generated by the predictor 120 to reconstruct the current block.
- the pixels in the reconstructed current block are used as reference pixels in performing intra-prediction of a next block.
- the filter unit 180 filters the reconstructed pixels to reduce blocking artifacts, ringing artifacts, and blurring artifacts generated due to block-based prediction and transform/quantization.
- the filter unit 180 may include a deblocking filter 182 and a sample adaptive offset (SAO) filter 184 .
- the deblocking filter 182 filters the boundary between the reconstructed blocks to remove blocking artifacts caused by block-by-block coding/decoding, and the SAO filter 184 performs additional filtering on the deblocking-filtered video.
- the SAO filter 184 is a filter used to compensate for a difference between a reconstructed pixel and an original pixel caused by lossy coding.
- the reconstructed blocks filtered through the deblocking filter 182 and the SAO filter 184 are stored in the memory 190 . Once all blocks in one picture are reconstructed, the reconstructed picture may be used as a reference picture for inter-prediction of blocks in a picture to be encoded next.
- FIG. 4 is an exemplary functional block diagram of a video decoding apparatus capable of implementing the techniques of the present disclosure.
- the video decoding apparatus and its components will be described with reference to FIG. 4 .
- the video decoding apparatus may include an entropy decoder 410 , a rearrangement unit 415 , an inverse quantizer 420 , an inverse transformer 430 , a predictor 440 , an adder 450 , a filter unit 460 , and a memory 470 .
- each element of the video decoding apparatus may be implemented in hardware, software, or a combination of hardware and software. Further, the function of each element may be implemented in software, and the microprocessor may be implemented to execute the function of software corresponding to each element.
- the entropy decoder 410 determines a current block to be decoded by decoding a bitstream generated by the video encoding apparatus and extracting information related to block splitting, and extracts prediction information, information about a residual signal, and the like required to reconstruct the current block.
- the entropy decoder 410 extracts information about the CTU size from the sequence parameter set (SPS) or the picture parameter set (PPS), determines the size of the CTU, and splits a picture into CTUs of the determined size. Then, the decoder determines the CTU as the uppermost layer, that is, the root node of a tree structure, and extracts splitting information about the CTU to split the CTU using the tree structure.
- a first flag (QT_split_flag) related to splitting of the QT is extracted to split each node into four nodes of a sub-layer.
- the second flag (MTT_split_flag) and information about a splitting direction (vertical/horizontal) and/or a splitting type (binary/ternary) related to the splitting of the MTT are extracted to split the corresponding leaf node in the MTT structure.
- a CU split flag (split_cu_flag) indicating whether to split a CU may be extracted.
- the CTU may directly undergo MTT splitting without the QT splitting, or undergo only QT splitting multiple times.
- the first flag (QT_split_flag) related to QT splitting is extracted, and each node is split into four nodes of a lower layer. Then, a split flag (split_flag) indicating whether a node corresponding to a leaf node of QT is further split in the BT and the splitting direction information are extracted.
- the entropy decoder 410 extracts information about a prediction type indicating whether the current block is intra-predicted or inter-predicted.
- when the prediction type information indicates intra-prediction, the entropy decoder 410 extracts a syntax element for the intra-prediction information (intra-prediction mode) for the current block.
- when the prediction type information indicates inter-prediction, the entropy decoder 410 extracts a syntax element for the inter-prediction information, that is, information indicating a motion vector and a reference picture referred to by the motion vector.
- the entropy decoder 410 extracts the information on a coding mode of a residual block (e.g., information on whether a residual block is encoded or only a sub-block of a residual block is encoded, information indicating a partition type selected to partition a residual block into sub-blocks, information for identifying encoded residual sub-blocks, quantization parameters, etc.) from a bitstream. Also, the entropy decoder 410 extracts information on quantized transform coefficients of the current block as information regarding a residual signal.
- the rearrangement unit 415 may change the sequence of the quantized 1D transform coefficients entropy-decoded by the entropy decoder 410 back into a 2D array of coefficients (i.e., a block) in the reverse order of coefficient scanning performed by the video encoding apparatus.
- the inverse quantizer 420 inversely quantizes the quantized transform coefficients, and the inverse transformer 430 generates a reconstructed residual block for the current block by reconstructing residual signals by inversely transforming the inversely quantized transform coefficients from a frequency domain to a spatial domain on the basis of the information on a coding mode of a residual block.
- when the information on a coding mode of a residual block indicates that a residual block of the current block is encoded in the video encoding apparatus, the inverse transformer 430 generates a reconstructed residual block for the current block by performing inverse transform on the inversely quantized transform coefficients using the size of the current block (and thus the size of a residual block to be restored) as a transform unit.
- when the information on a coding mode of a residual block indicates that only one sub-block of a residual block is encoded in the video encoding apparatus, the inverse transformer 430 generates a reconstructed residual block for the current block by reconstructing residual signals for a transformed sub-block through inverse transform on the inversely quantized transform coefficients using the size of the transformed sub-block as a transform unit and by setting residual signals for an untransformed sub-block to “0.”
- the predictor 440 may include an intra-predictor 442 and an inter-predictor 444 .
- the intra-predictor 442 is activated when the prediction type of the current block is intra-prediction
- the inter-predictor 444 is activated when the prediction type of the current block is inter-prediction.
- the intra-predictor 442 determines an intra-prediction mode of the current block among a plurality of intra-prediction modes based on the syntax element for the intra-prediction mode extracted from the entropy decoder 410 , and predicts the current block using the reference pixels around the current block according to the intra-prediction mode.
- the inter-predictor 444 determines a motion vector of the current block and a reference picture referred to by the motion vector using the syntax element for the inter-prediction mode extracted from the entropy decoder 410 , and predicts the current block based on the motion vector and the reference picture.
- the adder 450 reconstructs the current block by adding the residual block output from the inverse transformer 430 and the prediction block output from the inter-predictor 444 or the intra-predictor 442 .
- the pixels in the reconstructed current block are used as reference pixels in intra-predicting a block to be decoded next.
- the filter unit 460 may include a deblocking filter 462 and an SAO filter 464 .
- the deblocking filter 462 deblocking-filters the boundary between the reconstructed blocks to remove blocking artifacts caused by block-by-block decoding.
- the SAO filter 464 performs additional filtering on the reconstructed block after deblocking filtering by applying corresponding offsets so as to compensate for a difference between the reconstructed pixel and the original pixel caused by lossy coding.
- the reconstructed block filtered through the deblocking filter 462 and the SAO filter 464 is stored in the memory 470 . When all blocks in one picture are reconstructed, the reconstructed picture is used as a reference picture for inter-prediction of blocks in a picture to be encoded next.
- each of the chroma components is predicted in the same manner as a prediction process for a luma component, or each of the chroma components is predicted in a simplified way of the prediction process for the luma component.
- a conventional method has a problem in that color distortion occurs.
- the present disclosure proposes encoding and decoding methods for effectively predicting a chroma component in a chroma block of a target block to be reconstructed (i.e., a current block).
- the methods proposed herein are methods in which information on residual samples (or residual signals) of one of a Cb chroma component and a Cr chroma component is coded and signaled, and information on residual samples of the other one is derived without being coded and signaled.
- residual samples of a chroma component to be derived may be referred to as “second residual samples of a second chroma component”, and residual samples of a chroma component to be coded and signaled to derive the second residual samples may be referred to as “first residual samples of a first chroma component”.
- the first chroma component may be one of the Cb chroma component and the Cr chroma component
- the second chroma component may be the other of the Cb chroma component and the Cr chroma component.
- when the residual samples of the Cb chroma component are coded and signaled and the residual samples of the Cr chroma component are derived, the residual samples of the Cb chroma component may be referred to as first residual samples, and the residual samples of the Cr chroma component may be referred to as second residual samples.
- when the residual samples of the Cr chroma component are coded and signaled and the residual samples of the Cb chroma component are derived, the residual samples of the Cr chroma component may be referred to as first residual samples, and the residual samples of the Cb chroma component may be referred to as second residual samples.
- a method of deriving the second residual samples may be classified into 1) embodiments in which information on a correlation between the first residual samples and the second residual samples is used, 2) embodiments in which whether to activate or apply a second-residual-sample derivation scheme is determined, and the like. Also, the embodiments in which the correlation information is used may be classified into different embodiments depending on whether an inter-chroma difference value is used. Hereinafter, terms used herein will be defined first, and then each embodiment will be described in detail.
- Correlation information refers to information for deriving second residual samples from first residual samples and may be adaptively determined according to the range of a luma component value of the current block to be encoded.
- the correlation information may include multiplication information or may include multiplication information and offset information.
- the correlation information may be defined at various positions in a bitstream and signaled to a video decoding apparatus and may be decoded from the positions in the bitstream.
- the correlation information may be defined and signaled at one or more positions among high-level syntaxes (HLS) such as SPS, PPS, and picture level.
- the correlation information may be signaled at a lower level such as a tile group level, a tile level, a CTU level, a unit block level (CU, TU, PU), and the like.
- a difference value (difference correlation information) from the correlation information signaled through the HLS may be signaled at a lower level.
- the correlation information may not be directly signaled, but some information from which the correlation information can be derived in the video decoding apparatus may be signaled.
- table information including fixed values for the correlation information may be signaled, and an index value indicating correlation information used to derive a second residual sample among the fixed values in the table information may be signaled.
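The table-plus-index signaling described above can be sketched as follows. The table entries here are purely illustrative values chosen for the sketch; in practice the table would either be signaled or predefined between the encoder and decoder as the text states.

```python
# Hypothetical predefined table of correlation information: each entry holds
# a multiplication factor W and an offset factor. Values are illustrative.
CORRELATION_TABLE = [
    {"w": 1.0, "offset": 0},    # index 0: same sign, same magnitude
    {"w": -1.0, "offset": 0},   # index 1: opposite sign
    {"w": 0.5, "offset": 0},    # index 2: same sign, half magnitude
    {"w": -0.5, "offset": 0},   # index 3: opposite sign, half magnitude
]

def correlation_from_index(index):
    """Look up the (multiplication factor, offset factor) pair selected by a
    signaled index value."""
    entry = CORRELATION_TABLE[index]
    return entry["w"], entry["offset"]
```

Only the small index needs to travel in the bitstream at tile-group, tile, or unit-block level, which is the point of this signaling variant.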
- the table information may not be signaled, but may be predefined between a video encoding apparatus and a video decoding apparatus.
- the index value may be defined and signaled at one or more of a tile group level, a tile level, and a unit block level.
- the correlation information is information used to derive a second residual sample and thus may be signaled when the derivation of a second residual sample is applied. Accordingly, the correlation information may be decoded from the bitstream when a first syntax element, which will be described below, indicates that the derivation of the second residual sample is allowed or may be decoded from the bitstream when a second syntax element, which will be described below, indicates that the derivation of the second residual sample is applied.
- Multiplication information refers to information for indicating a multiplication factor between the first residual samples and the second residual samples.
- the multiplication factor may represent a scaling relationship, a weight relationship, a sign relationship, etc., between the first residual samples and the second residual samples. Accordingly, the multiplication factor may be an integer such as −1 or 1 or a fraction such as 1/2 or −1/2.
- when the multiplication information is signaled in the form of a flag of 0 or 1 and the multiplication factor represents the sign relationship between the first residual samples and the second residual samples, the multiplication information may represent the multiplication factor through the method shown in Equation 1.
- Multiplication Factor = 1 − 2 × (Multiplication Information) [Equation 1]
- multiplication information (i.e., a flag) of “0” indicates that the first residual samples and the second residual samples have the same sign relationship, and a multiplication factor of “1” may be applied to the first residual samples. Multiplication information of “1” indicates that the first residual samples and the second residual samples have different sign relationships, and a multiplication factor of “−1” may be applied to the first residual samples.
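Equation 1 maps the 1-bit multiplication information directly to a sign-only multiplication factor; a one-line sketch:

```python
def multiplication_factor(flag):
    """Equation 1: flag 0 -> +1 (same sign relationship),
    flag 1 -> -1 (opposite sign relationship)."""
    return 1 - 2 * flag
```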
- Offset information refers to information for indicating an offset factor between the first residual sample (to which the multiplication factor is applied) and the second residual sample.
- the offset factor may be an integer such as −1, 0, or 1 or a fraction such as 1/2 or −1/2.
- the offset information may not be signaled if only multiplication information is included in the correlation information, and the offset information may indicate that the offset factor equals 0 if the multiplication information and the offset information are included in the correlation information.
- An inter-chroma difference value refers to a difference value between the first residual sample and the second residual sample (i.e., refers to a value obtained by subtraction between the first residual sample and the second residual sample). More specifically, the inter-chroma difference value corresponds to a value derived by subtraction between the first residual sample to which the correlation information is applied and the second residual sample. For example, if the correlation information includes only the multiplication information, the inter-chroma difference value may be derived by performing subtraction between the first residual sample to which the multiplication factor is applied and the second residual sample. As another example, if the correlation information includes the multiplication information and the offset information, the inter-chroma difference value may be derived by performing subtraction between the first residual sample to which the multiplication factor and the offset factor are applied and the second residual sample.
- Embodiment 1 is a method of using both correlation information and inter-chroma difference values. Embodiment 1 may be divided into the following sub-embodiments, depending on a step of the encoding steps at which a process of deriving inter-chroma difference values and correlation information is performed and depending on a step of the decoding steps at which a process of deriving second residual samples is performed.
- in Embodiment 1-1, a process of deriving inter-chroma difference values and correlation information is performed before a step of transforming residual samples, and a process of deriving second residual samples is performed after a step of inversely transforming residual samples.
- an exemplary block diagram and flowchart of a video encoding apparatus for performing Embodiment 1-1 are shown in FIGS. 5 and 6 , respectively, and an exemplary block diagram and flowchart of a video decoding apparatus for performing Embodiment 1-1 are shown in FIGS. 7 and 8 , respectively.
- a subtractor 130 may obtain the first residual samples and the second residual samples (S 610 ).
- the first residual samples may be obtained by subtracting a prediction block (or predicted samples) of a first chroma component from a chroma block of the first chroma component, and the second residual samples may be obtained by subtracting a prediction block of a second chroma component from a chroma block of the second chroma component.
- the prediction blocks of the chroma components may be derived through prediction of a predictor 120 , and information used for the prediction, i.e., prediction information may be derived in this process.
- the process of generating predicted samples and the process of deriving prediction information may be equally applied to other embodiments of the present specification.
- a chroma component predictor 510 may determine whether to derive the second residual samples from the first residual samples (S 620 ).
- the chroma component predictor 510 may determine, for the chroma blocks, one of a method in which both the first residual samples and the second residual samples are coded (i.e., a general method) and a method in which the second residual samples are derived (i.e., a second-residual-sample derivation method). For example, the chroma component predictor 510 calculates rate-distortion values through rate-distortion analysis of the general method and the derivation method and may select or determine the method having the better rate-distortion characteristics for the chroma blocks. The process of determining whether to derive the second residual samples may be equally applied to other embodiments of the present specification.
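The mode selection above can be sketched with the usual Lagrangian cost J = D + λ·R. This is an assumption about the form of the rate-distortion analysis, which the text does not spell out; the distortion and rate numbers are placeholders supplied by the caller.

```python
def choose_chroma_mode(candidates, lmbda):
    """Pick the mode with the smallest Lagrangian cost J = D + lambda * R.

    candidates: {mode_name: (distortion, rate_bits)} - illustrative inputs,
    e.g. {"general": (D1, R1), "derive": (D2, R2)}.
    """
    def cost(item):
        distortion, rate = item[1]
        return distortion + lmbda * rate
    return min(candidates.items(), key=cost)[0]
```

With a larger λ, the bit savings of not signaling the second residual samples weigh more heavily, so the derivation method tends to win even at slightly higher distortion.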
- the chroma component predictor 510 may modify the first residual samples when the second-residual-sample derivation method (i.e., the method in which the second residual samples are derived) is selected for the chroma blocks (S 630 ).
- the modification of the first residual samples may be achieved by applying the correlation information to the first residual samples.
- the chroma component predictor 510 may derive inter-chroma difference values using the modified first residual samples and the second residual samples (S 640 ).
- the inter-chroma difference value may be derived by subtracting the modified first residual sample from the second residual sample.
- Operation S 630 and operation S 640 may be performed through Equation 2 below.
- Cro_r = Cro_resi2 − (W × Cro_resi1 + Offset) [Equation 2]
- in Equation 2, Cro_resi1 denotes a first residual sample, Cro_resi2 denotes a second residual sample, Cro_r denotes an inter-chroma difference value, W denotes a multiplication factor, and Offset denotes an offset factor.
- Cro_resi2 may be a primary signal of the second residual sample (i.e., a primary residual signal of the second chroma component)
- Cro_r may be a secondary signal of the second residual sample (i.e., a secondary residual signal of the second chroma component).
- the transformer 140 may transform the inter-chroma difference values and the first residual samples, and the quantizer 145 may quantize the transformed inter-chroma difference values and the transformed first residual samples (S 650 ).
- the inter-chroma difference values may be quantized through a “quantization parameter that is changed using QP_C_offset” from a quantization parameter of the first residual samples or the luma component.
- QP_C_offset may be determined by various methods. For example, QP_C_offset may be adaptively determined according to one or more of the range of a luma component value (the range of a brightness value), the size of a chroma block, and the ranges of quantization parameters of a luma component.
- QP_C_offset may be determined as a value preset in a video encoding apparatus and a video decoding apparatus.
- the video encoding apparatus may determine QP_C_offset as an arbitrary value, perform a quantization process, and signal a value of QP_C_offset used in the quantization process to the video decoding apparatus.
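The QP_C_offset mechanism above can be sketched as follows. The step-size model Qstep = 2^((QP − 4)/6) is the familiar HEVC/VVC-style mapping and is an assumption of this sketch, not something the text specifies; function names are illustrative.

```python
def qstep(qp):
    """Assumed HEVC/VVC-style quantization step size for a given QP."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(values, qp):
    """Simple scalar quantization (rounding division by the step size)."""
    step = qstep(qp)
    return [int(round(v / step)) for v in values]

def quantize_inter_chroma_diff(diff_values, base_qp, qp_c_offset):
    """Quantize the inter-chroma difference values Cro_r with a quantization
    parameter changed by QP_C_offset from the base (first-residual) QP."""
    return quantize(diff_values, base_qp + qp_c_offset)
```

A positive QP_C_offset coarsens the quantization of Cro_r relative to the first residual samples, trading a little fidelity on the derived component for fewer bits.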
- the quantization method using QP_C_offset may also be applied to other embodiments of the present specification.
- the transformed and quantized inter-chroma difference values, first residual samples, correlation information, and prediction information may be encoded and signaled to the video decoding apparatus (S 660 ).
- the second residual samples are not signaled.
- the entropy decoder 410 may decode the inter-chroma difference values, the first residual samples, the correlation information, and the prediction information from a bitstream (S 810 ).
- the inverse quantizer 420 inversely quantizes the inter-chroma difference values and the first residual samples, and the inverse transformer 430 may inversely transform the inversely quantized inter-chroma difference values and the inversely quantized first residual samples (S 820 ).
- the predictor 440 may generate (or reconstruct) the predicted samples (or predictive block) of the first chroma component and the predicted samples of the second chroma component on the basis of the prediction information (S 820 ).
- a chroma component reconstruction unit 710 may determine whether to derive the second residual samples from the first residual samples (whether to activate (allow) and/or apply the second-residual-sample derivation method) (S 830 ). A detailed description of operation S 830 will be described below through a separate embodiment.
- the chroma component reconstruction unit 710 may modify the inversely transformed first residual samples using the correlation information when it is determined to derive the second residual samples (S 840 ). Also, the chroma component reconstruction unit 710 may derive the second residual samples using the modified first residual samples and the inversely transformed inter-chroma difference values (S 850 ). The second residual samples may be derived by adding the modified first residual samples and the inversely transformed inter-chroma difference values.
- Operation S 840 and operation S 850 may be performed through Equation 3 below.
- Cro_resi2 = (W × Cro_resi1 + Offset) + Cro_r [Equation 3]
- the adder 450 may reconstruct the chroma block of the first chroma component by adding the first residual samples and the prediction block of the first chroma component and may reconstruct the chroma block of the second chroma component by adding the derived second residual samples and the prediction block of the second chroma component (S 860 ).
- in Embodiment 1-2, a process of deriving the inter-chroma difference values and correlation information is performed after a step of quantizing residual samples, and a process of deriving the second residual samples is performed before a step of inversely quantizing residual samples.
- an exemplary block diagram and flowchart of a video encoding apparatus for performing Embodiment 1-2 are shown in FIGS. 9 and 10 , respectively, and an exemplary block diagram and flowchart of a video decoding apparatus for performing Embodiment 1-2 are shown in FIGS. 11 and 12 , respectively.
- a subtractor 130 may obtain first residual samples and second residual samples (S 1010 ). Residual samples of each of the chroma components may be acquired by subtracting the prediction block of each chroma component from the chroma block of that component, and the prediction block and the prediction information of each of the chroma components are derived through the prediction process of a predictor 120 .
- a transformer 140 may transform the first residual samples and the second residual samples, and a quantizer 145 may quantize the transformed first residual samples and the transformed second residual samples (S 1020 ).
- the second residual samples may be quantized using a quantization parameter obtained by adding a quantization offset for quantization of the second residual samples to a quantization parameter of the first residual samples.
- the quantization offset may be determined by various methods. For example, the quantization offset may be adaptively determined according to one or more of the range of a luma component value (the range of a brightness value), the size of the first residual sample values, and the bit-depth of the second residual samples. As another example, the quantization offset may be determined as a value preset in a video encoding apparatus and a video decoding apparatus.
- the video decoding apparatus may determine a quantization parameter using delta-QP signaled from the video encoding apparatus, add the quantization offset to the quantization parameter to derive a quantization parameter of the second residual samples, and then inversely quantize the second residual samples using the derived quantization parameter.
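The decoder-side parameter derivation just described can be sketched in two lines. Names are illustrative; the base QP and delta-QP are whatever the bitstream conveys.

```python
def second_residual_qp(base_qp, delta_qp, quant_offset):
    """Recover the QP from the signaled delta-QP, then add the quantization
    offset to obtain the QP used to inversely quantize the second residual
    samples."""
    qp = base_qp + delta_qp
    return qp + quant_offset
```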
- the quantization/inverse quantization method using the quantization offset may be applied to other embodiments of the present specification.
- quantization coefficients of “0” may be derived through a quantization process for the second residual samples (i.e., there may be no residual signal in the quantization process).
- information or a syntax element indicating that quantization coefficients of “0” are derived may be signaled from the video encoding apparatus to the video decoding apparatus.
- one or more of a quantization parameter value of the first residual sample (to which the quantization offset is not added) (first value), a value obtained by adding the quantization offset to the quantization parameter of the first residual sample (second value), and the average of the first value and the second value may be used in an in-loop filtering process for the second residual sample.
- the first value, the second value, and the average may be used as a parameter for determining the in-loop filtering strength of the second residual sample or may be used as a parameter for determining an index in a table for determining boundary strength.
- the chroma component predictor 510 may determine whether to derive the second residual samples from the first residual samples (S 1030 ). The chroma component predictor 510 may modify the quantized first residual samples when it is determined to derive the second residual samples (S 1040 ). The modification of the first residual samples may be performed by applying the correlation information to the quantized first residual samples.
- the chroma component predictor 510 may derive inter-chroma difference values using the modified first residual samples and the quantized second residual samples (S 1050 ).
- the inter-chroma difference values may be derived by performing a subtraction between the modified first residual samples and the quantized second residual samples.
- Operation S 1040 and operation S 1050 may be performed through Equation 4 below.
- Q(T(Cro_r)) = Q(T(Cro_resi2)) − (W × Q(T(Cro_resi1)) + Offset) [Equation 4]
- in Equation 4, Q(T(Cro_resi1)) denotes the transformed and quantized first residual samples, Q(T(Cro_resi2)) denotes the transformed and quantized second residual samples, and Q(T(Cro_r)) denotes the inter-chroma difference values derived from the transformed and quantized first residual samples and the transformed and quantized second residual samples.
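Because Embodiment 1-2 operates on coefficients that are already transformed and quantized, no lossy step separates Equation 4 from its decoder-side inverse (Equation 5), so the round trip is exact. A minimal sketch on integer coefficient lists, with illustrative function names:

```python
def eq4_encoder(q_resi1, q_resi2, w, offset):
    """Equation 4: Q(T(Cro_r)) = Q(T(Cro_resi2)) - (W*Q(T(Cro_resi1)) + Offset),
    per quantized coefficient (operations S 1040 and S 1050)."""
    return [b - (w * a + offset) for a, b in zip(q_resi1, q_resi2)]

def eq5_decoder(q_resi1, q_r, w, offset):
    """Equation 5: Q(T(Cro_resi2)) = (W*Q(T(Cro_resi1)) + Offset) + Q(T(Cro_r)),
    per quantized coefficient (operations S 1240 and S 1250)."""
    return [(w * a + offset) + r for a, r in zip(q_resi1, q_r)]
```

Inverse quantization and inverse transform are then applied to both the first and the derived second residual samples, matching operation S 1260.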
- the inter-chroma difference values, the first residual samples, the correlation information, and the prediction information may be encoded and signaled to the video decoding apparatus (S 1060 ).
- the second residual samples are not signaled.
- the entropy decoder 410 may decode the inter-chroma difference values, the first residual samples, the correlation information, and the prediction information from a bitstream (S 1210 ).
- the predictor 440 may generate (or reconstruct) predicted samples (predictive block) of the first chroma component and predicted samples of the second chroma component on the basis of the prediction information (S 1220 ).
- the chroma component reconstruction unit 710 may determine whether to derive the second residual samples from the first residual samples (i.e., whether to activate and/or apply the second-residual-sample derivation method) (S 1230 ). A detailed description of operation S 1230 will be described below through a separate embodiment.
- the chroma component reconstruction unit 710 may modify the first residual samples using the correlation information when it is determined to derive the second residual samples (S 1240 ). Also, the chroma component reconstruction unit 710 may derive the second residual samples using the modified first residual samples and the inter-chroma difference values (S 1250 ). The second residual samples may be derived by adding the modified first residual samples and the inter-chroma difference values.
- Operation S 1240 and operation S 1250 may be performed through Equation 5 below.
- Q(T(Cro_resi2)) = (W*Q(T(Cro_resi1)) + Offset) + Q(T(Cro_r)) [Equation 5]
- the inverse quantizer 420 may inversely quantize the first residual samples and the derived second residual samples and may inversely transform the inversely quantized first residual samples and the inversely quantized second residual samples (S 1260 ).
- the adder 450 may reconstruct the chroma block of the first chroma component by adding the inversely transformed first residual samples and the prediction block of the first chroma component and may reconstruct the chroma block of the second chroma component by adding the inversely transformed second residual samples and the prediction block of the second chroma component (S 1270 ).
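The decoder-side operations S 1240 through S 1270 (Equation 5 followed by reconstruction) can be sketched as below. Names are hypothetical; for brevity the sketch treats the residuals as already inverse-quantized and inverse-transformed sample arrays rather than modeling S 1260 explicitly.

```python
def derive_second_residuals(resi1, interchroma_diff, w, offset=0):
    """Equation 5: derive the second residual samples from the first
    residual samples and the signaled inter-chroma difference values."""
    return [(w * c + offset) + d for c, d in zip(resi1, interchroma_diff)]

def reconstruct_block(residuals, prediction):
    """S 1270: add residual samples to the prediction block of the
    corresponding chroma component."""
    return [r + p for r, p in zip(residuals, prediction)]

# Example: with w = -1 and all-zero difference values, the second
# residuals are simply the negated first residuals.
resi2 = derive_second_residuals([4, -2, 6, 0], [0, 0, 0, 0], w=-1)
cb_block = reconstruct_block([4, -2], [10, 20])
```

Note that Equation 5 is exactly the inverse of Equation 4: what the encoder subtracted, the decoder adds back, so the derived second residuals match the encoder's quantized second residuals.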
- Embodiment 2 is a method of predicting and deriving second residual samples using correlation information without using inter-chroma difference values.
- Embodiment 2 is different from Embodiment 1 in that inter-chroma difference values are not used and a process of deriving the inter-chroma difference values (S 640 or S 1050 ) is not performed.
- Embodiment 1 may also be performed in Embodiment 2. Accordingly, as in Embodiment 1-1, a process of deriving correlation information in a video encoding apparatus may be performed before a step of transforming residual samples, and a process of deriving second residual samples in a video decoding apparatus may be performed after a step of inversely transforming residual samples. Also, as in Embodiment 1-2, a process of deriving correlation information in a video encoding apparatus may be performed after a step of quantizing residual samples, and a process of deriving second residual samples in a video decoding apparatus may be performed before a step of inversely quantizing residual samples. However, the remaining steps except for the step of transforming/quantizing residual samples and the step of inversely quantizing/inversely transforming residual samples will be described below.
- FIGS. 13 and 14 show flowcharts illustrating an example for Embodiment 2.
- the subtractor 130 may subtract the prediction block of the first chroma component from the chroma block of the first chroma component to acquire the first residual samples and may subtract the prediction block of the second chroma component from the chroma block of the second chroma component to acquire the second residual samples (S 1310 ).
- the chroma component predictor 510 may determine whether to derive the second residual samples from the first residual samples (S 1320 ). The chroma component predictor 510 may derive the correlation information using the first residual samples and the second residual samples when it is determined to derive the second residual samples (S 1330 ).
- the second-residual-sample derivation method may include the following three modes when only multiplication information is included in the correlation information.
- Mode 1: The values of the Cb residual samples are signaled, and the values of the Cr residual samples are derived by applying a multiplication factor of −1/2 or +1/2 to the values of the Cb residual samples.
- Mode 2: The values of the Cb residual samples are signaled, and the values of the Cr residual samples are derived by applying a multiplication factor of −1 or +1 to the values of the Cb residual samples.
- Mode 3: The values of the Cr residual samples are signaled, and the values of the Cb residual samples are derived by applying a multiplication factor of −1/2 or +1/2 to the values of the Cr residual samples.
- the second-residual-sample derivation method may further include modes in which an offset factor is applied to each of the first to third modes when the offset information is also included in the correlation information.
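The three modes can be summarized in a small table, as sketched below. The table and function names are hypothetical illustrations of the mode list above; the choice of sign (e.g. −1/2 versus +1/2) is assumed here to come from separately conveyed multiplication information.

```python
# Hypothetical mode table: (component whose residuals are signaled,
# magnitude of the multiplication factor W used to derive the other).
MODES = {
    1: ("Cb", 0.5),   # signal Cb, derive Cr with W = -1/2 or +1/2
    2: ("Cb", 1.0),   # signal Cb, derive Cr with W = -1 or +1
    3: ("Cr", 0.5),   # signal Cr, derive Cb with W = -1/2 or +1/2
}

def derive_other_component(mode, signaled, sign=-1, offset=0):
    """Apply Equation 6 (Cro_resi2 = W*Cro_resi1 + Offset) under the
    given mode, where `signaled` holds the signaled residual samples."""
    _, magnitude = MODES[mode]
    w = sign * magnitude
    return [w * s + offset for s in signaled]
```

For instance, in Mode 2 with a negative sign, each derived Cr residual is simply the negated Cb residual.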
- the chroma component predictor 510 may determine a mode having the best rate distortion characteristic among the above modes as a mode for the chroma block.
- the chroma component predictor 510 may perform, as one integrated process, the determination of whether to use the above-described general method or the second-residual-sample derivation method and the determination of which mode of the second-residual-sample derivation method to use.
- the chroma component predictor 510 may determine a mode or method having the best rate distortion characteristic, among the general method and the modes in the second-residual-sample derivation method, for the chroma block.
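The mode decision can be illustrated with a toy stand-in for the rate-distortion search: among candidate multiplication factors, pick the one whose derived residuals most closely match the actual second-component residuals. This is only a sketch under simplified assumptions; a real encoder would weigh actual bit rate against distortion rather than use a plain sum of absolute differences.

```python
def choose_multiplication_factor(cb_resi, cr_resi):
    """Toy mode search: return the candidate factor w minimizing the
    sum of absolute differences between the actual Cr residuals and
    the Cr residuals that would be derived as w * Cb."""
    candidates = [-1.0, 1.0, -0.5, 0.5]   # factors available in Modes 1 and 2

    def sad(w):
        return sum(abs(cr - w * cb) for cb, cr in zip(cb_resi, cr_resi))

    return min(candidates, key=sad)
```

A block whose Cr residuals mirror its Cb residuals with opposite sign would select −1.0, i.e. Mode 2 with a negative factor.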
- the first residual samples, the correlation information, and the prediction information may be encoded and signaled to the video decoding apparatus (S 1340 ).
- the second residual samples and the inter-chroma difference values are not signaled.
- the entropy decoder 410 may decode the first residual samples, the correlation information, and the prediction information from a bitstream (S 1410 ).
- the predictor 440 may generate (or reconstruct) the predicted samples (predictive block) of the first chroma component and the predicted samples of the second chroma component on the basis of the prediction information (S 1420 ).
- the chroma component reconstruction unit 710 may determine whether to derive the second residual samples from the first residual samples (i.e., whether to activate and/or apply the second-residual-sample derivation method) (S 1430 ). A detailed description of operation S 1430 will be described below through a separate embodiment.
- the chroma component reconstruction unit 710 may derive the second residual samples by applying the correlation information to the first residual samples when it is determined to derive the second residual samples (S 1440 ).
- the second residual samples may be derived by applying a multiplication factor indicated by the multiplication information to the first residual samples.
- the second residual samples may be derived by applying an offset factor indicated by the offset information to the first residual samples to which the multiplication factor is applied.
- Operation S 1440 may be performed through Equation 6 below.
- Cro_resi2 = W*Cro_resi1 + Offset [Equation 6]
- the adder 450 may reconstruct the chroma block of the first chroma component by adding the first residual samples and the prediction block of the first chroma component and may reconstruct the chroma block of the second chroma component by adding the derived second residual samples and the prediction block of the second chroma component (S 1450 ).
- Embodiment 3 is a method of determining whether to derive the second residual samples from the first residual samples (i.e., whether to allow (activate) and/or apply the second-residual-sample derivation method).
- Whether to perform the second-residual-sample derivation method may be determined by various criteria.
- the various criteria may include 1) a value of a syntax element (e.g., flag) indicating whether to allow and/or apply the derivation of the second residual samples (i.e., whether it is on or off), 2) a prediction mode of a target block, 3) the range of a luma component value, etc.
- Criterion 1 Syntax Element Indicating On/Off
- a first syntax element and/or a second syntax element may be employed in order to indicate whether to derive the second residual samples.
- the first syntax element, which is a syntax element indicating whether to allow (or activate) the second-residual-sample derivation method (i.e., whether it is on or off), may be defined at various positions of a bitstream and signaled to the video decoding apparatus.
- the first syntax element may be defined and signaled at the level of CTU or higher or may be defined and signaled at one or more of unit block (PU, TU, CU) levels, tile level, tile group level, and picture level.
- the second syntax element, which is a syntax element indicating whether to apply the second-residual-sample derivation method to a target block (chroma block) (i.e., whether it is on or off for the target block), may be defined at various positions of a bitstream and signaled to the video decoding apparatus.
- the second syntax element may be defined and signaled at the level of CTU or higher or may be defined and signaled at one or more of unit block (PU, TU, CU) levels, tile level, tile group level, and picture level.
- the first syntax element may be defined and signaled at a relatively higher level in the bitstream
- the second syntax element may be defined and signaled at a relatively lower level in the bitstream.
- the second syntax element may not be signaled at the lower level when the second-residual-sample derivation method is switched off at the higher level, and whether to switch on or off at the lower level may be selectively determined even when the second-residual-sample derivation method is switched on at the higher level. Therefore, it is possible to improve the bit efficiency for the second-residual-sample derivation method.
- FIG. 15 shows an example of determining whether to switch on or off the second-residual-sample derivation method.
- the video encoding apparatus may determine whether the second-residual-sample derivation method is allowed, may set a value of the first syntax element based on a result of the determination, and may signal the first syntax element to the video decoding apparatus. Also, the video encoding apparatus may determine whether the second-residual-sample derivation method is applied, may set a value of the second syntax element based on a result of the determination, and may signal the second syntax element to the video decoding apparatus.
- the video decoding apparatus may decode the first syntax element from a bitstream (S 1510 ) and may determine whether the second-residual-sample derivation method is allowed according to a value of the first syntax element (S 1520 ).
- the video decoding apparatus may decode the second syntax element from the bitstream (S 1530 ). Also, the video decoding apparatus may determine whether the second-residual-sample derivation method is applied according to a value of the second syntax element (S 1540 ).
- the video decoding apparatus may derive the second residual samples on the basis of the correlation information and the first residual samples (or the correlation information, the first residual samples, and the inter-chroma difference values) for the target block (S 1550 ).
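The two-level on/off signaling (operations S 1510 through S 1540) can be sketched as follows, modeling the bitstream as a sequence of bits. Names are hypothetical; the point is that the block-level flag is parsed only when the higher-level flag allows the tool, which is where the bit savings come from.

```python
def decide_derivation(bits):
    """Parse the first (higher-level) syntax element and, only if it
    indicates the tool is allowed, the second (block-level) syntax
    element; return whether the derivation applies to the block."""
    stream = iter(bits)
    allowed = next(stream)       # first syntax element (S 1510 / S 1520)
    if not allowed:
        return False             # second syntax element is never parsed
    applied = next(stream)       # second syntax element (S 1530 / S 1540)
    return bool(applied)

# When the tool is switched off at the higher level, a single bit
# suffices for the whole scope covered by the first syntax element.
```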
- Whether to switch on or off the second-residual-sample derivation method may be adaptively determined in consideration of or according to the prediction mode of the target block (chroma block).
- the derivation of the second residual samples may be switched on or off.
- information indicating the switching on or off of the derivation of the second residual samples may be signaled to the video decoding apparatus only when the chroma block is predicted through CCLM or DM.
- the derivation of the second residual samples may be switched on or off.
- Information indicating the switching on or off of the derivation of the second residual samples may be signaled to the video decoding apparatus only when the chroma block is predicted through a bi-prediction mode or a merge mode or only when the chroma block is predicted with reference to a zeroth reference image.
- An example of considering the prediction mode of the chroma block may be combined with the above example of using the first syntax element and the second syntax element.
- the second syntax element may be decoded from a bitstream (S 1530 ). That is, whether to decode the second syntax element may be determined in consideration of the prediction mode of the chroma block.
- the range of values of a luma component may be divided into two or more sections, and whether to apply the second-residual-sample derivation method may be determined depending on which of the divided sections the values of the luma component of the target block belong to.
- the second-residual-sample derivation method may not be applied when the values of the luma component of the target block belong to the first section, and the second-residual-sample derivation method may be applied when values of the luma component of the target block belong to the second section, and vice versa.
- a section to which the second-residual-sample derivation method is not applied may correspond to a “visual perception section” to which a user's vision can react sensitively, and a section to which the second-residual-sample derivation method is applied may not correspond to the “visual perception section.” Accordingly, instead of being applied to the visual perception section, the second-residual-sample derivation method may be selectively applied only to sections other than the visual perception section, and thus it is possible to prevent deterioration of subjective image quality.
- One or more of a section value indicating the range of the first section and a section value indicating the range of the second section may be signaled from the video encoding apparatus to the video decoding apparatus.
- the section value may be preset between the video encoding apparatus and the video decoding apparatus without signaling.
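Criterion 3 can be sketched as below. The sketch is an assumption-laden illustration: the section boundary stands for the signaled (or preset) section value, and using the block's average luma to assign it to a section is a hypothetical choice, since the patent does not fix how a block's luma values are mapped to a single section.

```python
def apply_by_luma_section(luma_values, section_boundary, apply_in_upper=True):
    """Decide whether the second-residual-sample derivation applies,
    based on which luma section the target block falls into.

    luma_values      : luma samples co-located with the chroma block
    section_boundary : value splitting the luma range into two sections
    apply_in_upper   : whether the tool applies in the upper section
                       (the opposite section would then be the
                       perceptually sensitive one, left untouched)
    """
    avg = sum(luma_values) / len(luma_values)   # assumed block-to-section rule
    in_upper = avg >= section_boundary
    return in_upper if apply_in_upper else not in_upper
```

With more than two sections, the same idea generalizes to a list of boundaries and a per-section on/off decision.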
- the second-residual-sample derivation method may be selectively applied. Also, in this case, signaling may be omitted for only a part, not the whole, of the second residual samples (i.e., only some of the second residual samples are signaled and the rest are derived).
- the second-residual-sample derivation method may or may not be applied to the target block.
- the second-residual-sample derivation method may not be applied to the target block.