US20150003514A1 - Method and apparatus for block-based significance map and significance group flag context selection
- Publication number
- US20150003514A1 (U.S. application Ser. No. 14/368,264)
- Authority
- US
- United States
- Prior art keywords
- sub
- context
- block
- selection
- block index
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N19/00139—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
- H04N19/64—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
- H04N19/647—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission using significance based coding, e.g. Embedded Zerotrees of Wavelets [EZW] or Set Partitioning in Hierarchical Trees [SPIHT]
- H04N19/00109—
- H04N19/00775—
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
Definitions
- FIG. 1 illustrates an exemplary architecture of a CABAC encoding system with a bypass mode.
- FIG. 2 illustrates an exemplary diagonal scanning order for the transform coefficients of an 8×8 TU.
- FIG. 3 illustrates an example of context selection maps for the 4×4 TU of luma and chroma components used by HEVC Test Model Version 5.0.
- FIG. 4 illustrates an example of a context selection map for the 8×8 TU of luma and chroma components used by HEVC Test Model Version 5.0.
- FIG. 5A illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of the luma component used by HEVC Test Model Version 5.0.
- FIG. 5B illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of the chroma component used by HEVC Test Model Version 5.0.
- FIG. 6A illustrates an example of context selection for the 16×16 TU of the luma component used by HEVC Test Model Version 5.0.
- FIG. 6B illustrates an example of context selection for the 32×32 TU of the luma component used by HEVC Test Model Version 5.0.
- FIG. 7A illustrates an example of block-based context selection for the 16×16 TU of the luma component according to an embodiment of the present invention.
- FIG. 7B illustrates an example of block-based context selection for the 32×32 TU of the luma component according to an embodiment of the present invention.
- embodiments of the present invention use block-based context selection to simplify and unify the context set, context selection and context formation for significant_coeff_flag coding.
- the region-1/region-2 context selection depends on the x-block-index and y-block-index of the sub-block instead of the x-position and y-position of the coefficient X.
- the x-block-index and y-block-index refer to the horizontal sub-block index and the vertical sub-block index respectively.
- the value of the x-block-index is from 0 to (number of horizontal sub-blocks − 1).
- the value of the y-block-index is from 0 to (number of vertical sub-blocks − 1). In a system incorporating an embodiment of the present invention, none of the sub-blocks will cross the boundary between region-1 and region-2.
- the region-1/region-2 determination can be based on the sum of the x-block-index and y-block-index of each sub-block.
- the sum can be compared with a threshold.
- the threshold value can either depend on the TU width and/or height or can be a fixed value.
- FIG. 7A and FIG. 7B illustrate examples of block-based context selection according to an embodiment of the present invention.
- the threshold value is set to the maximum value of TU width and TU height divided by 16. Therefore, the threshold value is 1 for 16×16 TU 721, 16×4 TU 722, and 4×16 TU 723, and the threshold value is 2 for 32×32 TU 741, 32×8 TU 742, and 8×32 TU 743.
- if the sum of the x-block-index and the y-block-index of a sub-block is smaller than the threshold value, the region-1 context set is used for the sub-block. Otherwise, the region-2 context set is used for the sub-block.
- one sub-block 710 in FIG. 7A and three sub-blocks 731 through 733 in FIG. 7B use the region-1 context and the other sub-blocks use the region-2 context. Furthermore, the value of significant_coeffgroup_flag can be inferred as 1 for region-1 sub-blocks for unification.
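The block-index rule illustrated by FIG. 7A and FIG. 7B can be sketched as follows; the function name and the integer-division threshold are illustrative assumptions, not code from the patent or any HM source.

```python
def block_based_context_set(x_blk, y_blk, tu_width, tu_height):
    """Select the region-1 or region-2 context set for a 4x4 sub-block from
    its horizontal (x_blk) and vertical (y_blk) sub-block indices alone."""
    # Threshold: maximum of TU width and TU height divided by 16,
    # e.g. 1 for a 16x16 TU and 2 for a 32x32 TU.
    threshold = max(tu_width, tu_height) // 16
    # Sub-blocks whose index sum falls below the threshold use region-1.
    return 1 if (x_blk + y_blk) < threshold else 2
```

With this rule a 16×16 TU assigns region-1 only to sub-block (0, 0), while a 32×32 TU assigns region-1 to sub-blocks (0, 0), (1, 0) and (0, 1); no per-coefficient position sum is needed.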
- while a 4×4 sub-block is used as an example of the block-based context selection, other sub-block sizes, such as 4×8, 8×4, 8×8, 16×16 and 32×32, may also be used.
- while block-based significance map coding is used for context selection in the above example, the block-based significance map coding may also be used for context set selection or context formation selection.
- while block-based significance map coding selects the context, context set or context formation based on the sub-block index in scan order, the horizontal sub-block index (i.e., x-block-index) and/or the vertical sub-block index (i.e., y-block-index), the selection may also be based on the video component type and/or the TU width/height.
- the video component type may correspond to the luma component (Y) or the chroma component (Cr or Cb).
- the video component type may correspond to other video formats.
- the selection may depend on a combination of sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, and TU width/height.
- the block-based significance group flag coding may be based on sub-block index in scan order, horizontal sub-block index (i.e., x-block-index) and/or vertical sub-block index (i.e., y-block-index).
- the block-based significance group flag coding may also be based on the video component type and/or the TU width/height.
- the block-based significance group flag coding may also be based on the context, context set, or context formation selection associated with the significance map coding.
- the block-based significance group flag coding may also depend on a combination of sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, TU width/height, context, context set, and context formation selection associated with the significance map coding.
- Embodiments of the present invention as described above may be implemented in various hardware, software code, or a combination of both.
- an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
- An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
- the invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
- the software code or firmware code may be developed in different programming languages and different formats or styles.
- the software code may also be compiled for different target platforms.
- different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
Abstract
Description
- The present invention claims priority to U.S. Provisional Patent Application, Ser. No. 61/582,725, filed Jan. 3, 2012, entitled “Block-based Significance Map and Significance Group Flag Context Selection Method”. The U.S. Provisional Patent Application is hereby incorporated by reference in its entirety.
- The present invention relates to video coding or video processing. In particular, the present invention relates to significance map coding and significance group flag coding.
- Arithmetic coding is known as an efficient data compression method and is widely used in coding standards, such as JBIG, JPEG2000, H.264/AVC, and High-Efficiency Video Coding (HEVC). In the H.264/AVC JVT Test Model (JM) and the HEVC Test Model (HM), Context-Based Adaptive Binary Arithmetic Coding (CABAC) is adopted as the entropy coding tool for various syntax elements in the video coding system.
-
FIG. 1 illustrates an example of CABAC encoder 100 which includes three parts: Binarization 110, Context Modeling 120, and Binary Arithmetic Coding (BAC) 130. In the binarization step, each syntax element is uniquely mapped into a binary string (also called bin or bins in this disclosure). In the context modeling step, a probability model is selected for each bin. The corresponding probability model may depend on previously encoded syntax elements, bin indexes, side information, or any combination of the above. After the binarization and the context model assignment, a bin value along with its associated context model is provided to the binary arithmetic coding engine, i.e., the BAC 130 block in FIG. 1. The bin value can be coded in two coding modes depending on the syntax element and bin indexes, where one is the regular coding mode, and the other is the bypass mode. The bins corresponding to the regular coding mode are referred to as regular bins and the bins corresponding to the bypass coding mode are referred to as bypass bins in this disclosure. In the regular coding mode, the probability of the Most Probable Symbol (MPS) and the probability of the Least Probable Symbol (LPS) for BAC are derived from the associated context model. In the bypass coding mode, the probabilities of the MPS and the LPS are equal. In CABAC, the bypass mode is introduced to speed up the encoding process.
- High-Efficiency Video Coding (HEVC) is a new international video coding standard that is being developed by the Joint Collaborative Team on Video Coding (JCT-VC). HEVC is based on the hybrid block-based motion-compensated DCT-like transform coding architecture. The basic unit for compression, termed Coding Unit (CU), is a 2N×2N square block, and each CU can be recursively split into four smaller CUs until a predefined minimum size is reached. Each CU contains one or several variable-block-sized Prediction Unit(s) (PUs) and Transform Unit(s) (TUs).
For each PU, either intra-picture or inter-picture prediction is selected. Each TU is processed by a spatial block transformation and the transform coefficients for the TU are then quantized. The smallest TU size allowed for HEVC is 4×4.
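As a concrete illustration of the binarization stage described above, the sketch below implements unary and truncated-unary binarization, two schemes commonly used by CABAC-style coders to map syntax element values to bin strings. The function names are illustrative and not taken from any HM source.

```python
def unary_binarize(value):
    """Unary binarization: a non-negative value maps to 'value' ones
    followed by a terminating zero (e.g. 3 -> '1110')."""
    return '1' * value + '0'

def truncated_unary_binarize(value, c_max):
    """Truncated unary binarization: as unary, but the terminating zero
    is omitted when the value equals the known maximum c_max."""
    return '1' * value if value == c_max else '1' * value + '0'
```

Each character of the returned string is one bin; in the regular coding mode every bin would then be paired with a context model before entering the BAC engine.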
- In HEVC Test Model Version 5.0 (HM-5.0), the transform coefficients are coded TU by TU. For each TU, syntax elements last_significant_coeff_x and last_significant_coeff_y are transmitted to indicate the last non-zero coefficient horizontal and vertical positions respectively according to a selected scanning order. A TU is divided into multiple subsets for the TUs having size larger than 4×4. For an 8×8 TU, the 64 coefficients are divided into 4 subsets according to the diagonal scanning order through the entire 8×8 TU as shown in
FIG. 2. The scanning through the transform coefficients will convert the two-dimensional data into one-dimensional data. Each subset contains 16 continuous coefficients of the diagonally scanned coefficients. For TUs having size larger than 8×8 (e.g. 16×16, 32×32) and non-square TUs (e.g. 16×4, 4×16, 32×8, 8×32), the TUs are divided into 4×4 sub-blocks. Each sub-block corresponds to a coefficient sub-set. For each sub-block (i.e. each subset), the significance map, which is represented by significant_coeff_flag[x,y], is coded first. Variable x is the horizontal position of the coefficient within the sub-block and the value of x is from 0 to (sub-block width − 1). Variable y is the vertical position of the coefficient within the sub-block and the value of y is from 0 to (sub-block height − 1). The flag, significant_coeff_flag[x,y], indicates whether the corresponding coefficient of the TU is zero or non-zero. For convenience, the index [x,y] is omitted from significant_coeff_flag[x,y]. For each non-zero coefficient as indicated by significant_coeff_flag, the level and sign of the non-zero coefficient are represented by coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, coeff_abs_level_minus3, and coeff_sign_flag.
- In HM-5.0, if the TU size is equal to 16×16, 32×32, 16×4, 4×16, 32×8, or 8×32, one significant_coeffgroup_flag is coded for each sub-block prior to the coding of the level and sign of the sub-block (e.g. the significant_coeff_flag, coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, coeff_abs_level_minus3, and coeff_sign_flag). If significant_coeffgroup_flag is equal to 0, it indicates that the entire 4×4 sub-block is zero. Therefore, there is no need for any additional information to represent this sub-block. Accordingly, the coding of the level and sign of the sub-block can be skipped. If significant_coeffgroup_flag is equal to 1, it indicates that at least one coefficient in the 4×4 sub-block is non-zero.
The level and sign of each non-zero coefficient in the sub-block will be coded after the significant_coeffgroup_flag. The value of significant_coeffgroup_flag is inferred as 1 for the sub-block containing the DC term (i.e., the transform coefficient with the lowest spatial frequency).
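The sub-block partitioning and group-flag derivation described above can be sketched as follows. The within-diagonal ordering and all names are illustrative assumptions, not code from HM-5.0.

```python
def diagonal_scan(width, height):
    """Visit a width x height TU anti-diagonal by anti-diagonal, starting
    from the DC term at (0, 0). The direction taken within each diagonal
    is a convention chosen for illustration and may differ from HM-5.0."""
    order = []
    for d in range(width + height - 1):
        for y in range(min(d, height - 1), -1, -1):
            x = d - y
            if x < width:
                order.append((x, y))
    return order

def coeff_group_flags(coeffs):
    """Derive significant_coeffgroup_flag per 4x4 sub-block from a 2-D list
    of quantized coefficients (coeffs[y][x]). A flag of 0 means the whole
    sub-block is zero, so its level and sign coding can be skipped."""
    height, width = len(coeffs), len(coeffs[0])
    flags = {}
    for y_blk in range(height // 4):
        for x_blk in range(width // 4):
            flags[(x_blk, y_blk)] = int(any(
                coeffs[y][x] != 0
                for y in range(4 * y_blk, 4 * y_blk + 4)
                for x in range(4 * x_blk, 4 * x_blk + 4)))
    # The flag of the sub-block containing the DC term is inferred as 1.
    flags[(0, 0)] = 1
    return flags

# For an 8x8 TU, the 64 diagonally scanned coefficients form 4 subsets of 16.
scan = diagonal_scan(8, 8)
subsets = [scan[i:i + 16] for i in range(0, 64, 16)]
```

The group flags let a decoder skip entire all-zero sub-blocks instead of reading sixteen individual significance flags.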
- In HM-5.0, significant_coeff_flag is coded in regular CABAC mode with context modeling. Different context selection methods are used for different TU sizes. For TUs with size of 4×4 or 8×8, the context selection is based on the position of the coefficient within the TU.
FIG. 3 shows the position-based context selection map for a 4×4 TU and FIG. 4 shows the position-based context selection map for an 8×8 TU as adopted in HM-5.0. In FIG. 3, significance map 310 is used for the luma component and significance map 320 is used for the chroma component, where each number corresponds to a context selection. In FIG. 4, luma and chroma 8×8 TUs share the same significance map.
- For other TU sizes, the neighboring-information-dependent context selection is adopted.
FIGS. 5A and 5B illustrate examples of the neighboring-information-dependent context selection for luma and chroma components respectively. One context is used for the DC coefficient. For non-DC coefficients (i.e., AC coefficients), the context selection depends on the neighboring coefficients. For example, a group of neighboring non-zero coefficients including I, H, F, E, and B around a current coefficient X are used for the context selection. If none of the neighboring coefficients is non-zero, context #0 is used for coefficient X. If one or two of the neighboring coefficients are non-zero, context #1 is used for X. Otherwise, context #2 is used for coefficient X.
- In the above neighboring-information-dependent context selection, the non-DC coefficients of the entire TU are divided into two regions (i.e., region-1 and region-2) for the luma component and one region (region-2) for the chroma component. Different regions will use different context sets. Each context set includes three contexts (i.e., context #0, #1, and #2). The area of region-1 for the luma component can be mathematically specified by the x-position and y-position of a coefficient X within the TU. As shown in FIG. 5A, if the sum of the x-position and y-position of coefficient X is smaller than a threshold value and greater than 0, the region-1 context set is selected for coefficient X. Otherwise, the region-2 context set is selected. The threshold value can be determined based on the width and the height of the TU. For example, the threshold can be set to a quarter of the maximum value of the TU width and the TU height. Accordingly, in the case of TU sizes 32×32, 32×8 or 8×32, the threshold value can be set to 8.
- In HM-5.0, for TUs with sizes other than 4×4 and 8×8, the TUs will be divided into 4×4 sub-blocks for coefficient map coding. However, the criterion of region-1/region-2 context selection depends on the x-position and y-position of the transform coefficient. Therefore, some sub-blocks may cross the boundary between region-1 and region-2 and two context sets will be required for these sub-blocks.
FIG. 6A illustrates an example where one 4×4 sub-block 610 (the center of the sub-block is indicated by a dot) for 16×16 TU 621, 16×4 TU 622, and 4×16 TU 623 will use two context sets for significant_coeff_flag coding. FIG. 6B illustrates an example where three 4×4 sub-blocks 631 to 633 for 32×32 TU 641, 32×8 TU 642, and 8×32 TU 643 will use two context sets for significant_coeff_flag coding. For sub-blocks 632 and 633, the sum of the x-position and y-position of coefficient X has to be calculated in order to determine whether the coefficient X is in region-1 or region-2. For the sub-block containing the DC term, i.e., sub-block 631, the position of the DC term is known and all other coefficients in the sub-block belong to region-1. Therefore, significant_coeffgroup_flag can be inferred and there is no need to calculate the sum of the x-position and y-position. For other sub-blocks, there is no need to calculate the sum of the x-position and y-position of coefficient X since all coefficients of other sub-blocks are in region-2 and one context set for significant_coeff_flag coding is used.
- Therefore, it is desirable to simplify the context selection process, such as to eliminate the requirement of calculating the sum of the x-position and y-position of a coefficient or to eliminate other operations.
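The two HM-5.0-style selection rules described above can be sketched as follows. The five neighbor offsets stand in for the positions labelled I, H, F, E, and B in a figure not reproduced here, so they are placeholders; only the count-to-context and threshold rules follow the text, and all names are illustrative.

```python
# Placeholder offsets for the coded neighbors (I, H, F, E, B in the figure);
# the real template positions are not given in the text.
NEIGHBOR_OFFSETS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]

def neighbor_context(sig_map, x, y):
    """Context index for coefficient (x, y) from the number of non-zero
    neighbors: 0 neighbors -> context 0, 1 or 2 -> context 1, else 2."""
    height, width = len(sig_map), len(sig_map[0])
    count = sum(1 for dx, dy in NEIGHBOR_OFFSETS
                if x + dx < width and y + dy < height and sig_map[y + dy][x + dx])
    return 0 if count == 0 else (1 if count <= 2 else 2)

def position_context_set(x, y, tu_width, tu_height):
    """Region selection for a luma coefficient: region-1 when
    0 < x + y < threshold, otherwise region-2 (the DC coefficient at (0, 0)
    uses its own dedicated context)."""
    threshold = max(tu_width, tu_height) // 4  # a quarter of the larger side
    return 1 if 0 < x + y < threshold else 2
```

Note that `position_context_set` must evaluate x + y per coefficient, which is exactly the per-position computation the block-based scheme of the invention removes.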
- A method and apparatus for significance map context selection are disclosed. According to one embodiment of the present invention, the TU is divided into one or more sub-blocks and at least two context sets are used for the TU. Non-DC transform coefficients in each sub-block are coded based on the same context, context set, or context formation. The context, context set, or context formation for each sub-block can be determined based on sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, TU width, TU height, or any combination of the above. For example, the sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, or a combination of them can be compared with a threshold to determine the context, context set, or context formation for each sub-block. The threshold is related to the TU width, the TU height or a combination of them. For example, the threshold can be set to the maximum of the TU width and the TU height divided by 16. In another embodiment of the present invention, the sum of the horizontal sub-block index and the vertical sub-block index is used to classify each sub-block into a class and the context, context set, or context formation is then determined according to the class. For example, the sum can be compared with a threshold to classify each sub-block and the threshold is derived based on the maximum of the TU width and the TU height divided by 16. The sub-block size can be 4×4, 4×8, 8×4, 8×8, 16×16, or 32×32.
- A method and apparatus for significance group flag coding are disclosed. According to one embodiment of the present invention, the TUs are divided into one or more sub-blocks and the significance group flags are coded based on sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, TU width, TU height, context selection, context set selection, context formation selection, or any combination of the above. The context selection, the context set selection and the context formation selection are associated with significance map coding of the sub-block. When two sub-blocks use the same context selection, context set selection, or context formation selection for the significance map coding, the significance group flag coding will also share the same second context selection, second context set selection, or second context formation selection.
-
FIG. 1 illustrates an exemplary architecture of a CABAC encoding system with a bypass mode. -
FIG. 2 illustrates an exemplary diagonal scanning order for the transform coefficients of an 8×8 TU. -
FIG. 3 illustrates an example of context selection maps for the 4×4 TU of luma and chroma components used by HEVC Test Model Version 5.0. -
FIG. 4 illustrates an example of context selection map for the 8×8 TU of luma and chroma components used by HEVC Test Model Version 5.0. -
FIG. 5A illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of luma component used by HEVC Test Model Version 5.0. -
FIG. 5B illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of chroma component used by HEVC Test Model Version 5.0. -
FIG. 6A illustrates an example of context selection for the 16×16 TU of luma component used by HEVC Test Model Version 5.0. -
FIG. 6B illustrates an example of context selection for the 32×32 TU of luma component used by HEVC Test Model Version 5.0. -
FIG. 7A illustrates an example of block-based context selection for the 16×16 TU of luma component according to an embodiment of the present invention. -
FIG. 7B illustrates an example of block-based context selection for the 32×32 TU of luma component according to an embodiment of the present invention. - In order to eliminate the need to calculate the sum of x-position and y-position of a coefficient, embodiments of the present invention use block-based context selection to simplify and unify the context set, context selection, and context formation for significant_coeff_flag coding.
- For TU sizes other than 4×4 and 8×8, the region-1/region-2 context selection according to one embodiment of the present invention depends on the x-block-index and y-block-index of the sub-block instead of the x-position and y-position of the coefficient X. The x-block-index and y-block-index refer to the horizontal sub-block index and the vertical sub-block index respectively. The value of the x-block-index is from 0 to (number of horizontal sub-blocks −1). The value of the y-block-index is from 0 to (number of vertical sub-blocks −1). In a system incorporating an embodiment of the present invention, none of the sub-blocks will cross the boundary between region-1 and region-2. There is no need to use two context sets for significant_coeff_flag coding or to calculate the sum of x-position and y-position for each coefficient. The region-1/region-2 determination can be based on the sum of the x-block-index and y-block-index of each sub-block. The sum can be compared with a threshold. The threshold value can either depend on the TU width and/or height or can be a fixed value.
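The block-based selection described above can be sketched as follows (an illustrative sketch of one embodiment; the threshold choice here follows the example of FIG. 7, i.e., the maximum of TU width and height divided by 16):

```python
def block_based_region(x_block_index, y_block_index, tu_width, tu_height):
    """Region-1/region-2 context-set selection based on sub-block indices
    rather than per-coefficient positions, so an entire sub-block shares
    one context set and no coefficient-position sum is ever computed."""
    threshold = max(tu_width, tu_height) // 16
    if x_block_index + y_block_index < threshold:
        return 1  # region-1 context set for the whole sub-block
    return 2      # region-2 context set for the whole sub-block
```

For a 16×16 TU (threshold 1) only the sub-block at (0, 0) selects region-1; for a 32×32 TU (threshold 2) the sub-blocks at (0, 0), (1, 0), and (0, 1) select region-1.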
-
FIG. 7A and FIG. 7B illustrate an example of block-based context selection according to an embodiment of the present invention. In this example, the threshold value is set to the maximum value of TU width and TU height divided by 16. Therefore, the threshold value is 1 for 16×16 TU 721, 16×4 TU 722, and 4×16 TU 723, and the threshold value is 2 for 32×32 TU 741, 32×8 TU 742, and 8×32 TU 743. For the luma component, if the sum of x-block-index and y-block-index of the sub-block is smaller than the threshold value, the region-1 context set is used for the sub-block. Otherwise, the region-2 context set is used for the sub-block. Accordingly, one sub-block 710 in FIG. 7A and three sub-blocks 731 through 733 in FIG. 7B use region-1 context and the other sub-blocks use region-2 context. Furthermore, the value of significant_coeffgroup_flag can be inferred as 1 for region-1 sub-blocks for unification. - While the 4×4 sub-block is used as an example of the block-based context selection, other sub-block sizes may also be used. For example, instead of the 4×4 sub-blocks, other sub-block sizes such as 4×8, 8×4, 8×8, 16×16, and 32×32 may also be used. While the above block-based significance map coding is used for context selection, the block-based significance map coding may also be used for context set selection or context formation selection. While the examples of block-based significance map coding shown above select the context, context set, or context formation based on sub-block index in scan order, horizontal sub-block index (i.e., x-block-index), and/or vertical sub-block index (i.e., y-block-index), the selection may also be based on the video component type and/or the TU width/height. The video component type may correspond to the luma component (Y) or a chroma component (Cr or Cb). The video component type may also correspond to other video formats. 
Furthermore, the selection may depend on a combination of sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, and TU width/height.
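The sub-block counts in the FIG. 7 example can be reproduced by enumerating sub-blocks under the block-based rule (a sketch; the function name and the 4×4 sub-block size are illustrative defaults):

```python
def count_region1_subblocks(tu_width, tu_height, sub=4):
    """Count sub-blocks selecting the region-1 context set under the
    block-based rule: region-1 if the sum of the horizontal and vertical
    sub-block indices is below max(w, h) / 16."""
    threshold = max(tu_width, tu_height) // 16
    return sum(1
               for y_block in range(tu_height // sub)
               for x_block in range(tu_width // sub)
               if x_block + y_block < threshold)
```

This yields one region-1 sub-block for the 16×16, 16×4, and 4×16 TUs, and three for the 32×32, 32×8, and 8×32 TUs, matching sub-block 710 in FIG. 7A and sub-blocks 731 through 733 in FIG. 7B.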
- The block-based significance group flag coding may be based on sub-block index in scan order, horizontal sub-block index (i.e., x-block-index) and/or vertical sub-block index (i.e., y-block-index). The block-based significance group flag coding may also be based on the video component type and/or the TU width/height. Furthermore, the block-based significance group flag coding may also be based on the context, context set, or context formation selection associated with the significance map coding. The block-based significance group flag coding may also depend on a combination of sub-block index in scan order, horizontal sub-block index, vertical sub-block index, video component type, TU width/height, context, context set, and context formation selection associated with the significance map coding.
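One way to realize the sharing described above is to reuse the significance-map selection directly for the group flag (a sketch; the function names and the max(w, h)/16 threshold are illustrative, not the only embodiment):

```python
def sig_map_context_set(x_block, y_block, tu_width, tu_height):
    # Block-based significance-map context-set selection
    # (illustrative threshold: max(w, h) / 16).
    return 1 if x_block + y_block < max(tu_width, tu_height) // 16 else 2

def sig_group_flag_context(x_block, y_block, tu_width, tu_height):
    # Reusing the significance-map selection guarantees that two
    # sub-blocks sharing a significance-map context set also share
    # a significance-group-flag context.
    return sig_map_context_set(x_block, y_block, tu_width, tu_height)
```

For a 32×32 TU, the sub-blocks at (0, 0) and (1, 0) share one significance-map context set and therefore also share one group-flag context, whereas for a 16×16 TU those two sub-blocks fall into different selections for both codings.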
- The above description is presented to enable a person of ordinary skill in the art to practice the present invention as provided in the context of a particular application and its requirement. Various modifications to the described embodiments will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. In the above detailed description, various specific details are illustrated in order to provide a thorough understanding of the present invention. Nevertheless, it will be understood by those skilled in the art that the present invention may be practiced without some of these specific details.
- Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.
- The invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/368,264 US10298956B2 (en) | 2012-01-03 | 2012-11-22 | Method and apparatus for block-based significance map and significance group flag context selection |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261582725P | 2012-01-03 | 2012-01-03 | |
PCT/CN2012/085034 WO2013102380A1 (en) | 2012-01-03 | 2012-11-22 | Method and apparatus for block-based significance map and significance group flag context selection |
US14/368,264 US10298956B2 (en) | 2012-01-03 | 2012-11-22 | Method and apparatus for block-based significance map and significance group flag context selection |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2012/085034 A-371-Of-International WO2013102380A1 (en) | 2012-01-03 | 2012-11-22 | Method and apparatus for block-based significance map and significance group flag context selection |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/299,907 Division US20170041637A1 (en) | 2012-01-03 | 2016-10-21 | Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150003514A1 true US20150003514A1 (en) | 2015-01-01 |
US10298956B2 US10298956B2 (en) | 2019-05-21 |
Family
ID=48744982
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/368,264 Active 2035-05-27 US10298956B2 (en) | 2012-01-03 | 2012-11-22 | Method and apparatus for block-based significance map and significance group flag context selection |
US15/299,907 Abandoned US20170041637A1 (en) | 2012-01-03 | 2016-10-21 | Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/299,907 Abandoned US20170041637A1 (en) | 2012-01-03 | 2016-10-21 | Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection |
Country Status (7)
Country | Link |
---|---|
US (2) | US10298956B2 (en) |
EP (2) | EP2745512B1 (en) |
CN (2) | CN108600761B (en) |
ES (1) | ES2862124T3 (en) |
HU (1) | HUE053382T2 (en) |
PL (1) | PL3139609T3 (en) |
WO (1) | WO2013102380A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI538487B (en) | 2013-12-05 | 2016-06-11 | 財團法人工業技術研究院 | Method and system of coding prediction for screen video |
FR3023112A1 (en) * | 2014-06-27 | 2016-01-01 | Bcom | METHOD FOR ENCODING A DIGITAL IMAGE, DECODING METHOD, DEVICES AND COMPUTER PROGRAMS |
EP3490253A1 (en) | 2017-11-23 | 2019-05-29 | Thomson Licensing | Encoding and decoding methods and corresponding devices |
CN109831670B (en) * | 2019-02-26 | 2020-04-24 | 北京大学深圳研究生院 | Inverse quantization method, system, equipment and computer readable medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130058418A1 (en) * | 2010-05-12 | 2013-03-07 | Thomson Licensing | Methods and Apparatus for Unified Significance Map Coding |
US20130188684A1 (en) * | 2011-12-21 | 2013-07-25 | Panasonic Corporation | Image coding method, image decoding method, image coding apparatus and image decoding apparatus |
US20130215969A1 (en) * | 2011-12-20 | 2013-08-22 | General Instrument Corporation | Method and apparatus for last coefficient indexing for high efficiency video coding |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6856701B2 (en) | 2001-09-14 | 2005-02-15 | Nokia Corporation | Method and system for context-based adaptive binary arithmetic coding |
CN1874509B (en) | 2001-09-14 | 2014-01-15 | 诺基亚有限公司 | Method and system for context-based adaptive binary arithmetic coding |
US8599925B2 (en) | 2005-08-12 | 2013-12-03 | Microsoft Corporation | Efficient coding and decoding of transform blocks |
CN101389021B (en) * | 2007-09-14 | 2010-12-22 | 华为技术有限公司 | Video encoding/decoding method and apparatus |
BRPI0818444A2 (en) * | 2007-10-12 | 2016-10-11 | Qualcomm Inc | adaptive encoding of video block header information |
US7592937B1 (en) * | 2008-06-02 | 2009-09-22 | Mediatek Inc. | CABAC decoding unit and method |
CN108777790B (en) * | 2010-04-13 | 2021-02-09 | Ge视频压缩有限责任公司 | Apparatus for decoding saliency map and apparatus and method for encoding saliency map |
WO2011129672A2 (en) * | 2010-04-16 | 2011-10-20 | 에스케이텔레콤 주식회사 | Video encoding/decoding apparatus and method |
CN101938657B (en) * | 2010-10-07 | 2012-07-04 | 西安电子科技大学 | Self-adaptively dividing method for code units in high-efficiency video coding |
CA2773990C (en) | 2011-11-19 | 2015-06-30 | Research In Motion Limited | Multi-level significance map scanning |
-
2012
- 2012-11-22 EP EP12864244.4A patent/EP2745512B1/en active Active
- 2012-11-22 US US14/368,264 patent/US10298956B2/en active Active
- 2012-11-22 WO PCT/CN2012/085034 patent/WO2013102380A1/en active Application Filing
- 2012-11-22 CN CN201810329565.2A patent/CN108600761B/en active Active
- 2012-11-22 EP EP16195645.3A patent/EP3139609B1/en active Active
- 2012-11-22 HU HUE16195645A patent/HUE053382T2/en unknown
- 2012-11-22 CN CN201280065480.9A patent/CN104025600B/en active Active
- 2012-11-22 PL PL16195645T patent/PL3139609T3/en unknown
- 2012-11-22 ES ES16195645T patent/ES2862124T3/en active Active
-
2016
- 2016-10-21 US US15/299,907 patent/US20170041637A1/en not_active Abandoned
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130188698A1 (en) * | 2012-01-19 | 2013-07-25 | Qualcomm Incorporated | Coefficient level coding |
US9866829B2 (en) * | 2012-01-22 | 2018-01-09 | Qualcomm Incorporated | Coding of syntax elements that correspond to coefficients of a coefficient block in video coding |
US20130188699A1 (en) * | 2012-01-22 | 2013-07-25 | Qualcomm Incorporated | Coding of coefficients in video coding |
US10743011B2 (en) * | 2013-10-24 | 2020-08-11 | Samsung Electronics Co., Ltd. | Method and apparatus for accelerating inverse transform, and method and apparatus for decoding video stream |
US20150117548A1 (en) * | 2013-10-24 | 2015-04-30 | Samsung Electronics Co., Ltd. | Method and apparatus for accelerating inverse transform, and method and apparatus for decoding video stream |
US10609367B2 (en) | 2016-12-21 | 2020-03-31 | Qualcomm Incorporated | Low-complexity sign prediction for video coding |
US10666937B2 (en) * | 2016-12-21 | 2020-05-26 | Qualcomm Incorporated | Low-complexity sign prediction for video coding |
WO2018166429A1 (en) * | 2017-03-16 | 2018-09-20 | Mediatek Inc. | Method and apparatus of enhanced multiple transforms and non-separable secondary transform for video coding |
CN110419218A (en) * | 2017-03-16 | 2019-11-05 | 联发科技股份有限公司 | The method and apparatus of enhancing multiple transform and inseparable quadratic transformation for coding and decoding video |
US11509934B2 (en) | 2017-03-16 | 2022-11-22 | Hfi Innovation Inc. | Method and apparatus of enhanced multiple transforms and non-separable secondary transform for video coding |
US20190208225A1 (en) * | 2018-01-02 | 2019-07-04 | Qualcomm Incorporated | Sign prediction in video coding |
US11516462B2 (en) * | 2018-03-27 | 2022-11-29 | Kt Corporation | Method and apparatus for processing video signal |
US11962762B2 (en) | 2018-03-27 | 2024-04-16 | Kt Corporation | Method and apparatus for processing video signal by skipping inverse-transform |
Also Published As
Publication number | Publication date |
---|---|
CN104025600B (en) | 2018-05-11 |
EP3139609A1 (en) | 2017-03-08 |
NZ622475A (en) | 2016-02-26 |
EP2745512B1 (en) | 2019-10-23 |
US10298956B2 (en) | 2019-05-21 |
NZ713803A (en) | 2016-02-26 |
EP3139609B1 (en) | 2021-01-06 |
ES2862124T3 (en) | 2021-10-07 |
EP2745512A1 (en) | 2014-06-25 |
WO2013102380A1 (en) | 2013-07-11 |
CN108600761A (en) | 2018-09-28 |
EP2745512A4 (en) | 2015-10-14 |
PL3139609T3 (en) | 2021-09-20 |
US20170041637A1 (en) | 2017-02-09 |
HUE053382T2 (en) | 2021-06-28 |
CN104025600A (en) | 2014-09-03 |
WO2013102380A4 (en) | 2014-03-13 |
CN108600761B (en) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10298956B2 (en) | Method and apparatus for block-based significance map and significance group flag context selection | |
US10893273B2 (en) | Data encoding and decoding | |
EP2839645B1 (en) | Coefficient groups and coefficient coding for coefficient scans | |
TWI781435B (en) | Apparatus and method for decoding and encoding a significance map, and related decoder, data stream and computer readable digital storage medium | |
EP3229473B1 (en) | Methods and devices for coding and decoding the position of the last significant coefficient | |
US20200045316A1 (en) | Method and device for context-adaptive binary arithmetic coding a sequence of binary symbols representing a syntax element related to picture data | |
US9729890B2 (en) | Method and apparatus for unification of significance map context selection | |
CN104041049A (en) | Method and apparatus for unification of coefficient scan of 8x8 transform units in HEVC | |
EP2618573A1 (en) | Methods and devices for context modeling to enable modular processing | |
NZ713803B2 (en) | Method and apparatus for block-based significance map and significance group flag context selection | |
NZ622475B2 (en) | Method and apparatus for block-based significance map and significance group flag context selection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MEDIATEK INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, CHIH-WEI;CHUANG, TZU-DER;CHEN, CHING-YEH;AND OTHERS;REEL/FRAME:033160/0900 Effective date: 20140521 |
|
AS | Assignment |
Owner name: HFI INNOVATION INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK INC.;REEL/FRAME:039609/0864 Effective date: 20160628 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |