US10298956B2 - Method and apparatus for block-based significance map and significance group flag context selection - Google Patents

Method and apparatus for block-based significance map and significance group flag context selection

Info

Publication number
US10298956B2
US10298956B2 (application US14/368,264; US201214368264A)
Authority
US
United States
Prior art keywords
sub
block
context set
context
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/368,264
Other languages
English (en)
Other versions
US20150003514A1 (en
Inventor
Chih-Wei Hsu
Tzu-Der Chuang
Ching-Yeh Chen
Yu-Wen Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HFI Innovation Inc
Original Assignee
HFI Innovation Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HFI Innovation Inc filed Critical HFI Innovation Inc
Priority to US14/368,264 priority Critical patent/US10298956B2/en
Assigned to MEDIATEK INC. reassignment MEDIATEK INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHEN, CHING-YEH, CHUANG, TZU-DER, HSU, CHIH-WEI, HUANG, YU-WEN
Publication of US20150003514A1 publication Critical patent/US20150003514A1/en
Assigned to HFI INNOVATION INC. reassignment HFI INNOVATION INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MEDIATEK INC.
Application granted granted Critical
Publication of US10298956B2 publication Critical patent/US10298956B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N 19/64 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • H04N 19/647 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission using significance based coding, e.g. Embedded Zerotrees of Wavelets [EZW] or Set Partitioning in Hierarchical Trees [SPIHT]
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/129 Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • H04N 19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/18 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a set of transform coefficients
    • H04N 19/46 Embedding additional information in the video signal during the compression process

Definitions

  • The present invention relates to video coding or video processing.
  • In particular, the present invention relates to significance map coding and significance group flag coding.
  • Arithmetic coding is known as an efficient data compression method and is widely used in coding standards, such as JBIG, JPEG2000, H.264/AVC, and High-Efficiency Video Coding (HEVC).
  • HEVC: High-Efficiency Video Coding
  • JM: JVT Test Model
  • HM: HEVC Test Model
  • CABAC: Context-Based Adaptive Binary Arithmetic Coding
  • FIG. 1 illustrates an example of a CABAC encoder 100, which includes three parts: Binarization 110, Context Modeling 120, and Binary Arithmetic Coding (BAC) 130.
  • In the binarization step, each syntax element is uniquely mapped into a binary string (also called bin or bins in this disclosure).
  • In the context modeling step, a probability model is selected for each bin. The corresponding probability model may depend on previously encoded syntax elements, bin indexes, side information, or any combination of the above.
  • After context modeling, a bin value along with its associated context model is provided to the binary arithmetic coding engine, i.e., the BAC 130 block in FIG. 1.
  • The bin value can be coded in one of two coding modes depending on the syntax element and bin index: the regular coding mode and the bypass mode.
  • The bins corresponding to the regular coding mode are referred to as regular bins and the bins corresponding to the bypass coding mode are referred to as bypass bins in this disclosure.
  • In the regular coding mode, the probability of the Most Probable Symbol (MPS) and the probability of the Least Probable Symbol (LPS) for BAC are derived from the associated context model.
  • In the bypass coding mode, the probabilities of the MPS and the LPS are equal.
  • In CABAC, the bypass mode is introduced to speed up the encoding process.
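  • By way of illustration, the following is a minimal Python sketch of this distinction; the class and function names are hypothetical and the probability update is a toy rule, not the actual HEVC CABAC state machine. It shows why regular bins benefit from an adaptive context model while bypass bins are coded with equal probabilities.

    import math

    class ContextModel:
        """Toy adaptive context model tracking the MPS and its probability."""
        def __init__(self, mps=0, p_mps=0.5, step=0.05):
            self.mps = mps        # current Most Probable Symbol (0 or 1)
            self.p_mps = p_mps    # estimated probability of the MPS
            self.step = step      # adaptation rate of this toy model

        def probability_of(self, bin_val):
            """Probability assigned to a given bin value (MPS or LPS)."""
            return self.p_mps if bin_val == self.mps else 1.0 - self.p_mps

        def update(self, bin_val):
            """Adapt the model toward the observed bin value."""
            if bin_val == self.mps:
                self.p_mps = min(0.99, self.p_mps + self.step)
            else:
                self.p_mps = self.p_mps - self.step
                if self.p_mps < 0.5:          # the LPS became more probable: swap symbols
                    self.mps = 1 - self.mps
                    self.p_mps = 1.0 - self.p_mps

    def bin_cost(bin_val, ctx=None):
        """Ideal code length (bits) of one bin: regular mode uses the context model's
        probability and updates it, bypass mode always assumes probability 0.5."""
        p = ctx.probability_of(bin_val) if ctx is not None else 0.5
        if ctx is not None:
            ctx.update(bin_val)
        return -math.log2(p)

    # Example: a skewed bin stream costs fewer bits as regular bins than as bypass bins.
    bins = [0] * 9 + [1]
    ctx = ContextModel()
    regular = sum(bin_cost(b, ctx) for b in bins)
    bypass = sum(bin_cost(b) for b in bins)
    print(f"regular ~ {regular:.2f} bits, bypass = {bypass:.0f} bits")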
  • High-Efficiency Video Coding is a new international video coding standard that is being developed by the Joint Collaborative Team on Video Coding (JCT-VC).
  • JCT-VC: Joint Collaborative Team on Video Coding
  • HEVC is based on a hybrid block-based motion-compensated DCT-like transform coding architecture.
  • The basic unit for compression, termed Coding Unit (CU), is a 2N×2N square block, and each CU can be recursively split into four smaller CUs until a predefined minimum size is reached.
  • Each CU contains one or several variable-block-sized Prediction Unit(s) (PUs) and Transform Unit(s) (TUs).
  • PU: Prediction Unit
  • TU: Transform Unit
  • For each PU, either intra-picture or inter-picture prediction is selected.
  • Each TU is processed by a spatial block transformation and the transform coefficients for the TU are then quantized.
  • The transform coefficients are coded TU by TU.
  • Syntax elements last_significant_coeff_x and last_significant_coeff_y are transmitted to indicate the horizontal and vertical positions of the last non-zero coefficient, respectively, according to a selected scanning order.
  • A TU is divided into multiple subsets for TUs having a size larger than 4×4.
  • For an 8×8 TU, the 64 coefficients are divided into 4 subsets according to the diagonal scanning order through the entire 8×8 TU, as shown in FIG. 2.
  • Scanning through the transform coefficients converts the two-dimensional data into one-dimensional data.
  • Each subset contains 16 consecutive coefficients of the diagonally scanned coefficients.
  • The TUs are divided into 4×4 sub-blocks.
  • Each sub-block corresponds to a coefficient subset.
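  • As an illustration, the following Python sketch (an assumed up-right diagonal order; the exact HM scan pattern may differ in detail) shows how the 64 coefficients of an 8×8 TU can be ordered along diagonals and then grouped into 4 subsets of 16 consecutive coefficients.

    def diagonal_scan(width, height):
        """Return (x, y) coefficient positions in up-right diagonal order."""
        order = []
        for d in range(width + height - 1):      # d = x + y indexes one diagonal
            y = min(d, height - 1)
            x = d - y
            while y >= 0 and x < width:          # walk up and to the right along the diagonal
                order.append((x, y))
                x += 1
                y -= 1
        return order

    scan = diagonal_scan(8, 8)
    subsets = [scan[i:i + 16] for i in range(0, len(scan), 16)]
    assert len(scan) == 64 and len(subsets) == 4  # 4 subsets of 16 coefficients each
    print(subsets[0])                             # the first 16 positions in scan order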
  • The significance map, which is represented by significant_coeff_flag[x,y], is coded first.
  • Variable x is the horizontal position of the coefficient within the sub-block and the value of x is from 0 to (sub-block width - 1).
  • Variable y is the vertical position of the coefficient within the sub-block and the value of y is from 0 to (sub-block height - 1).
  • The flag significant_coeff_flag[x,y] indicates whether the corresponding coefficient of the TU is zero or non-zero.
  • In parts of this disclosure, the index [x,y] is omitted from significant_coeff_flag[x,y].
  • The level and sign of each non-zero coefficient are represented by coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, coeff_abs_level_minus3, and coeff_sign_flag.
  • One significant_coeffgroup_flag is coded for each sub-block prior to the coding of the significance, level, and sign syntax of the sub-block (i.e., significant_coeff_flag, coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag, coeff_abs_level_minus3, and coeff_sign_flag). If significant_coeffgroup_flag is equal to 0, it indicates that the entire 4×4 sub-block is zero. Therefore, there is no need for any additional information to represent this sub-block, and the coding of the significance map, level, and sign of the sub-block can be skipped.
  • If significant_coeffgroup_flag is equal to 1, it indicates that at least one coefficient in the 4×4 sub-block is non-zero. The level and sign of each non-zero coefficient in the sub-block will be coded after the significant_coeffgroup_flag. The value of significant_coeffgroup_flag is inferred as 1 for the sub-block containing the DC term (i.e., the transform coefficient with the lowest spatial frequency).
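  • The following simplified Python sketch (hypothetical helper names, encoder side only, ignoring last-coefficient handling) illustrates this sub-block coding order: the group flag is coded first; when it is 0 the entire 4×4 sub-block is zero and nothing else is coded, otherwise the significance map and the level/sign syntax of the non-zero coefficients follow.

    def encode_sub_block(coeffs, contains_dc, encode_flag, encode_level_and_sign):
        """coeffs: the 16 quantized coefficients of one 4x4 sub-block in scan order."""
        group_flag = 1 if any(c != 0 for c in coeffs) else 0
        if contains_dc:
            # significant_coeffgroup_flag is inferred as 1 for the DC sub-block (not coded).
            group_flag = 1
        else:
            encode_flag("significant_coeffgroup_flag", group_flag)
        if group_flag == 0:
            return                                # the entire sub-block is zero, nothing more to code
        for c in coeffs:
            encode_flag("significant_coeff_flag", 1 if c != 0 else 0)
        for c in coeffs:
            if c != 0:
                # coeff_abs_level_greater1_flag, coeff_abs_level_greater2_flag,
                # coeff_abs_level_minus3, and coeff_sign_flag would be coded here.
                encode_level_and_sign(c)

    # Example usage with stub callbacks that simply print the coded syntax elements.
    encode_sub_block([5, 0, -1] + [0] * 13, contains_dc=True,
                     encode_flag=lambda name, value: print(name, value),
                     encode_level_and_sign=lambda c: print("level/sign for", c))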
  • The significant_coeff_flag is coded in the regular CABAC mode with context modeling.
  • Different context selection methods are used for different TU sizes. For TUs with a size of 4×4 or 8×8, the context selection is based on the position of the coefficient within the TU.
  • FIG. 3 shows the position-based context selection maps for a 4×4 TU.
  • FIG. 4 shows the position-based context selection map for an 8×8 TU as adopted in HM-5.0.
  • Significance map 310 is used for the luma component.
  • Significance map 320 is used for the chroma component, where each number corresponds to a context selection.
  • Luma and chroma 8×8 TUs share the same significance map.
  • FIGS. 5A and 5B illustrate examples of the neighboring-information-dependent context selection for luma and chroma components respectively.
  • One context is used for the DC coefficient.
  • For larger TUs, the context selection depends on the neighboring coefficients. For example, a group of neighboring coefficients including I, H, F, E, and B around a current coefficient X is used for the context selection. If none of the neighboring coefficients is non-zero, context #0 is used for coefficient X. If one or two of the neighboring coefficients are non-zero, context #1 is used for X. Otherwise, context #2 is used for coefficient X.
  • The non-DC coefficients of the entire TU are divided into two regions (i.e., region-1 and region-2) for the luma component and one region (region-2) for the chroma component.
  • Different regions will use different context sets.
  • Each context set includes three contexts (i.e., contexts #0, #1, and #2).
  • The area of region-1 for the luma component can be mathematically specified by the x-position and y-position of a coefficient X within the TU. As shown in FIG. 5A, if the sum of the x-position and y-position of coefficient X is smaller than a threshold value and greater than 0, the region-1 context set is selected for coefficient X.
  • Otherwise, the region-2 context set is selected.
  • The threshold value can be determined based on the width and the height of the TU. For example, the threshold can be set to a quarter of the maximum of the TU width and the TU height. Accordingly, in the case of TU sizes 32×32, 32×8, or 8×32, the threshold value can be set to 8.
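  • As a sketch, the HM-5.0-style coefficient-position-based selection described above can be summarized in Python as follows (the function names are hypothetical and the details are simplified): the region-1/region-2 context set is chosen from the coefficient position, and the context within the set is chosen from the number of non-zero neighbors.

    def luma_context_set(x, y, tu_width, tu_height):
        """Return 'DC', 'region-1', or 'region-2' for coefficient position (x, y)."""
        if x == 0 and y == 0:
            return "DC"                            # one dedicated context for the DC term
        threshold = max(tu_width, tu_height) // 4  # e.g., 8 for 32x32, 32x8, and 8x32 TUs
        return "region-1" if 0 < x + y < threshold else "region-2"

    def context_in_set(num_nonzero_neighbors):
        """Context index derived from the non-zero count of neighbors I, H, F, E, and B."""
        if num_nonzero_neighbors == 0:
            return 0                               # context #0
        if num_nonzero_neighbors <= 2:
            return 1                               # context #1
        return 2                                   # context #2

    # Example: coefficient (2, 3) of a 16x16 TU with one non-zero neighbor.
    print(luma_context_set(2, 3, 16, 16), context_in_set(1))   # region-2, context #1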
  • FIG. 6A illustrates an example where one 4×4 sub-block 610 (the center of the sub-block is indicated by a dot) for 16×16 TU 621, 16×4 TU 622, and 4×16 TU 623 will use two context sets for significant_coeff_flag coding.
  • FIG. 6B illustrates an example where three 4×4 sub-blocks 631 to 633 for 32×32 TU 641, 32×8 TU 642, and 8×32 TU 643 will use two context sets for significant_coeff_flag coding.
  • For sub-blocks 632 and 633, the sum of the x-position and y-position of coefficient X has to be calculated in order to determine whether the coefficient X is in region-1 or region-2.
  • For the sub-block containing the DC term (i.e., sub-block 631), the position of the DC term is known and all other coefficients in the sub-block belong to region-1.
  • Therefore, significant_coeffgroup_flag can be inferred and there is no need to calculate the sum of the x-position and y-position.
  • For the other sub-blocks, there is no need to calculate the sum of the x-position and y-position of coefficient X, since all coefficients of those sub-blocks are in region-2 and one context set is used for significant_coeff_flag coding.
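  • The following short Python check (illustrative only, reusing the position-based rule sketched above) demonstrates the issue: with per-coefficient classification, a single 4×4 sub-block of a 32×32 TU can contain both region-1 and region-2 coefficients, so the region must be evaluated coefficient by coefficient inside that sub-block.

    def coefficient_region(x, y, tu_width, tu_height):
        """Per-coefficient region classification in the HM-5.0-style selection."""
        if x == 0 and y == 0:
            return "DC"
        threshold = max(tu_width, tu_height) // 4    # 8 for a 32x32 TU
        return "region-1" if x + y < threshold else "region-2"

    def regions_in_sub_block(x_blk, y_blk, tu_width, tu_height, sub=4):
        """Set of regions occurring inside the 4x4 sub-block at block index (x_blk, y_blk)."""
        return {coefficient_region(x_blk * sub + i, y_blk * sub + j, tu_width, tu_height)
                for i in range(sub) for j in range(sub)}

    # Sub-block (1, 0) of a 32x32 TU mixes region-1 and region-2 coefficients.
    print(regions_in_sub_block(1, 0, 32, 32))        # {'region-1', 'region-2'}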
  • According to the present invention, the TU is divided into one or more sub-blocks and at least two context sets are used for the TU.
  • Non-DC transform coefficients in each sub-block are coded based on the same context, context set, or context formation.
  • The context, context set, or context formation for each sub-block can be determined based on the sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, the video component type, the TU width, the TU height, or any combination of the above.
  • The sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, or a combination of them can be compared with a threshold to determine the context, context set, or context formation for each sub-block.
  • The threshold is related to the TU width, the TU height, or a combination of them.
  • For example, the threshold can be set to the maximum of the TU width and the TU height divided by 16.
  • In one embodiment, the sum of the horizontal sub-block index and the vertical sub-block index is used to classify each sub-block into a class, and the context, context set, or context formation is then determined according to the class.
  • The sum can be compared with a threshold to classify each sub-block, and the threshold is derived based on the maximum of the TU width and the TU height divided by 16.
  • The sub-block size can be 4×4, 4×8, 8×4, 8×8, 16×16, or 32×32.
  • A method and apparatus for significance group flag coding are also disclosed.
  • The TUs are divided into one or more sub-blocks and the significance group flags are coded based on the sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, the video component type, the TU width, the TU height, the context selection, the context set selection, the context formation selection, or any combination of the above.
  • The context selection, the context set selection, and the context formation selection are associated with the significance map coding of the sub-block.
  • In this case, the significance group flag coding will also share the same second context selection, second context set selection, or second context formation selection used for the significance map coding.
  • FIG. 1 illustrates an exemplary architecture of a CABAC encoding system with a bypass mode.
  • FIG. 2 illustrates an exemplary diagonal scanning order for the transform coefficients of an 8×8 TU.
  • FIG. 3 illustrates an example of context selection maps for the 4×4 TU of luma and chroma components used by HEVC Test Model Version 5.0.
  • FIG. 4 illustrates an example of a context selection map for the 8×8 TU of luma and chroma components used by HEVC Test Model Version 5.0.
  • FIG. 5A illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of the luma component used by HEVC Test Model Version 5.0.
  • FIG. 5B illustrates an example of neighboring-information-dependent context selection for the 16×16 TU of the chroma component used by HEVC Test Model Version 5.0.
  • FIG. 6A illustrates an example of context selection for the 16×16 TU of the luma component used by HEVC Test Model Version 5.0.
  • FIG. 6B illustrates an example of context selection for the 32×32 TU of the luma component used by HEVC Test Model Version 5.0.
  • FIG. 7A illustrates an example of block-based context selection for the 16×16 TU of the luma component according to an embodiment of the present invention.
  • FIG. 7B illustrates an example of block-based context selection for the 32×32 TU of the luma component according to an embodiment of the present invention.
  • Embodiments of the present invention use block-based context selection to simplify and unify the context set, context selection, and context formation for significant_coeff_flag coding.
  • The region-1/region-2 context selection depends on the x-block-index and y-block-index of the sub-block instead of the x-position and y-position of the coefficient X.
  • The x-block-index and y-block-index refer to the horizontal sub-block index and the vertical sub-block index, respectively.
  • The value of the x-block-index is from 0 to (number of horizontal sub-blocks - 1).
  • The value of the y-block-index is from 0 to (number of vertical sub-blocks - 1).
  • Accordingly, none of the sub-blocks will cross the boundary between region-1 and region-2.
  • The region-1/region-2 determination can be based on the sum of the x-block-index and y-block-index of each sub-block.
  • The sum can be compared with a threshold.
  • The threshold value can either depend on the TU width and/or height, or can be a fixed value.
  • FIG. 7A and FIG. 7B illustrate examples of block-based context selection according to an embodiment of the present invention.
  • In this example, the threshold value is set to the maximum of the TU width and TU height divided by 16. Therefore, the threshold value is 1 for 16×16 TU 721, 16×4 TU 722, and 4×16 TU 723, and the threshold value is 2 for 32×32 TU 741, 32×8 TU 742, and 8×32 TU 743.
  • If the sum of the x-block-index and y-block-index of a sub-block is smaller than the threshold, the region-1 context set is used for the sub-block. Otherwise, the region-2 context set is used for the sub-block.
  • As a result, one sub-block 710 in FIG. 7A and three sub-blocks 731 through 733 in FIG. 7B use the region-1 context set and the other sub-blocks use the region-2 context set.
  • The value of significant_coeffgroup_flag can be inferred as 1 for region-1 sub-blocks for unification.
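  • A Python sketch of this block-based selection rule follows (the function names are hypothetical and only the threshold rule described above is modeled); it reproduces the counts of the FIG. 7A/7B example: one region-1 sub-block for the 16×16, 16×4, and 4×16 TUs, and three region-1 sub-blocks for the 32×32, 32×8, and 8×32 TUs.

    def sub_block_context_set(x_blk, y_blk, tu_width, tu_height):
        """Select the significance-map context set once per 4x4 sub-block from its block indices."""
        threshold = max(tu_width, tu_height) // 16   # 1 for 16x16/16x4/4x16, 2 for 32x32/32x8/8x32
        return "region-1" if x_blk + y_blk < threshold else "region-2"

    def count_region1_sub_blocks(tu_width, tu_height, sub=4):
        """Count the sub-blocks of a TU that are assigned the region-1 context set."""
        return sum(sub_block_context_set(x, y, tu_width, tu_height) == "region-1"
                   for x in range(tu_width // sub) for y in range(tu_height // sub))

    sizes = [(16, 16), (16, 4), (4, 16), (32, 32), (32, 8), (8, 32)]
    print([count_region1_sub_blocks(w, h) for (w, h) in sizes])   # [1, 1, 1, 3, 3, 3]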
  • While the 4×4 sub-block is used as an example of the block-based context selection, other sub-block sizes, such as 4×8, 8×4, 8×8, 16×16, and 32×32, may also be used.
  • While block-based significance map coding is used for context selection, the block-based significance map coding may also be used for context set selection or context formation selection.
  • While block-based significance map coding selects the context, context set, or context formation based on the sub-block index in scan order, the horizontal sub-block index (i.e., x-block-index), and/or the vertical sub-block index (i.e., y-block-index), the selection may also be based on the video component type and/or the TU width/height.
  • The video component type may correspond to the luma component (Y) or the chroma component (Cr or Cb).
  • The video component type may also correspond to other video formats.
  • Furthermore, the selection may depend on a combination of the sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, the video component type, and the TU width/height.
  • Similarly, the block-based significance group flag coding may be based on the sub-block index in scan order, the horizontal sub-block index (i.e., x-block-index), and/or the vertical sub-block index (i.e., y-block-index).
  • The block-based significance group flag coding may also be based on the video component type and/or the TU width/height.
  • The block-based significance group flag coding may also be based on the context, context set, or context formation selection associated with the significance map coding.
  • The block-based significance group flag coding may also depend on a combination of the sub-block index in scan order, the horizontal sub-block index, the vertical sub-block index, the video component type, the TU width/height, and the context, context set, or context formation selection associated with the significance map coding.
  • Embodiments of the present invention as described above may be implemented in various hardware, software codes, or a combination of both.
  • For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein.
  • An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein.
  • DSP: Digital Signal Processor
  • The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention.
  • The software code or firmware code may be developed in different programming languages and different formats or styles.
  • The software code may also be compiled for different target platforms.
  • Different code formats, styles, and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
US14/368,264 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection Active 2035-05-27 US10298956B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/368,264 US10298956B2 (en) 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261582725P 2012-01-03 2012-01-03
PCT/CN2012/085034 WO2013102380A1 (en) 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection
US14/368,264 US10298956B2 (en) 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/085034 A-371-Of-International WO2013102380A1 (en) 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/299,907 Division US20170041637A1 (en) 2012-01-03 2016-10-21 Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection

Publications (2)

Publication Number Publication Date
US20150003514A1 US20150003514A1 (en) 2015-01-01
US10298956B2 true US10298956B2 (en) 2019-05-21

Family

ID=48744982

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/368,264 Active 2035-05-27 US10298956B2 (en) 2012-01-03 2012-11-22 Method and apparatus for block-based significance map and significance group flag context selection
US15/299,907 Abandoned US20170041637A1 (en) 2012-01-03 2016-10-21 Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/299,907 Abandoned US20170041637A1 (en) 2012-01-03 2016-10-21 Method and Apparatus for Block-based Significance Map and Significance Group Flag Context Selection

Country Status (7)

Country Link
US (2) US10298956B2 (pl)
EP (2) EP3139609B1 (pl)
CN (2) CN104025600B (pl)
ES (1) ES2862124T3 (pl)
HU (1) HUE053382T2 (pl)
PL (1) PL3139609T3 (pl)
WO (1) WO2013102380A1 (pl)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130188698A1 (en) * 2012-01-19 2013-07-25 Qualcomm Incorporated Coefficient level coding
US9866829B2 (en) * 2012-01-22 2018-01-09 Qualcomm Incorporated Coding of syntax elements that correspond to coefficients of a coefficient block in video coding
KR102250088B1 (ko) * 2013-10-24 2021-05-10 Samsung Electronics Co., Ltd. Method and apparatus for decoding a video stream
TWI538487B (zh) 2013-12-05 2016-06-11 Industrial Technology Research Institute Method and system for predictive coding of screen video
FR3023112A1 (fr) * 2014-06-27 2016-01-01 Bcom Method for coding a digital image, decoding method, and associated devices and computer programs
US20180176582A1 (en) * 2016-12-21 2018-06-21 Qualcomm Incorporated Low-complexity sign prediction for video coding
CN110419218B (zh) * 2017-03-16 2021-02-26 MediaTek Inc. Method and apparatus for encoding or decoding video data
EP3490253A1 (en) 2017-11-23 2019-05-29 Thomson Licensing Encoding and decoding methods and corresponding devices
US20190208225A1 (en) * 2018-01-02 2019-07-04 Qualcomm Incorporated Sign prediction in video coding
CN112166614B (zh) * 2018-03-27 2023-07-14 KT Corporation Method and device for processing a video signal
CN109831670B (zh) * 2019-02-26 2020-04-24 Peking University Shenzhen Graduate School Inverse quantization method, system, device, and computer-readable medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6856701B2 (en) 2001-09-14 2005-02-15 Nokia Corporation Method and system for context-based adaptive binary arithmetic coding
CN1874509A (zh) 2001-09-14 2006-12-06 Nokia Corporation Method and system for context-based adaptive binary arithmetic coding
CN101243611A (zh) 2005-08-12 2008-08-13 Microsoft Corporation Efficient coding and decoding of transform blocks
CN101389021A (zh) 2007-09-14 2009-03-18 Huawei Technologies Co., Ltd. Video encoding and decoding method and apparatus
CN101938657A (zh) 2010-10-07 2011-01-05 Xidian University Adaptive coding unit partitioning method for high efficiency video coding
WO2011128303A2 (en) 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Coding of significance maps and transform coefficient blocks
WO2011129672A2 (ko) 2010-04-16 2011-10-20 SK Telecom Co., Ltd. Video encoding/decoding apparatus and method
WO2011142817A1 (en) 2010-05-12 2011-11-17 Thomson Licensing Methods and apparatus for unified significance map coding
US20130128985A1 (en) 2011-11-19 2013-05-23 Research In Motion Limited Multi-level significance map scanning
US20130188684A1 (en) * 2011-12-21 2013-07-25 Panasonic Corporation Image coding method, image decoding method, image coding apparatus and image decoding apparatus
US20130215969A1 (en) * 2011-12-20 2013-08-22 General Instrument Corporation Method and apparatus for last coefficient indexing for high efficiency video coding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0818444A2 (pt) * 2007-10-12 2016-10-11 Qualcomm Inc Adaptive coding of video block header information
US7592937B1 (en) * 2008-06-02 2009-09-22 Mediatek Inc. CABAC decoding unit and method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6856701B2 (en) 2001-09-14 2005-02-15 Nokia Corporation Method and system for context-based adaptive binary arithmetic coding
CN1874509A (zh) 2001-09-14 2006-12-06 Nokia Corporation Method and system for context-based adaptive binary arithmetic coding
CN101243611A (zh) 2005-08-12 2008-08-13 Microsoft Corporation Efficient coding and decoding of transform blocks
US8599925B2 (en) 2005-08-12 2013-12-03 Microsoft Corporation Efficient coding and decoding of transform blocks
CN101389021A (zh) 2007-09-14 2009-03-18 Huawei Technologies Co., Ltd. Video encoding and decoding method and apparatus
WO2011128303A2 (en) 2010-04-13 2011-10-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Coding of significance maps and transform coefficient blocks
WO2011129672A2 (ko) 2010-04-16 2011-10-20 SK Telecom Co., Ltd. Video encoding/decoding apparatus and method
WO2011142817A1 (en) 2010-05-12 2011-11-17 Thomson Licensing Methods and apparatus for unified significance map coding
US20130058418A1 (en) * 2010-05-12 2013-03-07 Thomson Licensing Methods and Apparatus for Unified Significance Map Coding
CN101938657A (zh) 2010-10-07 2011-01-05 Xidian University Adaptive coding unit partitioning method for high efficiency video coding
US20130128985A1 (en) 2011-11-19 2013-05-23 Research In Motion Limited Multi-level significance map scanning
US20130215969A1 (en) * 2011-12-20 2013-08-22 General Instrument Corporation Method and apparatus for last coefficient indexing for high efficiency video coding
US20130188684A1 (en) * 2011-12-21 2013-07-25 Panasonic Corporation Image coding method, image decoding method, image coding apparatus and image decoding apparatus
EP2797321A1 (en) 2011-12-21 2014-10-29 Panasonic Intellectual Property Corporation of America Method for encoding image, image-encoding device, method for decoding image, image-decoding device, and image encoding/decoding device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chuang, T.D., et al.; "Block-Based Significance Map Context Selection;" Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC1/SC29/WG11; Feb. 2012; pp. 1-5.
Ji, T., et al.; "Sub-Block Based Significance Map Region Classification;" Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC1/SC29/WG11; Feb. 2012; pp. 1-10.
Terada, K., et al.; "Simplification of Context Selection for Significant_Coeff_Flag;" Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP3 and ISO/IEC JTC1/SC29/WG11; Feb. 2012; pp. 1-9.

Also Published As

Publication number Publication date
EP3139609B1 (en) 2021-01-06
WO2013102380A1 (en) 2013-07-11
CN108600761B (zh) 2020-05-08
NZ713803A (en) 2016-02-26
CN104025600A (zh) 2014-09-03
EP2745512B1 (en) 2019-10-23
EP2745512A4 (en) 2015-10-14
EP3139609A1 (en) 2017-03-08
ES2862124T3 (es) 2021-10-07
HUE053382T2 (hu) 2021-06-28
CN104025600B (zh) 2018-05-11
US20170041637A1 (en) 2017-02-09
CN108600761A (zh) 2018-09-28
WO2013102380A4 (en) 2014-03-13
PL3139609T3 (pl) 2021-09-20
EP2745512A1 (en) 2014-06-25
US20150003514A1 (en) 2015-01-01
NZ622475A (en) 2016-02-26

Similar Documents

Publication Publication Date Title
US10298956B2 (en) Method and apparatus for block-based significance map and significance group flag context selection
US10893273B2 (en) Data encoding and decoding
EP2839645B1 (en) Coefficient groups and coefficient coding for coefficient scans
CN113632472A (zh) 用于视频译码的扩展的多变换选择
EP3306924A1 (en) Method and device for context-adaptive binary arithmetic coding a sequence of binary symbols representing a syntax element related to picture data
US9729890B2 (en) Method and apparatus for unification of significance map context selection
CN104041049A (zh) Hevc中8×8变换单元的联合系数扫描方法及其装置
NZ713803B2 (en) Method and apparatus for block-based significance map and significance group flag context selection
NZ622475B2 (en) Method and apparatus for block-based significance map and significance group flag context selection

Legal Events

Date Code Title Description
AS Assignment

Owner name: MEDIATEK INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, CHIH-WEI;CHUANG, TZU-DER;CHEN, CHING-YEH;AND OTHERS;REEL/FRAME:033160/0900

Effective date: 20140521

AS Assignment

Owner name: HFI INNOVATION INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK INC.;REEL/FRAME:039609/0864

Effective date: 20160628

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4