US20200374561A1 - Luma and chroma decoding using a common predictor - Google Patents
- Publication number: US20200374561A1 (application US 16/896,596)
- Authority: US (United States)
- Prior art keywords
- signal data
- video signal
- decoding
- video
- mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/186—Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/51—Motion estimation or motion compensation
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
- H04N19/85—Using pre-processing or post-processing specially adapted for video compression
Definitions
- the present invention relates generally to video encoders and decoders and, more particularly, to methods and apparatus for video encoding and decoding.
- the 4:4:4 format of the International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 standard (hereinafter the “H.264 standard”) only codes one of three channels as luma, with the other two channels being coded as chroma using less efficient tools.
- coding two out of the three input components with the less effective chroma coding algorithm results in the use of more bits in those two channels. This particular problem is more noticeable in intra frames.
- the H.264 standard running in the Intra-Only mode is less efficient than JPEG2k for overall compression quality at 40 dB (PSNR) and above.
- a video encoder for encoding video signal data for an image block.
- the video encoder includes an encoder for encoding all color components of the video signal data using a common predictor.
- a method for encoding video signal data for an image block includes encoding all color components of the video signal data using a common predictor.
- a video decoder for decoding video signal data for an image block.
- the video decoder includes a decoder for decoding all color components of the video signal data using a common predictor.
- a method for decoding video signal data for an image block includes decoding all color components of the video signal data using a common predictor.
- FIG. 1 is a block diagram illustrating an exemplary video encoding apparatus to which the present principles may be applied;
- FIG. 2 is a block diagram illustrating an exemplary video decoding apparatus to which the present principles may be applied;
- FIG. 3 is a flow diagram illustrating an exemplary video encoding process with a pre-encoding, color transform block, in accordance with the present principles
- FIG. 4 is a flow diagram illustrating an exemplary video decoding process with a post-decoding, inverse color transform block, in accordance with the present principles
- FIG. 5 is a block diagram illustrating a simplified model of residual color transform (RCT).
- FIGS. 6A and 6B are plots of average PSNR versus bit rate for ATV intra-only in accordance with the present principles;
- FIGS. 7A and 7B are plots of average PSNR versus bit rate for CT intra-only in accordance with the present principles;
- FIGS. 8A and 8B are plots of average PSNR versus bit rate for DT intra-only in accordance with the present principles;
- FIGS. 9A and 9B are plots of average PSNR versus bit rate for MIR_HD intra-only in accordance with the present principles;
- FIGS. 10A and 10B are plots of average PSNR versus bit rate for RT intra-only in accordance with the present principles;
- FIGS. 11A and 11B are plots of average PSNR versus bit rate for STB_HD intra-only in accordance with the present principles;
- FIG. 12 is a table illustrating H.264 sequence parameter syntax in accordance with the present principles.
- FIGS. 13A, 13B, 13C, and 13D comprise a table illustrating H.264 residual data syntax in accordance with the present principles
- FIG. 14 is a flow diagram illustrating an exemplary video encoding process with a pre-encoding, color transform block, in accordance with the present principles
- FIG. 15 is a flow diagram illustrating an exemplary video decoding process with a post-decoding, inverse color transform step block, in accordance with the present principles.
- FIGS. 16A and 16B comprise a table illustrating H.264 macroblock prediction syntax in accordance with the present principles.
- the present invention is directed to methods and apparatus for video encoding and decoding video signal data. It is to be appreciated that while the present invention is primarily described with respect to video signal data sampled using the 4:4:4 format of the International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 standard, the present invention may also be applied to video signal data sampled using other formats (e.g., the 4:2:2 and/or 4:2:0 format) of the H.264 standard as well as other video compression standards while maintaining the scope of the present invention.
- a luma coding algorithm is used to code all three component channels of, e.g., 4:4:4 content.
- Advantages of this embodiment include an improvement in the overall coding performance for compressing 4:4:4 content with respect to the prior art.
- color transformation is performed as a pre-processing step.
- a Residual Color Transform (RCT) is not performed inside the compression loop.
- Advantages of this embodiment include the providing of consistent encoder/decoder architecture among all color formats.
- the same motion/spatial prediction mode is used for all three components.
- Advantages of this embodiment include reduced codec complexity and backwards compatibility.
- a set (or subset) of three (3) restricted spatial predictors may be utilized for the three components.
- Advantages of this embodiment include an improvement in the overall coding performance for compressing 4:4:4 content with respect to the prior art.
- a luma coding algorithm is advantageously used to code all three component channels, color transformation is performed as a pre-processing step, and a single predictor is used for all three component channels.
- a luma coding algorithm is advantageously used to code all three component channels, color transformation is performed as a pre-processing step, and a set (or subset) of three (3) restricted spatial predictors may be utilized for the three component channels.
- other combinations of the various embodiments may also be implemented given the teachings of the present principles provided herein, while maintaining the scope of the present invention.
- the terms “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
- any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
- any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
- the invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
- the video encoding apparatus 199 includes a video encoder 100 and a pre-encoding color transform module 105 .
- the pre-encoding color transform module 105 is for performing color pre-processing of video signals prior to inputting the same to the video encoder 100 .
- the color pre-processing performed by the pre-encoding, color transform module 105 is further described herein below. It is to be appreciated that the pre-encoding, color transform module 105 may be omitted in some embodiments.
- An input of the pre-encoding color transform module 105 and an input of the video encoder 100 are available as inputs of the video encoding apparatus 199 .
- An output of the pre-encoding, color transform module 105 is connected in signal communication with the input of the video encoder 100 .
- the input of the video encoder 100 is connected in signal communication with a non-inverting input of a summing junction 110 .
- the output of the summing junction 110 is connected in signal communication with a transformer/quantizer 120 .
- the output of the transformer/quantizer 120 is connected in signal communication with an entropy coder 140 .
- An output of the entropy coder 140 is available as an output of the video encoder 100 and also as an output of the video encoding apparatus 199 .
- the output of the transformer/quantizer 120 is further connected in signal communication with an inverse transformer/quantizer 150 .
- An output of the inverse transformer/quantizer 150 is connected in signal communication with an input of a deblock filter 160 .
- An output of the deblock filter 160 is connected in signal communication with reference picture stores 170 .
- a first output of the reference picture stores 170 is connected in signal communication with a first input of a motion and spatial prediction estimator 180 .
- the input to the video encoder 100 is further connected in signal communication with a second input of the motion and spatial prediction estimator 180 .
- the output of the motion and spatial prediction estimator 180 is connected in signal communication with a first input of a motion and spatial prediction compensator 190 .
- a second output of the reference picture stores 170 is connected in signal communication with a second input of the motion and spatial compensator 190 .
- the output of the motion and spatial compensator 190 is connected in signal communication with an inverting input of the summing junction 110 .
- the video decoding apparatus 299 includes a video decoder 200 and a post-decoder, inverse color transform module 293 .
- An input of the video decoder 200 is available as an input of the video decoding apparatus 299 .
- the input to the video decoder 200 is connected in signal communication with an input of the entropy decoder 210 .
- a first output of the entropy decoder 210 is connected in signal communication with an input of an inverse quantizer/transformer 220 .
- An output of the inverse quantizer/transformer 220 is connected in signal communication with a first input of a summing junction 240 .
- the output of the summing junction 240 is connected in signal communication with a deblock filter 290 .
- An output of the deblock filter 290 is connected in signal communication with reference picture stores 250 .
- the reference picture store 250 is connected in signal communication with a first input of a motion and spatial prediction compensator 260 .
- An output of the motion spatial prediction compensator 260 is connected in signal communication with a second input of the summing junction 240 .
- a second output of the entropy decoder 210 is connected in signal communication with a second input of the motion compensator 260 .
- the output of the deblock filter 290 is available as an output of the video decoder 200 and also as an output of the video decoding apparatus 299 .
- an output of the post-decoding, inverse color transform module 293 may be available as an output of the video decoding apparatus 299 .
- the output of the video decoder 200 may be connected in signal communication with an input of the post-decoding, inverse color transform module 293 , which is a post-processing module with respect to the video decoder 200 .
- An output of the post-decoding, inverse color transform module 293 provides a post-processed, inverse color transformed signal with respect to the output of the video decoder 200 . It is to be appreciated that use of the post-decoding, inverse color transform module 293 is optional.
- a first described embodiment is a combined embodiment in which the luma coding algorithm is used for all color components, the same spatial prediction mode is used for all color components, and the Residual Color Transform (RCT) is omitted from inside the compression loop. Test results for this combined embodiment are also provided. Subsequently, a second combined embodiment is described wherein the luma coding algorithm is used for all color components, a set (or subset) of restricted spatial predictors is used for all color components (instead of a single spatial prediction mode), and the Residual Color Transform (RCT) is omitted from inside the compression loop.
- a difference between the first and second combined embodiments is the use of a single spatial prediction mode for all color components in the first combined embodiment versus the use of a set (or subset) of restricted spatial predictors for all color components in the second combined embodiment.
- the embodiments described herein may be implemented as stand-alone embodiments or may be combined in any manner, as readily appreciated by one of ordinary skill in this and related arts.
- only a single spatial prediction mode is used, without combination with other embodiments such as the omission of RCT from the compression loop. It is to be appreciated that given the teachings of the present principles provided herein, these and other variations, implementations, and combinations of the embodiments of the present invention will be readily ascertainable by one of ordinary skill in this and related arts, while maintaining the scope of the present invention.
- turning to FIG. 3, an exemplary video encoding process and a pre-encoding, color transform block are indicated generally by the reference numerals 300 and 301, respectively.
- the pre-encoding, color transform block 301 includes blocks 306 , 308 , and 310 . Moreover, it is to be appreciated that the pre-encoding, color transform block 301 is optional and, thus, may be omitted in some embodiments of the present invention.
- the pre-encoding, color transform block 301 includes a loop limit block 306 that begins a loop for each block in an image, and passes control to a function block 308 .
- the function block 308 performs color pre-processing of the video signal data of the current image block, and passes control to a loop limit block 310 .
- the loop limit block 310 ends the loop.
- the loop limit block 310 passes control to a loop limit block 312 , the latter being included in the video encoding process 300 .
- the loop limit block 312 begins a loop for each block in the image, and passes control to a function block 315 .
- the function block 315 forms a motion compensated or spatial prediction of the current image block using a common predictor for each color component of the current image block, and passes control to a function block 320 .
- the function block 320 subtracts the motion compensated or spatial prediction from the current image block to form a prediction residual, and passes control to a function block 330 .
- the function block 330 transforms and quantizes the prediction residual, and passes control to a function block 335 .
- the function block 335 inverse quantizes and inverse transforms the quantized prediction residual to form a coded prediction residual, and passes control to a function block 345.
- the function block 345 adds the coded residual to the prediction to form a coded picture block, and passes control to an end loop block 350 .
- the end loop block 350 ends the loop and passes control to an end block 355 .
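- The per-block encoding steps of FIG. 3 may be sketched as follows. This is an illustrative simplification with hypothetical helpers, not the reference implementation: the spatial transform is omitted and quantization is modeled as scalar rounding with step q, but it shows the common predictor being applied to all three color components at once.

```python
import numpy as np

def encode_block(block, predict, q=8):
    """Sketch of the per-block encoding loop of FIG. 3: the same (common)
    predictor is applied to every color component (axis 2). Illustrative
    only: the spatial transform is omitted and quantization is modeled
    as scalar rounding with step q."""
    prediction = predict(block)                   # common motion/spatial prediction (block 315)
    residual = block - prediction                 # subtract prediction (block 320)
    coeffs = np.round(residual / q)               # transform and quantize (block 330)
    coded_residual = coeffs * q                   # inverse transform/quantize (block 335)
    reconstructed = prediction + coded_residual   # coded picture block (block 345)
    return coeffs, reconstructed
```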
- turning to FIG. 4, an exemplary video decoding process and a post-decoding, inverse color transform block are indicated generally by the reference numerals 400 and 460, respectively.
- the post-decoding, inverse color transform block 460 includes blocks 462 , 464 , 466 , and 468 . Moreover, it is to be appreciated that the post-decoding, inverse color transform block 460 is optional and, thus, may be omitted in some embodiments of the present invention.
- the decoding process 400 includes a loop limit block 410 that begins a loop for a current block in an image, and passes control to a function block 415 .
- the function block 415 entropy decodes the coded residual, and passes control to a function block 420 .
- the function block 420 inverse quantizes and inverse transforms the decoded residual to form a coded residual, and passes control to a function block 430.
- the function block 430 adds the coded residual to the prediction formed from a common predictor for each color component to form a coded picture block, and passes control to a loop limit block 435 .
- the loop limit block 435 ends the loop and passes control to an end block 440 .
- the loop limit block 435 optionally passes control to the post-decoding, inverse color transform block 460 , in particular, the loop limit block 462 included in the post-decoding, inverse color transform block 460 .
- the loop limit block 462 begins a loop for each block in an image, and passes control to a function block 464 .
- the function block 464 performs an inverse color post-processing of the video signal data of the current image block, and passes control to a loop limit block 466 .
- the loop limit block 466 ends the loop, and passes control to an end block 468 .
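- The per-block decoding steps of FIG. 4 may be sketched similarly. This is again an illustrative simplification (scalar inverse quantization, hypothetical predictor, no spatial transform); the common predictor supplies the prediction for every color component.

```python
import numpy as np

def decode_block(coeffs, predict, q=8):
    """Sketch of the per-block decoding loop of FIG. 4, mirroring the
    encoder: one common predictor for all three color components.
    Illustrative only; `predict` is a hypothetical helper."""
    coded_residual = coeffs * q              # inverse transform/quantize (block 420)
    prediction = predict(coeffs.shape)       # common predictor for every component
    return prediction + coded_residual       # reconstructed picture block (block 430)
```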
- every component channel has full resolution.
- the luma coding algorithm is used on every color component to achieve the maximum overall compression efficiency.
- every color component may be compressed, e.g., using those prediction modes listed in Table 8-2, Table 8-3, and Table 8-4 in ISO/IEC 14496-10 Advanced Video Coding, 3rd Edition (ITU-T Rec. H.264), ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6, Document N6540, July 2004.
- the same spatial prediction mode is used for all three pixel components, to further reduce the complexity of the codec and improve performance.
- the prediction mode set by the prev_intra4×4_pred_mode_flag, rem_intra4×4_pred_mode, prev_intra8×8_pred_mode_flag, and rem_intra8×8_pred_mode parameters for the luma in the macroblock prediction header may be used by all three components. Therefore, no extra bits and syntax elements are needed.
- the reference pixels at fractional pixel locations may be calculated by the interpolation methods described in Section 8.4.2.2.1 of the H.264 standard for all three channels. The detailed syntax and semantic changes to the current H.264 standard are further discussed herein below.
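- The mode reuse described above can be illustrated as follows. The derivation of the intra 4x4 mode from prev_intra4x4_pred_mode_flag and rem_intra4x4_pred_mode follows the standard's rule that the signalled remainder skips over the predicted mode; reusing the single result for Cb and Cr is the point of this embodiment. A sketch (illustrative, not the reference decoder):

```python
def intra4x4_pred_mode(prev_flag, rem_mode, predicted_mode):
    """Derive the intra 4x4 prediction mode from the luma syntax elements
    prev_intra4x4_pred_mode_flag and rem_intra4x4_pred_mode. In this
    embodiment the single resulting mode is reused for all three
    components, so no extra chroma syntax is sent."""
    if prev_flag:
        mode = predicted_mode                # use the most probable mode
    else:
        # the signalled remainder never equals the predicted mode
        mode = rem_mode if rem_mode < predicted_mode else rem_mode + 1
    return {"Y": mode, "Cb": mode, "Cr": mode}
```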
- Residual Color Transform was added to the encoder/decoder in the High 4:4:4 Profile.
- the compression structure for the 4:4:4 format is different from the one currently used in all of the other profiles in the H.264 standard for 4:2:0 and 4:2:2 formats.
- YCoCg does not always improve the overall compression performance.
- the effectiveness of YCoCg is highly content dependent.
- the color transform is placed outside of the prediction loop as a part of the preprocessing block.
- the RCT model 500 includes a reference pixel generator 510 , a summing junction 520 , and a linear transform module 530 .
- Inputs to the reference pixel generator 510 are configured to receive motion/edge information and vectors [X1], [X2], …, [Xn].
- An output of the reference pixel generator 510 is connected in signal communication with an inverting input of the summing junction 520, which provides prediction vector [Xp] thereto.
- a non-inverting input of the summing junction 520 is configured to receive input vector [Xin] thereto.
- An output of the summing junction 520 is connected in signal communication with an input of the linear transform module 530, which provides vector [Xd] thereto.
- An output of the linear transform module 530 is configured to provide vector [Yd].
- the [Xin], [Xd], [Xp], [X1], [X2], …, [Xn] are 3×1 vectors representing the pixels in the RGB domain.
- the [Yd] is a 3×1 vector representing the result of the color transform. Therefore,
- the reference pixel [Xp] can be expressed as follows: [Xp] = [X1 X2 … Xn][C] = c1[X1] + c2[X2] + … + cn[Xn].
- an n×1 vector [C] = [c1 c2 c3 … cn]ᵀ represents the linear operations involved in the spatial predictors and interpolation filters defined in the H.264 standard.
- the reference pixel is calculated by using a total number of n neighboring pixels [X1], [X2], …, [Xn].
- [Yd] = [A][Xin] - [A]([R1 R2 R3 … Rn; G1 G2 G3 … Gn; B1 B2 B3 … Bn][c1 c2 c3 … cn]ᵀ). (4)
- [Yd] = [Yin] - [Y1 Y2 Y3 … Yn][c1 c2 c3 … cn]ᵀ. (6)
- equation (6) clearly shows that using YUV as the input to the encoder/decoder in accordance with the principles of the present invention as configured in this embodiment, is identical to performing RCT.
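- The identity in equation (6) can be checked numerically: because the predictor [C] is linear, transforming the inputs first and predicting in the transformed domain gives the same residual as predicting in RGB and then transforming (i.e., RCT). A small sketch, where the transform matrix values are illustrative examples and not taken from the standard:

```python
import numpy as np

# Numerical check of equation (6): with a linear predictor [C], applying
# the color transform [A] to the RGB prediction residual (RCT) equals
# predicting in the already-transformed domain.
rng = np.random.default_rng(0)
A = np.array([[0.25, 0.5, 0.25],
              [0.5, 0.0, -0.5],
              [-0.25, 0.5, -0.25]])          # example 3x3 color transform [A]
X_in = rng.random(3)                         # current RGB pixel [Xin]
X = rng.random((3, 4))                       # n = 4 neighboring RGB pixels [X1..Xn]
c = np.array([0.25, 0.25, 0.25, 0.25])       # linear predictor weights [C]

Y_d_rct = A @ (X_in - X @ c)                 # RCT: predict in RGB, transform residual
Y_d_pre = A @ X_in - (A @ X) @ c             # pre-transform inputs, then predict
assert np.allclose(Y_d_rct, Y_d_pre)
```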
- This new profile_idc may be added in the sequence parameter header, and may be used in the macroblock layer header, as well as the residual data header.
- plots of average PSNR versus bit rate for ATV intra-only are indicated generally by the reference numerals 600 and 650, respectively.
- plots of average PSNR versus bit rate for CT intra-only are indicated generally by the reference numerals 700 and 750, respectively.
- plots of average PSNR versus bit rate for DT intra-only are indicated generally by the reference numerals 800 and 850, respectively.
- plots of average PSNR versus bit rate for MIR_HD intra-only are indicated generally by the reference numerals 900 and 950, respectively.
- plots of average PSNR versus bit rate for RT intra-only are indicated generally by the reference numerals 1000 and 1050, respectively.
- plots of average PSNR versus bit rate for STB_HD intra-only are indicated generally by the reference numerals 1100 and 1150, respectively.
- FIGS. 6A, 7A, 8A, 9A, 10, and 11A illustrate test results for the proposed Advanced 4:4:4 profile (indicated and preceded by the term “new”) versus approximation results corresponding thereto.
- FIGS. 6B, 7B, 8B, 9B, 10B , and 11 B illustrate test results for the proposed Advanced 4:4:4 profile (indicated and preceded by the term “new”) versus JPEK2k.
- the PSNR is indicated in decibels (dB) and the bit rate is indicated in bits per second (bps).
- ATV, CT, DT, MIR, RT, STB are the names of the test clips.
- the proposed advanced 4:4:4 profiles were implemented in the JVT Reference software JM9.6. Both intra-only and IBBP coding structures were used in the tests.
- the quantization parameter was set at 6, 12, 18, 24, 30, and 42 for each of the R-D curves.
- the RD-optimized mode selection was used.
- Kakadu V2.2.3 software was used in the tests.
- the test results were generated by using 5 levels of wavelet decomposition with the 9/7-tap bi-orthogonal wavelet filter. There was only one tile per frame, and the RD-Optimization for a given target rate was also used.
- an implementation in accordance with the principles of the present invention as configured in an embodiment, in general, is very similar to JPEG2k in terms of overall compression efficiency. In some cases, it is even slightly better.
- an implementation in accordance with the principles of the present invention as configured in an embodiment provides significantly greater performance (compression) than the current High 4:4:4 Profile for quality above 40 dB (PSNR).
- New1-YCOCG or New3-YCOCG is better than YCOCG and RCT-ON
- New1-RGB or New3-RGB is better than RCT-OFF.
- the average improvement in the average PSNR is more than 1.5 dB.
- the improvement can be translated to more than 25% bit savings at a PSNR equal to 45 dB.
- a table for H.264 sequence parameter syntax is indicated generally by the reference numeral 1200 . Changes to the syntax in accordance with the principles of the present invention as configured in an embodiment, are indicated by italic text.
- a table for H.264 residual data syntax is indicated generally by the reference numeral 1300 . Additions/changes to the syntax in accordance with the principles of the present invention as configured in an embodiment, are indicated by italic text.
- the luma section in the residual data header, along with some necessary text modifications, is repeated twice to support luma1 and luma2, respectively.
- a set (or subset) of three (3) restricted spatial predictors is utilized for the component channels (e.g., RGB, YUV, YCrCb formats, and so forth) instead of a single spatial prediction mode.
- this embodiment may be combined with other embodiments described herein, such as, e.g., the use of only the luma coding algorithm to code all three component channels of content and/or the use of color transformation as a pre-processing step.
- In FIG. 14, an exemplary video encoding process and a pre-encoding, color transform step are indicated generally by the reference numerals 1400 and 1401, respectively.
- the pre-encoding, color transform block 1401 includes blocks 1406 , 1408 , and 1410 . Moreover, it is to be appreciated that the pre-encoding, color transform block 1401 is optional and, thus, may be omitted in some embodiments of the present invention.
- the pre-encoding, color transform block 1401 includes a loop limit block 1406 that begins a loop for each block in an image, and passes control to a function block 1408 .
- the function block 1408 performs color pre-processing of the video signal data of the current image block, and passes control to a loop limit block 1410 .
- the loop limit block 1410 ends the loop.
- the loop limit block 1410 passes control to a loop limit block 1412 , the latter being included in the video encoding process 1400 .
- the loop limit block 1412 begins a loop for each block in the image, and passes control to a function block 1415 .
- the function block 1415 forms a motion compensated or spatial prediction of the current image block using a common predictor for each color component of the current image block, and passes control to a function block 1420 .
- the function block 1420 subtracts the motion compensated or spatial prediction from the current image block to form a prediction residual, and passes control to a function block 1430 .
- the function block 1430 transforms and quantizes the prediction residual, and passes control to a function block 1435 .
- the function block 1435 inverse transforms and inverse quantizes the prediction residual to form a coded prediction residual, and passes control to a function block 1445.
- the function block 1445 adds the coded residual to the prediction to form a coded picture block, and passes control to an end loop block 1450 .
- the end loop block 1450 ends the loop and passes control to an end block 1455 .
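The loop of blocks 1412 through 1450 can be sketched as a toy scalar pipeline. The left-neighbor predictor and the uniform quantizer step below are illustrative stand-ins for H.264's block prediction, transform, and quantization; what the sketch shows is that a single prediction decision is shared by all three color components and that prediction is formed from reconstructed (not original) samples:

```python
# Toy scalar sketch of the encoding loop (blocks 1412-1450): a single
# ("common") prediction decision drives all three color components.
# The left-neighbor predictor and the uniform quantizer step QSTEP are
# illustrative stand-ins for H.264's block prediction and transform.

QSTEP = 8

def encode_row(row):
    """row: list of (c0, c1, c2) pixel tuples.
    Returns (quantized residual levels, reconstructed pixels)."""
    levels, recon = [], []
    prev = (128, 128, 128)          # default predictor before the first pixel
    for pix in row:
        pred = prev                 # one predictor, applied to every component
        lev = tuple(round((p - q) / QSTEP) for p, q in zip(pix, pred))
        rec = tuple(q + l * QSTEP for q, l in zip(pred, lev))
        levels.append(lev)
        recon.append(rec)
        prev = rec                  # predict from reconstructed, not original
    return levels, recon

levels, recon = encode_row([(200, 120, 80), (202, 121, 79), (150, 90, 60)])
assert levels[0] == (9, -1, -6)     # first pixel predicted from (128, 128, 128)
# reconstruction error never exceeds half the quantizer step
for orig, rec in zip([(200, 120, 80), (202, 121, 79), (150, 90, 60)], recon):
    assert all(abs(o - r) <= QSTEP // 2 for o, r in zip(orig, rec))
```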
- In FIG. 15, an exemplary video decoding process and a post-decoding, inverse color transform step are indicated generally by the reference numerals 1500 and 1560, respectively.
- the post-decoding, inverse color transform block 1560 includes blocks 1562 , 1564 , 1566 , and 1568 . Moreover, it is to be appreciated that the post-decoding, inverse color transform block 1560 is optional and, thus, may be omitted in some embodiments of the present invention.
- the decoding process 1500 includes a loop limit block 1510 that begins a loop for a current block in an image, and passes control to a function block 1515 .
- the function block 1515 entropy decodes the coded residual, and passes control to a function block 1520 .
- the function block 1520 inverse transforms and inverse quantizes the decoded residual to form a coded residual, and passes control to a function block 1530.
- the function block 1530 adds the coded residual to the prediction formed from a common predictor for each color component to form a coded picture block, and passes control to a loop limit block 1535 .
- the loop limit block 1535 ends the loop and passes control to an end block 1540 .
- the loop limit block 1535 optionally passes control to the post-decoding, inverse color transform block 1560 , in particular, the loop limit block 1562 included in the post-decoding, inverse color transform block 1560 .
- the loop limit block 1562 begins a loop for each block in an image, and passes control to a function block 1564 .
- the function block 1564 performs an inverse color post-processing of the video signal data of the current image block, and passes control to a loop limit block 1566 .
- the loop limit block 1566 ends the loop, and passes control to an end block 1568 .
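The decoding loop of blocks 1510 through 1535 can be given a matching toy sketch. The left-neighbor predictor and the scalar step QSTEP below are illustrative assumptions; the point is that the decoder regenerates one common prediction and applies it to every color component:

```python
# Toy scalar sketch of the decoding loop (blocks 1510-1535): the decoder
# regenerates the same common prediction for all three color components
# and adds the de-quantized residual back.  The left-neighbor predictor
# and step QSTEP are illustrative assumptions, not normative behavior.

QSTEP = 8

def decode_row(levels):
    """levels: list of (l0, l1, l2) quantized residual tuples."""
    recon = []
    prev = (128, 128, 128)          # same default predictor as the encoder
    for lev in levels:
        pred = prev                 # common predictor for every component
        rec = tuple(q + l * QSTEP for q, l in zip(pred, lev))
        recon.append(rec)
        prev = rec
    return recon

# the residual levels (9, -1, -6) reconstruct the pixel (200, 120, 80)
assert decode_row([(9, -1, -6)]) == [(200, 120, 80)]
```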
- profile_idc 166
- This new profile may also be used for the second combined embodiment, with corresponding semantic and syntax changes as described herein below for the second combined embodiment.
- This new profile_idc is added in the Sequence Parameter Set and will be mainly used in the subsequent headers to indicate that the input format is 4:4:4 and all three input channels are coded similarly as luma.
- an Intra_4×4 macroblock in the Advanced 4:4:4 Profile means that every input component channel may be encoded by using all of the 9 possible prediction modes given in Table 8-2 of the H.264 standard.
- In the current High 4:4:4 Profile, two of the channels for an Intra_4×4 macroblock will be treated as chroma, and only one of the 4 possible intra prediction modes in Table 8-5 of the H.264 standard will be used.
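For reference, three of the nine Table 8-2 luma modes (vertical, horizontal, and DC) can be sketched as follows. The `top` and `left` arrays hold the reconstructed neighboring samples, and the DC rounding mirrors the standard's (sum + 4) >> 3 form; this is an illustrative sketch rather than the normative process, and edge-availability handling is omitted:

```python
# Sketch of three of the nine Intra_4x4 luma prediction modes of
# Table 8-2 (vertical = mode 0, horizontal = mode 1, DC = mode 2).
# `top` holds the four reconstructed samples above the block and
# `left` the four samples to its left; the standard's handling of
# unavailable neighbors is omitted in this illustration.

def pred_vertical(top, left):
    """Each row repeats the samples above the block."""
    return [top[:] for _ in range(4)]

def pred_horizontal(top, left):
    """Each column repeats the samples left of the block."""
    return [[left[r]] * 4 for r in range(4)]

def pred_dc(top, left):
    """All samples take the rounded mean of the eight neighbors."""
    dc = (sum(top) + sum(left) + 4) >> 3
    return [[dc] * 4 for _ in range(4)]

top, left = [100, 102, 104, 106], [100, 98, 96, 94]
assert pred_vertical(top, left)[3] == [100, 102, 104, 106]
assert pred_horizontal(top, left)[1] == [98, 98, 98, 98]
assert pred_dc(top, left)[0][0] == 100    # (800 + 4) >> 3
```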
- the changes made for the Advanced 4:4:4 Profile occur at the interpolation process for the calculation of the reference pixel value at the fractional pixel location.
- the procedure described in Section 8.4.2.2.1 of the H.264 standard, Luma sample interpolation process will be applied for luma, Cr, and Cb.
- the current High 4:4:4 Profile uses Section 8.4.2.2.2 of the H.264 standard, Chroma sample interpolation process, for two of the input channels.
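The practical difference between the two interpolation sections can be illustrated directly: luma half-sample positions use a 6-tap filter with weights (1, -5, 20, 20, -5, 1), while chroma uses bilinear weighting in eighth-sample units. The functions below follow the general form of those processes but are a simplified sketch, not the normative edge-handling code:

```python
# Sketch of the two H.264 interpolation styles: the luma half-sample
# position uses a 6-tap FIR with weights (1, -5, 20, 20, -5, 1) plus
# rounding and clipping, while chroma interpolates bilinearly in
# 1/8-sample units.  Boundary padding and the normative quarter-sample
# averaging steps are omitted; this is an illustrative simplification.

def luma_half_pel(p):
    """p: six consecutive integer-position samples around the half-pel."""
    b = p[0] - 5 * p[1] + 20 * p[2] + 20 * p[3] - 5 * p[4] + p[5]
    return min(255, max(0, (b + 16) >> 5))

def chroma_bilinear(a, b, c, d, dx, dy):
    """a..d: the four surrounding samples; dx, dy in 1/8-sample units (0..7)."""
    return ((8 - dx) * (8 - dy) * a + dx * (8 - dy) * b
            + (8 - dx) * dy * c + dx * dy * d + 32) >> 6

assert luma_half_pel([10, 10, 10, 10, 10, 10]) == 10   # flat signal is preserved
assert chroma_bilinear(0, 16, 0, 0, 4, 0) == 8         # halfway between 0 and 16
```

In the Advanced 4:4:4 Profile described above, the sharper 6-tap process would apply to all three channels rather than to luma alone.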
- the ResidueColorTransformFlag is removed from the sequence parameter set in the Advanced 4:4:4 Profile.
- semantic changes to the corresponding syntax include the following.
- CodedBlockPatternChroma shall be set to 0.
- CodedBlockPatternLuma specifies, for each of the twelve 8×8 luma, Cb, and Cr blocks of the macroblock, one of the following cases: (1) all transform coefficient levels of the twelve 4×4 luma blocks in the 8×8 luma, 8×8 Cb, and 8×8 Cr blocks are equal to zero; (2) one or more transform coefficient levels of one or more of the 4×4 luma blocks in the 8×8 luma, 8×8 Cb, and 8×8 Cr blocks shall be non-zero valued.
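A sketch of these semantics: each of the four CodedBlockPatternLuma bits now covers the co-located 8×8 region of luma, Cb, and Cr together, so a bit is set when any of the twelve associated 4×4 blocks carries a non-zero level. The data layout below (each 8×8 region represented as a list of four 4×4 coefficient-level lists) is an assumption made for illustration:

```python
# Sketch of the modified CodedBlockPatternLuma semantics: each of the
# four bits covers the co-located 8x8 region in luma, Cb, AND Cr, so a
# bit is set iff any of the twelve associated 4x4 blocks has a non-zero
# transform coefficient level.  The data layout (each 8x8 region as a
# list of four 4x4 level lists) is an assumption for illustration.

def coded_block_pattern_luma(luma8x8, cb8x8, cr8x8):
    cbp = 0
    for i in range(4):
        # twelve 4x4 blocks: four from each of the luma, Cb, Cr region i
        blocks = luma8x8[i] + cb8x8[i] + cr8x8[i]
        if any(any(level != 0 for level in blk) for blk in blocks):
            cbp |= 1 << i
    return cbp

def empty_plane():
    return [[[0] * 16 for _ in range(4)] for _ in range(4)]

luma, cb, cr = empty_plane(), empty_plane(), empty_plane()
assert coded_block_pattern_luma(luma, cb, cr) == 0
cb[2][0][5] = 7            # a single non-zero Cb level in region 2
assert coded_block_pattern_luma(luma, cb, cr) == 0b0100
```

CodedBlockPatternChroma stays 0 in this profile because no channel is coded as chroma.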
- If Intra_4×4 macroblock is chosen as the mb_type, luma, Cr, or Cb could still find its own best spatial prediction mode in Table 8-2 in Section 8.3.1.1 of the H.264 standard, such as, e.g., Intra_4×4_Vertical for luma, Intra_4×4_Horizontal for Cr, and Intra_4×4_Diagonal_Down_Left for Cb.
- Another approach relating to the first combined embodiment described above is to constrain all three input channels to share the same prediction mode. This can be done by using the prediction information that is currently carried by the existing syntax elements, such as prev_intra4×4_pred_mode_flag, rem_intra4×4_pred_mode, prev_intra8×8_pred_mode_flag, and rem_intra8×8_pred_mode, in the Macroblock Prediction syntax. This option will result in less change to the H.264 standard, at the cost of a slight loss of coding efficiency.
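These syntax elements signal the mode through H.264's most-probable-mode mechanism: a one-bit flag says "reuse the predicted mode", and otherwise a 3-bit remainder selects one of the other eight modes. A sketch of that decoding rule, applied once and then shared by all three channels, follows (the function name and calling convention are illustrative, not from the standard text):

```python
# Sketch of H.264's most-probable-mode signalling for Intra_4x4: when
# prev_intra4x4_pred_mode_flag is 1 the predicted mode is reused;
# otherwise rem_intra4x4_pred_mode (0..7) selects one of the other
# eight modes, skipping over the predicted one.  The function name and
# calling convention here are illustrative, not from the standard text.

def decode_intra4x4_mode(pred_mode, prev_flag, rem_mode=None):
    if prev_flag:
        return pred_mode
    # remainder values at or above the predicted mode shift up by one
    return rem_mode if rem_mode < pred_mode else rem_mode + 1

# With a shared mode, one flag/remainder pair is decoded once and the
# resulting mode is applied to luma, Cb, and Cr alike.
assert decode_intra4x4_mode(2, True) == 2
assert decode_intra4x4_mode(2, False, 1) == 1
assert decode_intra4x4_mode(2, False, 2) == 3   # 2 is taken by the prediction
```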
- In FIG. 16, a table for H.264 macroblock prediction syntax is indicated generally by the reference numeral 1700.
- the modified Macroblock Prediction Syntax to support using the three prediction modes is listed below, where:
- Kakadu V2.2.3 software was used in the tests.
- the test results were generated by using 5 levels of wavelet decomposition with the 9/7-tap bi-orthogonal wavelet filter. There was only one tile per frame, and the RD-Optimization for a given target rate was also used.
- PSNR measurements were primarily calculated in the original color domain of the source contents, which is RGB for the clips described above.
- Average PSNR, defined as (PSNR(red)+PSNR(green)+PSNR(blue))/3, is used to compare the overall compression quality.
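That metric can be stated as a small function; the sketch below assumes 8-bit samples (peak value 255) and represents each channel as a flat list of samples:

```python
import math

# Per-channel PSNR and the average-PSNR metric used for the comparisons.
# Assumes 8-bit samples (peak value 255); channels are flat sample lists.

def psnr(orig, recon, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def average_psnr(rgb_orig, rgb_recon):
    """(PSNR(red) + PSNR(green) + PSNR(blue)) / 3."""
    return sum(psnr(o, r) for o, r in zip(rgb_orig, rgb_recon)) / 3.0

p = psnr([0, 0, 0, 0], [1, 1, 1, 1])     # MSE = 1
assert 48.0 < p < 48.3                   # 10*log10(255^2) is about 48.13 dB
```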
- New1: the proposed Advanced 4:4:4 Profile with a single prediction mode.
- New3: the proposed Advanced 4:4:4 Profile with three prediction modes.
- YCOCG: RGB-to-YCOCG conversion was done outside the codec. Then the converted YCOCG was used as the input to the JVT software.
- R+G+B: the proposed method approximated by compressing the R, G, and B signals separately.
- Y+CO+CG: the proposed method approximated by compressing the converted Y, CO, and CG signals separately.
- JPEG2k_RGB: the JPEG2k compression was done in the RGB domain. The JPEG2k color transform was turned off.
- JPEG2k_YUV: the JPEG2k compression was done in the YUV domain. The JPEG2k color transform was used.
- the proposed Advanced 4:4:4 Profile in accordance with the present principles is very similar to JPEG2k in terms of overall compression efficiency. In some cases, it is even slightly better.
- the approach in accordance with the principles of the present invention is clearly better than the current High 4:4:4 Profile.
- the average improvement in the average PSNR is more than 1.5 dB.
- the improvement can be translated to more than 25% bit savings at a PSNR equal to 45 dB.
- the test results demonstrate that the proposed Advanced 4:4:4 Profile, utilizing the improvements corresponding to the principles of the present invention, delivers improved performance when compared to the current High 4:4:4 Profile.
- the performance gain is significant.
- moving the color transform outside the codec will make the architecture of the codec consistent among all of the color formats. As a result, it will make the implementation easier and reduce the cost. It will also make the codec more robust in terms of selecting the optimum color transform for achieving better coding efficiency.
- the proposed approach does not add any new coding tools and requires only some slight changes to the syntax and semantics.
- a method and apparatus are provided for video encoding and decoding. Modifications to the existing H.264 standard are provided which improve performance beyond that currently achievable. Moreover, performance is improved even beyond JPEG-2000 for high quality applications.
- significant 4:4:4 coding performance improvements in the H.264 standard can be achieved by using the luma coding algorithm to code all of the three color components of 4:4:4 content. That is, no new tools are necessary for the luma (or chroma, which is not used) compression algorithm. Instead, the existing luma coding tools are utilized.
- syntax and semantic changes to the current 4:4:4 profile may be implemented in accordance with the present principles to support the luma coding of all three component channels.
- the spatial prediction tools used in luma clearly exhibited their superior performance to those used in chroma.
- For the test sequences, when every color component was encoded as luma, more than a 30% bit reduction was observed at a compressed quality greater than or equal to 45 dB (average PSNR).
- the teachings of the present invention are implemented as a combination of hardware and software.
- the software may be implemented as an application program tangibly embodied on a program storage unit.
- the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
- the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces.
- the computer platform may also include an operating system and microinstruction code.
- the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
- various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
Abstract
Description
- This application claims the benefit of U.S. Provisional Application Ser. No. 60/671,255, filed Apr. 13, 2005 and U.S. Provisional Application Ser. No. 60/700,834, filed Jul. 20, 2005 both of which are incorporated by reference herein in their respective entireties. Moreover, this application is related to the U.S. Patent Applications, application Ser. Nos. 11/918,204, 14/221,998, 11/918,097 entitled “METHOD AND APPARATUS FOR VIDEO CODING”; Ser. No. 11/887,791 entitled “METHOD AND APPARATUS FOR VIDEO DECODING”; and Ser. No. 11/918,027 entitled “METHOD AND APPARATUS FOR VIDEO ENCODING AND DECODING”, each filed concurrently herewith.
- The present invention relates generally to video encoders and decoders and, more particularly, to methods and apparatus for video encoding and decoding.
- Presently, the 4:4:4 format of the International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 standard (hereinafter the “H.264 standard”) only codes one of three channels as luma, with the other two channels being coded as chroma using less efficient tools. When an input to a codec is in the 4:4:4 format with full resolution in every input component, coding two out of the three input components with the less effective chroma coding algorithm results in the use of more bits in those two channels. This particular problem is more noticeable in intra frames. For example, the H.264 standard running in the Intra-Only mode is less efficient than JPEG2k for overall compression quality at 40 dB (PSNR) and above.
- Accordingly, it would be desirable and highly advantageous to have methods and apparatus for video encoding and decoding that overcome the above-described disadvantages of the prior art.
- These and other drawbacks and disadvantages of the prior art are addressed by the present invention, which is directed to methods and apparatus for video encoding and decoding.
- According to an aspect of the present invention, there is provided a video encoder for encoding video signal data for an image block. The video encoder includes an encoder for encoding all color components of the video signal data using a common predictor.
- According to another aspect of the present invention, there is provided a method for encoding video signal data for an image block. The method includes encoding all color components of the video signal data using a common predictor.
- According to yet another aspect of the present invention, there is provided a video decoder for decoding video signal data for an image block. The video decoder includes a decoder for decoding all color components of the video signal data using a common predictor.
- According to still another aspect of the present invention, there is provided a method for decoding video signal data for an image block. The method includes decoding all color components of the video signal data using a common predictor.
- These and other aspects, features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
- The present invention may be better understood in accordance with the following exemplary figures, in which:
-
FIG. 1 is a block diagram illustrating an exemplary video encoding apparatus to which the present principles may be applied; -
FIG. 2 is a block diagram illustrating an exemplary video decoding apparatus to which the present principles may be applied; -
FIG. 3 is a flow diagram illustrating an exemplary video encoding process with a pre-encoding, color transform block, in accordance with the present principles; -
FIG. 4 is a flow diagram illustrating an exemplary video decoding process with a post-decoding, inverse color transform block, in accordance with the present principles; -
FIG. 5 is a block diagram illustrating a simplified model of residual color transform (RCT); -
FIGS. 6A and 6B are plots of average PSNR versus bit rate for ATV intra-only in accordance with the present principles; -
FIGS. 7A and 7B are plots of average PSNR versus bit rate for CT intra-only in accordance with the present principles; -
FIGS. 8A and 8B are plots of average PSNR versus bit rate for DT intra-only in accordance with the present principles; -
FIGS. 9A and 9B are plots of average PSNR versus bit rate for MIR_HD intra-only in accordance with the present principles; -
FIGS. 10A and 10B are plots of average PSNR versus bit rate for RT intra-only in accordance with the present principles; -
FIGS. 11A and 11B are plots of average PSNR versus bit rate for STB_HD intra-only in accordance with the present principles; -
FIG. 12 is a table illustrating H.264 sequence parameter syntax in accordance with the present principles; -
FIGS. 13A, 13B, 13C, and 13D comprise a table illustrating H.264 residual data syntax in accordance with the present principles; -
FIG. 14 is a flow diagram illustrating an exemplary video encoding process with a pre-encoding, color transform block, in accordance with the present principles; -
FIG. 15 is a flow diagram illustrating an exemplary video decoding process with a post-decoding, inverse color transform block, in accordance with the present principles; and -
FIGS. 16A and 16B comprise a table illustrating H.264 macroblock prediction syntax in accordance with the present principles.
- The present invention is directed to methods and apparatus for video encoding and decoding video signal data. It is to be appreciated that while the present invention is primarily described with respect to video signal data sampled using the 4:4:4 format of the International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 standard, the present invention may also be applied to video signal data sampled using other formats (e.g., the 4:2:2 and/or 4:2:0 format) of the H.264 standard as well as other video compression standards while maintaining the scope of the present invention.
- It is to be appreciated that methods and apparatus in accordance with the present principles do not require use of any new tool(s) for the luma or chroma compression algorithm. Instead, the existing luma coding tools can be used. Accordingly, one advantageous result there from is that the coding performance of the 4:4:4 format may be maximized while preserving backward compatibility and minimizing any change to the existing H.264 (or other applicable) standard.
- In accordance with the principles of the present invention as configured in an embodiment, a luma coding algorithm is used to code all three component channels of, e.g., 4:4:4 content. Advantages of this embodiment include an improvement in the overall coding performance for compressing 4:4:4 content with respect to the prior art. Presently, in the existing H.264 standard, only one of three channels is coded as luma, and the other two are coded as chroma using less efficient tools.
- Further, in accordance with the principles of the present invention as configured in an embodiment, color transformation is performed as a pre-processing step. Thus, in accordance with this embodiment, a Residual Color Transform (RCT) is not performed inside the compression loop. Advantages of this embodiment include the providing of consistent encoder/decoder architecture among all color formats.
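One concrete candidate for such a pre-processing transform is the lifting-based YCoCg-R transform, which is exactly invertible in integer arithmetic, so nothing is lost by moving it outside the compression loop. The patent does not mandate this particular transform; the sketch below is only an illustration of a reversible pre-processing color transform:

```python
# Illustrative lifting-based YCoCg-R color transform: exactly invertible
# in integer arithmetic, so applying it as a pre-processing step (rather
# than as an in-loop residual transform) loses nothing.  Shown only as
# an example of a reversible pre-processing color transform.

def rgb_to_ycocg_r(r, g, b):
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

for rgb in [(255, 0, 128), (17, 250, 3), (0, 0, 0)]:
    assert ycocg_r_to_rgb(*rgb_to_ycocg_r(*rgb)) == rgb   # lossless round trip
```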
- Moreover, in accordance with the principles of the present invention as configured in an embodiment, the same motion/spatial prediction mode is used for all three components. Advantages of this embodiment include reduced codec complexity and backwards compatibility.
- Also, in accordance with another embodiment, instead of using the same predictor for all three components, a set (or subset) of three (3) restricted spatial predictors may be utilized for the three components. Advantages of this embodiment include an improvement in the overall coding performance for compressing 4:4:4 content with respect to the prior art.
- It is to be appreciated that the various embodiments described above and subsequently herein may be implemented as stand alone embodiments or may be combined in any manner as readily appreciated by one of ordinary skill in this and related arts. Thus, for example, in a first combined embodiment, a luma coding algorithm is advantageously used to code all three component channels, color transformation is performed as a pre-processing step, and a single predictor is used for all three component channels. In a second combined embodiment, a luma coding algorithm is advantageously used to code all three component channels, color transformation is performed as a pre-processing step, and a set (or subset) of three (3) restricted spatial predictors may be utilized for the three component channels. Of course, as noted above, other combinations of the various embodiments may also be implemented given the teachings of the present principles provided herein, while maintaining the scope of the present invention.
- The present description illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope.
- All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
- Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
- Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
- The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
- Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
- In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
- Turning to FIG. 1, an exemplary video encoding apparatus is indicated generally by the reference numeral 199. The video encoding apparatus 199 includes a video encoder 100 and a pre-encoding color transform module 105.
- The pre-encoding color transform module 105 is for performing color pre-processing of video signals prior to inputting the same to the video encoder 100. The color pre-processing performed by the pre-encoding color transform module 105 is further described herein below. It is to be appreciated that the pre-encoding color transform module 105 may be omitted in some embodiments.
- An input of the pre-encoding color transform module 105 and an input of the video encoder 100 are available as inputs of the video encoding apparatus 199.
- An output of the pre-encoding color transform module 105 is connected in signal communication with the input of the video encoder 100.
- The input of the video encoder 100 is connected in signal communication with a non-inverting input of a summing junction 110. The output of the summing junction 110 is connected in signal communication with a transformer/quantizer 120. The output of the transformer/quantizer 120 is connected in signal communication with an entropy coder 140. An output of the entropy coder 140 is available as an output of the video encoder 100 and also as an output of the video encoding apparatus 199.
- The output of the transformer/quantizer 120 is further connected in signal communication with an inverse transformer/quantizer 150. An output of the inverse transformer/quantizer 150 is connected in signal communication with an input of a deblock filter 160. An output of the deblock filter 160 is connected in signal communication with reference picture stores 170. A first output of the reference picture stores 170 is connected in signal communication with a first input of a motion and spatial prediction estimator 180. The input to the video encoder 100 is further connected in signal communication with a second input of the motion and spatial prediction estimator 180. The output of the motion and spatial prediction estimator 180 is connected in signal communication with a first input of a motion and spatial prediction compensator 190. A second output of the reference picture stores 170 is connected in signal communication with a second input of the motion and spatial prediction compensator 190. The output of the motion and spatial prediction compensator 190 is connected in signal communication with an inverting input of the summing junction 110.
- Turning to
FIG. 2, an exemplary video decoding apparatus is indicated generally by the reference numeral 299. The video decoding apparatus 299 includes a video decoder 200 and a post-decoding, inverse color transform module 293.
- An input of the video decoder 200 is available as an input of the video decoding apparatus 299. The input to the video decoder 200 is connected in signal communication with an input of the entropy decoder 210. A first output of the entropy decoder 210 is connected in signal communication with an input of an inverse quantizer/transformer 220. An output of the inverse quantizer/transformer 220 is connected in signal communication with a first input of a summing junction 240.
- The output of the summing junction 240 is connected in signal communication with a deblock filter 290. An output of the deblock filter 290 is connected in signal communication with reference picture stores 250. The reference picture store 250 is connected in signal communication with a first input of a motion and spatial prediction compensator 260. An output of the motion and spatial prediction compensator 260 is connected in signal communication with a second input of the summing junction 240. A second output of the entropy decoder 210 is connected in signal communication with a second input of the motion and spatial prediction compensator 260. The output of the deblock filter 290 is available as an output of the video decoder 200 and also as an output of the video decoding apparatus 299.
- Moreover, an output of the post-decoding, inverse color transform module 293 may be available as an output of the video decoding apparatus 299. In such a case, the output of the video decoder 200 may be connected in signal communication with an input of the post-decoding, inverse color transform module 293, which is a post-processing module with respect to the video decoder 200. An output of the post-decoding, inverse color transform module 293 provides a post-processed, inverse color transformed signal with respect to the output of the video decoder 200. It is to be appreciated that use of the post-decoding, inverse color transform module 293 is optional.
- A description is now presented for enhanced 4:4:4 coding in accordance with the principles of the present invention. A first described embodiment is a combined embodiment in which the luma coding algorithm is used for all color components, the same spatial prediction mode is used for all color components, and the Residual Color Transform (RCT) is omitted from inside the compression loop. Test results for this combined embodiment are also provided. Subsequently thereafter, a second combined embodiment is described wherein the luma coding algorithm is used for all color components, a set (or subset) of restricted spatial predictors is used for all color components (instead of a single spatial prediction mode), and the Residual Color Transform (RCT) is omitted from inside the compression loop. Thus, a difference between the first and second combined embodiments is the use of a single spatial prediction mode for all color components in the first combined embodiment versus the use of a set (or subset) of restricted spatial predictors for all color components in the second combined embodiment. Of course, as noted above, the embodiments described herein may be implemented as stand alone embodiments or may be combined in any manner, as readily appreciated by one of ordinary skill in this and related arts.
For example, in accordance with the principles of the present invention as configured in an embodiment, only a single spatial prediction mode is used, without combination with other embodiments such as the omission of RCT from the compression loop. It is to be appreciated that given the teachings of the present principles provided herein, these and other variations, implementations, and combinations of the embodiments of the present invention will be readily ascertainable by one of ordinary skill in this and related arts, while maintaining the scope of the present invention.
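The structural idea shared by the embodiments above is that a single, common predictor drives every color component, rather than separate luma and chroma prediction paths. The following minimal sketch illustrates this; it is an explanatory model only, not the H.264 implementation, and the helper names (`spatial_prediction`, `predict_all_components`) are hypothetical.

```python
import numpy as np

def spatial_prediction(top_neighbors, mode):
    """Hypothetical stand-in for a spatial predictor: mode 0 repeats the
    top neighbors downward (vertical-style), any other mode is a DC fill."""
    n = len(top_neighbors)
    if mode == 0:
        return np.tile(top_neighbors, (n, 1))
    return np.full((n, n), int(np.mean(top_neighbors)))

def predict_all_components(top_rows, mode):
    """One common predictor with a single SHARED mode for every color
    component, instead of separate luma and chroma prediction paths."""
    return {comp: spatial_prediction(row, mode) for comp, row in top_rows.items()}

# The SAME mode is applied to R, G, and B (or Y, Cb, Cr) alike.
preds = predict_all_components(
    {"R": np.array([10, 20]), "G": np.array([30, 40]), "B": np.array([50, 60])},
    mode=0)
```

Because the mode is signaled once and reused, no per-component mode syntax is needed, which is the source of the bit savings discussed below.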
- Turning to
FIG. 3, an exemplary video encoding process and a pre-encoding, color transform block are indicated generally by the reference numerals 300 and 301, respectively. - It is to be appreciated that the pre-encoding, color transform block 301 includes
blocks 306, 308, and 310. - The pre-encoding, color transform block 301 includes a
loop limit block 306 that begins a loop for each block in an image, and passes control to a function block 308. The function block 308 performs color pre-processing of the video signal data of the current image block, and passes control to a loop limit block 310. The loop limit block 310 ends the loop. Moreover, the loop limit block 310 passes control to a loop limit block 312, the latter being included in the video encoding process 300. - The
loop limit block 312 begins a loop for each block in the image, and passes control to a function block 315. The function block 315 forms a motion compensated or spatial prediction of the current image block using a common predictor for each color component of the current image block, and passes control to a function block 320. The function block 320 subtracts the motion compensated or spatial prediction from the current image block to form a prediction residual, and passes control to a function block 330. The function block 330 transforms and quantizes the prediction residual, and passes control to a function block 335. The function block 335 inverse quantizes and inverse transforms the prediction residual to form a coded prediction residual, and passes control to a function block 345. The function block 345 adds the coded residual to the prediction to form a coded picture block, and passes control to an end loop block 350. The end loop block 350 ends the loop and passes control to an end block 355. - Turning to
FIG. 4, an exemplary video decoding process and a post-decoding, inverse color transform block are indicated generally by the reference numerals 400 and 460, respectively. - It is to be appreciated that the post-decoding, inverse
color transform block 460 includes blocks 462, 464, 466, and 468. Moreover, it is to be appreciated that the post-decoding, inverse color transform block 460 is optional and, thus, may be omitted in some embodiments of the present invention. - The
decoding process 400 includes a loop limit block 410 that begins a loop for a current block in an image, and passes control to a function block 415. The function block 415 entropy decodes the coded residual, and passes control to a function block 420. The function block 420 inverse quantizes and inverse transforms the decoded residual to form a coded residual, and passes control to a function block 430. The function block 430 adds the coded residual to the prediction formed from a common predictor for each color component to form a coded picture block, and passes control to a loop limit block 435. The loop limit block 435 ends the loop and passes control to an end block 440. - In some embodiments, the
loop limit block 435 optionally passes control to the post-decoding, inverse color transform block 460, in particular, the loop limit block 462 included in the post-decoding, inverse color transform block 460. The loop limit block 462 begins a loop for each block in an image, and passes control to a function block 464. The function block 464 performs an inverse color post-processing of the video signal data of the current image block, and passes control to a loop limit block 466. The loop limit block 466 ends the loop, and passes control to an end block 468. - In the H.264 4:4:4 format, every component channel has full resolution. Thus, in accordance with the first combined embodiment set forth above, the luma coding algorithm is used on every color component to achieve the maximum overall compression efficiency. Accordingly, in the embodiment, for intra frames, every color component may be compressed, e.g., using those prediction modes listed in Table 8-2, Table 8-3, and Table 8-4 in ISO/IEC 14496-10
Advanced Video Coding, 3rd Edition (ITU-T Rec. H.264), ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6, Document N6540, July 2004. - In addition, in the embodiment, the same spatial prediction mode is used for all three pixel components, to further reduce the complexity of the codec and improve performance. For example, the prediction mode set by the prev_intra4×4_pred_mode_flag, rem_intra4×4_pred_mode, prev_intra8×8_pred_mode_flag, and rem_intra8×8_pred_mode parameters for the luma in the macroblock prediction header may be used by all three components. Therefore, no extra bits or syntax elements are needed. For the B and P (predictive) frames, the reference pixels at fractional pixel locations may be calculated by the interpolation methods described in Section 8.4.2.2.1 of the H.264 standard for all three channels. The detailed syntax and semantic changes to the current H.264 standard are further discussed herein below.
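The shared mode is carried by the luma syntax elements just named. A sketch of the standard Intra4x4 mode derivation that those elements drive (H.264 Section 8.3.1.1); under the single-mode proposal, the one mode this yields is applied to all three components:

```python
def intra4x4_pred_mode(pred_mode, prev_flag, rem_mode):
    """Derivation of Intra4x4PredMode from prev_intra4x4_pred_mode_flag and
    rem_intra4x4_pred_mode (H.264 Section 8.3.1.1).  pred_mode is the mode
    predicted from the left and upper neighboring blocks (their minimum).
    Under the shared-mode proposal, the returned mode is used for luma,
    Cb, and Cr alike."""
    if prev_flag:
        return pred_mode                      # reuse the neighbor-predicted mode
    # rem_intra4x4_pred_mode skips over the predicted mode's value
    return rem_mode if rem_mode < pred_mode else rem_mode + 1

assert intra4x4_pred_mode(2, True, 0) == 2    # flag set: predicted mode wins
assert intra4x4_pred_mode(2, False, 1) == 1   # below predicted mode: as-is
assert intra4x4_pred_mode(2, False, 2) == 3   # at/above: shifted up by one
```

The 3-bit rem_intra4x4_pred_mode covers the 8 non-predicted modes, so all 9 modes of Table 8-2 remain reachable.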
- Residual Color Transform (RCT) was added to the encoder/decoder in the High 4:4:4 Profile. As a result, the compression structure for the 4:4:4 format is different from the one currently used in all of the other profiles in the H.264 standard for the 4:2:0 and 4:2:2 formats. This adds some extra complexity to the implementation. Moreover, similar to any other color transform, YCOCG does not always improve the overall compression performance. The effectiveness of YCOCG is highly content dependent. Thus, to improve the overall compression and robustness, in the embodiment, the color transform is placed outside of the prediction loop as a part of the preprocessing block. By doing this, selecting an optimum color transform for a specific compression task becomes an operational issue, and the best answer for a particular input sequence can be found among a number of options. In accordance with an embodiment where all three components use the same spatial predictors for the intra frames and the same interpolation filters for the B and P (predictive or inter-coded) frames, performing the color transform on the prediction residues is identical to performing the color transform on the source images outside of the codec when the rounding/truncation errors are ignored. This will be discussed further herein below. Thus, the RCT block is removed from the coding structure to make the coding structure consistent among all of the color formats.
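As one concrete candidate for such an outside-the-loop color transform, the reversible YCoCg-R lifting transform (related to the YCOCG transform discussed above) can pre-process the RGB input before encoding and be exactly inverted after decoding. This is an illustrative sketch, not part of the proposed profile syntax:

```python
def rgb_to_ycocg(r, g, b):
    """Forward lossless YCoCg-R lifting transform, a candidate
    pre-processing color transform applied outside the codec."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    """Inverse lifting steps, exactly undoing the forward transform."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# The lifting structure round-trips losslessly for any integer RGB triple.
assert ycocg_to_rgb(*rgb_to_ycocg(100, 150, 50)) == (100, 150, 50)
```

Because each lifting step is exactly invertible in integer arithmetic, this choice of transform introduces no rounding loss of its own, leaving the transform-selection question purely operational, as argued above.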
- Turning to
FIG. 5, a simplified model of RCT is indicated generally by the reference numeral 500. The RCT model 500 includes a reference pixel generator 510, a summing junction 520, and a linear transform module 530. Inputs to the reference pixel generator 510 are configured to receive motion/edge information and vectors [X1], [X2] . . . [Xn]. An output of the reference pixel generator 510 is connected in signal communication with an inverting input of the summing junction 520, which provides prediction vector [Xp] thereto. A non-inverting input of the summing junction 520 is configured to receive input vector [Xin] thereto. An output of the summing junction 520 is connected in signal communication with an input of the linear transform module 530, which provides vector [Xd] thereto. An output of the linear transform module 530 is configured to provide vector [Yd]. - In the simplified model of
RCT 500, the color transform represented by a 3×3 matrix [A] (a linear transform) is defined as follows: -
[Yd] = [A][Xd] (1)
-
[Yd] = [A][Xd] = [A][Xin] − [A][Xp] (2) - Since, in the embodiment, the same spatial predictors and interpolation filters are used for all three components in a macroblock in accordance with the principles of the present invention as configured in an embodiment, the reference pixel [Xp] can be expressed as follows:
[Xp] = [[X1] [X2] . . . [Xn]][C] (3)
- where an n×1 vector [C] represents the linear operations involved in the spatial predictors and interpolation filters defined in the H.264 standard. Here, it is presumed that the reference pixel is calculated using a total of n neighboring pixels [X1], [X2], . . . [Xn].
- Substituting [Xp] from equation (3) into equation (2) results in the following:
[Yd] = [A][Xin] − [A][[X1] [X2] . . . [Xn]][C] (4)
- Ignoring the rounding/truncation errors and assuming the same prediction mode is selected in either the RGB or Y domain results in the following:
[A][[X1] [X2] . . . [Xn]][C] = [[Y1] [Y2] . . . [Yn]][C] = [Yp] (5)
- Therefore,
[Yd] = [A][Xin] − [Yp] = [Yin] − [Yp] (6)
- Thus, equation (6) clearly shows that using YUV as the input to the encoder/decoder in accordance with the principles of the present invention as configured in this embodiment, is identical to performing RCT.
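The identity behind equations (2) through (6) is straightforward to check numerically: because the color transform [A] and the predictor weights [C] are both linear, transforming the residual gives the same result as forming the residual in the already-transformed domain, with rounding/truncation ignored. The matrices below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))            # arbitrary 3x3 color transform (stand-in for RCT)
X = rng.random((3, 4))            # n = 4 neighboring RGB pixels [X1..X4], as columns
C = rng.random(4)                 # linear predictor weights [C] (spatial/interp filter)
Xin = rng.random(3)               # current pixel [Xin]

Xp = X @ C                        # equation (3): prediction in the RGB domain
Yd_rct = A @ (Xin - Xp)           # RCT inside the loop: transform the residual
Yd_pre = A @ Xin - (A @ X) @ C    # equation (6): predict in the transformed domain
assert np.allclose(Yd_rct, Yd_pre)
```

In integer implementations the two differ only by the rounding/truncation errors that the derivation above explicitly ignores.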
- Also, in accordance with the principles of the present invention as configured in an embodiment, a new 4:4:4 profile is added to the H.264 standard, referred to herein as “Advanced 4:4:4 Profile with profile_idc=166”. This new profile_idc may be added in the sequence parameter header, and may be used in the macroblock layer header, as well as the residual data header.
- To support using the luma algorithm to code all three color components, some changes may be made to the residual data syntax. In addition, changes may also be made to the semantics of some of the elements in the macroblock header, residue data header, and so forth. In general, the existing syntax for luma in the H.264 specification will remain unchanged and be used to code one of the three components. The changes are backward compatible. The detailed syntax and semantics changes are described herein below.
- A description will now be given regarding simulation results performed in accordance with the principles of the present invention as configured in various embodiments.
- Turning to
FIGS. 6A and 6B, plots of average PSNR versus bit rate for ATV intra-only are indicated generally by the reference numerals - Turning to
FIGS. 7A and 7B, plots of average PSNR versus bit rate for CT intra-only are indicated generally by the reference numerals - Turning to
FIGS. 8A and 8B, plots of average PSNR versus bit rate for DT intra-only are indicated generally by the reference numerals - Turning to
FIGS. 9A and 9B, plots of average PSNR versus bit rate for MIR_HD intra-only are indicated generally by the reference numerals - Turning to
FIGS. 10A and 10B, plots of average PSNR versus bit rate for RT intra-only are indicated generally by the reference numerals - Turning to
FIGS. 11A and 11B, plots of average PSNR versus bit rate for STB_HD intra-only are indicated generally by the reference numerals - In particular,
FIGS. 6A, 7A, 8A, 9A, 10A, and 11A illustrate test results for the proposed Advanced 4:4:4 profile (indicated and preceded by the term "new") versus approximation results corresponding thereto. Moreover, FIGS. 6B, 7B, 8B, 9B, 10B, and 11B illustrate test results for the proposed Advanced 4:4:4 profile (indicated and preceded by the term "new") versus JPEG2k. - In all of
FIGS. 6A, 6B through 11A, 11B, the PSNR is indicated in decibels (dB) and the bit rate is indicated in bits per second (bps). ATV, CT, DT, MIR, RT, and STB are the names of the test clips. - All JVT/FRExt test sequences described in JVT-J042, Film-Originated Test Sequences, were used in the tests. They are all 4:4:4 10-bit film material and each clip has 58 frames.
- The proposed advanced 4:4:4 profiles were implemented in the JVT Reference software JM9.6. Both intra-only and IBBP coding structures were used in the tests. The quantization parameter was set at 6, 12, 18, 24, 30, and 42 for each of the R-D curves. The RD-optimized mode selection was used.
- The proposed Advanced 4:4:4 Profile was also compared with results obtained by running the reference software with YUVFormat=0 (4:0:0) on every individual input component. The three separate compressed bit counts were simply added together to get the total compressed bits for calculating the compressed bit rate.
- Regarding JPEG2k, KaKadu V2.2.3 software was used in the tests. The test results were generated by using 5 levels of wavelet decomposition with the 9/7-tap bi-orthogonal wavelet filter. There was only one tile per frame and the RD-Optimization for a given target rate was also used.
- All of the PSNR measurements were done in the RGB domain. Average PSNR, defined as (PSNR(red)+PSNR(green)+PSNR(blue))/3, is used to compare the overall compression quality. This is mainly because the JPEG2k compressed data are computed using an unknown rate control algorithm provided by the software. For some cases, the RGB PSNR values are quite far apart from each other, especially when the JPEG2k color transform was used.
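The average-PSNR metric defined above can be computed as follows. This is a sketch under the stated definition; `peak` would be 1023 for the 10-bit test material used here:

```python
import numpy as np

def avg_rgb_psnr(ref, rec, peak=1023.0):
    """Average PSNR as defined above: (PSNR(red)+PSNR(green)+PSNR(blue))/3,
    measured per channel in the RGB domain (peak = 1023 for 10-bit data).
    ref and rec are HxWx3 arrays; assumes the images differ somewhere
    (MSE of 0 would make the per-channel PSNR infinite)."""
    psnrs = []
    for c in range(3):
        mse = np.mean((ref[..., c].astype(np.float64) - rec[..., c]) ** 2)
        psnrs.append(10.0 * np.log10(peak ** 2 / mse))
    return sum(psnrs) / 3.0

# A uniform error of 1 code value in every channel gives MSE = 1 per channel.
ref = np.zeros((4, 4, 3))
db = avg_rgb_psnr(ref, ref + 1.0)   # equals 20*log10(1023), about 60.2 dB
```

Averaging the three per-channel PSNRs, rather than pooling one MSE, matches the document's definition and keeps channels with very different errors visible in the comparison.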
- The compression comparison was performed as follows:
-
- New1: the proposed Advanced 4:4:4 Profile with a single prediction mode.
- New3: the proposed Advanced 4:4:4 Profile with three prediction modes.
- RCT-OFF: RGB input with RCT=off.
- RCT-ON: RGB input with RCT=on.
- YCOCG: RGB to YCOCG conversion was done outside the codec. Then the converted YCOCG was used as the input to the JVT software.
- R+G+B: Proposed method approximated by compressing the R, G, and B signals separately.
- Y+CO+CG: Proposed method approximated by compressing the converted Y, CO, and CG signals separately.
- J2k_RGB: The JPEG2k compression was done in the RGB domain. The JPEG2k color transform was turned off.
- J2k_YUV: The JPEG2k compression was done in the YUV domain. The JPEG2k color transform was used.
- According to the test results, an implementation in accordance with the principles of the present invention as configured in an embodiment, in general, is very similar to JPEG2k in terms of overall compression efficiency. In some cases, it is even slightly better.
- Further, an implementation in accordance with the principles of the present invention as configured in an embodiment, provides significantly greater performance (compression) than the current High 4:4:4 Profile for quality above 40 dB (PSNR). Specifically, New1-YCOCG or New3-YCOCG is better than YCOCG and RCT-ON, New1-RGB or New3-RGB is better than RCT-OFF. At a PSNR equal to and greater than 45 dB (PSNR), the average improvement in the average PSNR is more than 1.5 dB. In the last example, the improvement can be translated to more than 25% bit savings at a PSNR equal to 45 dB.
- According to the test results, it seems that color transforms help the coding performance when the content is more color saturated, such as TP and RT. That is, if the color is neutral and less saturated, coding in the RGB domain might be the right choice. The above observation is independent of what color transform is used.
- Comparing the results of New1-YCOCG or New3-YCOCG and JPEG-2k_YUV, it has been observed that the performance of a specific color transform in terms of improving coding efficiency is very content dependent. No single color transform is always the best. Therefore, our data confirmed that having a color transform, such as RCT, inside the encoding (or decoding) loop might not be a good idea. Instead, performing the color transform, if it is necessary, outside the encoder/decoder could make the entire compression system provide a better and more robust performance.
- Comparing YCOCG with RCT-ON, the test results do not show any coding efficiency improvement from RCT. In addition, it should be noted that running the reference software with the RCT turned on significantly increased the coding time. The running time was more than 2.5 times longer.
- A description will now be given regarding syntax and semantics changes in accordance with the principles of the present invention as configured in an embodiment.
- Turning to
FIG. 12, a table for H.264 sequence parameter syntax is indicated generally by the reference numeral 1200. Changes to the syntax in accordance with the principles of the present invention as configured in an embodiment are indicated by italic text. - Turning to
FIG. 13, a table for H.264 residual data syntax is indicated generally by the reference numeral 1300. Additions/changes to the syntax in accordance with the principles of the present invention as configured in an embodiment are indicated by italic text. In the table 1300, the luma section in the residual data header, along with some necessary text modifications, is repeated twice to support luma1 and luma2, respectively. - As noted above, the above described first combined embodiment was evaluated and tested by implementing the present principles in the JVT reference software JM9.6. The test results marked with New1-RGB or New1-YCOCG represent the first combined embodiment.
- As noted above, in accordance with the principles of the present invention as configured in an embodiment, a set (or subset) of three (3) restricted spatial predictors is utilized for the component channels (e.g., RGB, YUV, YCrCb formats, and so forth) instead of a single spatial prediction mode. Moreover, as noted above, this embodiment may be combined with other embodiments described herein, such as, e.g., the use of only the luma coding algorithm to code all three component channels of content and/or the use of color transformation as a pre-processing step.
- A description will now be given regarding the above described second combined embodiment involving the use of a set (or subset) of three (3) restricted spatial predictors for the color components, the use of only the luma coding algorithm to code all three color components, and the use of color transformation as a pre-processing step (i.e., no RCT within the compression loop). Some variations of this embodiment will also be described therewith.
- Turning to
FIG. 14, an exemplary video encoding process and a pre-encoding, color transform step are indicated generally by the reference numerals 1400 and 1401, respectively. - It is to be appreciated that the pre-encoding, color transform block 1401 includes
blocks 1406, 1408, and 1410. - The pre-encoding, color transform block 1401 includes a
loop limit block 1406 that begins a loop for each block in an image, and passes control to a function block 1408. The function block 1408 performs color pre-processing of the video signal data of the current image block, and passes control to a loop limit block 1410. The loop limit block 1410 ends the loop. Moreover, the loop limit block 1410 passes control to a loop limit block 1412, the latter being included in the video encoding process 1400. - The
loop limit block 1412 begins a loop for each block in the image, and passes control to a function block 1415. The function block 1415 forms a motion compensated or spatial prediction of the current image block using a common predictor for each color component of the current image block, and passes control to a function block 1420. The function block 1420 subtracts the motion compensated or spatial prediction from the current image block to form a prediction residual, and passes control to a function block 1430. The function block 1430 transforms and quantizes the prediction residual, and passes control to a function block 1435. The function block 1435 inverse quantizes and inverse transforms the prediction residual to form a coded prediction residual, and passes control to a function block 1445. The function block 1445 adds the coded residual to the prediction to form a coded picture block, and passes control to an end loop block 1450. The end loop block 1450 ends the loop and passes control to an end block 1455. - Turning to
FIG. 15, an exemplary video decoding process and a post-decoding, inverse color transform step are indicated generally by the reference numerals 1500 and 1560, respectively. - It is to be appreciated that the post-decoding, inverse
color transform block 1560 includes blocks 1562, 1564, 1566, and 1568. Moreover, it is to be appreciated that the post-decoding, inverse color transform block 1560 is optional and, thus, may be omitted in some embodiments of the present invention. - The
decoding process 1500 includes a loop limit block 1510 that begins a loop for a current block in an image, and passes control to a function block 1515. The function block 1515 entropy decodes the coded residual, and passes control to a function block 1520. The function block 1520 inverse quantizes and inverse transforms the decoded residual to form a coded residual, and passes control to a function block 1530. The function block 1530 adds the coded residual to the prediction formed from a common predictor for each color component to form a coded picture block, and passes control to a loop limit block 1535. The loop limit block 1535 ends the loop and passes control to an end block 1540. - In some embodiments, the
loop limit block 1535 optionally passes control to the post-decoding, inverse color transform block 1560, in particular, the loop limit block 1562 included in the post-decoding, inverse color transform block 1560. The loop limit block 1562 begins a loop for each block in an image, and passes control to a function block 1564. The function block 1564 performs an inverse color post-processing of the video signal data of the current image block, and passes control to a loop limit block 1566. The loop limit block 1566 ends the loop, and passes control to an end block 1568. - As noted above, a new profile (profile_idc=166) for the Advanced 4:4:4 Profile is disclosed. This new profile may also be used for the second combined embodiment, with corresponding semantic and syntax changes as described herein below for the second combined embodiment. This new profile_idc is added in the Sequence Parameter Set and will be mainly used in the subsequent headers to indicate that the input format is 4:4:4 and all three input channels are coded similarly to luma.
- To minimize the necessary changes to the H.264 standard, no new macroblock type is disclosed for the Advanced 4:4:4 Profile. Instead, all of the macroblock types along with the associated coding parameters listed in Table 7-11, Table 7-13, and Table 7-14 of the H.264 standard are still valid. For the case of intra macroblocks, all three input channels, luma, Cr, and Cb, will be encoded based on the MbPartPredMode defined in Table 7-11 of the H.264 standard. For example, an Intra_4×4 macroblock in the Advanced 4:4:4 Profile means every input component channel may be encoded by using all of the 9 possible prediction modes given in Table 8-2 of the H.264 standard. For reference, in the current High 4:4:4 Profile, two of the channels for an Intra_4×4 macroblock will be treated as chroma and only one of the 4 possible intra prediction mode in Table 8-5 of the H.264 standard will be used. For the B and P macroblocks, the changes made for the Advanced 4:4:4 Profile occur at the interpolation process for the calculation of the reference pixel value at the fractional pixel location. Here, the procedure described in Section 8.4.2.2.1 of the H.264 standard, Luma sample interpolation process, will be applied for luma, Cr, and Cb. Again for reference, the current High 4:4:4 Profile uses Section 8.4.2.2.2 of the H.264 standard, Chroma sample interpolation process, for two of the input channels.
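The half-sample positions in the luma interpolation process cited above (Section 8.4.2.2.1) are produced by the 6-tap filter (1, −5, 20, 20, −5, 1). The one-dimensional sketch below shows that filter with its rounding and clipping, applied per channel as the Advanced 4:4:4 Profile proposes, in place of the chroma bilinear interpolation:

```python
def halfpel(samples):
    """H.264 luma half-sample interpolation: the 6-tap filter
    (1, -5, 20, 20, -5, 1) over six integer-position samples, with the
    standard (b + 16) >> 5 rounding and a clip to the 8-bit sample range.
    In the Advanced 4:4:4 Profile this same filter serves all three
    channels instead of the chroma bilinear interpolation."""
    e, f, g, h, i, j = samples
    b = e - 5 * f + 20 * g + 20 * h - 5 * i + j
    return min(max((b + 16) >> 5, 0), 255)

# In a flat area the interpolated half-sample reproduces the constant level,
# since the filter taps sum to 32 and the result is shifted down by 5.
assert halfpel([100] * 6) == 100
```

Quarter-sample positions are then obtained by averaging neighboring integer- and half-sample values, exactly as for luma in the existing standard.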
- In the case when the CABAC is chosen as the entropy coding mode, two separate sets of context models identical to those currently defined for luma will be created for Cr and Cb. They will also be updated independently during the course of encoding.
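The per-channel context-model idea can be illustrated with a toy adaptive model. Real CABAC contexts use 64 probability states with a specific transition table; this sketch only demonstrates the point of the paragraph above, namely that the three identical-but-separate sets adapt independently:

```python
class ContextSet:
    """Minimal stand-in for a set of CABAC context models: each context
    keeps symbol counts and adapts as bits are coded (illustrative only;
    real CABAC uses 64-state probability estimation)."""
    def __init__(self, n_ctx=4):
        self.counts = [[1, 1] for _ in range(n_ctx)]  # [zeros, ones] per context

    def code(self, ctx, bit):
        self.counts[ctx][bit] += 1                    # independent adaptation

    def p_one(self, ctx):
        zeros, ones = self.counts[ctx]
        return ones / (zeros + ones)

# One identical-but-separate context set per channel, as proposed above.
models = {ch: ContextSet() for ch in ("luma", "cb", "cr")}
models["cr"].code(0, 1)
assert models["cr"].p_one(0) > models["luma"].p_one(0)   # only Cr adapted
```

Keeping the sets separate lets the Cb and Cr statistics diverge from luma's without any new coding tool, matching the "no new tools" theme of the proposal.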
- Finally, in the embodiment, since there is no RCT block in the coding loop, the ResidueColorTransformFlag is removed from the sequence parameter set in the Advanced 4:4:4 Profile.
- Up to this point, most syntax changes occur in the residue data as shown in
FIG. 13, where the original syntax elements for luma are repeated twice to support Cr and Cb in the proposed Advanced 4:4:4 profiles. - Regarding the H.264 macroblock layer table (not shown), semantic changes to the corresponding syntax include the following.
- coded_block_pattern (Add). When chroma_format_idc is equal to 3 and coded_block_pattern is present, CodedBlockPatternChroma shall be set to 0. In addition, CodedBlockPatternLuma specifies, for each of the twelve 8×8 luma, Cb, and Cr blocks of the macroblock, one of the following cases: (1) all transform coefficient levels of the twelve 4×4 luma blocks in the 8×8 luma, 8×8 Cb, and 8×8 Cr blocks are equal to zero; (2) one or more transform coefficient levels of one or more of the 4×4 luma blocks in the 8×8 luma, 8×8 Cb, and 8×8 Cr blocks shall be non-zero valued.
- A description will now be given regarding spatial prediction mode selection for the intra blocks in accordance with the second combined embodiment (or the sole embodiment relating to the use of the set (or subset) of three restricted spatial predictors).
- For each component to choose its best MbPartPredMode and the subsequent best spatial prediction mode independently, as in the case of encoding each input channel separately, some new intra block types may be added to Table 7-11 of the H.264 standard. As a result, a large number of changes to the H.264 standard would be required. In an embodiment relating to the second combined embodiment, the current mb_types remain unchanged and an alternative solution is provided. In the embodiment, the three input channels are restricted to be encoded with the same MbPartPredMode or macroblock type. Then, a small number of new elements are added into the Macroblock Prediction Syntax to support three separate prediction modes. Therefore, each component can still theoretically choose its best spatial prediction mode independently in order to minimize the prediction error for each component channel. For example, assuming an Intra_4×4 macroblock is chosen as the mb_type, luma, Cr, or Cb could still find its own best spatial prediction mode in Table 8-2 in Section 8.3.1.1 of the H.264 standard such as, e.g., Intra_4×4_Vertical for luma, Intra_4×4_Horizontal for Cr, and Intra_4×4_Diagonal_Down_Left for Cb.
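Choosing each channel's mode independently, as just described, amounts to a per-channel cost minimization. A hypothetical sketch using plain SAD as the cost (an encoder would normally use a full rate-distortion cost; the names and candidate set here are illustrative):

```python
import numpy as np

def best_mode_per_channel(block, candidates):
    """Pick, independently for each channel, the candidate prediction with
    the smallest SAD against the actual samples.  Illustrative cost only;
    RD-optimized mode selection would also weigh the signaling bits."""
    chosen = {}
    for comp, actual in block.items():
        sads = {m: int(np.abs(actual - pred).sum())
                for m, pred in candidates[comp].items()}
        chosen[comp] = min(sads, key=sads.get)      # lowest-SAD mode wins
    return chosen

actual = {"luma": np.array([5, 5]), "cr": np.array([0, 9])}
cands = {
    "luma": {"vertical": np.array([5, 5]), "horizontal": np.array([9, 9])},
    "cr":   {"vertical": np.array([5, 5]), "horizontal": np.array([0, 9])},
}
modes = best_mode_per_channel(actual, cands)        # each channel differs
```

With the shared macroblock type constraint, only the mode index varies per channel, which is what the three new flag/mode syntax element pairs below carry.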
- Another approach, relating to the first combined embodiment described above, is to constrain all three input channels to share the same prediction mode. This can be done by using the prediction information that is currently carried by the existing syntax elements, such as prev_intra4×4_pred_mode_flag, rem_intra4×4_pred_mode, prev_intra8×8_pred_mode_flag, and rem_intra8×8_pred_mode, in the Macroblock Prediction syntax. This option results in less change to the H.264 standard, at the cost of a slight loss of coding efficiency.
- Based on the test results, using three prediction modes could improve the overall coding performance by about 0.2 dB over the first combined embodiment.
- Turning to
FIG. 16 , a table for H.264 macroblock prediction syntax is indicated generally by the reference numeral 1700. For reference, the modified Macroblock Prediction Syntax to support using the three prediction modes is listed below, where: -
- prev_intra4×4_pred_mode_flag0 and rem_intra4×4_pred_mode0 are for luma;
- prev_intra4×4_pred_mode_flag1 and rem_intra4×4_pred_mode1 are for Cr;
- prev_intra4×4_pred_mode_flag2 and rem_intra4×4_pred_mode2 are for Cb;
- A description will now be given regarding simulation results performed in accordance with the principles of the present invention as configured in an embodiment, for the second combined embodiment.
- All JVT/FRExt test sequences described in JVT-J042, Film-Originated Test Sequences, and JVT-J039 (Viper) were used in the tests. They are all 4:4:4 10-bit materials and each clip has 58 frames.
- The proposed algorithm was implemented in the JVT Reference software JM9.6 and the modified software was used in the tests. Both intra-only and IBRrBP were tested. Here, "Br" denotes a B picture that is used as a reference. The intra-only case was done for all of the sequences with the quantization parameter equal to 6, 12, 18, 24, 30, 36 and 42. Due to the large amount of time involved in the simulation, the IBRrBP GOP structure was only done for the film clips with a quantization parameter equal to 12, 18, 24, 30 and 36. According to the discussion in the 4:4:4 AHG, the following key parameters were used in the tests:
-
- SymbolMode=1
- RDOptimization=1
- ScalingMatrixPresentFlag=0
- OffsetMatrixPresentFlag=1
- QoffsetMatrixFile=“q_offset.cfg”
- AdaptiveRounding=1
- AdaptRndPeriod=1
- AdaptRndChroma=1
- AdaptRndWFactorX=8
- SearchRange=64
- UseFME=1
- Regarding JPEG2k, KaKadu V2.2.3 software was used in the tests. The test results were generated by using 5 levels of wavelet decompression with the 9/7-tap bi-orthogonal wavelet filter. There was only one tile per frame and the RD-Optimization for a given target rate was also used.
- The PSNR measurements were primarily calculated in the original color domain of the source contents, which is RGB for the clips described above. Average PSNR, defined as (PSNR(red)+PSNR(green)+PSNR(blue))/3, is used to compare the overall compression quality.
- The compression comparison was performed as follows:
- New1: the proposed Advanced 4:4:4 Profile with a single prediction mode.
- New3: the proposed Advanced 4:4:4 Profile with three prediction modes.
- RCT-OFF: RGB input with RCT=off.
- RCT-ON: RGB input with RCT=on.
- YCOCG: RGB to YCOCG conversion was done outside the codec. Then the converted YCOCG was used as the input to the JVT software.
- R+G+B: Proposed method approximated by compressing the R, G, and B signals separately.
- Y+CO+CG: Proposed method approximated by compressing the converted Y, CO, and CG signals separately.
- JPEG2k_RGB: The JPEG2k compression was done in the RGB domain. The JPEG2k color transform was turned off.
- JPEG2k_YUV: The JPEG2k compression was done in the YUV domain. The JPEG2k color transform was used.
- For the Intra-Only case, the proposed Advanced 4:4:4 Profile in accordance with the present principles is very similar to JPEG2k in terms of overall compression efficiency. In some cases, it is even slightly better.
- The approach in accordance with the principles of the present invention is clearly better than the current High 4:4:4 Profile. At a PSNR equal to or greater than 45 dB, the average improvement in the average PSNR is more than 1.5 dB. In some cases, the improvement can be translated to more than 25% bit savings at a PSNR equal to 45 dB.
- Even with the same block type, using three prediction modes is slightly better than using a single one. However, more syntax and semantic changes are required.
- A description will now be given of some of the many attendant advantages/features provided by the principles of embodiments of the present invention.
- The test results demonstrate that the proposed Advanced 4:4:4 Profile, utilizing the improvements corresponding to the principles of the present invention, delivers improved performance when compared to the current High 4:4:4 Profile. The performance gain is significant. In addition, moving the color transform outside the codec will make the architecture of the codec consistent among all of the color formats. As a result, it will make the implementation easier and reduce the cost. It will also make the codec more robust in terms of selecting the optimum color transform for achieving better coding efficiency. Also, the proposed approach does not add any new coding tools and requires only some slight changes to the syntax and semantics.
- Thus, in accordance with the principles of the present invention as configured in an embodiment, a method and apparatus are provided for video encoding and decoding. Modifications to the existing H.264 standard are provided which improve performance beyond that currently achievable. Moreover, performance is improved even beyond JPEG-2000 for high quality applications. In accordance with the principles of the present invention as configured in an embodiment, significant 4:4:4 coding performance improvements in the H.264 standard can be achieved by using the luma coding algorithm to code all of the three color components of 4:4:4 content. That is, no new tools are necessary for the luma (or chroma, which is not used) compression algorithm. Instead, the existing luma coding tools are utilized. Further, syntax and semantic changes to the current 4:4:4 profile may be implemented in accordance with the present principles to support the luma coding of all three component channels. In tests conducted in accordance with an embodiment of the present invention, when the source content has many spatial textures and edges, the spatial prediction tools used in luma clearly exhibited performance superior to those used in chroma. For some of the test sequences, when every color component was encoded as luma, more than a 30% bit reduction was observed at a compressed quality greater than or equal to 45 dB (average PSNR).
- It is to be appreciated that while the present invention has primarily been described herein with respect to video signal data sampled using the 4:4:4 format of the H.264 standard, the present invention may also be readily implemented with respect to video signal data sampled using other formats (e.g., the 4:2:0 format and/or the 4:2:2 format) of the H.264 standard, as well as other video compression standards. Given the teachings of the present invention provided herein, these and other variations of the present invention may also be readily implemented by one of ordinary skill in this and related arts, while maintaining the scope of the present invention.
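For concreteness, the sampling formats mentioned above differ only in the resolution of the chroma planes relative to luma. The helper below is a hypothetical illustration of those plane dimensions; the function name is an assumption for this sketch.

```python
# Hypothetical helper: chroma plane dimensions implied by the common
# sampling formats, for a given luma resolution (even dimensions assumed).
def chroma_plane_size(width, height, fmt):
    if fmt == "4:4:4":   # chroma sampled at full luma resolution
        return width, height
    if fmt == "4:2:2":   # chroma subsampled horizontally by 2
        return width // 2, height
    if fmt == "4:2:0":   # chroma subsampled by 2 in both dimensions
        return width // 2, height // 2
    raise ValueError(f"unknown sampling format: {fmt}")

print(chroma_plane_size(1920, 1080, "4:2:0"))  # (960, 540)
```

In 4:4:4 all three planes are the same size, which is what makes coding every component with the luma algorithm a natural fit; the other formats would apply the same principle to smaller chroma planes.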
- These and other features and advantages of the present invention may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
- Most preferably, the teachings of the present invention are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
- It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present invention.
- Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/896,596 US20200374561A1 (en) | 2005-04-13 | 2020-06-09 | Luma and chroma decoding using a common predictor |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US67125505P | 2005-04-13 | 2005-04-13 | |
US70083405P | 2005-07-20 | 2005-07-20 | |
PCT/US2006/009587 WO2006113003A1 (en) | 2005-04-13 | 2006-03-16 | Method and apparatus for video decoding |
US91809807A | 2007-10-09 | 2007-10-09 | |
US14/222,111 US20150071349A1 (en) | 2005-04-13 | 2014-03-21 | Luma and chroma decoding using a common predictor |
US15/394,254 US10123046B2 (en) | 2005-04-13 | 2016-12-29 | Method and apparatus for video decoding |
US16/143,583 US20190089985A1 (en) | 2005-04-13 | 2018-09-27 | Luma and chroma decoding using a common predictor |
US16/896,596 US20200374561A1 (en) | 2005-04-13 | 2020-06-09 | Luma and chroma decoding using a common predictor |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/143,583 Continuation US20190089985A1 (en) | 2005-04-13 | 2018-09-27 | Luma and chroma decoding using a common predictor |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200374561A1 true US20200374561A1 (en) | 2020-11-26 |
Family
ID=36644425
Family Applications (10)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/918,097 Active 2030-02-02 US8767826B2 (en) | 2005-04-13 | 2006-03-16 | Luma and chroma encoding using a common predictor |
US11/918,204 Active 2030-06-30 US8724699B2 (en) | 2005-04-13 | 2006-03-16 | Luma and chroma encoding using a common predictor |
US11/918,098 Active 2030-06-05 US8718134B2 (en) | 2005-04-13 | 2006-03-16 | Luma and chroma decoding using a common predictor |
US11/887,791 Active 2030-02-15 US8750376B2 (en) | 2005-04-13 | 2006-03-16 | Luma and chroma decoding using a common predictor |
US11/918,027 Active 2030-03-27 US8761251B2 (en) | 2005-04-13 | 2006-03-16 | Luma-chroma coding with one common or three distinct spatial predictors |
US14/221,998 Abandoned US20150271490A1 (en) | 2005-04-13 | 2014-03-21 | Luma and chroma encoding using a common predictor |
US14/222,111 Abandoned US20150071349A1 (en) | 2005-04-13 | 2014-03-21 | Luma and chroma decoding using a common predictor |
US15/394,254 Expired - Fee Related US10123046B2 (en) | 2005-04-13 | 2016-12-29 | Method and apparatus for video decoding |
US16/143,583 Abandoned US20190089985A1 (en) | 2005-04-13 | 2018-09-27 | Luma and chroma decoding using a common predictor |
US16/896,596 Abandoned US20200374561A1 (en) | 2005-04-13 | 2020-06-09 | Luma and chroma decoding using a common predictor |
Country Status (10)
Country | Link |
---|---|
US (10) | US8767826B2 (en) |
EP (5) | EP1872588B1 (en) |
JP (19) | JP2008536450A (en) |
KR (5) | KR101278324B1 (en) |
CN (1) | CN103458243B (en) |
BR (5) | BRPI0609281A2 (en) |
ES (2) | ES2901528T3 (en) |
MX (5) | MX2007012706A (en) |
MY (6) | MY151482A (en) |
WO (5) | WO2006113003A1 (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101348365B1 (en) * | 2006-08-02 | 2014-01-10 | 삼성전자주식회사 | An video encoding/decoding method and apparatus |
EP2105025B1 (en) * | 2007-01-11 | 2021-04-07 | InterDigital VC Holdings, Inc. | Methods and apparatus for using syntax for the coded_block_flag syntax element and the coded_block_pattern syntax element for the cavlc 4:4:4 intra, high 4:4:4 intra, and high 4:4:4 predictive profiles in mpeg-4 avc high level coding |
JP2008193627A (en) * | 2007-01-12 | 2008-08-21 | Mitsubishi Electric Corp | Image encoding device, image decoding device, image encoding method, and image decoding method |
US20090003449A1 (en) * | 2007-06-28 | 2009-01-01 | Mitsubishi Electric Corporation | Image encoding device, image decoding device, image encoding method and image decoding method |
EP2183922A4 (en) | 2007-08-16 | 2011-04-27 | Nokia Corp | A method and apparatuses for encoding and decoding an image |
US20090154567A1 (en) * | 2007-12-13 | 2009-06-18 | Shaw-Min Lei | In-loop fidelity enhancement for video compression |
EP2091227A1 (en) | 2008-02-15 | 2009-08-19 | Thomson Licensing | Method for adjusting the settings of a reproduction color device |
CA2730831A1 (en) | 2008-07-15 | 2010-01-21 | Azuna, Llc | Method and assembly for personalized three-dimensional products |
KR100954172B1 (en) * | 2008-10-24 | 2010-04-20 | 부산대학교 산학협력단 | Common prediction block system in svc decoder |
KR102174807B1 (en) | 2009-08-12 | 2020-11-06 | 인터디지털 브이씨 홀딩스 인코포레이티드 | Methods and apparatus for improved intra chroma encoding and decoding |
CN105472387B (en) * | 2010-04-09 | 2018-11-02 | Lg电子株式会社 | The method and apparatus for handling video data |
US8750383B2 (en) * | 2011-01-17 | 2014-06-10 | Exaimage Corporation | Systems and methods for wavelet and channel-based high definition video encoding |
KR101675707B1 (en) | 2011-06-23 | 2016-11-11 | 가부시키가이샤 제이브이씨 켄우드 | Image encoding device, image encoding method and image encoding program, and image decoding device, image decoding method and image decoding program |
GB201119206D0 (en) | 2011-11-07 | 2011-12-21 | Canon Kk | Method and device for providing compensation offsets for a set of reconstructed samples of an image |
US20140072027A1 (en) * | 2012-09-12 | 2014-03-13 | Ati Technologies Ulc | System for video compression |
ES2665908T3 (en) * | 2013-04-08 | 2018-04-30 | Ge Video Compression, Llc | Inter-component prediction |
TWI676389B (en) * | 2013-07-15 | 2019-11-01 | 美商內數位Vc專利控股股份有限公司 | Method for encoding and method for decoding a colour transform and corresponding devices |
US9996803B2 (en) | 2013-09-03 | 2018-06-12 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for authenticating a user through an unobservable re-authentication system |
CN108027978B (en) * | 2015-09-18 | 2023-09-22 | 交互数字Vc控股公司 | Determination of position luma samples for HDR encoded/decoded color component samples |
US20170105012A1 (en) * | 2015-10-08 | 2017-04-13 | Mediatek Inc. | Method and Apparatus for Cross Color Space Mode Decision |
US11153591B2 (en) * | 2019-03-12 | 2021-10-19 | Tencent America LLC | Method and apparatus for color transform in VVC |
US10742992B1 (en) | 2019-03-26 | 2020-08-11 | Electronic Arts Inc. | Video compression for video games |
Family Cites Families (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5011321A (en) * | 1973-05-30 | 1975-02-05 | ||
US4125856A (en) * | 1977-08-19 | 1978-11-14 | Bell Telephone Laboratories, Incorporated | Digital encoding and decoding of color video signals |
JP2741696B2 (en) | 1986-09-19 | 1998-04-22 | キヤノン株式会社 | Adaptive differential coding |
JP2737902B2 (en) | 1988-01-22 | 1998-04-08 | 株式会社豊田自動織機製作所 | Driving route determination processing method for image type unmanned vehicles |
JPH06113326A (en) * | 1992-09-25 | 1994-04-22 | Sony Corp | Picture coder and picture decoder |
AU5632394A (en) * | 1993-03-05 | 1994-09-08 | Sony Corporation | Apparatus and method for reproducing a prediction-encoded video signal |
CN1095286A (en) | 1993-05-15 | 1994-11-23 | 邹刚 | Anti-decrepit beauty liquid and preparation method thereof |
JPH07254993A (en) * | 1994-03-15 | 1995-10-03 | Toshiba Corp | Yuv/rgb conversion circuit |
US5724450A (en) | 1994-09-30 | 1998-03-03 | Apple Computer, Inc. | Method and system for color image compression in conjunction with color transformation techniques |
CN1112810A (en) | 1995-03-23 | 1995-12-06 | 徐锦章 | Skin aspic jelly |
US5617334A (en) * | 1995-07-21 | 1997-04-01 | The Trustees Of Columbia University In The City Of New York | Multi-viewpoint digital video coder/decoder and method |
JPH09102954A (en) * | 1995-10-04 | 1997-04-15 | Matsushita Electric Ind Co Ltd | Method for calculating picture element value of block from one or two predictive blocks |
JP3359215B2 (en) | 1995-12-28 | 2002-12-24 | 株式会社リコー | Multi-level image coding device |
KR100440522B1 (en) * | 1996-08-29 | 2004-10-15 | 마츠시타 덴끼 산교 가부시키가이샤 | Image decoder and image memory overcoming various kinds of delaying factors caused by hardware specifications specific to image memory by improving storing system and reading-out system |
TW358296B (en) * | 1996-11-12 | 1999-05-11 | Matsushita Electric Ind Co Ltd | Digital picture encoding method and digital picture encoding apparatus, digital picture decoding method and digital picture decoding apparatus, and data storage medium |
US6618443B1 (en) * | 1997-03-12 | 2003-09-09 | Matsushita Electric Industrial Co., Ltd. | Upsampling filter for a down conversion system |
JPH1188909A (en) * | 1997-09-11 | 1999-03-30 | Mitsubishi Electric Corp | Image compression transmitter |
JP3063715B2 (en) * | 1997-12-19 | 2000-07-12 | 日本電気株式会社 | Image compression device |
US6829301B1 (en) * | 1998-01-16 | 2004-12-07 | Sarnoff Corporation | Enhanced MPEG information distribution apparatus and method |
JP2001112023A (en) * | 1999-10-12 | 2001-04-20 | Mega Chips Corp | Image compression method |
JP2002335407A (en) | 2001-05-08 | 2002-11-22 | Fuji Xerox Co Ltd | Image encoder and image encoding method |
JP2004007379A (en) * | 2002-04-10 | 2004-01-08 | Toshiba Corp | Method for encoding moving image and method for decoding moving image |
US7305034B2 (en) * | 2002-04-10 | 2007-12-04 | Microsoft Corporation | Rounding control for multi-stage interpolation |
AU2002339530A1 (en) | 2002-09-07 | 2004-03-29 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and devices for efficient data transmission link control in mobile multicast communication systems |
KR20040028318A (en) | 2002-09-30 | 2004-04-03 | 삼성전자주식회사 | Image encoding and decoding method and apparatus using spatial predictive coding |
US7266247B2 (en) | 2002-09-30 | 2007-09-04 | Samsung Electronics Co., Ltd. | Image coding method and apparatus using spatial predictive coding of chrominance and image decoding method and apparatus |
JP4355893B2 (en) * | 2002-12-26 | 2009-11-04 | 富士ゼロックス株式会社 | Color conversion processing apparatus and color conversion processing method |
JP4324844B2 (en) | 2003-04-25 | 2009-09-02 | ソニー株式会社 | Image decoding apparatus and image decoding method |
KR100624429B1 (en) | 2003-07-16 | 2006-09-19 | 삼성전자주식회사 | A video encoding/ decoding apparatus and method for color image |
US7333544B2 (en) | 2003-07-16 | 2008-02-19 | Samsung Electronics Co., Ltd. | Lossless image encoding/decoding method and apparatus using inter-color plane prediction |
KR100718122B1 (en) | 2003-07-16 | 2007-05-15 | 삼성전자주식회사 | Lossless color image coding method and apparatus using inter-plane prediction |
JP4617644B2 (en) | 2003-07-18 | 2011-01-26 | ソニー株式会社 | Encoding apparatus and method |
US7327894B2 (en) * | 2003-11-07 | 2008-02-05 | Texas Instruments Incorporated | Image compression |
EP1538826A3 (en) * | 2003-12-05 | 2007-03-07 | Samsung Electronics Co., Ltd. | Color transformation method and apparatus |
CN100536573C (en) | 2004-01-16 | 2009-09-02 | 北京工业大学 | Inframe prediction method used for video frequency coding |
WO2006016406A1 (en) | 2004-08-12 | 2006-02-16 | Fujitsu Limited | Mobile communication network system |
US20060210156A1 (en) * | 2005-03-18 | 2006-09-21 | Sharp Laboratories Of America, Inc. | Video compression for raw rgb format using residual color transform |
US7792370B2 (en) * | 2005-03-18 | 2010-09-07 | Sharp Laboratories Of America, Inc. | Residual color transform for 4:2:0 RGB format |
CN101160972B (en) | 2005-04-13 | 2010-05-19 | 汤姆逊许可公司 | Luma and chroma decoding using a common predictor |
KR101246915B1 (en) * | 2005-04-18 | 2013-03-25 | 삼성전자주식회사 | Method and apparatus for encoding or decoding moving picture |
US7537961B2 (en) * | 2006-03-17 | 2009-05-26 | Panasonic Corporation | Conductive resin composition, connection method between electrodes using the same, and electric connection method between electronic component and circuit substrate using the same |
WO2007122503A2 (en) | 2006-04-24 | 2007-11-01 | Nokia Corporation | Reliable multicast/broadcast in a wireless network |
CN101102283A (en) | 2007-08-17 | 2008-01-09 | 杭州华三通信技术有限公司 | A method and device for optimizing unknown unicast forward at wireless access point |
JP5011321B2 (en) | 2009-02-09 | 2012-08-29 | 東洋製罐株式会社 | Method for forming multilayer container |
JP5927083B2 (en) | 2012-08-28 | 2016-05-25 | 株式会社荏原製作所 | Dressing process monitoring method and polishing apparatus |
JP6376685B2 (en) | 2014-05-16 | 2018-08-22 | 東レエンジニアリング株式会社 | Thin film forming apparatus and thin film forming method |
-
2006
- 2006-03-16 EP EP06738965.0A patent/EP1872588B1/en not_active Revoked
- 2006-03-16 EP EP06738444.6A patent/EP1872586B1/en active Active
- 2006-03-16 KR KR1020077023524A patent/KR101278324B1/en active IP Right Grant
- 2006-03-16 WO PCT/US2006/009587 patent/WO2006113003A1/en active Application Filing
- 2006-03-16 MX MX2007012706A patent/MX2007012706A/en active IP Right Grant
- 2006-03-16 ES ES06738477T patent/ES2901528T3/en active Active
- 2006-03-16 JP JP2008506469A patent/JP2008536450A/en active Pending
- 2006-03-16 EP EP06738477.6A patent/EP1869892B1/en active Active
- 2006-03-16 CN CN201310369407.7A patent/CN103458243B/en active Active
- 2006-03-16 MX MX2007012708A patent/MX2007012708A/en active IP Right Grant
- 2006-03-16 US US11/918,097 patent/US8767826B2/en active Active
- 2006-03-16 KR KR1020077023523A patent/KR101278308B1/en active IP Right Grant
- 2006-03-16 WO PCT/US2006/009429 patent/WO2006112997A1/en active Application Filing
- 2006-03-16 US US11/918,204 patent/US8724699B2/en active Active
- 2006-03-16 US US11/918,098 patent/US8718134B2/en active Active
- 2006-03-16 BR BRPI0609281-0A patent/BRPI0609281A2/en not_active Application Discontinuation
- 2006-03-16 WO PCT/US2006/009990 patent/WO2006113022A1/en active Application Filing
- 2006-03-16 EP EP06738487.5A patent/EP1872587B1/en active Active
- 2006-03-16 US US11/887,791 patent/US8750376B2/en active Active
- 2006-03-16 JP JP2008506467A patent/JP2008536448A/en active Pending
- 2006-03-16 US US11/918,027 patent/US8761251B2/en active Active
- 2006-03-16 MX MX2007012653A patent/MX2007012653A/en active IP Right Grant
- 2006-03-16 KR KR1020077023522A patent/KR101287721B1/en active IP Right Grant
- 2006-03-16 JP JP2008506473A patent/JP2008536452A/en active Pending
- 2006-03-16 BR BRPI0609124-5A patent/BRPI0609124A2/en not_active Application Discontinuation
- 2006-03-16 EP EP06738625.0A patent/EP1869893B1/en not_active Revoked
- 2006-03-16 BR BRPI0609239-0A patent/BRPI0609239A2/en not_active Application Discontinuation
- 2006-03-16 KR KR1020077023526A patent/KR101254355B1/en active IP Right Grant
- 2006-03-16 JP JP2008506466A patent/JP2008536447A/en active Pending
- 2006-03-16 BR BRPI0609236-5A patent/BRPI0609236A2/en not_active Application Discontinuation
- 2006-03-16 BR BRPI0609280-2A patent/BRPI0609280A2/en not_active Application Discontinuation
- 2006-03-16 KR KR1020077023528A patent/KR101254356B1/en active IP Right Grant
- 2006-03-16 MX MX2007012710A patent/MX2007012710A/en active IP Right Grant
- 2006-03-16 MX MX2007012705A patent/MX2007012705A/en active IP Right Grant
- 2006-03-16 WO PCT/US2006/009417 patent/WO2006112996A1/en active Application Filing
- 2006-03-16 WO PCT/US2006/009381 patent/WO2006112992A1/en active Application Filing
- 2006-03-16 ES ES06738444T patent/ES2805105T3/en active Active
- 2006-03-16 JP JP2008506465A patent/JP2008536446A/en active Pending
- 2006-03-24 MY MYPI20061301A patent/MY151482A/en unknown
- 2006-03-24 MY MYPI20061302A patent/MY162364A/en unknown
- 2006-03-24 MY MYPI2010002781A patent/MY167744A/en unknown
- 2006-03-24 MY MYPI20061300A patent/MY154995A/en unknown
- 2006-03-24 MY MYPI20061304A patent/MY158264A/en unknown
- 2006-03-24 MY MYPI20061303A patent/MY163150A/en unknown
-
2013
- 2013-06-19 JP JP2013129010A patent/JP2013214997A/en active Pending
- 2013-06-19 JP JP2013129003A patent/JP2013214994A/en active Pending
- 2013-06-19 JP JP2013129005A patent/JP2013214995A/en active Pending
- 2013-06-19 JP JP2013129008A patent/JP2013214996A/en active Pending
- 2013-06-26 JP JP2013133803A patent/JP2013240081A/en active Pending
-
2014
- 2014-03-21 US US14/221,998 patent/US20150271490A1/en not_active Abandoned
- 2014-03-21 US US14/222,111 patent/US20150071349A1/en not_active Abandoned
- 2014-12-24 JP JP2014261280A patent/JP2015084577A/en active Pending
-
2015
- 2015-01-08 JP JP2015002641A patent/JP2015109680A/en active Pending
- 2015-10-29 JP JP2015212434A patent/JP2016026457A/en not_active Ceased
- 2015-10-29 JP JP2015212435A patent/JP6538521B2/en active Active
- 2015-10-30 JP JP2015214278A patent/JP6345637B2/en active Active
-
2016
- 2016-04-21 JP JP2016085299A patent/JP6550010B2/en active Active
- 2016-12-29 US US15/394,254 patent/US10123046B2/en not_active Expired - Fee Related
-
2017
- 2017-09-06 JP JP2017170916A patent/JP6382413B2/en active Active
- 2017-10-18 JP JP2017201677A patent/JP6561101B2/en active Active
- 2017-12-20 JP JP2017243608A patent/JP2018057036A/en not_active Ceased
-
2018
- 2018-09-27 US US16/143,583 patent/US20190089985A1/en not_active Abandoned
-
2020
- 2020-06-09 US US16/896,596 patent/US20200374561A1/en not_active Abandoned
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200374561A1 (en) | Luma and chroma decoding using a common predictor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THOMSON LICENSING, FRANCE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:YU, HAOPING;REEL/FRAME:053155/0716 Effective date: 20070919 Owner name: INTERDIGITAL VC HOLDINGS, INC., DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:053158/0709 Effective date: 20180730 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |