US10999581B2 - Position based intra prediction - Google Patents
- Publication number
- US10999581B2 (granted from application US16/850,509)
- Authority
- US
- United States
- Prior art keywords
- samples
- chroma
- block
- neighboring
- luma
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- All of the following CPC classes fall under H04N19/00 (H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television): methods or arrangements for coding, decoding, compressing or decompressing digital video signals.
- H04N19/30—using hierarchical techniques, e.g. scalability
- H04N19/105—selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/11—selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/132—sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/149—data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
- H04N19/156—availability of hardware or computational resources, e.g. encoding based on power-saving criteria
- H04N19/157—assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—the coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/184—the coding unit being bits, e.g. of the compressed video stream
- H04N19/186—the coding unit being a colour or a chrominance component
- H04N19/189—characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/42—characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/463—embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
- H04N19/50—using predictive coding
- H04N19/593—using predictive coding involving spatial prediction techniques
- H04N19/70—characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- This patent document relates to video processing techniques, devices and systems.
- CCLM: cross-component linear model
- The described methods may be applied to both existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.
- In one representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on R chroma samples from a group of neighboring chroma samples, wherein the R chroma samples are selected from the group based on a position rule and R is greater than or equal to 2; and performing the conversion based on the determining.
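As an illustration of such a position rule, the sketch below picks R evenly spaced samples from a row (or column) of neighboring chroma samples. The function name, the even spacing, and the half-step offset away from the corners are assumptions for illustration, not the patent's exact rule:

```python
def select_samples_by_position(neighbors, r):
    """Pick r samples at evenly spaced positions from a list of
    neighboring chroma samples (illustrative position rule)."""
    n = len(neighbors)
    step = n // r          # spacing between selected positions
    start = step // 2      # half-step offset keeps picks off the corners
    return [neighbors[start + i * step] for i in range(r)]
```

For example, with eight neighbors and R = 4 this selects the samples at positions 1, 3, 5 and 7.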
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on chroma samples that are selected according to their positions, wherein the selected chroma samples are selected from a group of neighboring chroma samples, and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a current video block, a group of neighboring chroma samples used to derive a set of values for parameters of a linear model, wherein the width and the height of the current video block are W and H, respectively, and wherein the group of neighboring chroma samples comprises at least one sample that is located beyond the 2×W above-neighboring chroma samples or the 2×H left-neighboring chroma samples; and performing, based on the linear model, a conversion between the current video block and a coded representation of a video including the current video block.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, multiple sets of parameters, wherein each set of parameters defines a cross-component linear model (CCLM) and is derived from a corresponding group of chroma samples at corresponding chroma sample positions; determining, based on the multiple sets of parameters, parameters for a final CCLM; and performing the conversion based on the final CCLM.
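The combination of the per-group parameter sets into a final CCLM could, for example, be a simple average. This is a minimal sketch under that assumption; the exact combination rule is left open by the text:

```python
def combine_cclm_params(param_sets):
    """Combine several (alpha, beta) sets, each derived from its own
    group of chroma sample positions, into one final model by simple
    averaging (one plausible combination rule, assumed here)."""
    n = len(param_sets)
    alpha = sum(a for a, _ in param_sets) / n
    beta = sum(b for _, b in param_sets) / n
    return alpha, beta
```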
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on maximum and minimum values of chroma and luma samples of N groups of chroma and luma samples selected from neighboring luma and chroma samples of the current video block; and performing the conversion using the CCLM.
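A minimal sketch of deriving the maximum and minimum anchor values from the selected (luma, chroma) pairs, assuming a VVC-style rule that averages the two smallest-luma and the two largest-luma pairs (the averaging rule and function name are assumptions):

```python
def anchor_points(pairs):
    """pairs: list of (luma, chroma) from the selected neighbors.
    Sort by luma, then average the two smallest and the two largest
    pairs to form robust minimum and maximum anchor points."""
    s = sorted(pairs)
    (l0, c0), (l1, c1) = s[0], s[1]     # two smallest-luma pairs
    (l2, c2), (l3, c3) = s[-2], s[-1]   # two largest-luma pairs
    mn = ((l0 + l1) / 2.0, (c0 + c1) / 2.0)
    mx = ((l2 + l3) / 2.0, (c2 + c3) / 2.0)
    return mn, mx
```

The averaged anchor points are then used as the two (luma, chroma) pairs that define the linear model.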
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model that are completely determinable by two chroma samples and the corresponding two luma samples; and performing the conversion based on the determining.
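With exactly two (luma, chroma) pairs, the line pred_C = α·rec_L + β is fully determined. A minimal sketch (the flat-model fallback for equal luma values is an assumption for the degenerate case):

```python
def cclm_params(l0, c0, l1, c1):
    """Derive (alpha, beta) of the line pred_c = alpha * rec_l + beta
    passing through the two (luma, chroma) pairs."""
    if l1 == l0:
        return 0.0, (c0 + c1) / 2.0  # degenerate case: flat model
    alpha = (c1 - c0) / (l1 - l0)
    beta = c0 - alpha * l0
    return alpha, beta
```

For instance, the pairs (100, 50) and (200, 100) yield α = 0.5 and β = 0.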
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model using a parameter table whose entries are retrieved according to two chroma sample values and two luma sample values; and performing the conversion based on the determining.
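One way such a parameter table might work is a reciprocal lookup that turns the division in α = ΔC/ΔL into a multiply and a shift. The table size, the Q16 fixed-point precision, and the function names below are illustrative assumptions:

```python
SHIFT = 16
# Reciprocal table: LUT[d] approximates (1 << SHIFT) / d, so the
# division by the luma difference becomes a table lookup.
LUT = [0] + [(1 << SHIFT) // d for d in range(1, 512)]

def alpha_q16(diff_c, diff_l):
    """alpha = diff_c / diff_l in Q16 fixed point, without a divide."""
    return diff_c * LUT[diff_l]

def predict_chroma(luma, a_q16, beta):
    """Apply the fixed-point model: (alpha * luma) >> SHIFT + beta."""
    return ((luma * a_q16) >> SHIFT) + beta
```

Division-free derivation like this is a common hardware-friendly design choice, since per-block integer division is expensive.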
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a final prediction P(x, y) of a chroma sample at a position (x, y) in the current video block as a combination of prediction results of multiple cross-component linear models (MCCLMs), wherein the MCCLMs are selected based on the position (x, y) of the chroma sample; and performing the conversion based on the final prediction.
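A minimal sketch of position-dependent model selection, assuming a simple diagonal split between a model derived from above neighbors (LM-A) and one derived from left neighbors (LM-L). The x &lt; y rule is illustrative, not the patent's exact partition:

```python
def mcclm_predict(x, y, luma, lm_a, lm_l):
    """lm_a: (alpha, beta) from above neighbors (LM-A);
    lm_l: (alpha, beta) from left neighbors (LM-L).
    Samples below the diagonal sit closer to the left column,
    so they use LM-L; the others use LM-A (illustrative split)."""
    alpha, beta = lm_l if x < y else lm_a
    return alpha * luma + beta
```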
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes performing, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a first determination regarding whether a first cross-component linear model (CCLM) that uses only left-neighboring samples is used for predicting samples of the current video block and/or a second determination regarding whether a second CCLM that uses only above-neighboring samples is used for predicting samples of the current video block; and performing the conversion based on the first determination and/or the second determination.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video and a coded representation of the video, a context that is used to code a flag using arithmetic coding in the coded representation of the current video block, wherein the context is based on whether a top-left neighboring block of the current video block is coded using a cross-component linear model (CCLM) prediction mode; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video and a coded representation of the video, a coding order for one or more indications of a direct intra prediction mode (DM mode) and a linear intra prediction mode (LM mode) based on a coding mode of one or more neighboring blocks of the current video block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on refined chroma and luma samples of the current video block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction by selecting neighboring samples based on the position of a largest or a smallest neighboring sample; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method includes determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on a main color component and a dependent color component, the main color component selected as one of a luma color component and a chroma color component and the dependent color component selected as the other of the luma color component and the chroma color component; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: performing downsampling on chroma and luma samples of a neighboring block of the current video block; determining, for a conversion between the current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on the downsampled chroma and luma samples obtained from the downsampling; and performing the conversion based on the determining.
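For 4:2:0 content, one luma downsampling filter of this kind is the six-tap filter used for CCLM in JEM, which maps the luma grid to a chroma-grid position. A sketch (boundary handling at the left block edge, where x = 0, is omitted for brevity):

```python
def downsample_luma(rec, x, y):
    """rec: 2-D list of reconstructed luma samples (row-major).
    Six-tap 4:2:0 downsampling to chroma position (x, y):
    weight 2 on the two center-column samples, weight 1 on the
    four side-column samples, +4 for rounding, then >> 3."""
    return (2 * rec[2*y][2*x] + 2 * rec[2*y + 1][2*x]
            + rec[2*y][2*x - 1] + rec[2*y][2*x + 1]
            + rec[2*y + 1][2*x - 1] + rec[2*y + 1][2*x + 1]
            + 4) >> 3
```

The weights sum to 8, so a flat luma region passes through unchanged.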
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or more chroma samples from a group of neighboring chroma samples, wherein the two or more chroma samples are selected based on a coding mode of the current video block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on chroma samples that are selected based on W available above-neighboring samples, W being an integer; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on chroma samples that are selected based on H available left-neighboring samples of the current video block, H being an integer; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or four chroma samples and/or corresponding luma samples; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: selecting, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, chroma samples based on a position rule, the chroma samples being used to derive parameters of a cross-component linear model (CCLM); and performing the conversion based on the selecting, wherein the position rule specifies to select the chroma samples that are located within an above row and/or a left column of the current video block.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, positions at which luma samples are downsampled, wherein the downsampled luma samples are used together with chroma samples to determine parameters of a cross-component linear model (CCLM), and wherein the downsampled luma samples are at positions corresponding to the positions of the chroma samples used to derive the parameters; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on a coding condition associated with the current video block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, whether to derive maximum values and/or minimum values of a luma component and a chroma component that are used to derive parameters of a cross-component linear model (CCLM) based on availability of a left-neighboring block and an above-neighboring block of the current video block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a coding tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and performing the conversion based on the determining.
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a local illumination compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on positions of the N neighboring samples; and performing the conversion based on the determining, wherein the LIC tool uses a linear model of illumination changes in the current video block during the conversion.
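A minimal sketch of deriving the LIC parameters a and b over the N selected neighboring samples, assuming an ordinary least-squares fit (least squares is one common derivation; a two-point min/max rule is another, and the choice here is an assumption):

```python
def lic_params(cur, ref):
    """Fit cur[i] ~= a * ref[i] + b over the N selected neighboring
    samples of the current block (cur) and reference block (ref)."""
    n = len(cur)
    sx, sy = sum(ref), sum(cur)
    sxx = sum(r * r for r in ref)
    sxy = sum(r * c for r, c in zip(ref, cur))
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, (sy - sx) / n  # flat reference: offset-only model
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b
```

For samples related by cur = 2·ref + 1, the fit recovers a = 2 and b = 1 exactly.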
- In another representative aspect, the disclosed technology may be used to provide a method for video processing.
- The method comprises determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples; and performing the conversion based on the determining, wherein some of the chroma samples are obtained by a padding operation, and the chroma samples and the corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and corresponding luma samples.
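A minimal sketch of the G0/G1 grouping, assuming the four (luma, chroma) pairs have already been padded when fewer neighbors are available, and that each array is averaged with rounding to produce the model's two anchor points:

```python
def group_anchor_points(pairs):
    """pairs: exactly four (luma, chroma) pairs. G0 keeps the two
    smaller-luma pairs, G1 the two larger; each array is averaged
    with rounding to yield one anchor point per group."""
    assert len(pairs) == 4
    s = sorted(pairs)
    g0, g1 = s[:2], s[2:]
    def avg(g):
        return ((g[0][0] + g[1][0] + 1) >> 1,
                (g[0][1] + g[1][1] + 1) >> 1)
    return avg(g0), avg(g1)
```

The two averaged points can then feed a two-point line derivation, as with the min/max anchors described earlier.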
- The above-described method may be embodied in the form of processor-executable code and stored in a computer-readable program medium.
- A device that is configured or operable to perform the above-described method is also disclosed.
- The device may include a processor that is programmed to implement this method.
- A video decoder apparatus may implement a method as described herein.
- FIG. 1 shows an example of locations of samples used for the derivation of the weights of the linear model used for cross-component prediction.
- FIG. 2 shows an example of classifying neighboring samples into two groups.
- FIG. 3A shows an example of a chroma sample and its corresponding luma samples.
- FIG. 3B shows an example of down filtering for the cross-component linear model (CCLM) in the Joint Exploration Model (JEM).
- FIGS. 4A and 4B show examples of only top-neighboring and only left-neighboring samples used for prediction based on a linear model, respectively.
- FIG. 5 shows an example of a straight line between minimum and maximum luma values as a function of the corresponding chroma samples.
- FIG. 6 shows an example of a current chroma block and its neighboring samples.
- FIG. 7 shows an example of different parts of a chroma block predicted by a linear model using only left-neighboring samples (LM-L) and a linear model using only above-neighboring samples (LM-A).
- LM-L linear model using only left-neighboring samples
- LM-A linear model using only above-neighboring samples
- FIG. 8 shows an example of a top-left neighboring block.
- FIG. 9 shows an example of samples to be used to derive a linear model.
- FIG. 10 shows an example of left and below-left columns and above and above-right rows relative to a current block.
- FIG. 11 shows an example of a current block and its reference samples.
- FIG. 12 shows examples of two neighboring samples when both left and above neighboring reference samples are available.
- FIG. 13 shows examples of two neighboring samples when only above neighboring reference samples are available.
- FIG. 14 shows examples of two neighboring samples when only left neighboring reference samples are available.
- FIG. 15 shows examples of four neighboring samples when both left and above neighboring reference samples are available.
- FIG. 16 shows an example of lookup tables used in LM derivations.
- FIG. 17 shows an example of an LM parameter derivation process with 64 entries.
- FIG. 18 shows a flowchart of an example method for video processing based on some implementations of the disclosed technology.
- FIGS. 19A and 19B show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 20A and 20B show flowcharts of other example methods for video processing based on some implementations of the disclosed technology.
- FIG. 21 shows a flowchart of another example method for video processing based on some implementations of the disclosed technology.
- FIG. 22 shows a flowchart of an example method for video processing based on some implementations of the disclosed technology.
- FIGS. 23A and 23B show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 24A-24E show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 25A and 25B show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 26A and 26B show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 27A and 27B show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 28A-28C show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 29A-29C show flowcharts of example methods for video processing based on some implementations of the disclosed technology.
- FIGS. 30A and 30B are block diagrams of examples of hardware platforms for implementing a visual media decoding or a visual media encoding technique described in the present document.
- FIGS. 31A and 31B show examples of an LM parameter derivation process with four entries.
- FIG. 31A shows an example when both above and left neighboring samples are available and
- FIG. 31B shows an example when only above neighboring samples are available and top-right is not available.
- FIG. 32 shows examples of neighboring samples to derive LIC parameters.
- Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency.
- a video codec converts uncompressed video to a compressed format or vice versa.
- the compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.
- HEVC High Efficiency Video Coding
- VVC Versatile Video Coding
- Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance.
- Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.
- Cross-component prediction is a form of the chroma-to-luma prediction approach that has a well-balanced trade-off between complexity and compression efficiency improvement.
- CCLM Cross-component linear model
- pred_C(i, j) represents the predicted chroma samples in a CU and rec_L′(i, j) represents the downsampled reconstructed luma samples of the same CU for color formats 4:2:0 or 4:2:2, while rec_L′(i, j) represents the reconstructed luma samples of the same CU for color format 4:4:4.
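As a brief illustration (not part of the patent text; the function name and the 2-D list layout are invented for this sketch), the per-sample CCLM prediction pred_C(i, j) = α·rec_L′(i, j) + β can be applied as:

```python
def cclm_predict(rec_luma_ds, alpha, beta):
    """Apply pred_C(i, j) = alpha * rec_L'(i, j) + beta per sample.

    rec_luma_ds: 2-D list of down-sampled reconstructed luma samples.
    """
    return [[alpha * s + beta for s in row] for row in rec_luma_ds]
```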
- CCLM parameters α and β are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block as follows:
- L(n) represents the down-sampled (for color formats 4:2:0 or 4:2:2) or original (for color format 4:4:4) top and left neighboring reconstructed luma samples
- C(n) represents the top and left neighboring reconstructed chroma samples
- the value of N is equal to twice the minimum of the width and height of the current chroma coding block.
- FIG. 1 shows the location of the left and above reconstructed samples and the sample of the current block involved in the CCLM mode.
- this regression error minimization computation is performed as part of the decoding process, not just as an encoder search operation, so no syntax is used to convey the ⁇ and ⁇ values.
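A minimal sketch of this least-squares derivation follows (illustrative Python with floating-point arithmetic; a real codec uses integer arithmetic, and the function name is invented here):

```python
def derive_cclm_params_lms(luma, chroma):
    # Minimize the sum over n of (C(n) - (alpha * L(n) + beta))^2, where
    # L(n)/C(n) are the top and left neighboring reconstructed samples.
    n = len(luma)
    sum_l, sum_c = sum(luma), sum(chroma)
    sum_ll = sum(l * l for l in luma)
    sum_lc = sum(l * c for l, c in zip(luma, chroma))
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0, sum_c / n          # flat luma: predict the chroma mean
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta
```

Because the decoder repeats the same minimization over reconstructed neighbors, α and β never need to be signalled in the bitstream.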
- the CCLM prediction mode also includes prediction between the two chroma components, e.g., the Cr (red-difference) component is predicted from the Cb (blue-difference) component.
- resi_Cb′(i, j) represents the reconstructed Cb residual sample at position (i, j).
- the scaling factor α may be derived in a similar way as in the CCLM luma-to-chroma prediction. The only difference is the addition of a regression cost relative to a default α value in the error function, so that the derived scaling factor is biased towards a default value of −0.5, as follows:
- Cb(n) represents the neighboring reconstructed Cb samples
- Cr(n) represents the neighboring reconstructed Cr samples
- λ is equal to Σ(Cb(n)·Cb(n))>>9.
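The biased Cb-to-Cr derivation can be sketched as follows (illustrative only; this combines the least-squares form with the λ bias term described above, and the function name is invented):

```python
def derive_cb_to_cr_alpha(cb, cr):
    # Least-squares alpha with a regression cost lam that biases the
    # result toward the default value -0.5.
    n = len(cb)
    sum_b, sum_r = sum(cb), sum(cr)
    sum_bb = sum(b * b for b in cb)
    sum_br = sum(b * r for b, r in zip(cb, cr))
    lam = sum_bb >> 9                  # lambda = sum(Cb(n)*Cb(n)) >> 9
    num = n * sum_br - sum_b * sum_r + lam * (-0.5)
    den = n * sum_bb - sum_b * sum_b + lam
    return num / den if den else -0.5
```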
- the CCLM luma-to-chroma prediction mode is added as one additional chroma intra prediction mode.
- one more RD cost check for the chroma components is added for selecting the chroma intra prediction mode.
- intra prediction modes other than the CCLM luma-to-chroma prediction mode are used for the chroma components of a CU
- CCLM Cb-to-Cr prediction is used for Cr component prediction.
- the single model CCLM mode employs one linear model for predicting the chroma samples from the luma samples for the whole CU, while in MMLM, there can be two models.
- neighboring luma samples and neighboring chroma samples of the current block are classified into two groups, and each group is used as a training set to derive a linear model (i.e., a particular α and β are derived for a particular group). Furthermore, the samples of the current luma block are also classified based on the same rule used for the classification of the neighboring luma samples.
- the reconstructed luma block needs to be downsampled to match the size of the chroma signal.
- the downsampling assumes the “type 0” phase relationship as shown in FIG. 3A for the positions of the chroma samples relative to the positions of the luma samples, e.g., collocated sampling horizontally and interstitial sampling vertically.
- the exemplary 6-tap downsampling filter defined in (6) is used as the default filter for both the single model CCLM mode and the multiple model CCLM mode.
- the encoder can alternatively select one of four additional luma downsampling filters to be applied for prediction in a CU, and send a filter index to indicate which of these is used.
- the four selectable luma downsampling filters for the MMLM mode as shown in FIG.
- MDLM multi-directional LM
- two additional CCLM modes are proposed: LM-A, where the linear model parameters are derived only based on the top-neighboring (or above-neighboring) samples as shown in FIG. 4A
- LM-L where the linear model parameters are derived only based on the left-neighboring samples as shown in FIG. 4B .
- This existing implementation proposes to replace the LMS algorithm for the linear model parameters α and β with a straight-line equation, the so-called two-point method.
- the two points A and B (each a pair of luma and chroma values) are the minimum and maximum values inside the set of neighboring luma samples, as depicted in FIG. 5.
- the division operation needed in the derivation of α is avoided and replaced by multiplications and shifts as below:
- S is set equal to iShift
- α is set equal to a
- β is set equal to b.
- g_aiLMDivTableLow and g_aiLMDivTableHigh are two tables each with 512 entries, wherein each entry stores a 16-bit integer.
- This implementation is also simpler than the current VTM implementation because shift S always has the same value.
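A sketch of the division-free two-point derivation follows (the table size, shift value, and names are illustrative; the patent's tables g_aiLMDivTableLow and g_aiLMDivTableHigh differ in layout):

```python
SHIFT = 16                              # a fixed shift S for every block
# Rounded reciprocal table: RECIP[d] ~= 2**SHIFT / d for d = 1..512.
RECIP = [0] + [((1 << SHIFT) + d // 2) // d for d in range(1, 513)]

def two_point_params(min_l, min_c, max_l, max_c):
    # Fit the straight line through (minL, minC) and (maxL, maxC); the
    # division by (maxL - minL) becomes a table lookup plus a shift.
    diff = max_l - min_l
    if diff <= 0:
        return 0, min_c
    alpha = (max_c - min_c) * RECIP[diff]       # alpha scaled by 2**SHIFT
    beta = min_c - ((alpha * min_l) >> SHIFT)
    return alpha, beta

def two_point_predict(luma, alpha, beta):
    return ((alpha * luma) >> SHIFT) + beta
```

Because SHIFT is a constant, every block uses the same shift S, which is the simplification noted above.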
- CCLM as in JEM is adopted in VTM-2.0, but MM-CCLM in JEM is not adopted in VTM-2.0. MDLM and simplified CCLM have been adopted into VTM-3.0.
- LIC Local Illumination Compensation
- CU inter-mode coded coding unit
- a least square error method is employed to derive the parameters a and b by using the neighbouring samples of the current CU and their corresponding reference samples. More specifically, as illustrated in FIG. 32 , the subsampled (2:1 subsampling) neighbouring samples of the CU and the corresponding pixels (identified by motion information of the current CU or sub-CU) in the reference picture are used. The IC parameters are derived and applied for each prediction direction separately.
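This LIC parameter derivation under 2:1 subsampling can be sketched as follows (names are invented; the model is cur ≈ a·ref + b, fit with the same least-squares form as above):

```python
def derive_lic_params(cur_neigh, ref_neigh, step=2):
    # cur_neigh: neighbouring samples of the current CU.
    # ref_neigh: corresponding samples in the reference picture.
    xs, ys = ref_neigh[::step], cur_neigh[::step]   # 2:1 subsampling
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    den = n * sxx - sx * sx
    if den == 0:
        return 1.0, (sy - sx) / n       # flat reference: offset-only model
    a = (n * sxy - sx * sy) / den
    b = (sy - a * sx) / n
    return a, b
```

In bi-prediction, this fit would be repeated once per prediction direction, matching the per-direction derivation described above.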
- the LIC flag is copied from neighbouring blocks, in a way similar to motion information copy in merge mode; otherwise, an LIC flag is signalled for the CU to indicate whether LIC applies or not.
- When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied for a CU.
- MR-SAD mean-removed sum of absolute difference
- MR-SATD mean-removed sum of absolute Hadamard-transformed difference
- LIC is disabled for the entire picture when there is no obvious illumination change between a current picture and its reference pictures.
- histograms of a current picture and every reference picture of the current picture are calculated at the encoder. If the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, LIC is disabled for the current picture; otherwise, LIC is enabled for the current picture.
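The picture-level decision can be sketched as follows (the bin count, threshold semantics, and names are assumptions of this sketch):

```python
def lic_enabled_for_picture(cur_pic, ref_pics, threshold, bins=32, max_val=255):
    # LIC is disabled for the picture only if the histogram difference
    # to *every* reference picture is below the threshold.
    def hist(samples):
        h = [0] * bins
        for s in samples:
            h[min(s * bins // (max_val + 1), bins - 1)] += 1
        return h
    hc = hist(cur_pic)
    for ref in ref_pics:
        diff = sum(abs(a - b) for a, b in zip(hc, hist(ref)))
        if diff >= threshold:
            return True                 # obvious illumination change
    return False
```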
- Embodiments of the presently disclosed technology overcome drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies and lower computational complexity.
- Simplified linear model derivations for cross-component prediction, which may enhance both existing and future video coding standards, are elucidated in the following examples described for various implementations.
- the examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.
- the term “LM method” includes, but is not limited to, the LM mode in JEM or VTM, the MMLM mode in JEM, the left-LM mode which uses only left neighboring samples to derive the linear model, the above-LM mode which uses only above neighboring samples to derive the linear model, and other methods which utilize luma reconstruction samples to derive chroma prediction blocks. All LM modes which are neither LM-L nor LM-A are called normal LM modes.
- SignShift(x, s) is defined as
- off is an integer such as 0 or 2^(s−1).
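Assuming the usual sign-symmetric rounding shift (this completion is an assumption; the patent's exact formula is not reproduced above), SignShift can be sketched as:

```python
def sign_shift(x, s, off=0):
    # Right shift with rounding offset `off` (typically 0 or 2**(s - 1)),
    # mirrored around zero so negative values round symmetrically.
    if x >= 0:
        return (x + off) >> s
    return -((-x + off) >> s)
```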
- the height and width of a current chroma block are denoted H and W, respectively.
- FIG. 6 shows an example of neighboring samples of the current chroma block. Let the coordinate of the top-left sample of the current chroma block be denoted as (x, y). Then, the neighboring chroma samples (as shown in FIG. 6 ) are denoted as:
- the parameters α and β in LM methods are derived from chroma samples at two or more specific positions.
- Sets of parameters in CCLM mode can be firstly derived and then combined to form the final linear model parameter used for coding one block.
- α1 and β1 are derived from a group of chroma samples at specific positions denoted as Group 1
- α2 and β2 are derived from a group of chroma samples at specific positions denoted as Group 2
- ..., αN and βN are derived from a group of chroma samples at specific positions denoted as Group N
- the final α and β can be derived from (α1, β1), ..., (αN, βN).
- the two-point method can derive α and β with the input as
- bit depths of luma samples and chroma samples are denoted BL and BC.
- One or more simplifications for this implementation include:
- One single chroma block may use multiple linear models, and the selection among the multiple linear models is dependent on the position of the chroma samples within the chroma block.
- the neighboring samples including chroma samples and their corresponding luma samples, which may be down-sampled
- their corresponding chroma values are denoted as MaxC k and MinC k , respectively.
- a flag is signaled to indicate whether CCLM mode is applied.
- the context used in arithmetic coding to code the flag may depend on whether the top-left neighboring block as shown in FIG. 8 applies CCLM mode or not.
- a first context is used if the top-left neighboring block applies CCLM mode; and a second context is used if the top-left neighboring block does not apply CCLM mode.
- Indications or codewords of DM and LM modes may be coded in different orders from sequence to sequence/picture to picture/tile to tile/block to block.
- samples may be located beyond the range of 2×W above neighboring samples or 2×H left neighboring samples as shown in FIG. 6.
- the chroma neighboring samples and their corresponding luma samples are down-sampled before deriving the linear model parameters α and β as disclosed in Examples 1-7.
- the width and height of the current chroma block are W and H.
- Neighboring downsampled/originally reconstructed samples and/or downsampled/originally reconstructed samples may be further refined before being used in the linear model prediction process or cross-color component prediction process.
- luma and chroma may be switched.
- luma color component may be replaced by the main color component (e.g., G)
- chroma color component may be replaced by dependent color component (e.g., B or R).
- Selection of locations of chroma samples (and/or corresponding luma samples) may depend on the coded mode information.
- top-right row is available, or when 1st top-right sample is available.
- luma and chroma may be switched.
- luma color component may be replaced by the main color component (e.g., G)
- chroma color component may be replaced by dependent color component (e.g., B or R).
- the selected chroma samples shall be located within the above row (i.e., with W samples) as depicted in FIG. 10 , and/or the left column (i.e., with H samples) wherein W and H are the width and height of the current block.
- Whether to derive the maximum/minimum values of luma and chroma components used to derive CCLM parameters may depend on the availability of left and above neighbours. For example, the maximum/minimum values for luma and chroma components used to derive CCLM parameters may not be derived if both the left and above neighbouring blocks are unavailable.
- the proposed method to derive the parameters used in CCLM may be used to derive the parameters used in LIC or other coding tools that rely on a linear model.
- cross-component prediction mode is proposed wherein the chroma samples are predicted with corresponding reconstructed luma samples according to the prediction model, as shown in Eq. 12.
- Pred C (x, y) denotes a prediction sample of chroma.
- α and β are two model parameters.
- Rec′ L (x, y) is a down-sampled luma sample.
- Pred_C(x, y) = α·Rec_L′(x, y) + β, (12)
- a six-tap filter is introduced for the luma down-sampling process for block A in FIG. 11, as shown in Eq. 13.
- Rec_L′(x, y) = (2·Rec_L(2x, 2y) + 2·Rec_L(2x, 2y+1) + Rec_L(2x−1, 2y) + Rec_L(2x+1, 2y) + Rec_L(2x−1, 2y+1) + Rec_L(2x+1, 2y+1) + 4) >> 3. (13)
- the above surrounding luma reference samples shaded in FIG. 11 are down-sampled with a 3-tap filter, as shown in Eq. 14.
- the left surrounding luma reference samples are down-sampled according to Eq. 15. If the left or above samples are not available, a 2-tap filter defined in Eq. 16 and Eq. 17 will be used.
- Rec_L′(x, y) = (2·Rec_L(2x, 2y) + Rec_L(2x−1, 2y) + Rec_L(2x+1, 2y)) >> 2 (14)
- Rec_L′(x, y) = (2·Rec_L(2x, 2y) + Rec_L(2x, 2y+1) + Rec_L(2x, 2y−1)) >> 2 (15)
- Rec_L′(x, y) = (3·Rec_L(2x, 2y) + Rec_L(2x, 2y+1) + 2) >> 2 (16)
- the surrounding luma reference samples are down-sampled to the same size as the chroma reference samples.
- the size is denoted as width and height.
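The interior 6-tap filter of Eq. 13 and the above-row 3-tap filter of Eq. 14 can be sketched as follows (rec is indexed rec[row][col]; function names are illustrative):

```python
def downsample_6tap(rec, x, y):
    # Eq. 13: 6-tap [1 2 1; 1 2 1]/8 filter over a 2x2 luma neighbourhood.
    return (2 * rec[2 * y][2 * x] + 2 * rec[2 * y + 1][2 * x]
            + rec[2 * y][2 * x - 1] + rec[2 * y][2 * x + 1]
            + rec[2 * y + 1][2 * x - 1] + rec[2 * y + 1][2 * x + 1]
            + 4) >> 3

def downsample_above_3tap(rec, x, y):
    # Eq. 14: 3-tap horizontal filter for the above reference row.
    return (2 * rec[2 * y][2 * x]
            + rec[2 * y][2 * x - 1] + rec[2 * y][2 * x + 1]) >> 2
```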
- a look-up table is applied to avoid the division operation when deriving α and β. The derivation method is illustrated below.
- FIG. 16 shows an example of lookup tables with 128, 64 and 32 entries and each entry is represented by 16 bits.
- the 2-point LM derivation process is simplified as shown in Table 1 and FIG. 17 with 64 entries. It should be noted that the first entry may not be stored into the table.
- although each entry in the exemplary tables is designed to have 16 bits, it can easily be transformed to a number with fewer bits (such as 8 bits or 12 bits).
- maxLuma and minLuma may indicate the maximum and minimum luma sample values of selected positions. Alternatively, they may indicate a function of the maximum and minimum luma sample values of selected positions, such as an average. When only 4 positions are selected, they may also indicate the average of the two larger luma values and the average of the two smaller luma values. Further note that in FIG. 17, maxChroma and minChroma represent the chroma values corresponding to maxLuma and minLuma.
- the block width and height of current chroma block is W and H, respectively.
- the top-left coordinate of current chroma block is [0, 0].
- the two left samples' coordinates are [−1, floor(H/4)] and [−1, floor(3*H/4)].
- the selected samples are painted in red as depicted in FIG. 31A .
- the 4 samples are sorted according to luma sample intensity and classified into 2 groups.
- the two larger samples and two smaller samples are respectively averaged.
- Cross component prediction model is derived with the 2 averaged points.
- the maximum and minimum value of the four samples are used to derive the LM parameters.
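The sort-and-average variant described above can be sketched as follows (the pair layout and function name are illustrative):

```python
def derive_params_four_samples(pairs):
    # pairs: four (luma, chroma) tuples from the selected neighbour
    # positions. Sort by luma, average the two smaller and the two
    # larger, then fit a line through the two averaged points.
    s = sorted(pairs, key=lambda p: p[0])
    min_l = (s[0][0] + s[1][0]) / 2
    min_c = (s[0][1] + s[1][1]) / 2
    max_l = (s[2][0] + s[3][0]) / 2
    max_c = (s[2][1] + s[3][1]) / 2
    if max_l == min_l:
        return 0.0, min_c               # degenerate: flat luma
    alpha = (max_c - min_c) / (max_l - min_l)
    beta = min_c - alpha * min_l
    return alpha, beta
```

Using the plain maximum and minimum of the four samples instead, as the alternative above describes, would only change how the two fitting points are chosen.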
- the four selected above samples' coordinates are [W/8, −1], [W/8+W/4, −1], [W/8+2*W/4, −1], and [W/8+3*W/4, −1].
- the selected samples are painted in red as depicted in FIG. 31B
- the four selected left samples' coordinates are [−1, H/8], [−1, H/8+H/4], [−1, H/8+2*H/4], and [−1, H/8+3*H/4].
- W′ is the number of available above neighbouring samples, which can be 2*W.
- the four selected above samples' coordinates are [W′/8, −1], [W′/8+W′/4, −1], [W′/8+2*W′/4, −1], and [W′/8+3*W′/4, −1].
- H′ is the number of available left neighbouring samples, which can be 2*H.
- the four selected left samples' coordinates are [−1, H′/8], [−1, H′/8+H′/4], [−1, H′/8+2*H′/4], and [−1, H′/8+3*H′/4].
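The W′/8 + k*W′/4 pattern (and its H′ analogue) can be sketched as follows (coordinates are (x, y) pairs with −1 marking the neighbouring row or column; function names are invented):

```python
def select_above_positions(w_avail):
    # Four above positions: x = W'/8 + k * W'/4 for k = 0..3, row y = -1.
    return [(w_avail // 8 + k * (w_avail // 4), -1) for k in range(4)]

def select_left_positions(h_avail):
    # Four left positions: y = H'/8 + k * H'/4 for k = 0..3, column x = -1.
    return [(-1, h_avail // 8 + k * (h_avail // 4)) for k in range(4)]
```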
- the number of available neighbouring chroma samples on the top and top-right numTopSamp and the number of available neighbouring chroma samples on the left and left-below nLeftSamp are derived as follows:
- FIG. 18 shows a flowchart of an exemplary method for video processing.
- the method 1800 includes, at step 1802 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on R chroma samples from a group of neighboring chroma samples, wherein the R chroma samples are selected from the group based on a position rule and R is greater than or equal to 2.
- the method 1800 further includes, at step 1804 , performing the conversion based on the determining.
- FIG. 19A shows a flowchart of an exemplary method for video processing.
- the method 1900 includes, at step 1902 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on selected chroma samples based on positions of the chroma samples, wherein the selected chroma samples are selected from a group of neighboring chroma samples.
- the method 1900 further includes, at step 1904, performing the conversion based on the determining.
- FIG. 19B shows a flowchart of an exemplary method for video processing.
- the method 1910 includes, at step 1912 , determining, for a current video block, a group of neighboring chroma samples used to derive a set of values for parameters of a linear model, wherein a width and a height of the current video block is W and H, respectively, and wherein the group of neighboring chroma samples comprises at least one sample that is located beyond 2 ⁇ W above neighboring chroma samples or 2 ⁇ H left neighboring chroma samples.
- the method 1910 further includes, at step 1914 , performing, based on the linear model, a conversion between the current video block and a coded representation of a video including the current video block.
- FIG. 20A shows a flowchart of an exemplary method for video processing.
- the method 2000 includes, at step 2002 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, multiple sets of parameters, wherein each set of parameters defines a cross-component linear model (CCLM) and is derived from a corresponding group of chroma samples at corresponding chroma sample positions.
- the method 2000 further includes, at step 2004 , determining, based on the multiple sets of parameters, parameters for a final CCLM.
- the method 2000 further includes, at step 2006 , performing the conversion based on the final CCLM.
- CCLM cross-component linear model
- FIG. 20B shows a flowchart of an exemplary method for video processing.
- the method 2010 includes, at step 2012 , determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on maximum and minimum values of chroma and luma samples of N groups of chroma and luma samples selected from neighboring luma and chroma samples of the current video block.
- the method 2010 further includes, at step 2014 , performing the conversion using the CCLM.
- CCLM cross-component linear model
- FIG. 21 shows a flowchart of an exemplary method for video processing.
- the method 2100 includes, at step 2102 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model that are completely determinable by two chroma samples and corresponding two luma samples.
- the method 2100 further includes, at step 2104 , performing the conversion based on the determining.
- FIG. 22 shows a flowchart of an exemplary method for video processing.
- the method 2200 includes, at step 2202 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model using a parameter table whose entries are retrieved according to two chroma sample values and two luma sample values.
- the method 2200 further includes, at step 2204, performing the conversion based on the determining.
- FIG. 23A shows a flowchart of an exemplary method for video processing.
- the method 2310 includes, at step 2312 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a final prediction P(x, y) of a chroma sample at a position (x, y) in the current video block as a combination of prediction results of multiple cross-component linear models (MCCLMs), wherein the MCCLMs are selected based on the position (x, y) of the chroma sample.
- MCCLMs multiple cross-component linear models
- FIG. 24A shows a flowchart of an exemplary method for video processing.
- the method 2410 includes, at step 2412 , determining, for a conversion between a current video block of a video and a coded representation of the video, a context that is used to code a flag using arithmetic coding in the coded representation of the current video block, wherein the context is based on whether a top-left neighboring block of the current video block is coded using a cross-component linear model (CCLM) prediction mode.
- the method 2410 includes, at step 2414 , performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 24B shows a flowchart of an exemplary method for video processing.
- the method 2420 includes, at step 2422 , determining, for a conversion between a current video block of a video and a coded representation of the video, a coding order for one or more indications of a direct intra prediction mode (DM mode) and a linear intra prediction mode (LM mode) based on a coding mode of one or more neighboring blocks of the current video block.
- the method 2420 includes, at step 2424 , performing the conversion based on the determining.
- DM mode direct intra prediction mode
- LM mode linear intra prediction mode
- FIG. 24C shows a flowchart of an exemplary method for video processing.
- the method 2430 includes, at step 2432 , determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on refined chroma and luma samples of the current video block.
- the method 2430 includes, at step 2434 , performing the conversion based on the determining.
- FIG. 24D shows a flowchart of an exemplary method for video processing.
- the method 2440 includes, at step 2442, determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction by selecting neighboring samples based on a position of a largest or a smallest neighboring sample.
- the method 2440 further includes, at step 2444 , performing the conversion based on the determining.
- FIG. 24E shows a flowchart of an exemplary method for video processing.
- the method 2450 includes, at step 2452 , determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on a main color component and a dependent color component, the main color component selected as one of a luma color component and a chroma color component and the dependent color component selected as the other of the luma color component and the chroma color component.
- the method 2450 further includes, at step 2454 , performing the conversion based on the determining.
- FIG. 25A shows a flowchart of an exemplary method for video processing.
- the method 2510 includes, at step 2512 , performing downsampling on chroma and luma samples of a neighboring block of the current video block.
- the method 2510 further includes, at step 2514 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on the downsampled chroma and luma samples obtained from the downsampling.
- the method 2510 further includes, at step 2516 , performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 25B shows a flowchart of an exemplary method for video processing.
- the method 2520 includes, at step 2522 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or more chroma samples from a group of neighboring chroma samples, wherein the two or more chroma samples are selected based on a coding mode of the current video block.
- the method 2520 further includes, at step 2524 , performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 26A shows a flowchart of an exemplary method for video processing.
- the method 2610 includes, at step 2612 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on chroma samples that are selected based on W available above-neighboring samples, W being an integer.
- the method 2610 further includes, at step 2614 , performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 26B shows a flowchart of an exemplary method for video processing.
- the method 2620 includes, at step 2622 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on chroma samples that are selected based on H available left-neighboring samples of the current video block.
- the method 2620 further includes, at step 2624, performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 27A shows a flowchart of an exemplary method for video processing.
- the method 2710 includes, at step 2712 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or four chroma samples and/or corresponding luma samples.
- the method 2710 further includes, at step 2714 , performing the conversion based on the determining.
- CCLM cross-component linear model
- FIG. 27B shows a flowchart of an exemplary method for video processing.
- the method 2720 includes, at step 2722 , selecting, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, chroma samples based on a position rule, the chroma samples used to derive parameters of a cross-component linear model (CCLM).
- the method 2720 further includes, at step 2724 , performing the conversion based on the determining.
- the position rule specifies to select the chroma samples that are located within an above row and/or a left column of the current video block.
- FIG. 28A shows a flowchart of an exemplary method for video processing.
- the method 2810 includes, at step 2812 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, positions at which luma samples are downsampled, wherein the downsampled luma samples are used to determine parameters of a cross-component linear model (CCLM) based on chroma samples and downsampled luma samples, wherein the downsampled luma samples are at positions corresponding to positions of the chroma samples that are used to derive the parameters of the CCLM.
- the method 2810 further includes, at step 2814 , performing the conversion based on the determining.
- FIG. 28B shows a flowchart of an exemplary method for video processing.
- the method 2820 includes, at step 2822 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on a coding condition associated with the current video block.
- the method 2820 further includes, at step 2824 , performing the conversion based on the determining.
- FIG. 28C shows a flowchart of an exemplary method for video processing.
- the method 2830 includes, at step 2832 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, whether to derive maximum values and/or minimum values of a luma component and a chroma component that are used to derive parameters of a cross-component linear model (CCLM) based on availability of a left-neighboring block and an above-neighboring block of the current video block.
- the method 2830 further includes, at step 2834 , performing the conversion based on the determining.
- FIG. 29A shows a flowchart of an exemplary method for video processing.
- the method 2910 includes, at step 2912 , determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a coding tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block.
- the method 2910 further includes, at step 2914 , performing the conversion based on the determining.
- FIG. 29B shows a flowchart of an exemplary method for video processing.
- the method 2920 includes, at step 2922 , determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a local illumination compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on positions of the N neighboring samples.
- the method 2920 further includes, at step 2924 , performing the conversion based on the determining.
- the LIC tool uses a linear model of illumination changes in the current video block during the conversion.
- FIG. 29C shows a flowchart of an exemplary method for video processing.
- the method 2930 includes, at step 2932 , determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples.
- the method 2930 further includes, at step 2934 , performing the conversion based on the determining.
- some of the chroma samples are obtained by a padding operation and the chroma samples and the corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and corresponding luma samples.
- FIG. 30A is a block diagram of a video processing apparatus 3000 .
- the apparatus 3000 may be used to implement one or more of the methods described herein.
- the apparatus 3000 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on.
- the apparatus 3000 may include one or more processors 3002 , one or more memories 3004 and video processing hardware 3006 .
- the processor(s) 3002 may be configured to implement one or more methods (including, but not limited to, methods as shown in FIGS. 18 to 29C) described in the present document.
- the memory (memories) 3004 may be used for storing data and code used for implementing the methods and techniques described herein.
- the video processing hardware 3006 may be used to implement, in hardware circuitry, some techniques described in the present document.
- FIG. 30B is a block diagram showing an example video processing system 3100 in which various techniques disclosed herein may be implemented.
- Various implementations may include some or all of the components of the system 3100 .
- the system 3100 may include input 3102 for receiving video content.
- the video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format.
- the input 3102 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as Ethernet, passive optical network (PON), etc. and wireless interfaces such as Wi-Fi or cellular interfaces.
- the system 3100 may include a coding component 3104 that may implement the various coding or encoding methods described in the present document.
- the coding component 3104 may reduce the average bitrate of video from the input 3102 to the output of the coding component 3104 to produce a coded representation of the video.
- the coding techniques are therefore sometimes called video compression or video transcoding techniques.
- the output of the coding component 3104 may be either stored, or transmitted via a communication connection, as represented by the component 3106.
- the stored or communicated bitstream (or coded) representation of the video received at the input 3102 may be used by the component 3108 for generating pixel values or displayable video that is sent to a display interface 3110 .
- the process of generating user-viewable video from the bitstream representation is sometimes called video decompression.
- although some video processing operations are referred to as "coding" operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
- a peripheral bus interface or a display interface may include universal serial bus (USB), high definition multimedia interface (HDMI), or DisplayPort, and so on.
- storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interface, and the like.
- the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 30A or 30B .
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on R chroma samples from a group of neighboring chroma samples, wherein the R chroma samples are selected from the group based on a position rule and R is greater than or equal to 2; and performing the conversion based on the determining.
- a top-left sample of the chroma block is (x, y), wherein a width and a height of the chroma block is W and H, respectively, and wherein the group of neighboring chroma samples comprises: sample A with coordinates (x-1, y), sample B with coordinates (x-1, y+H/2-1), sample C with coordinates (x-1, y+H/2), sample D with coordinates (x-1, y+H-1), sample E with coordinates (x-1, y+H), sample F with coordinates (x-1, y+H+H/2-1), sample G with coordinates (x-1, y+H+H/2), sample I with coordinates (x-1, y+H+H-1), sample J with coordinates (x, y-1), sample K with coordinates (x+W/2-1, y-1), sample L with coordinates (x+W/2, y-1), and sample M with coordinates (x+W-1, y-1).
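The candidate neighboring positions enumerated in the clause above can be collected into a small sketch (Python used only for illustration; the labels A through M follow the clause and are not part of any standard):

```python
def candidate_chroma_positions(x, y, W, H):
    """Candidate neighboring chroma sample coordinates for a W x H chroma
    block whose top-left sample is at (x, y), as enumerated in the clause
    above. Left-column candidates use x - 1; above-row candidates use y - 1."""
    return {
        "A": (x - 1, y),
        "B": (x - 1, y + H // 2 - 1),
        "C": (x - 1, y + H // 2),
        "D": (x - 1, y + H - 1),
        "E": (x - 1, y + H),
        "F": (x - 1, y + H + H // 2 - 1),
        "G": (x - 1, y + H + H // 2),
        "I": (x - 1, y + H + H - 1),
        "J": (x, y - 1),
        "K": (x + W // 2 - 1, y - 1),
        "L": (x + W // 2, y - 1),
        "M": (x + W - 1, y - 1),
    }
```

For an 8x8 block at (0, 0), for example, sample A lies at (-1, 0) in the left column and sample M at (7, -1) in the above row.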
- the method further includes checking an additional chroma sample.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 26.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 26.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model based on selected chroma samples based on positions of the chroma samples, wherein the selected chroma samples are selected from a group of neighboring chroma samples; and performing the conversion based on the determining.
- a method for video processing comprising: determining, for a current video block, a group of neighboring chroma samples used to derive a set of values for parameters of a linear model, wherein a width and a height of the current video block is W and H, respectively, and wherein the group of neighboring chroma samples comprises at least one sample that is located beyond 2×W above neighboring chroma samples or 2×H left neighboring chroma samples; and performing, based on the linear model, a conversion between the current video block and a coded representation of a video including the current video block.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 21.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 21.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, multiple sets of parameters, wherein each set of parameters defines a cross-component linear model (CCLM) and is derived from a corresponding group of chroma samples at corresponding chroma sample positions; determining, based on the multiple sets of parameters, parameters for a final CCLM; and performing the conversion based on the final CCLM.
- a top-left sample of the chroma block is (x, y) and a width and a height of the chroma block is W and H, respectively, and wherein the group of chroma samples comprises at least one of:
- a method of video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on maximum and minimum values of chroma and luma samples of N groups of chroma and luma samples selected from neighboring luma and chroma samples of the current video block; and performing the conversion using the CCLM.
- f4 is a fourth function
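The clause above combines per-group extremes of the N groups through functions (the patent only names them, e.g., f4 as "a fourth function"). The sketch below assumes the combining function is a mean, which is an illustrative choice, not a claim of the patent:

```python
def combine_group_extremes(groups):
    # Each group contributes (maxLuma, maxChroma, minLuma, minChroma);
    # the final extremes are combined across the N groups. Using the mean
    # as the combining function is an assumption (the patent calls the
    # combiners f1..f4 without fixing them).
    n = len(groups)
    maxY = sum(g[0] for g in groups) // n
    maxC = sum(g[1] for g in groups) // n
    minY = sum(g[2] for g in groups) // n
    minC = sum(g[3] for g in groups) // n
    return maxY, maxC, minY, minC
```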
- N is signaled in a sequence parameter set (SPS), a video parameter set (VPS), a picture parameter set (PPS), a picture header, a slice header, a tile group header, one or more largest coding units or one or more coding units.
- SPS sequence parameter set
- VPS video parameter set
- PPS picture parameter set
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 29.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 29.
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model that are completely determinable by two chroma samples and corresponding two luma samples; and performing the conversion based on the determining.
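Two (luma, chroma) pairs fully determine a linear model pred_C = (a·luma >> shift) + b, as the clause states. A minimal fixed-point sketch, using integer division for clarity (real codecs typically replace the division with a table, as in the parameter-table clause below):

```python
def cclm_from_two_points(l0, c0, l1, c1, shift=16):
    # Derive (a, b) of pred_c = ((a * luma) >> shift) + b from two
    # (luma, chroma) pairs. The shift value is an illustrative choice.
    if l1 == l0:
        a = 0  # degenerate case: flat model
    else:
        a = ((c1 - c0) << shift) // (l1 - l0)
    b = c0 - ((a * l0) >> shift)
    return a, b
```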
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 14.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 14.
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model using a parameter table whose entries are retrieved according to two chroma sample values and two luma sample values; and performing the conversion based on the determining.
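A parameter table indexed by the two chroma and two luma sample values typically serves to avoid a division when computing the model slope. A hypothetical reciprocal table is sketched below (table size, shift, and range are assumptions for illustration, not the patent's table):

```python
SHIFT = 8
# Hypothetical reciprocal table: DIV_TABLE[d] approximates (1 << SHIFT) / d
DIV_TABLE = [0] + [(1 << SHIFT) // d for d in range(1, 65)]

def cclm_slope_via_table(min_luma, min_chroma, max_luma, max_chroma):
    # Slope a is scaled by 1 << SHIFT; the luma difference is assumed
    # to fall within the table range in this sketch.
    diff = max_luma - min_luma
    a = (max_chroma - min_chroma) * DIV_TABLE[diff]
    b = min_chroma - ((a * min_luma) >> SHIFT)
    return a, b
```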
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 19.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 19.
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a final prediction P(x, y) of a chroma sample at a position (x, y) in the current video block as a combination of prediction results of multiple cross-component linear models (MCCLMs), wherein the MCCLMs are selected based on the position (x, y) of the chroma sample; and performing the conversion based on the final prediction.
- MCCLMs multiple cross-component linear models
- a method of video processing comprising: performing, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a first determination regarding whether a first cross-component linear model (CCLM) that uses only left-neighboring samples is used for predicting samples of the current video block and/or a second determination regarding whether a second cross-component linear model (CCLM) that uses only above-neighboring samples is used for predicting samples of the current video block; and performing the conversion based on the first determination and/or the second determination.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 17.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 17.
- a method for video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, a context that is used to code a flag using arithmetic coding in the coded representation of the current video block, wherein the context is based on whether a top-left neighboring block of the current video block is coded using a cross-component linear model (CCLM) prediction mode; and performing the conversion based on the determining.
- a method of video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, a coding order for one or more indications of a direct intra prediction mode (DM mode) and a linear intra prediction mode (LM mode) based on a coding mode of one or more neighboring blocks of the current video block; and performing the conversion based on the determining.
- DM mode direct intra prediction mode
- LM mode linear intra prediction mode
- a top-left neighboring block of the one or more neighboring blocks is coded with a coding mode that is different from the LM mode, and wherein an indication of the DM mode is coded first.
- a method of video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on refined chroma and luma samples of the current video block; and performing the conversion based on the determining.
- C0 and L0 are based on S chroma and luma samples, denoted {Cx1, Cx2, . . . , CxS} and {Lx1, Lx2, . . . , LxS}, respectively, wherein C1 and L1 are based on T chroma and luma samples, denoted {Cy1, Cy2, . . . , CyT} and {Ly1, Ly2, . . . , LyT}, respectively, wherein {Cx1, Cx2, . . . , CxS} correspond to {Lx1, Lx2, . . .
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction by selecting neighboring samples based on a position of a largest or a smallest neighboring sample; and performing the conversion based on the determining.
- a method of video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters for a linear model prediction or cross-color component prediction based on a main color component and a dependent color component, the main color component selected as one of a luma color component and a chroma color component and the dependent color component selected as the other of the luma color component and the chroma color component; and performing the conversion based on the determining.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 33.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 33.
- a method for video processing comprising: performing downsampling on chroma and luma samples of a neighboring block of the current video block; determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on the downsampled chroma and luma samples obtained from the downsampling; and performing the conversion based on the determining.
- a top-left sample of the current video block is R[0, 0]
- the downsampled chroma samples comprise samples R[-1, K×H/W]
- K is a non-negative integer ranging from 0 to W-1.
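The left-column downsampling positions described in the clauses above can be enumerated directly (integer division is assumed here for the H/W ratio):

```python
def left_column_downsample_positions(W, H):
    # Downsampled left-column chroma positions R[-1, K*H/W] for K in 0..W-1,
    # relative to the top-left sample R[0, 0] of the current block.
    return [(-1, K * H // W) for K in range(W)]
```

For a block with W = 4 and H = 8, this yields positions at rows 0, 2, 4, and 6 of the left column.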
- a method of video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or more chroma samples from a group of neighboring chroma samples, wherein the two or more chroma samples are selected based on a coding mode of the current video block; and performing the conversion based on the determining.
- the coding mode of the current video block is a first linear mode that is different from a second linear mode that uses only left-neighboring samples and a third linear mode that uses only above-neighboring samples, wherein coordinates of a top-left sample of the current video block are (x, y), and wherein a width and a height of the current video block is W and H, respectively.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 34.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 34.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on chroma samples that are selected based on W available above-neighboring samples, W being an integer; and performing the conversion based on the determining.
- W is set to i) a width of the current video block, ii) L times the width of the current video block, L being an integer, iii) a sum of a height of the current video block and a width of the current video block, or iv) a sum of the width of the current video block and the number of available top-right neighboring samples.
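The four alternatives for W enumerated above can be expressed as a small dispatch (the rule names are illustrative labels, not taken from the patent; the H clause further below is symmetric with height and below-left neighbors):

```python
def derive_w(block_w, block_h, num_top_right, rule, L=2):
    # Select W, the number of available above-neighboring samples used
    # for CCLM parameter derivation, per the enumerated alternatives.
    if rule == "width":
        return block_w
    if rule == "L_times_width":
        return L * block_w
    if rule == "width_plus_height":
        return block_w + block_h
    if rule == "width_plus_top_right":
        return block_w + num_top_right
    raise ValueError(rule)
```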
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of cross-component linear model (CCLM) based on chroma samples that are selected based on H available left-neighboring samples of the current video block; and performing the conversion based on the determining.
- H is set to one of i) a height of the current video block, ii) L times the height of the current video block, L being an integer, iii) a sum of a height of the current video block and a width of the current video block, or iv) a sum of the height of the current video block and the number of available left-bottom neighboring samples.
- H depends on an availability of at least one of an above-neighboring block or a left-neighboring block of the current video block.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 44.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 44.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or four chroma samples and/or corresponding luma samples; and performing the conversion based on the determining.
- minY is calculated as an average of luma sample values of G0[0] and G0[1] or an average of luma sample values of G1[0] and G1[1]
- minC is calculated as an average of chroma sample values of G0[0] and G0[1] or an average of chroma sample values of G1[0] and G1[1].
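The averaging described in the two clauses above can be sketched as follows, with G0 and G1 each holding two (luma, chroma) pairs (the rounding-to-nearest average is an assumption; the clauses only say "average"):

```python
def min_luma_chroma(G0, G1, use_g0=True):
    # minY/minC are the averages of one group's two luma and two chroma
    # sample values, per the clauses above.
    g = G0 if use_g0 else G1
    (l0, c0), (l1, c1) = g
    minY = (l0 + l1 + 1) >> 1  # rounded average (rounding is an assumption)
    minC = (c0 + c1 + 1) >> 1
    return minY, minC
```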
- a method for video processing comprising: selecting, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, chroma samples based on a position rule, the chroma samples used to derive parameters of a cross-component linear model (CCLM); and performing the conversion based on the selecting, wherein the position rule specifies to select the chroma samples that are located within an above row and/or a left column of the current video block.
- numSampT is set based on a rule specifying that numSampT is set equal to nTbW in a case that above neighboring samples are available and numSampT is set equal to 0 in a case that the above neighboring samples are not available, and wherein numSampT represents the number of chroma samples within an above neighboring row used to derive the parameters of a cross-component linear model and nTbW represents a width of the current video block.
- numSampT is set based on a rule specifying that numSampT is set equal to nTbW+Min(numTopRight, nTbH) in a case that above neighboring samples are available and the current video block is coded with a first CCLM mode that uses only above-neighboring samples to derive the CCLM, and that otherwise numSampT is set equal to 0, and wherein numSampT represents the number of chroma samples within an above neighboring row used to derive the parameters of the cross-component linear model, nTbW and nTbH represent a width and a height of the current block, respectively, and numTopRight represents the number of available top-right neighboring samples.
- numSampL is set based on a rule specifying that numSampL is set equal to nTbH in a case that left neighboring samples are available and otherwise numSampL is set equal to 0, and wherein numSampL represents the number of chroma samples within a left neighboring column used to derive parameters of the cross-component linear model and nTbH represents a height of the current video block.
- numSampL is set based on a rule specifying that numSampL is set equal to nTbH+Min(numLeftBelow, nTbW) in a case that left neighboring samples are available and the current video block is coded with a second CCLM mode that uses only left-neighboring samples to derive the CCLM and that otherwise numSampL is set equal to 0, and wherein numSampL represents the number of chroma samples within a left neighboring column used to derive the parameters of the cross-component linear model, nTbW and nTbH represent a width and a height of the current block, respectively, and numLeftBelow represents the number of available below-left neighboring samples.
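The numSampT/numSampL rules above can be collected into one sketch (the mode labels "LM", "LM_T", and "LM_L" are illustrative names for the regular, above-only, and left-only CCLM modes, not terms from the patent):

```python
def derive_num_samples(nTbW, nTbH, avail_top, avail_left,
                       num_top_right, num_left_below, mode):
    # Returns (numSampT, numSampL): the counts of above-row and
    # left-column chroma samples used for CCLM parameter derivation.
    if mode == "LM_T":       # above-only mode: extend into top-right
        numSampT = nTbW + min(num_top_right, nTbH) if avail_top else 0
        numSampL = 0
    elif mode == "LM_L":     # left-only mode: extend into below-left
        numSampT = 0
        numSampL = nTbH + min(num_left_below, nTbW) if avail_left else 0
    else:                    # regular mode: one block side per direction
        numSampT = nTbW if avail_top else 0
        numSampL = nTbH if avail_left else 0
    return numSampT, numSampL
```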
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 44.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 44.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, positions at which luma samples are downsampled, wherein the downsampled luma samples are used to determine parameters of a cross-component linear model (CCLM) based on chroma samples and downsampled luma samples, wherein the downsampled luma samples are at positions corresponding to positions of the chroma samples that are used to derive the parameters of the CCLM; and performing the conversion based on the determining.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on a coding condition associated with the current video block; and performing the conversion based on the determining.
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, whether to derive maximum values and/or minimum values of a luma component and a chroma component that are used to derive parameters of a cross-component linear model (CCLM) based on availability of a left-neighboring block and an above-neighboring block of the current video block; and performing the conversion based on the determining.
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 19.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 19.
- a method for video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a coding tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and performing the conversion based on the determining.
- the coding tool is a local illumination compensation (LIC) tool that includes using a linear model of illumination changes in the current video block during the conversion.
- LIC local illumination compensation
- N neighboring samples of the current video block includes N/2 samples from an above row of the current video block and N/2 samples from a left column of the current video block.
- N is equal to min (L, T), T being a total number of available neighboring samples of the current video block and L being an integer.
- N neighboring samples are selected based on a same rule that is applicable to select samples to derive parameters of a first mode of the CCLM that uses above-neighboring samples only.
- N neighboring samples are selected based on a same rule that is applicable to select samples to derive parameters of a second mode of the CCLM that uses left-neighboring samples only.
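The sample-count clauses above (N = min(L, T), split evenly between the above row and left column) can be sketched as follows; the default L = 4 is a guessed illustration, not a value stated in the clauses:

```python
def lic_sample_count(avail_above, avail_left, block_w, block_h, L=4):
    # T is the total number of available neighboring samples; N = min(L, T)
    # samples are used, half from the above row and half from the left
    # column, per the clauses above.
    T = (block_w if avail_above else 0) + (block_h if avail_left else 0)
    N = min(L, T)
    return N, N // 2, N // 2  # (total, from above row, from left column)
```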
- a method for video processing comprising: determining, for a conversion between a current video block of a video and a coded representation of the video, parameters of a local illumination compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on positions of the N neighboring samples; and performing the conversion based on the determining, wherein the LIC tool uses a linear model of illumination changes in the current video block during the conversion.
- N neighboring samples of the current video block are selected with a first position offset value (F) and a step value (S) that depend on a dimension of the current video block and availabilities of neighboring blocks.
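One way to realize a position offset F and step S as described above is uniform subsampling of the available neighbor run; the exact formulas below are assumptions for illustration, since the clause only says F and S depend on the block dimension and neighbor availability:

```python
def select_positions(num_avail, num_to_pick):
    # Pick num_to_pick neighbor indices out of num_avail using a start
    # offset F and step S (hypothetical formulas; num_to_pick is assumed
    # to divide num_avail in this sketch).
    S = num_avail // num_to_pick  # step between picked samples
    F = S // 2                    # start offset centers the picks
    return [F + i * S for i in range(num_to_pick)]
```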
- a method for video processing comprising: determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples; and performing the conversion based on the determining, wherein some of the chroma samples are obtained by a padding operation and the chroma samples and the corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and corresponding luma samples.
- determining of the parameters further includes, after the initializing of the values, upon a comparison of two luma sample values of G0[0] and G1[1], swapping a chroma sample and its corresponding luma sample of G0[0] or G0[1] with those of G1[0] or G1[1].
- the determining of the parameters further includes, after the initializing of the values, upon a comparison of two luma sample values of G0[0], G0[1], G1[0], and G1[1], performing following swapping operations in an order: i) a swapping operation of a chroma sample and its corresponding luma sample of G0[0] with those of G0[1], ii) a swapping operation of a chroma sample and its corresponding luma sample of G1[0] with those of G1[1], iii) a swapping operation of chroma samples and their corresponding luma samples of G0[0] or G0[1] with those of G1[0] or G1[1], and iv) a swapping operation of a chroma sample and its corresponding luma sample of G0[1] with those of G1[0].
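The four-step conditional swap sequence described above can be sketched as follows; each group entry is a (luma, chroma) pair, and the comparisons chosen here (greater-than on luma) are an assumed instantiation of the clause's "comparison of two luma sample values":

```python
def order_groups(G0, G1):
    # Orders two groups of two (luma, chroma) pairs so that G0 ends up
    # holding the smaller-luma pairs and G1 the larger-luma pairs,
    # following the four-step swap sequence in the clause above.
    if G0[0][0] > G0[1][0]:
        G0[0], G0[1] = G0[1], G0[0]       # step i: order within G0
    if G1[0][0] > G1[1][0]:
        G1[0], G1[1] = G1[1], G1[0]       # step ii: order within G1
    if G0[0][0] > G1[1][0]:
        G0[:], G1[:] = G1[:], G0[:]       # step iii: swap whole groups
    if G0[1][0] > G1[0][0]:
        G0[1], G1[0] = G1[0], G0[1]       # step iv: fix the middle pair
    return G0, G1
```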
- An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 40.
- a computer program product stored on a non-transitory computer readable media including program code for carrying out the method in any one of clauses 1 to 40.
- Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
- the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
- data processing unit or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
- the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
- a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a computer program does not necessarily correspond to a file in a file system.
- a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output.
- the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- a processor will receive instructions and data from a read only memory or a random access memory or both.
- the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
- a computer need not have such devices.
- Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
- the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Mathematical Optimization (AREA)
- Physics & Mathematics (AREA)
- Algebra (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computing Systems (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Color Television Systems (AREA)
Applications Claiming Priority (34)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2018114158 | 2018-11-06 | ||
WOPCT/CN2018/114158 | 2018-11-06 | ||
CNPCT/CN2018/114158 | 2018-11-06 | ||
CNPCT/CN2018/118799 | 2018-12-01 | ||
CN2018118799 | 2018-12-01 | ||
WOPCT/CN2018/118799 | 2018-12-01 | ||
WOPCT/CN2018/119709 | 2018-12-07 | ||
CN2018119709 | 2018-12-07 | ||
CNPCT/CN2018/119709 | 2018-12-07 | ||
CNPCT/CN2018/125412 | 2018-12-29 | ||
WOPCT/CN2018/125412 | 2018-12-29 | ||
CN2018125412 | 2018-12-29 | ||
CNPCT/CN2019/070002 | 2019-01-01 | ||
CN2019070002 | 2019-01-01 | ||
WOPCT/CN2019/070002 | 2019-01-01 | ||
WOPCT/CN2019/075874 | 2019-02-22 | ||
CN2019075874 | 2019-02-22 | ||
CNPCT/CN2019/075874 | 2019-02-22 | ||
WOPCT/CN2019/075993 | 2019-02-24 | ||
CN2019075993 | 2019-02-24 | ||
CNPCT/CN2019/075993 | 2019-02-24 | ||
WOPCT/CN2019/076195 | 2019-02-26 | ||
CN2019076195 | 2019-02-26 | ||
CNPCT/CN2019/076195 | 2019-02-26 | ||
WOPCT/CN2019/079396 | 2019-03-24 | ||
CN2019079396 | 2019-03-24 | ||
CNPCT/CN2019/079396 | 2019-03-24 | ||
CNPCT/CN2019/079431 | 2019-03-25 | ||
CN2019079431 | 2019-03-25 | ||
WOPCT/CN2019/079431 | 2019-03-25 | ||
WOPCT/CN2019/079769 | 2019-03-26 | ||
CN2019079769 | 2019-03-26 | ||
CNPCT/CN2019/079769 | 2019-03-26 | ||
PCT/CN2019/115985 WO2020094057A1 (en) | 2018-11-06 | 2019-11-06 | Position based intra prediction |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/115985 Continuation WO2020094057A1 (en) | 2018-11-06 | 2019-11-06 | Position based intra prediction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200252619A1 US20200252619A1 (en) | 2020-08-06 |
US10999581B2 true US10999581B2 (en) | 2021-05-04 |
Family
ID=70610795
Family Applications (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/850,509 Active US10999581B2 (en) | 2018-11-06 | 2020-04-16 | Position based intra prediction |
US16/940,826 Active US11019344B2 (en) | 2018-11-06 | 2020-07-28 | Position dependent intra prediction |
US16/940,877 Active US10979717B2 (en) | 2018-11-06 | 2020-07-28 | Simplified parameter derivation for intra prediction |
US16/987,844 Active US11025915B2 (en) | 2018-11-06 | 2020-08-07 | Complexity reduction in parameter derivation intra prediction |
US17/201,711 Active US11438598B2 (en) | 2018-11-06 | 2021-03-15 | Simplified parameter derivation for intra prediction |
US17/246,821 Abandoned US20210258572A1 (en) | 2018-11-06 | 2021-05-03 | Multi-models for intra prediction |
US17/246,794 Active US11930185B2 (en) | 2018-11-06 | 2021-05-03 | Multi-parameters based intra prediction |
US18/310,344 Pending US20230269379A1 (en) | 2018-11-06 | 2023-05-01 | Multi-parameters based intra prediction |
US18/345,608 Granted US20230345009A1 (en) | 2018-11-06 | 2023-06-30 | Multi-models for intra prediction |
Family Applications After (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/940,826 Active US11019344B2 (en) | 2018-11-06 | 2020-07-28 | Position dependent intra prediction |
US16/940,877 Active US10979717B2 (en) | 2018-11-06 | 2020-07-28 | Simplified parameter derivation for intra prediction |
US16/987,844 Active US11025915B2 (en) | 2018-11-06 | 2020-08-07 | Complexity reduction in parameter derivation intra prediction |
US17/201,711 Active US11438598B2 (en) | 2018-11-06 | 2021-03-15 | Simplified parameter derivation for intra prediction |
US17/246,821 Abandoned US20210258572A1 (en) | 2018-11-06 | 2021-05-03 | Multi-models for intra prediction |
US17/246,794 Active US11930185B2 (en) | 2018-11-06 | 2021-05-03 | Multi-parameters based intra prediction |
US18/310,344 Pending US20230269379A1 (en) | 2018-11-06 | 2023-05-01 | Multi-parameters based intra prediction |
US18/345,608 Granted US20230345009A1 (en) | 2018-11-06 | 2023-06-30 | Multi-models for intra prediction |
Country Status (6)
Country | Link |
---|---|
US (9) | US10999581B2 (xx) |
EP (4) | EP3861728A4 (xx) |
JP (8) | JP7422757B2 (xx) |
KR (4) | KR20210087928A (xx) |
CN (7) | CN112997491B (xx) |
WO (6) | WO2020094058A1 (xx) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11115655B2 (en) | 2019-02-22 | 2021-09-07 | Beijing Bytedance Network Technology Co., Ltd. | Neighboring sample selection for intra prediction |
US11336907B2 (en) * | 2018-07-16 | 2022-05-17 | Huawei Technologies Co., Ltd. | Video encoder, video decoder, and corresponding encoding and decoding methods |
US11438598B2 (en) | 2018-11-06 | 2022-09-06 | Beijing Bytedance Network Technology Co., Ltd. | Simplified parameter derivation for intra prediction |
US11438581B2 (en) | 2019-03-24 | 2022-09-06 | Beijing Bytedance Network Technology Co., Ltd. | Conditions in parameter derivation for intra prediction |
US11595687B2 (en) | 2018-12-07 | 2023-02-28 | Beijing Bytedance Network Technology Co., Ltd. | Context-based intra prediction |
US11729405B2 (en) | 2019-02-24 | 2023-08-15 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction |
US11902507B2 (en) | 2018-12-01 | 2024-02-13 | Beijing Bytedance Network Technology Co., Ltd | Parameter derivation for intra prediction |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW202007164A (zh) | 2018-07-15 | 2020-02-01 | 大陸商北京字節跳動網絡技術有限公司 | 跨分量運動資訊匯出 |
TWI814890B (zh) | 2018-08-17 | 2023-09-11 | 大陸商北京字節跳動網絡技術有限公司 | 簡化的跨分量預測 |
WO2020053806A1 (en) * | 2018-09-12 | 2020-03-19 | Beijing Bytedance Network Technology Co., Ltd. | Size dependent down-sampling in cross component linear model |
US11197005B2 (en) * | 2018-11-08 | 2021-12-07 | Qualcomm Incorporated | Cross-component prediction for video coding |
HUE060910T2 (hu) | 2018-11-23 | 2023-04-28 | Lg Electronics Inc | Video kódolás és dekódolás CCLM predikció alapján |
CN113396591A (zh) * | 2018-12-21 | 2021-09-14 | Vid拓展公司 | 针对用于基于模板的视频译码的改进的线性模型估计的方法、架构、装置和系统 |
CN113273203B (zh) | 2018-12-22 | 2024-03-12 | 北京字节跳动网络技术有限公司 | 两步交叉分量预测模式 |
AU2019419322B2 (en) * | 2018-12-31 | 2023-11-23 | Huawei Technologies Co., Ltd. | Method and apparatus of cross-component linear modeling for intra prediction |
MX2021008080A (es) | 2019-01-02 | 2021-08-11 | Guangdong Oppo Mobile Telecommunications Corp Ltd | Procedimiento y aparato de decodificacion de prediccion y medio de almacenamiento informatico. |
US11516512B2 (en) * | 2019-03-04 | 2022-11-29 | Alibaba Group Holding Limited | Method and system for processing video content |
WO2020182093A1 (en) | 2019-03-08 | 2020-09-17 | Beijing Bytedance Network Technology Co., Ltd. | Signaling of reshaping information in video processing |
JP7302009B2 (ja) | 2019-04-18 | 2023-07-03 | 北京字節跳動網絡技術有限公司 | クロスコンポーネントモードの利用可能性に対する制約 |
CA3135973A1 (en) | 2019-04-23 | 2020-10-29 | Beijing Bytedance Network Technology Co., Ltd. | Methods for cross component dependency reduction |
KR102641796B1 (ko) | 2019-05-08 | 2024-03-04 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | 교차-성분 코딩의 적용가능성에 대한 조건들 |
EP3973707A4 (en) | 2019-06-22 | 2022-08-31 | Beijing Bytedance Network Technology Co., Ltd. | CHROMA REST SCALE SYNTAX ELEMENT |
EP3977738A4 (en) | 2019-07-07 | 2022-08-17 | Beijing Bytedance Network Technology Co., Ltd. | SIGNALING OF CHROMA RESIDUAL SCALE |
KR20220042125A (ko) | 2019-08-10 | 2022-04-04 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | 서브픽처 디코딩에서의 버퍼 관리 |
CN114430901B (zh) | 2019-09-20 | 2024-07-05 | 北京字节跳动网络技术有限公司 | 带有色度缩放的亮度映射 |
JP7322290B2 (ja) | 2019-10-02 | 2023-08-07 | 北京字節跳動網絡技術有限公司 | ビデオビットストリームにおけるサブピクチャシグナリングのための構文 |
JP7482220B2 (ja) | 2019-10-18 | 2024-05-13 | 北京字節跳動網絡技術有限公司 | サブピクチャのパラメータセットシグナリングにおける構文制約 |
KR102707834B1 (ko) | 2019-10-29 | 2024-09-19 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | 루마 차이를 이용한 크로스 컴포넌트 적응적 루프 필터 |
BR112022011466A2 (pt) | 2019-12-11 | 2022-08-23 | Beijing Bytedance Network Tech Co Ltd | Método de processamento de dados de vídeo, aparelho para processamento de dados de vídeo, meio de armazenamento e meio de gravação legíveis por computador não transitório |
WO2021136498A1 (en) | 2019-12-31 | 2021-07-08 | Beijing Bytedance Network Technology Co., Ltd. | Multiple reference line chroma prediction |
CN116325728A (zh) | 2020-06-30 | 2023-06-23 | 抖音视界有限公司 | 自适应环路滤波的边界位置 |
US11647198B2 (en) * | 2021-01-25 | 2023-05-09 | Lemon Inc. | Methods and apparatuses for cross-component prediction |
US11683474B2 (en) | 2021-01-25 | 2023-06-20 | Lemon Inc. | Methods and apparatuses for cross-component prediction |
WO2023138628A1 (en) * | 2022-01-21 | 2023-07-27 | Mediatek Inc. | Method and apparatus of cross-component linear model prediction in video coding system |
WO2024114701A1 (en) * | 2022-11-30 | 2024-06-06 | Douyin Vision Co., Ltd. | Method, apparatus, and medium for video processing |
WO2024169947A1 (en) * | 2023-02-16 | 2024-08-22 | Douyin Vision Co., Ltd. | Method, apparatus, and medium for video processing |
Citations (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120328013A1 (en) | 2011-06-24 | 2012-12-27 | Madhukar Budagavi | Luma-Based Chroma Intra-Prediction for Video Coding |
CN103096055A (zh) | 2011-11-04 | 2013-05-08 | 华为技术有限公司 | 一种图像信号帧内预测及解码的方法和装置 |
US20130128966A1 (en) | 2011-11-18 | 2013-05-23 | Futurewei Technologies, Inc. | Scanning of Prediction Residuals in High Efficiency Video Coding |
CN103379321A (zh) | 2012-04-16 | 2013-10-30 | 华为技术有限公司 | 视频图像分量的预测方法和装置 |
CN103650512A (zh) | 2011-07-12 | 2014-03-19 | 英特尔公司 | 基于亮度的色度帧内预测 |
CN103782596A (zh) | 2011-06-28 | 2014-05-07 | 三星电子株式会社 | 使用图像的亮度分量的对图像的色度分量的预测方法和设备 |
US20150036745A1 (en) | 2012-04-16 | 2015-02-05 | Mediatek Singapore Pte. Ltd. | Method and apparatus of simplified luma-based chroma intra prediction |
CN104380741A (zh) | 2012-01-19 | 2015-02-25 | 华为技术有限公司 | 用于lm帧内预测的参考像素缩减 |
US20150098510A1 (en) | 2013-10-07 | 2015-04-09 | Vid Scale, Inc. | Combined scalability processing for multi-layer video coding |
CN104871537A (zh) | 2013-03-26 | 2015-08-26 | 联发科技股份有限公司 | 色彩间帧内预测的方法 |
WO2016066028A1 (en) | 2014-10-28 | 2016-05-06 | Mediatek Singapore Pte. Ltd. | Method of guided cross-component prediction for video coding |
WO2016115708A1 (en) | 2015-01-22 | 2016-07-28 | Mediatek Singapore Pte. Ltd. | Methods for chroma component coding with separate intra prediction mode |
US20170016972A1 (en) | 2015-07-13 | 2017-01-19 | Siemens Medical Solutions Usa, Inc. | Fast Prospective Motion Correction For MR Imaging |
US20170085917A1 (en) | 2015-09-23 | 2017-03-23 | Nokia Technologies Oy | Method, an apparatus and a computer program product for coding a 360-degree panoramic video |
CN106664410A (zh) | 2014-06-19 | 2017-05-10 | Vid拓展公司 | 用于基于三维色彩映射模型参数优化的系统和方法 |
CN107211121A (zh) | 2015-01-22 | 2017-09-26 | 联发科技(新加坡)私人有限公司 | 色度分量的视频编码方法 |
WO2017203882A1 (en) | 2016-05-24 | 2017-11-30 | Sharp Kabushiki Kaisha | Systems and methods for intra prediction coding |
US20170347123A1 (en) | 2016-05-25 | 2017-11-30 | Arris Enterprises Llc | Jvet coding block structure with asymmetrical partitioning |
US20180048889A1 (en) | 2016-08-15 | 2018-02-15 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
WO2018035130A1 (en) | 2016-08-15 | 2018-02-22 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
US20180063527A1 (en) * | 2016-08-31 | 2018-03-01 | Qualcomm Incorporated | Cross-component filter |
US20180063531A1 (en) | 2016-08-26 | 2018-03-01 | Qualcomm Incorporated | Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction |
US20180077426A1 (en) | 2016-09-15 | 2018-03-15 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
CN107836116A (zh) | 2015-07-08 | 2018-03-23 | Vid拓展公司 | 使用交叉平面滤波的增强色度编码 |
US9948930B2 (en) | 2016-05-17 | 2018-04-17 | Arris Enterprises Llc | Template matching for JVET intra prediction |
US20180139469A1 (en) | 2015-06-19 | 2018-05-17 | Nokia Technologies Oy | An Apparatus, A Method and A Computer Program for Video Coding and Decoding |
WO2018116925A1 (ja) | 2016-12-21 | 2018-06-28 | シャープ株式会社 | イントラ予測画像生成装置、画像復号装置、および画像符号化装置 |
US20180205946A1 (en) | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
WO2018140587A1 (en) | 2017-01-27 | 2018-08-02 | Qualcomm Incorporated | Bilateral filters in video coding with reduced complexity |
US10045023B2 (en) | 2015-10-09 | 2018-08-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Cross component prediction in video coding |
CN108464002A (zh) | 2015-11-25 | 2018-08-28 | 高通股份有限公司 | 视频译码中具有非正方形预测单元的线性模型预测 |
CN109005408A (zh) | 2018-08-01 | 2018-12-14 | 北京奇艺世纪科技有限公司 | 一种帧内预测方法、装置及电子设备 |
WO2019006363A1 (en) | 2017-06-30 | 2019-01-03 | Vid Scale, Inc. | LOCAL LIGHTING COMPENSATION USING GENERALIZED BI-PREDICTION |
US20190014316A1 (en) | 2017-07-05 | 2019-01-10 | Arris Enterprises Llc | Post-filtering for weighted angular prediction |
US20190028702A1 (en) | 2017-07-24 | 2019-01-24 | Arris Enterprises Llc | Intra mode jvet coding |
US20190028701A1 (en) | 2017-07-24 | 2019-01-24 | Arris Enterprises Llc | Intra mode jvet coding |
CN109274969A (zh) | 2017-07-17 | 2019-01-25 | 华为技术有限公司 | 色度预测的方法和设备 |
US20190082184A1 (en) | 2016-03-24 | 2019-03-14 | Nokia Technologies Oy | An Apparatus, a Method and a Computer Program for Video Coding and Decoding |
US10237558B2 (en) | 2017-08-09 | 2019-03-19 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US20190110076A1 (en) | 2016-05-27 | 2019-04-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US20190110045A1 (en) | 2017-10-09 | 2019-04-11 | Qualcomm Incorporated | Position-dependent prediction combinations in video coding |
US10277895B2 (en) | 2016-12-28 | 2019-04-30 | Arris Enterprises Llc | Adaptive unequal weight planar prediction |
US20190174133A1 (en) | 2016-08-10 | 2019-06-06 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10326989B2 (en) | 2016-05-25 | 2019-06-18 | Arris Enterprises Llc | General block partitioning method |
US10382781B2 (en) | 2016-09-28 | 2019-08-13 | Qualcomm Incorporated | Interpolation filters for intra prediction in video coding |
US20190268599A1 (en) | 2016-11-08 | 2019-08-29 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
US20190289306A1 (en) | 2016-07-22 | 2019-09-19 | Sharp Kabushiki Kaisha | Systems and methods for coding video data using adaptive component scaling |
US20190297339A1 (en) | 2016-06-30 | 2019-09-26 | Nokia Technologies Oy | An Apparatus, A Method and A Computer Program for Video Coding and Decoding |
US20190313108A1 (en) | 2018-04-05 | 2019-10-10 | Qualcomm Incorporated | Non-square blocks in video coding |
US20190342546A1 (en) | 2018-05-03 | 2019-11-07 | FG Innovation Company Limited | Device and method for coding video data based on different reference sets in linear model prediction |
US10477240B2 (en) | 2016-12-19 | 2019-11-12 | Qualcomm Incorporated | Linear model prediction mode with sample accessing for video coding |
US10484712B2 (en) | 2016-06-08 | 2019-11-19 | Qualcomm Incorporated | Implicit coding of reference line index used in intra prediction |
US10499068B2 (en) | 2014-12-31 | 2019-12-03 | Nokia Technologies Oy | Apparatus, a method and a computer program for video coding and decoding |
US10523949B2 (en) | 2016-05-25 | 2019-12-31 | Arris Enterprises Llc | Weighted angular prediction for intra coding |
US10542264B2 (en) | 2017-04-04 | 2020-01-21 | Arris Enterprises Llc | Memory reduction implementation for weighted angular prediction |
US10567808B2 (en) | 2016-05-25 | 2020-02-18 | Arris Enterprises Llc | Binary ternary quad tree partitioning for JVET |
US10575023B2 (en) | 2017-10-09 | 2020-02-25 | Arris Enterprises Llc | Adaptive unequal weight planar prediction |
US10602180B2 (en) | 2017-06-13 | 2020-03-24 | Qualcomm Incorporated | Motion vector prediction |
US10609402B2 (en) | 2018-05-02 | 2020-03-31 | Tencent America LLC | Method and apparatus for prediction and transform for small blocks |
US10616596B2 (en) | 2016-12-28 | 2020-04-07 | Arris Enterprises Llc | Unequal weight planar prediction |
US20200128272A1 (en) | 2017-06-21 | 2020-04-23 | Lg Electronics Inc. | Intra-prediction mode-based image processing method and apparatus therefor |
US10645395B2 (en) | 2016-05-25 | 2020-05-05 | Arris Enterprises Llc | Weighted angular prediction coding for intra coding |
US20200154100A1 (en) | 2018-11-14 | 2020-05-14 | Tencent America LLC | Constrained intra prediction and unified most probable mode list generation |
US20200154115A1 (en) | 2018-11-08 | 2020-05-14 | Qualcomm Incorporated | Cross-component prediction for video coding |
US10674165B2 (en) | 2016-12-21 | 2020-06-02 | Arris Enterprises Llc | Constrained position dependent intra prediction combination (PDPC) |
US20200177911A1 (en) | 2017-06-28 | 2020-06-04 | Sharp Kabushiki Kaisha | Video encoding device and video decoding device |
US20200195976A1 (en) | 2018-12-18 | 2020-06-18 | Tencent America LLC | Method and apparatus for video encoding or decoding |
US20200195970A1 (en) | 2017-04-28 | 2020-06-18 | Sharp Kabushiki Kaisha | Image decoding device and image encoding device |
US20200195930A1 (en) | 2018-07-02 | 2020-06-18 | Lg Electronics Inc. | Cclm-based intra-prediction method and device |
US10694188B2 (en) | 2017-12-18 | 2020-06-23 | Arris Enterprises Llc | System and method for constructing a plane for planar prediction |
US10701402B2 (en) | 2017-10-02 | 2020-06-30 | Arris Enterprises Llc | System and method for reducing blocking artifacts and providing improved coding efficiency |
US10742978B2 (en) | 2016-08-10 | 2020-08-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US20200260070A1 (en) | 2019-01-15 | 2020-08-13 | Lg Electronics Inc. | Image coding method and device using transform skip flag |
US20200267392A1 (en) | 2017-06-29 | 2020-08-20 | Dolby Laboratories Licensing Corporation | Integrated image reshaping and video coding |
US20200288135A1 (en) | 2017-10-09 | 2020-09-10 | Canon Kabushiki Kaisha | New sample sets and new down-sampling schemes for linear component sample prediction |
US20200359051A1 (en) | 2018-11-06 | 2020-11-12 | Beijing Bytedance Network Technology Co., Ltd. | Position dependent intra prediction |
US20200366933A1 (en) | 2018-12-07 | 2020-11-19 | Beijing Bytedance Network Technology Co., Ltd. | Context-based intra prediction |
US20200382800A1 (en) | 2019-02-24 | 2020-12-03 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction |
US20200382769A1 (en) | 2019-02-22 | 2020-12-03 | Beijing Bytedance Network Technology Co., Ltd. | Neighboring sample selection for intra prediction |
Family Cites Families (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101138392B1 (ko) | 2004-12-30 | 2012-04-26 | 삼성전자주식회사 | 색차 성분의 상관관계를 이용한 컬러 영상의 부호화,복호화 방법 및 그 장치 |
CN101502119B (zh) | 2006-08-02 | 2012-05-23 | 汤姆逊许可公司 | 用于视频编码的自适应几何分割方法和设备 |
US20080089411A1 (en) | 2006-10-16 | 2008-04-17 | Nokia Corporation | Multiple-hypothesis cross-layer prediction |
JP2010135864A (ja) * | 2007-03-29 | 2010-06-17 | Toshiba Corp | 画像符号化方法及び装置並びに画像復号化方法及び装置 |
CN101877785A (zh) * | 2009-04-29 | 2010-11-03 | 祝志怡 | 一种基于混合预测的视频编码方法 |
KR101789634B1 (ko) | 2010-04-09 | 2017-10-25 | 엘지전자 주식회사 | 비디오 데이터 처리 방법 및 장치 |
US9288500B2 (en) * | 2011-05-12 | 2016-03-15 | Texas Instruments Incorporated | Luma-based chroma intra-prediction for video coding |
JP2013034162A (ja) | 2011-06-03 | 2013-02-14 | Sony Corp | 画像処理装置及び画像処理方法 |
JP2013034163A (ja) * | 2011-06-03 | 2013-02-14 | Sony Corp | 画像処理装置及び画像処理方法 |
US9654785B2 (en) | 2011-06-09 | 2017-05-16 | Qualcomm Incorporated | Enhanced intra-prediction mode signaling for video coding using neighboring mode |
US8780981B2 (en) * | 2011-06-27 | 2014-07-15 | Panasonic Intellectual Property Corporation Of America | Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding-decoding apparatus |
CA2837537C (en) | 2011-06-30 | 2019-04-02 | Panasonic Corporation | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
GB2495942B (en) * | 2011-10-25 | 2014-09-03 | Canon Kk | Method and apparatus for processing components of an image |
WO2013102293A1 (en) * | 2012-01-04 | 2013-07-11 | Mediatek Singapore Pte. Ltd. | Improvements of luma-based chroma intra prediction |
US9438904B2 (en) * | 2012-01-19 | 2016-09-06 | Futurewei Technologies, Inc. | Reduced look-up table for LM mode calculation |
WO2013112739A1 (en) | 2012-01-24 | 2013-08-01 | Futurewei Technologies, Inc. | Simplification of lm mode |
US9438905B2 (en) * | 2012-04-12 | 2016-09-06 | Futurewei Technologies, Inc. | LM mode with uniform bit-width multipliers |
CN104471940B (zh) | 2012-04-16 | 2017-12-15 | 联发科技(新加坡)私人有限公司 | 色度帧内预测方法及装置 |
GB2501535A (en) | 2012-04-26 | 2013-10-30 | Sony Corp | Chrominance Processing in High Efficiency Video Codecs |
WO2014010943A1 (ko) * | 2012-07-10 | 2014-01-16 | 한국전자통신연구원 | 영상 부호화/복호화 방법 및 장치 |
CN103916673B (zh) * | 2013-01-06 | 2017-12-22 | 华为技术有限公司 | 基于双向预测的编码方法、解码方法和装置 |
US10003815B2 (en) | 2013-06-03 | 2018-06-19 | Qualcomm Incorporated | Hypothetical reference decoder model and conformance for cross-layer random access skipped pictures |
US10506243B2 (en) * | 2014-03-06 | 2019-12-10 | Samsung Electronics Co., Ltd. | Image decoding method and device therefor, and image encoding method and device therefor |
CN103856782B (zh) | 2014-03-18 | 2017-01-11 | 天津大学 | 基于多视点视频整帧丢失的自适应错误掩盖方法 |
US10200700B2 (en) * | 2014-06-20 | 2019-02-05 | Qualcomm Incorporated | Cross-component prediction in video coding |
US9883184B2 (en) * | 2014-10-07 | 2018-01-30 | Qualcomm Incorporated | QP derivation and offset for adaptive color transform in video coding |
US9838662B2 (en) * | 2014-10-10 | 2017-12-05 | Qualcomm Incorporated | Harmonization of cross-component prediction and adaptive color transform in video coding |
EP3203735A4 (en) * | 2014-11-05 | 2017-08-09 | Samsung Electronics Co., Ltd. | Per-sample prediction encoding apparatus and method |
KR102150979B1 (ko) * | 2014-12-19 | 2020-09-03 | 에이치에프아이 이노베이션 인크. | 비디오 및 이미지 코딩에서의 비-444 색채 포맷을 위한 팔레트 기반 예측의 방법 |
WO2016115343A2 (en) * | 2015-01-14 | 2016-07-21 | Vid Scale, Inc. | Palette coding for non-4:4:4 screen content video |
US9998742B2 (en) * | 2015-01-27 | 2018-06-12 | Qualcomm Incorporated | Adaptive cross component residual prediction |
US10455249B2 (en) * | 2015-03-20 | 2019-10-22 | Qualcomm Incorporated | Downsampling process for linear model prediction mode |
WO2016167538A1 (ko) | 2015-04-12 | 2016-10-20 | 엘지전자(주) | 비디오 신호의 인코딩, 디코딩 방법 및 그 장치 |
EP3273692A4 (en) * | 2015-06-10 | 2018-04-04 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding or decoding image using syntax signaling for adaptive weight prediction |
US10148977B2 (en) | 2015-06-16 | 2018-12-04 | Futurewei Technologies, Inc. | Advanced coding techniques for high efficiency video coding (HEVC) screen content coding (SCC) extensions |
WO2017139937A1 (en) | 2016-02-18 | 2017-08-24 | Mediatek Singapore Pte. Ltd. | Advanced linear model prediction for chroma coding |
CN109155863B (zh) | 2016-05-20 | 2022-09-20 | 松下电器(美国)知识产权公司 | 编码装置、解码装置、编码方法及解码方法 |
WO2018021374A1 (ja) | 2016-07-29 | 2018-02-01 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化装置、復号装置、符号化方法及び復号方法 |
US20180041778A1 (en) | 2016-08-02 | 2018-02-08 | Qualcomm Incorporated | Geometry transformation-based adaptive loop filtering |
WO2018030294A1 (ja) | 2016-08-10 | 2018-02-15 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 符号化装置、復号装置、符号化方法及び復号方法 |
JP2018056685A (ja) * | 2016-09-27 | 2018-04-05 | 株式会社ドワンゴ | 画像符号化装置、画像符号化方法、及び画像符号化プログラム、並びに、画像復号装置、画像復号方法、及び画像復号プログラム |
CN116389739A (zh) | 2016-11-21 | 2023-07-04 | 松下电器(美国)知识产权公司 | 图像编码装置及图像解码装置 |
TW201826796A (zh) * | 2016-11-22 | 2018-07-16 | 美商松下電器(美國)知識產權公司 | 編碼裝置、解碼裝置、編碼方法及解碼方法 |
JP2020031252A (ja) | 2016-12-22 | 2020-02-27 | シャープ株式会社 | 画像復号装置及び画像符号化装置 |
US20180199062A1 (en) | 2017-01-11 | 2018-07-12 | Qualcomm Incorporated | Intra prediction techniques for video coding |
WO2018174457A1 (ko) | 2017-03-22 | 2018-09-27 | 엘지전자(주) | 영상 처리 방법 및 이를 위한 장치 |
CN116828179A (zh) | 2017-03-31 | 2023-09-29 | 松下电器(美国)知识产权公司 | 图像编码装置及存储介质 |
US11190762B2 (en) | 2017-06-21 | 2021-11-30 | Lg Electronics, Inc. | Intra-prediction mode-based image processing method and apparatus therefor |
GB2571313B (en) * | 2018-02-23 | 2022-09-21 | Canon Kk | New sample sets and new down-sampling schemes for linear component sample prediction |
EP3791588A1 (en) * | 2018-06-29 | 2021-03-17 | Beijing Bytedance Network Technology Co. Ltd. | Checking order of motion candidates in lut |
DK3815377T3 (da) | 2018-07-16 | 2023-04-03 | Huawei Tech Co Ltd | Videokoder, videoafkoder og tilsvarende kodnings- og afkodningsfremgangsmåder |
WO2020069667A1 (en) | 2018-10-05 | 2020-04-09 | Huawei Technologies Co., Ltd. | Intra prediction method and device |
CN113170122B (zh) | 2018-12-01 | 2023-06-27 | 北京字节跳动网络技术有限公司 | 帧内预测的参数推导 |
- 2019
- 2019-11-06 EP EP19881464.2A patent/EP3861728A4/en active Pending
- 2019-11-06 WO PCT/CN2019/115992 patent/WO2020094058A1/en unknown
- 2019-11-06 WO PCT/CN2019/115985 patent/WO2020094057A1/en unknown
- 2019-11-06 JP JP2021523516A patent/JP7422757B2/ja active Active
- 2019-11-06 EP EP19881016.0A patent/EP3861738A4/en active Pending
- 2019-11-06 WO PCT/CN2019/116027 patent/WO2020094066A1/en active Application Filing
- 2019-11-06 JP JP2021523513A patent/JP7212157B2/ja active Active
- 2019-11-06 CN CN201980072548.8A patent/CN112997491B/zh active Active
- 2019-11-06 EP EP19881776.9A patent/EP3861736A4/en active Pending
- 2019-11-06 CN CN201980072613.7A patent/CN112956199B/zh active Active
- 2019-11-06 KR KR1020217008485A patent/KR20210087928A/ko active Search and Examination
- 2019-11-06 CN CN201980072730.3A patent/CN113039791B/zh active Active
- 2019-11-06 CN CN202311722882.8A patent/CN117640955A/zh active Pending
- 2019-11-06 CN CN201980072731.8A patent/CN112997492B/zh active Active
- 2019-11-06 KR KR1020217008481A patent/KR20210089131A/ko not_active Application Discontinuation
- 2019-11-06 CN CN201980072612.2A patent/CN112997488B/zh active Active
- 2019-11-06 CN CN201980072729.0A patent/CN112997484B/zh active Active
- 2019-11-06 EP EP19883299.0A patent/EP3861739A4/en active Pending
- 2019-11-06 JP JP2021523502A patent/JP2022506277A/ja active Pending
- 2019-11-06 WO PCT/CN2019/115999 patent/WO2020094059A1/en unknown
- 2019-11-06 JP JP2021523511A patent/JP2022506283A/ja active Pending
- 2019-11-06 KR KR1020217008482A patent/KR102653562B1/ko active IP Right Grant
- 2019-11-06 KR KR1020217008483A patent/KR20210089133A/ko unknown
- 2019-11-06 WO PCT/CN2019/116028 patent/WO2020094067A1/en unknown
- 2019-11-06 WO PCT/CN2019/116015 patent/WO2020094061A1/en active Application Filing
2020
- 2020-04-16 US US16/850,509 patent/US10999581B2/en active Active
- 2020-07-28 US US16/940,826 patent/US11019344B2/en active Active
- 2020-07-28 US US16/940,877 patent/US10979717B2/en active Active
- 2020-08-07 US US16/987,844 patent/US11025915B2/en active Active
2021
- 2021-03-15 US US17/201,711 patent/US11438598B2/en active Active
- 2021-05-03 US US17/246,821 patent/US20210258572A1/en not_active Abandoned
- 2021-05-03 US US17/246,794 patent/US11930185B2/en active Active
2023
- 2023-01-12 JP JP2023002914A patent/JP2023033408A/ja active Pending
- 2023-04-13 JP JP2023065495A patent/JP2023083394A/ja active Pending
- 2023-05-01 US US18/310,344 patent/US20230269379A1/en active Pending
- 2023-06-30 US US18/345,608 patent/US20230345009A1/en active Granted
- 2023-10-19 JP JP2023179944A patent/JP2023184579A/ja active Pending
- 2023-10-20 JP JP2023181439A patent/JP2023184580A/ja active Pending
Patent Citations (94)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120328013A1 (en) | 2011-06-24 | 2012-12-27 | Madhukar Budagavi | Luma-Based Chroma Intra-Prediction for Video Coding |
US20170295365A1 (en) | 2011-06-24 | 2017-10-12 | Texas Instruments Incorporated | Luma-based chroma intra-prediction for video coding |
CN103782596A (zh) | 2011-06-28 | 2014-05-07 | 三星电子株式会社 | 使用图像的亮度分量的对图像的色度分量的预测方法和设备 |
CN103650512A (zh) | 2011-07-12 | 2014-03-19 | 英特尔公司 | 基于亮度的色度帧内预测 |
CN103096055A (zh) | 2011-11-04 | 2013-05-08 | 华为技术有限公司 | 一种图像信号帧内预测及解码的方法和装置 |
US20140233650A1 (en) | 2011-11-04 | 2014-08-21 | Huawei Technologies Co., Ltd. | Intra-Frame Prediction and Decoding Methods and Apparatuses for Image Signal |
US20130128966A1 (en) | 2011-11-18 | 2013-05-23 | Futurewei Technologies, Inc. | Scanning of Prediction Residuals in High Efficiency Video Coding |
CN104380741A (zh) | 2012-01-19 | 2015-02-25 | 华为技术有限公司 | 用于lm帧内预测的参考像素缩减 |
US20150036745A1 (en) | 2012-04-16 | 2015-02-05 | Mediatek Singapore Pte. Ltd. | Method and apparatus of simplified luma-based chroma intra prediction |
CN103379321A (zh) | 2012-04-16 | 2013-10-30 | 华为技术有限公司 | 视频图像分量的预测方法和装置 |
US20170295366A1 (en) | 2013-03-26 | 2017-10-12 | Mediatek Inc. | Method of Cross Color Intra Prediction |
CN104871537A (zh) | 2013-03-26 | 2015-08-26 | 联发科技股份有限公司 | 色彩间帧内预测的方法 |
US20150365684A1 (en) | 2013-03-26 | 2015-12-17 | Mediatek Inc. | Method of Cross Color Intra Prediction |
US20150098510A1 (en) | 2013-10-07 | 2015-04-09 | Vid Scale, Inc. | Combined scalability processing for multi-layer video coding |
US10063886B2 (en) | 2013-10-07 | 2018-08-28 | Vid Scale, Inc. | Combined scalability processing for multi-layer video coding |
CN106664410A (zh) | 2014-06-19 | 2017-05-10 | Vid拓展公司 | 用于基于三维色彩映射模型参数优化的系统和方法 |
WO2016066028A1 (en) | 2014-10-28 | 2016-05-06 | Mediatek Singapore Pte. Ltd. | Method of guided cross-component prediction for video coding |
CN107079166A (zh) | 2014-10-28 | 2017-08-18 | 联发科技(新加坡)私人有限公司 | 用于视频编码的引导交叉分量预测的方法 |
US10499068B2 (en) | 2014-12-31 | 2019-12-03 | Nokia Technologies Oy | Apparatus, a method and a computer program for video coding and decoding |
CN107211121A (zh) | 2015-01-22 | 2017-09-26 | 联发科技(新加坡)私人有限公司 | 色度分量的视频编码方法 |
US20170366818A1 (en) | 2015-01-22 | 2017-12-21 | Mediatek Singapore Pte. Ltd. | Method of Video Coding for Chroma Components |
WO2016115708A1 (en) | 2015-01-22 | 2016-07-28 | Mediatek Singapore Pte. Ltd. | Methods for chroma component coding with separate intra prediction mode |
US20180139469A1 (en) | 2015-06-19 | 2018-05-17 | Nokia Technologies Oy | An Apparatus, A Method and A Computer Program for Video Coding and Decoding |
CN107836116A (zh) | 2015-07-08 | 2018-03-23 | Vid拓展公司 | 使用交叉平面滤波的增强色度编码 |
US20170016972A1 (en) | 2015-07-13 | 2017-01-19 | Siemens Medical Solutions Usa, Inc. | Fast Prospective Motion Correction For MR Imaging |
US20170085917A1 (en) | 2015-09-23 | 2017-03-23 | Nokia Technologies Oy | Method, an apparatus and a computer program product for coding a 360-degree panoramic video |
US10045023B2 (en) | 2015-10-09 | 2018-08-07 | Telefonaktiebolaget Lm Ericsson (Publ) | Cross component prediction in video coding |
CN108464002A (zh) | 2015-11-25 | 2018-08-28 | 高通股份有限公司 | 视频译码中具有非正方形预测单元的线性模型预测 |
US20190082184A1 (en) | 2016-03-24 | 2019-03-14 | Nokia Technologies Oy | An Apparatus, a Method and a Computer Program for Video Coding and Decoding |
US9948930B2 (en) | 2016-05-17 | 2018-04-17 | Arris Enterprises Llc | Template matching for JVET intra prediction |
WO2017203882A1 (en) | 2016-05-24 | 2017-11-30 | Sharp Kabushiki Kaisha | Systems and methods for intra prediction coding |
US20190306516A1 (en) | 2016-05-24 | 2019-10-03 | Sharp Kabushiki Kaisha | Systems and methods for intra prediction coding |
US10326989B2 (en) | 2016-05-25 | 2019-06-18 | Arris Enterprises Llc | General block partitioning method |
US20170347123A1 (en) | 2016-05-25 | 2017-11-30 | Arris Enterprises Llc | Jvet coding block structure with asymmetrical partitioning |
US10523949B2 (en) | 2016-05-25 | 2019-12-31 | Arris Enterprises Llc | Weighted angular prediction for intra coding |
US10567808B2 (en) | 2016-05-25 | 2020-02-18 | Arris Enterprises Llc | Binary ternary quad tree partitioning for JVET |
US10645395B2 (en) | 2016-05-25 | 2020-05-05 | Arris Enterprises Llc | Weighted angular prediction coding for intra coding |
US20190110076A1 (en) | 2016-05-27 | 2019-04-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10484712B2 (en) | 2016-06-08 | 2019-11-19 | Qualcomm Incorporated | Implicit coding of reference line index used in intra prediction |
US20190297339A1 (en) | 2016-06-30 | 2019-09-26 | Nokia Technologies Oy | An Apparatus, A Method and A Computer Program for Video Coding and Decoding |
US20190289306A1 (en) | 2016-07-22 | 2019-09-19 | Sharp Kabushiki Kaisha | Systems and methods for coding video data using adaptive component scaling |
US20190174133A1 (en) | 2016-08-10 | 2019-06-06 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10742978B2 (en) | 2016-08-10 | 2020-08-11 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10368107B2 (en) | 2016-08-15 | 2019-07-30 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
US20180048889A1 (en) | 2016-08-15 | 2018-02-15 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
WO2018035130A1 (en) | 2016-08-15 | 2018-02-22 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
US10326986B2 (en) | 2016-08-15 | 2019-06-18 | Qualcomm Incorporated | Intra video coding using a decoupled tree structure |
WO2018039596A1 (en) | 2016-08-26 | 2018-03-01 | Qualcomm Incorporated | Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction |
US20180063531A1 (en) | 2016-08-26 | 2018-03-01 | Qualcomm Incorporated | Unification of parameters derivation procedures for local illumination compensation and cross-component linear model prediction |
US10419757B2 (en) | 2016-08-31 | 2019-09-17 | Qualcomm Incorporated | Cross-component filter |
US20180063527A1 (en) * | 2016-08-31 | 2018-03-01 | Qualcomm Incorporated | Cross-component filter |
US20180077426A1 (en) | 2016-09-15 | 2018-03-15 | Qualcomm Incorporated | Linear model chroma intra prediction for video coding |
US10382781B2 (en) | 2016-09-28 | 2019-08-13 | Qualcomm Incorporated | Interpolation filters for intra prediction in video coding |
US20190268599A1 (en) | 2016-11-08 | 2019-08-29 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
US10477240B2 (en) | 2016-12-19 | 2019-11-12 | Qualcomm Incorporated | Linear model prediction mode with sample accessing for video coding |
US10674165B2 (en) | 2016-12-21 | 2020-06-02 | Arris Enterprises Llc | Constrained position dependent intra prediction combination (PDPC) |
WO2018116925A1 (ja) | 2016-12-21 | 2018-06-28 | シャープ株式会社 | イントラ予測画像生成装置、画像復号装置、および画像符号化装置 |
US10277895B2 (en) | 2016-12-28 | 2019-04-30 | Arris Enterprises Llc | Adaptive unequal weight planar prediction |
US10616596B2 (en) | 2016-12-28 | 2020-04-07 | Arris Enterprises Llc | Unequal weight planar prediction |
WO2018132710A1 (en) | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
US20180205946A1 (en) | 2017-01-13 | 2018-07-19 | Qualcomm Incorporated | Coding video data using derived chroma mode |
WO2018140587A1 (en) | 2017-01-27 | 2018-08-02 | Qualcomm Incorporated | Bilateral filters in video coding with reduced complexity |
US10542264B2 (en) | 2017-04-04 | 2020-01-21 | Arris Enterprises Llc | Memory reduction implementation for weighted angular prediction |
US20200195970A1 (en) | 2017-04-28 | 2020-06-18 | Sharp Kabushiki Kaisha | Image decoding device and image encoding device |
US10602180B2 (en) | 2017-06-13 | 2020-03-24 | Qualcomm Incorporated | Motion vector prediction |
US20200128272A1 (en) | 2017-06-21 | 2020-04-23 | Lg Electronics Inc. | Intra-prediction mode-based image processing method and apparatus therefor |
US20200177911A1 (en) | 2017-06-28 | 2020-06-04 | Sharp Kabushiki Kaisha | Video encoding device and video decoding device |
US20200267392A1 (en) | 2017-06-29 | 2020-08-20 | Dolby Laboratories Licensing Corporation | Integrated image reshaping and video coding |
WO2019006363A1 (en) | 2017-06-30 | 2019-01-03 | Vid Scale, Inc. | LOCAL LIGHTING COMPENSATION USING GENERALIZED BI-PREDICTION |
US20190014316A1 (en) | 2017-07-05 | 2019-01-10 | Arris Enterprises Llc | Post-filtering for weighted angular prediction |
CN109274969A (zh) | 2017-07-17 | 2019-01-25 | 华为技术有限公司 | 色度预测的方法和设备 |
US20190028701A1 (en) | 2017-07-24 | 2019-01-24 | Arris Enterprises Llc | Intra mode jvet coding |
US20190028702A1 (en) | 2017-07-24 | 2019-01-24 | Arris Enterprises Llc | Intra mode jvet coding |
US10237558B2 (en) | 2017-08-09 | 2019-03-19 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
US10701402B2 (en) | 2017-10-02 | 2020-06-30 | Arris Enterprises Llc | System and method for reducing blocking artifacts and providing improved coding efficiency |
US20200288135A1 (en) | 2017-10-09 | 2020-09-10 | Canon Kabushiki Kaisha | New sample sets and new down-sampling schemes for linear component sample prediction |
US10575023B2 (en) | 2017-10-09 | 2020-02-25 | Arris Enterprises Llc | Adaptive unequal weight planar prediction |
US20190110045A1 (en) | 2017-10-09 | 2019-04-11 | Qualcomm Incorporated | Position-dependent prediction combinations in video coding |
US10694188B2 (en) | 2017-12-18 | 2020-06-23 | Arris Enterprises Llc | System and method for constructing a plane for planar prediction |
US20190313108A1 (en) | 2018-04-05 | 2019-10-10 | Qualcomm Incorporated | Non-square blocks in video coding |
US10609402B2 (en) | 2018-05-02 | 2020-03-31 | Tencent America LLC | Method and apparatus for prediction and transform for small blocks |
US20190342546A1 (en) | 2018-05-03 | 2019-11-07 | FG Innovation Company Limited | Device and method for coding video data based on different reference sets in linear model prediction |
US20200195930A1 (en) | 2018-07-02 | 2020-06-18 | Lg Electronics Inc. | Cclm-based intra-prediction method and device |
CN109005408A (zh) | 2018-08-01 | 2018-12-14 | 北京奇艺世纪科技有限公司 | 一种帧内预测方法、装置及电子设备 |
US20200359051A1 (en) | 2018-11-06 | 2020-11-12 | Beijing Bytedance Network Technology Co., Ltd. | Position dependent intra prediction |
US20200366896A1 (en) | 2018-11-06 | 2020-11-19 | Beijing Bytedance Network Technology Co., Ltd. | Simplified parameter derivation for intra prediction |
US20200366910A1 (en) | 2018-11-06 | 2020-11-19 | Beijing Bytedance Network Technology Co., Ltd. | Complexity reduction in parameter derivation for intra prediction |
US20200154115A1 (en) | 2018-11-08 | 2020-05-14 | Qualcomm Incorporated | Cross-component prediction for video coding |
US20200154100A1 (en) | 2018-11-14 | 2020-05-14 | Tencent America LLC | Constrained intra prediction and unified most probable mode list generation |
US20200366933A1 (en) | 2018-12-07 | 2020-11-19 | Beijing Bytedance Network Technology Co., Ltd. | Context-based intra prediction |
US20200195976A1 (en) | 2018-12-18 | 2020-06-18 | Tencent America LLC | Method and apparatus for video encoding or decoding |
US20200260070A1 (en) | 2019-01-15 | 2020-08-13 | Lg Electronics Inc. | Image coding method and device using transform skip flag |
US20200382769A1 (en) | 2019-02-22 | 2020-12-03 | Beijing Bytedance Network Technology Co., Ltd. | Neighboring sample selection for intra prediction |
US20200382800A1 (en) | 2019-02-24 | 2020-12-03 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction |
Non-Patent Citations (40)
Title |
---|
Benjamin Bross et al. "Versatile Video Coding (Draft 1)" Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, U.S., Apr. 10-20, 2018, Document JVET-J1001-v1. |
Benjamin Bross et al. "Versatile Video Coding (Draft 2)" Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, U.S., Apr. 10-20, 2018, Document JVET-J1001-v2. |
Benjamin Bross et al. "Versatile Video Coding (Draft 2)" Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, JVET-K1001-v7, Jul. 2018. |
Guillaume Laroche et al. "CE3-5.1: On Cross-Component Linear Model Simplification" JVET Document Management System, JVET-L0191, 2018. |
Guillaume Laroche et al. "Non-CE3: On Cross-Component Linear Model Simplification," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, JVET-K0204-v1 and v3, Jul. 18, 2018. |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/076361 dated May 18, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/076362 dated May 9, 2020 (11 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/115985 dated Feb. 1, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/115992 dated Feb. 5, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/115999 dated Jan. 31, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/116015 dated Jan. 23, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/116027 dated Jan. 23, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/116028 dated Jan. 23, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/121850 dated Feb. 7, 2020 (11 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2019/123229 dated Mar. 6, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/080823 dated Jun. 16, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/081304 dated Jun. 23, 2020 (11 pages). |
Jangwon Choi et al. "CE3-related: Reduced Number of Reference Samples of CCLM Parameter Calculation," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 3-12, 2018, JVET-L0138-v3, Oct. 2018. |
Jangwon Choi et al. "CE3-related: Reduced Number of Reference Samples of CCLM Parameter Calculation," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 12, 2018, JVET-L0138-v2, Oct. 2018. |
Jangwon Choi et al. "Non-CE3: CCLM Prediction for 4:2:2 and 4:4:4 Color Format," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 2019, Document JVET-N0229. |
Jianle Chen et al. "Algorithm Description of Joint Exploration Test Model 4," Joint Video Exploration Team (JVET) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 4th Meeting: Chengdu, CN, Oct. 21, 2016, JVET-D1001, Oct. 2016. |
Jianle Chen et al. "Algorithm Description of Versatile Video Coding and Test Model 3 (VTM 3)" Joint Video Experts Team (JVET) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 2018, Document No. JVET-L1002-v1. |
Junyan Huo et al. "CE3-related: Fixed Reference Samples Design for CCLM," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-M0211, Jan. 2019. |
Kai Zhang et al. "CE3-related: CCLM Prediction with Single-Line Neighbouring Luma Samples," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, JVET-L0329, Oct. 12, 2018. |
Kai Zhang et al. "EE5: Enhanced Cross-Component Linear Model Intra-Prediction," Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 5th Meeting, Geneva, CH, Jan. 12-20, 2017, JVET-E0077, Jan. 2017. |
Kai Zhang et al. "Enhanced Cross-Component Linear Model for Chroma Intra-Prediction in Video Coding," IEEE Transactions on Image Processing, Aug. 2018, 27(8):3983-3997. |
Kai Zhang et al. "Enhanced Cross-Component Linear Model Intra-Prediction" Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 4th Meeting: Chengdu, CN, Oct. 15-21, 2016, JVET-D0110 Oct. 21, 2016. |
Liang Zhao et al. "CE3-related: Simplified Look-Up Table for CCLM Mode", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, JVET-M0493, Jan. 2019. |
Non-Final Office Action from U.S. Appl. No. 16/940,826 dated Oct. 1, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/940,877 dated Sep. 17, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/987,670 dated Sep. 8, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/987,844 dated Sep. 25, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/993,526 dated Oct. 9, 2020. |
Notice of Allowance from U.S. Appl. No. 16/940,877 dated Dec. 9, 2020. |
Notice of Allowance from U.S. Appl. No. 16/993,487 dated Sep. 29, 2020. |
Shuai Wan "Non-CE3: CCLM Performance of Extended Neighboring Region," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-L0107, Oct. 12, 2018. |
Xiang Ma et al. "CE3: CCLM/MDLM Using Simplified Coefficients Derivation Method (Test 5.6.1, 5.6.2 and 5.6.3)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 12, 2018, JVET-L0341-r1, Oct. 2018. |
Xiang Ma et al. "CE3: Classification-based mean value for CCLM Coefficients Derivation", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, JVET-M0401-v1, Jan. 2019. |
Xiang Ma et al. "CE3: Multi-directional LM (MDLM) (Test 5.4.1 and 5.4.2)" Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, JVET-L0338, Oct. 3-12, 2018. |
Xiang Ma et al. "CE3-related: CCLM Coefficients Derivation Method without Down-Sampling Operation" JVET Document Management Systems, JVET-L0340, 2018. |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11336907B2 (en) * | 2018-07-16 | 2022-05-17 | Huawei Technologies Co., Ltd. | Video encoder, video decoder, and corresponding encoding and decoding methods |
US11438598B2 (en) | 2018-11-06 | 2022-09-06 | Beijing Bytedance Network Technology Co., Ltd. | Simplified parameter derivation for intra prediction |
US11930185B2 (en) | 2018-11-06 | 2024-03-12 | Beijing Bytedance Network Technology Co., Ltd. | Multi-parameters based intra prediction |
US11902507B2 (en) | 2018-12-01 | 2024-02-13 | Beijing Bytedance Network Technology Co., Ltd | Parameter derivation for intra prediction |
US11595687B2 (en) | 2018-12-07 | 2023-02-28 | Beijing Bytedance Network Technology Co., Ltd. | Context-based intra prediction |
US11115655B2 (en) | 2019-02-22 | 2021-09-07 | Beijing Bytedance Network Technology Co., Ltd. | Neighboring sample selection for intra prediction |
US11729405B2 (en) | 2019-02-24 | 2023-08-15 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction |
US11438581B2 (en) | 2019-03-24 | 2022-09-06 | Beijing Bytedance Network Technology Co., Ltd. | Conditions in parameter derivation for intra prediction |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10999581B2 (en) | Position based intra prediction | |
US11057642B2 (en) | Context-based intra prediction | |
US11729405B2 (en) | Parameter derivation for intra prediction | |
US11115655B2 (en) | Neighboring sample selection for intra prediction | |
US11902507B2 (en) | Parameter derivation for intra prediction | |
US11438581B2 (en) | Conditions in parameter derivation for intra prediction | |
WO2020192717A1 (en) | Parameter derivation for inter prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD., CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, HONGBIN;WANG, YUE;REEL/FRAME:052419/0876
Effective date: 20191030
Owner name: BYTEDANCE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, KAI;ZHANG, LI;REEL/FRAME:052419/0799
Effective date: 20191029
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: BYTEDANCE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XU, JIZHENG;REEL/FRAME:052485/0377
Effective date: 20191030
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |