WO2023194104A1 - Temporal intra mode prediction - Google Patents

Temporal intra mode prediction

Info

Publication number
WO2023194104A1
Authority
WO
WIPO (PCT)
Prior art keywords
video block
motion
block
intra prediction
reference samples
Prior art date
Application number
PCT/EP2023/057363
Other languages
English (en)
Inventor
Franck Galpin
Thierry DUMAS
Karam NASER
Philippe Bordes
Original Assignee
Interdigital Ce Patent Holdings, Sas
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Interdigital Ce Patent Holdings, Sas filed Critical Interdigital Ce Patent Holdings, Sas
Publication of WO2023194104A1

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • At least one of the present embodiments generally relates to a method or an apparatus for video encoding or decoding, compression or decompression.
  • image and video coding schemes usually employ prediction, including motion vector prediction, and transform to leverage spatial and temporal redundancy in the video content.
  • intra or inter prediction is used to exploit the intra or inter frame correlation, then the differences between the original image and the predicted image, often denoted as prediction errors or prediction residuals, are transformed, quantized, and entropy coded.
  • the compressed data are decoded by inverse processes corresponding to the entropy coding, quantization, transform, and prediction.
  • At least one of the present embodiments generally relates to a method or an apparatus for video encoding or decoding, and more particularly, to a method or an apparatus for improving the coding efficiency of intra prediction mode.
  • a method comprising steps for extracting motion information for a video block; obtaining motion compensated reference samples from the motion information for the video block; determining an intra prediction from the motion compensated reference samples for the video block; and encoding at least a portion of the video block using the intra prediction.
  • another method comprises steps for extracting motion information for a video block; obtaining motion compensated reference samples from the motion information for the video block; determining an intra prediction from the motion compensated reference samples for the video block; and decoding at least a portion of the video block using the intra prediction.
  • an apparatus comprising a processor.
  • the processor can be configured to encode a block of a video or decode a bitstream by executing any of the aforementioned methods.
  • a device comprising an apparatus according to any of the decoding embodiments; and at least one of (i) an antenna configured to receive a signal, the signal including the video block, (ii) a band limiter configured to limit the received signal to a band of frequencies that includes the video block, or (iii) a display configured to display an output representative of a video block.
  • a non-transitory computer readable medium containing data content generated according to any of the described encoding embodiments or variants.
  • a signal comprising video data generated according to any of the described encoding embodiments or variants.
  • a bitstream is formatted to include data content generated according to any of the described encoding embodiments or variants.
  • a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out any of the described decoding embodiments or variants.
  • Figure 1 illustrates the 67 intra prediction modes in VVC and ECM.
  • Figure 2 illustrates matrix weighted intra prediction process.
  • Figure 3 illustrates the definition of samples used by PDPC applied to diagonal and adjacent angular intra modes.
  • Figure 4 illustrates an example of four reference lines neighboring a prediction block.
  • Figure 5 illustrates intra prediction directions.
  • Figure 6 illustrates an example workflow for temporal intra prediction.
  • Figure 7 illustrates an example of displaced reference samples.
  • Figure 8 illustrates an example of a simplified one pass motion compensation and filtering process.
  • Figure 9 illustrates an example of new reference sample availability.
  • Figure 10 illustrates an example of new intra prediction directions.
  • Figure 11 illustrates an example of inner samples selection.
  • Figure 12 illustrates one embodiment of a method for performing the described aspects.
  • Figure 13 illustrates another embodiment of a method for performing the described aspects.
  • Figure 14 illustrates one embodiment of an apparatus for implementing the described aspects.
  • Figure 15 illustrates a generic video encoding or compression system.
  • Figure 16 illustrates a generic video decoding or decompression system.
  • Figure 17 illustrates a processor-based system for implementing the described aspects.
  • the intra prediction is a fundamental coding tool in video compression.
  • the encoder selects the best prediction mode and signals its index to the decoder to perform the same prediction.
  • the intra prediction is performed using reference samples, which are already decoded samples around the current block to be decoded.
  • Temporal based intra prediction
  • prediction is created using samples from one or more reference frames.
  • the prediction from the motion compensation of the block is not always suitable.
  • the temporal gap can be so large that motion compensated blocks do not necessarily offer a good RD (rate distortion) compromise, even for areas common between the two frames.
  • the ratio of intra coded blocks in the first inter frame of a GOP of size 32 is commonly above 50%.
  • intra mode prediction increases the decoder complexity because it introduces latency during the decoding of an inter slice: all inter and intra blocks around the current block should first be reconstructed in order to start the reconstruction of the current intra block. For a low delay decoder, it can introduce further latency in the pipeline.
  • inter based prediction allows full parallelism of inter based reconstruction.
  • the number of directional intra modes in VVC is extended from 33, as used in HEVC, to 65.
  • the new directional modes not in HEVC are depicted as red dotted arrows in Figure 1, and the planar and DC modes remain the same.
  • These denser directional intra prediction modes apply for all block sizes and for both luma and chroma intra predictions.
  • in HEVC, every intra-coded block has a square shape and the length of each of its sides is a power of 2. Thus, no division operations are required to generate an intra predictor using DC mode.
  • in VVC, blocks can have a rectangular shape that necessitates the use of a division operation per block in the general case. To avoid division operations for DC prediction, only the longer side is used to compute the average for non-square blocks, as in the sketch below.
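  • As an illustration only (not normative text), the following Python sketch averages only the longer side of a non-square block so the divisor stays a power of two and the division reduces to a shift; W and H are assumed to be powers of two:

      def dc_value(top, left):
          """top: W reference samples above the block; left: H samples left of it."""
          w, h = len(top), len(left)
          if w == h:
              # square block: average over both sides; the divisor 2*w is a power of two
              return (sum(top) + sum(left) + w) >> w.bit_length()
          refs = top if w > h else left       # non-square: use only the longer side
          n = len(refs)
          return (sum(refs) + (n >> 1)) >> (n.bit_length() - 1)

      # e.g. for an 8x4 block only the 8 top samples are averaged
      assert dc_value([100] * 8, [50] * 4) == 100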
  • a unified 6-MPM (most probable mode) list is used for intra blocks irrespective of whether MRL and ISP coding tools are applied or not.
  • the MPM list is constructed based on the intra modes of the left and above neighboring blocks. Supposing the mode of the left block is denoted as Left and the mode of the above block is denoted as Above, the unified MPM list is constructed as follows:
  • when Left and Above are both angular and different, let Max = max(Left, Above) and Min = min(Left, Above); the derived neighbouring modes then depend on whether Max - Min is equal to 1, greater than or equal to 62, or equal to 2 (an illustrative sketch follows).
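  • To make the construction concrete, below is a loose Python sketch of a 6-entry MPM list built from Left and Above. The exact neighbouring-mode offsets chosen in each Max - Min case of the normative derivation differ, so treat this as illustrative only:

      PLANAR, DC = 0, 1           # angular modes occupy 2..66

      def ang(m):
          # wrap an angular mode into the valid range [2, 66]
          return 2 + (m - 2) % 65

      def build_mpm(left, above):
          mpm = [PLANAR]
          for m in (left, above):
              if m not in mpm:
                  mpm.append(m)
          mx = max(left, above)
          # fill with angular neighbours of the larger mode, then DC
          for cand in (ang(mx - 1), ang(mx + 1), ang(mx - 2), DC, ang(mx + 2)):
              if cand not in mpm and len(mpm) < 6:
                  mpm.append(cand)
          return mpm

      print(build_mpm(18, 50))    # -> [0, 18, 50, 49, 51, 48]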
  • the first bin of the mpm index codeword is CABAC context coded. In total three contexts are used, corresponding to whether the current intra block is MRL enabled, ISP enabled, or a normal intra block.
  • the remaining (non-MPM) modes are coded with a Truncated Binary Code (TBC).
  • Matrix weighted intra prediction (MIP) is a newly added intra prediction technique in VVC. For predicting the samples of a rectangular block of width W and height H, MIP takes one line of H reconstructed neighbouring boundary samples left of the block and one line of W reconstructed neighbouring boundary samples above the block as input. If the reconstructed samples are unavailable, they are generated as in conventional intra prediction. The generation of the prediction signal is based on three steps, which are averaging, matrix vector multiplication and linear interpolation, as shown in Figure 2.
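  • The three MIP stages named above can be sketched as follows; this is a toy Python/NumPy rendering where the matrix A and offset b are illustrative stand-ins for the trained, integer, mode-dependent MIP parameters, and the boundary lengths are assumed divisible by 4:

      import numpy as np

      def mip_predict(top, left, A, b, w, h):
          # 1) averaging: reduce each boundary to 4 samples
          bdry = np.concatenate([top.reshape(4, -1).mean(axis=1),
                                 left.reshape(4, -1).mean(axis=1)])
          # 2) matrix-vector multiplication on the reduced boundary
          reduced = (A @ bdry + b).reshape(4, 4)
          # 3) linear interpolation up to the full w x h block
          xs, ys = np.linspace(0, 3, w), np.linspace(0, 3, h)
          rows = np.array([np.interp(xs, np.arange(4), r) for r in reduced])
          return np.array([np.interp(ys, np.arange(4), c) for c in rows.T]).T

      top, left = np.full(8, 128.0), np.full(8, 64.0)
      A, b = np.full((16, 8), 1.0 / 8.0), np.zeros(16)   # toy parameters
      pred = mip_predict(top, left, A, b, w=8, h=8)      # 8x8 prediction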
  • Position dependent intra prediction combination: in VVC, the results of intra prediction of DC, planar and several angular modes are further modified by a position dependent intra prediction combination (PDPC) method.
  • PDPC is an intra prediction method which invokes a combination of the boundary reference samples and HEVC style intra prediction with filtered boundary reference samples.
  • PDPC is applied to the following intra modes without signalling: planar, DC, intra angles less than or equal to horizontal, and intra angles greater than or equal to vertical and less than or equal to 80. If the current block is coded in BDPCM mode or the MRL index is larger than 0, PDPC is not applied.
  • When PDPC is applied to DC, planar, horizontal, and vertical intra modes, additional boundary filters are not needed, unlike the HEVC DC mode boundary filter or horizontal/vertical mode edge filters.
  • PDPC process for DC and Planar modes is identical.
  • For angular modes, if the current angular mode is HOR_IDX or VER_IDX, the left or top reference samples are not used, respectively.
  • the PDPC weights and scale factors are dependent on prediction modes and the block sizes. PDPC is applied to the block with both width and height greater than or equal to 4.
  • Figure 3 illustrates the definition of reference samples (R(x,-1) and R(-1,y)) for PDPC applied over various prediction modes.
  • the prediction sample pred(x’, y’) is located at (x’, y’) within the prediction block.
  • the reference samples R(x,-1) and R(-1,y) could be located in a fractional sample position. In this case, the sample value of the nearest integer sample location is used.
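  • A hedged sketch of the combination for the planar/DC case follows; the weight formulation is the commonly cited VVC one and is an assumption here, not text quoted from this application:

      from math import log2

      def pdpc(pred, ref_top, ref_left):
          """pred: h x w integer intra prediction; ref_top[x] = R(x,-1); ref_left[y] = R(-1,y)."""
          h, w = len(pred), len(pred[0])
          scale = (int(log2(w)) + int(log2(h)) - 2) >> 2
          out = [row[:] for row in pred]
          for y in range(h):
              for x in range(w):
                  wT = 32 >> min(31, (y << 1) >> scale)   # decays away from the top edge
                  wL = 32 >> min(31, (x << 1) >> scale)   # decays away from the left edge
                  out[y][x] = (wL * ref_left[y] + wT * ref_top[x]
                               + (64 - wL - wT) * pred[y][x] + 32) >> 6
          return out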
  • Multiple reference line (MRL) intra prediction uses more reference lines for intra prediction.
  • In Figure 4, an example of four reference lines is depicted, where the samples of segments A and F are not fetched from reconstructed neighbouring samples but padded with the closest samples from segments B and E, respectively.
  • HEVC intra-picture prediction uses the nearest reference line (i.e., reference line 0).
  • two additional lines (reference line 1 and reference line 3) are used.
  • the index of the selected reference line (mrl_idx) is signalled and used to generate the intra predictor.
  • for a reference line index greater than 0, only the additional reference line modes are included in the MPM list, and only the mpm index is signalled without the remaining mode.
  • the reference line index is signalled before intra prediction modes, and Planar mode is excluded from intra prediction modes in case a nonzero reference line index is signalled.
  • MRL is disabled for the first line of blocks inside a CTU to prevent using extended reference samples outside the current CTU line. Also, PDPC is disabled when an additional line is used.
  • in MRL mode, the derivation of the DC value in DC intra prediction mode for non-zero reference line indices is aligned with that of reference line index 0.
  • MRL requires the storage of 3 neighboring luma reference lines within a CTU to generate predictions. Note that the Cross-Component Linear Model (CCLM) tool also requires 3 neighboring luma reference lines for its down-sampling filters. MRL is designed to use the same 3 lines as CCLM to reduce the storage requirements for decoders.
  • variable refH specifying the reference samples height
  • the variable filterFlag is derived as follows: if refIdx is equal to 0, nTbW * nTbH is greater than 32, and IntraSubPartitionsSplitType is equal to ISP_NO_SPLIT, then filterFlag is set equal to 1; otherwise, filterFlag is set equal to 0.
  • Outputs of this process are the predicted samples predSamples[ x ][ y ], with x = 0..nTbW - 1 and y = 0..nTbH - 1, derived as follows:
  • predV[ x ][ y ] = ( ( nTbH - 1 - y ) * p[ x ][ -1 ] + ( y + 1 ) * p[ -1 ][ nTbH ] ) << Log2( nTbW )
  • predH[ x ][ y ] = ( ( nTbW - 1 - x ) * p[ -1 ][ y ] + ( x + 1 ) * p[ nTbW ][ -1 ] ) << Log2( nTbH )
  • predSamples[ x ][ y ] = ( predV[ x ][ y ] + predH[ x ][ y ] + nTbW * nTbH ) >> ( Log2( nTbW ) + Log2( nTbH ) + 1 )
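  • The planar derivation above can be rendered as a short Python sketch, assuming nTbW and nTbH are powers of two, with top = p[x][-1], left = p[-1][y], top_right = p[nTbW][-1] and bottom_left = p[-1][nTbH]:

      def planar(top, left, top_right, bottom_left):
          w, h = len(top), len(left)
          lw, lh = w.bit_length() - 1, h.bit_length() - 1   # Log2(nTbW), Log2(nTbH)
          pred = [[0] * w for _ in range(h)]
          for y in range(h):
              for x in range(w):
                  pv = ((h - 1 - y) * top[x] + (y + 1) * bottom_left) << lw
                  ph = ((w - 1 - x) * left[y] + (x + 1) * top_right) << lh
                  pred[y][x] = (pv + ph + w * h) >> (lw + lh + 1)
          return pred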
  • a variable dcVal is derived as follows:
  • variable refH specifying the reference samples height
  • nTbS is set equal to ( Log2( nTbW ) + Log2( nTbH ) ) >> 1.
  • the variable filterFlag is derived as follows: if IntraSubPartitionsSplitType is not equal to ISP_NO_SPLIT, filterFlag is set equal to 0.
  • otherwise, the variable minDistVerHor is set equal to Min( Abs( predModeIntra - 50 ), Abs( predModeIntra - 18 ) ), and the variable intraHorVerDistThres[ nTbS ] is specified in Table 23; if minDistVerHor is greater than intraHorVerDistThres[ nTbS ], filterFlag is set equal to 1; otherwise, filterFlag is set equal to 0.
  • Figure 5 illustrates the 93 prediction directions, where the dashed directions are associated with the wide-angle modes that are only applied to non-square blocks.
  • Reference samples used in the intra prediction are constructed using a motion compensated version of the reference samples template.
  • global motion information is sent at the slice level.
  • at least one global motion model is computed at the encoder and sent in the slice header (or picture header).
  • the motion model can be an affine 4, affine 6 or homographic model for example.
  • several models can be transmitted.
  • the model parameters can be sent as the motion of the corners of the frame, coded differentially:
  • Top-left corner: sent as a motion vector using the mvd encoding of the codec
  • Top-right corner: send the mvd of the difference between the motion of the top-right and the motion of the top-left, already decoded
  • Bottom-right corner: send the mvd of the difference between the bottom-right motion and the average of the top-left and bottom-left motion vectors
  • the motion of the current block is computed, for example using the motion of the center of the block.
  • the top-left or another location can also be used to compute the motion of the block.
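  • A hedged Python sketch of this stage is given below: the corner motion vectors are decoded differentially as described, and a 4-parameter affine model built from the two top corners is evaluated at the block centre. The names and the exact corner predictors are illustrative assumptions, not the application's normative derivation:

      def decode_corner_mvs(mvd_tl, mvd_tr, mvd_br):
          tl = mvd_tl                                   # top-left: plain mvd
          tr = (tl[0] + mvd_tr[0], tl[1] + mvd_tr[1])   # top-right: predicted from TL
          # bottom-right: predicted from the average of already decoded corner motions
          px, py = (tl[0] + tr[0]) // 2, (tl[1] + tr[1]) // 2
          br = (px + mvd_br[0], py + mvd_br[1])
          return tl, tr, br

      def block_motion(tl, tr, frame_w, bx, by, bw, bh):
          # 4-parameter affine model evaluated at the block centre (cx, cy)
          cx, cy = bx + bw / 2.0, by + bh / 2.0
          a = (tr[0] - tl[0]) / frame_w
          b = (tr[1] - tl[1]) / frame_w
          return (tl[0] + a * cx - b * cy, tl[1] + b * cx + a * cy)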
  • an index of the model to use is signaled for the block using this mode.
  • this index can be coded with a context-based coding using neighboring blocks (typically the top and left blocks).
  • Another way to get the motion information is to use the same process as the one used in inter coded blocks to deduce the motion.
  • a list of probable motion vector candidates is computed, using a process similar to the merge inter list creation for example.
  • the first candidate of the list is then used to infer the motion of the current block.
  • an index of the candidate to use to infer the motion is sent.
  • a template-based re-ordering similar to ARMC-TM is used to select the most probable candidate in the list.
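  • A minimal sketch of such template-based re-ordering follows; fetch_ref_template is a hypothetical helper (not part of the application) returning the reference samples under the template displaced by a candidate motion vector:

      def reorder_candidates(cands, template, fetch_ref_template):
          def tm_cost(mv):
              ref = fetch_ref_template(mv)   # same shape as template, displaced by mv
              return sum(abs(a - b) for a, b in zip(ref, template))
          return sorted(cands, key=tm_cost)  # lowest template-matching SAD first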
  • the collocated frame used as a reference frame for the motion vector is signaled in the slice header.
  • the index is signaled at the block level, the same way the reference frame index is signaled in inter coded blocks.
  • the reference samples can be constructed.
  • the motion vector decoded at the previous stage is applied to the collocated block position.
  • the motion compensation is done pixel-wise.
  • the sub-pixel motion compensation filter and Gaussian filtering stages can be done in one stage.
  • horizontal reference samples (corresponding to the top neighbors in the block) are only filtered in the horizontal direction, i.e., the sub-pixel motion compensation filter and the Gaussian filter are applied only horizontally.
  • the same logic applies to the vertical samples.
  • the resulting filter of the sub-pixel motion compensation and Gaussian filtering is reduced to 3 taps, giving the same complexity as the current intra smoothing of reference samples.
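  • The one-pass idea can be sketched as follows; the filter taps are illustrative assumptions, not the codec's actual coefficients. The motion compensation and Gaussian filters are cascaded into a single convolution and then cropped to 3 taps:

      import numpy as np

      def combined_3tap(mc_taps, gauss_taps):
          full = np.convolve(mc_taps, gauss_taps)   # cascade of the two filters
          c = len(full) // 2
          taps = full[c - 1:c + 2]                  # keep 3 taps around the centre
          return taps / taps.sum()                  # renormalise to unit gain

      def filter_row(samples, taps):
          # horizontal-only filtering of the top reference line
          return np.convolve(samples, taps, mode="same")

      taps = combined_3tap(np.array([-1, 5, 5, -1]) / 8.0,
                           np.array([1, 2, 1]) / 4.0)
      row = filter_row(np.arange(16, dtype=float), taps)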
  • both directions are filtered, requiring more reference samples to perform the full reference samples construction.
  • MIP modes are also extended to take into account the newly available samples.
  • planar mode can take advantage of this.
  • the DC value can also be adapted to the newly available reference samples:
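  • For example, a hedged sketch of one possible adaptation: average over every reference line made available by the motion compensation (here all four sides) rather than only the top and left lines:

      def dc_all_sides(top, left, bottom, right):
          refs = list(top) + list(left) + list(bottom) + list(right)
          return (sum(refs) + len(refs) // 2) // len(refs)   # rounded average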
  • Inner reference samples temporal intra mode
  • As reference samples are also available inside the block, they can be used to give a better prediction (note: this is again technically an inter prediction, but it does not try to accurately predict each pixel of the block).
  • all the intra reference samples are taken as in a regular prediction but on the inner side of the border of the block instead of the outer side as depicted in Figure 11.
  • planar mode becomes:
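  • The exact formula is not reproduced in this extract; a hedged sketch of one plausible form interpolates between the motion compensated first/last rows and columns inside the block (w and h of at least 2 are assumed):

      def inner_planar(inner_top, inner_left, inner_bottom, inner_right):
          w, h = len(inner_top), len(inner_left)
          pred = [[0] * w for _ in range(h)]
          for y in range(h):
              for x in range(w):
                  pv = (h - 1 - y) * inner_top[x] + y * inner_bottom[x]
                  ph = (w - 1 - x) * inner_left[y] + x * inner_right[y]
                  pred[y][x] = round((pv / (h - 1) + ph / (w - 1)) / 2.0)
          return pred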
  • a more generic variant is to use all samples of the block. However, the goal of the prediction is NOT to give an accurate prediction of the interior of the block (otherwise, an inter based mode would have been selected).
  • a prediction similar to MIP is created: reference samples are put in a vector and a linear function (matrix vector multiplication) is computed to create prediction samples (or subset of prediction samples).
  • a dataset is created by using intra blocks from the encoding of a dataset, for which a motion vector is inferred using a method as described earlier. Compensated blocks that are too different from the original blocks (according to some threshold) are not used in the dataset; a filtering sketch is given below.
  • a subset of the prediction is created and linear interpolation is used to fill the whole prediction in order to limit the complexity.
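  • The dataset filtering rule mentioned above can be sketched as follows; the threshold and the mean-absolute-difference criterion are illustrative assumptions:

      def build_dataset(pairs, threshold):
          kept = []
          for original, compensated in pairs:   # flattened sample lists
              mad = sum(abs(a - b) for a, b in zip(original, compensated)) / len(original)
              if mad <= threshold:              # keep only well-compensated blocks
                  kept.append((original, compensated))
          return kept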
  • the table below shows an extract of the decoding of a coding unit, specifically the intra mode decoding.
  • a flag is added at the top of the intra syntax to signal the use of the temporal intra prediction.
  • the temporal predictor used is inferred (for example using the global motion model).
  • the same syntax as before to describe the intra prediction is kept, but the mode will apply to the displaced block.
  • the coding unit decoding contains a new syntax element related to the temporal intra prediction.
  • the remainder of the intra direction is slightly changed (using intra_luma_mpm_remainder_ext) in order to take into account the added negative directions.
  • the final syntax is shown below:
  • the global motion model parameters are also transmitted, for example using the syntax below:
  • the syntax element model_order is an integer between 0 and 3 to control the order of the motion model (translational, affine 4, affine 6 or homographic).
  • the cpmv are coded differentially by predicting each corner using the already available corners and the associated model.
  • One embodiment of a method 1200 under the general aspects described here is shown in Figure 12.
  • the method commences at start block 1201 and control proceeds to block 1210 for extracting motion information for a video block.
  • Control proceeds from block 1210 to block 1220 for obtaining motion compensated reference samples from the motion information for the video block.
  • Control proceeds from block 1220 to block 1230 for determining an intra prediction from the motion compensated reference samples for the video block.
  • Control proceeds from block 1230 to block 1240 for encoding at least a portion of the video block using the intra prediction.
  • One embodiment of a method 1300 under the general aspects described here is shown in Figure 13.
  • the method commences at start block 1301 and control proceeds to block 1310 for extracting motion information for a video block.
  • Control proceeds from block 1310 to block 1320 for obtaining motion compensated reference samples from the motion information for the video block.
  • Control proceeds from block 1320 to block 1330 for determining an intra prediction from the motion compensated reference samples for the video block.
  • Control proceeds from block 1330 to block 1340 for decoding at least a portion of the video block using the intra prediction.
  • Figure 14 shows one embodiment of an apparatus 1400 for encoding, decoding, compressing, or decompressing video data using any of the above methods, or variations.
  • the apparatus comprises Processor 1410 and can be interconnected to a memory 1420 through at least one port. Both Processor 1410 and memory 1420 can also have one or more additional interconnections to external connections.
  • Processor 1410 is also configured to either insert or receive information in a bitstream and to either compress, encode, or decode using any of the described aspects.
  • the embodiments described here include a variety of aspects, including tools, features, embodiments, models, approaches, etc. Many of these aspects are described with specificity and, at least to show the individual characteristics, are often described in a manner that may sound limiting. However, this is for purposes of clarity in description, and does not limit the application or scope of those aspects. Indeed, all of the different aspects can be combined and interchanged to provide further aspects. Moreover, the aspects can be combined and interchanged with aspects described in earlier filings as well.
  • Figures 15, 16, and 17 provide some embodiments, but other embodiments are contemplated and the discussion of Figures 15, 16, and 17 does not limit the breadth of the implementations.
  • At least one of the aspects generally relates to video encoding and decoding, and at least one other aspect generally relates to transmitting a bitstream generated or encoded.
  • These and other aspects can be implemented as a method, an apparatus, a computer readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described, and/or a computer readable storage medium having stored thereon a bitstream generated according to any of the methods described.
  • the terms “reconstructed” and “decoded” may be used interchangeably, the terms “pixel” and “sample” may be used interchangeably, the terms “image,” “picture” and “frame” may be used interchangeably.
  • the term “reconstructed” is used at the encoder side while “decoded” is used at the decoder side.
  • each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined.
  • Various methods and other aspects described in this application can be used to modify modules, for example, the intra prediction, entropy coding, and/or decoding modules (160, 360, 145, 330), of a video encoder 100 and decoder 200 as shown in Figure 15 and Figure 16.
  • the present aspects are not limited to VVC or HEVC, and can be applied, for example, to other standards and recommendations, whether pre-existing or future-developed, and extensions of any such standards and recommendations (including VVC and HEVC). Unless indicated otherwise, or technically precluded, the aspects described in this application can be used individually or in combination.
  • Figure 15 illustrates an encoder 100. Variations of this encoder 100 are contemplated, but the encoder 100 is described below for purposes of clarity without describing all expected variations.
  • the video sequence may go through pre-encoding processing (101), for example, applying a color transform to the input color picture (e.g., conversion from RGB 4:4:4 to YCbCr 4:2:0), or performing a remapping of the input picture components in order to get a signal distribution more resilient to compression (for instance using a histogram equalization of one of the color components).
  • Metadata can be associated with the pre-processing and attached to the bitstream.
  • a picture is encoded by the encoder elements as described below.
  • the picture to be encoded is partitioned (102) and processed in units of, for example, CUs.
  • Each unit is encoded using, for example, either an intra or inter mode.
  • intra prediction 160
  • inter mode motion estimation (175) and compensation (170) are performed.
  • the encoder decides (105) which one of the intra mode or inter mode to use for encoding the unit, and indicates the intra/inter decision by, for example, a prediction mode flag.
  • Prediction residuals are calculated, for example, by subtracting (110) the predicted block from the original image block.
  • the prediction residuals are then transformed (125) and quantized (130).
  • the quantized transform coefficients, as well as motion vectors and other syntax elements, are entropy coded (145) to output a bitstream.
  • the encoder can skip the transform and apply quantization directly to the non-transformed residual signal.
  • the encoder can bypass both transform and quantization, i.e., the residual is coded directly without the application of the transform or quantization processes.
  • the encoder decodes an encoded block to provide a reference for further predictions.
  • the quantized transform coefficients are de-quantized (140) and inverse transformed (150) to decode prediction residuals.
  • In-loop filters (165) are applied to the reconstructed picture to perform, for example, deblocking/SAO (Sample Adaptive Offset)/ALF (Adaptive Loop Filtering) filtering to reduce encoding artifacts.
  • the filtered image is stored at a reference picture buffer (180).
  • Figure 16 illustrates a block diagram of a video decoder 200.
  • a bitstream is decoded by the decoder elements as described below.
  • Video decoder 200 generally performs a decoding pass reciprocal to the encoding pass as described in Figure 15.
  • the encoder 100 also generally performs video decoding as part of encoding video data.
  • the input of the decoder includes a video bitstream, which can be generated by video encoder 100.
  • the bitstream is first entropy decoded (230) to obtain transform coefficients, motion vectors, and other coded information.
  • the picture partition information indicates how the picture is partitioned.
  • the decoder may therefore divide (235) the picture according to the decoded picture partitioning information.
  • the transform coefficients are de-quantized (240) and inverse transformed (250) to decode the prediction residuals.
  • Combining (255) the decoded prediction residuals and the predicted block, an image block is reconstructed.
  • the predicted block can be obtained (270) from intra prediction (260) or motion-compensated prediction (i.e., inter prediction) (275).
  • In-loop filters (265) are applied to the reconstructed image.
  • the filtered image is stored at a reference picture buffer (280).
  • the decoded picture can further go through post-decoding processing (285), for example, an inverse color transform (e.g., conversion from YCbCr 4:2:0 to RGB 4:4:4) or an inverse remapping performing the inverse of the remapping process performed in the pre-encoding processing (101).
  • post-decoding processing can use metadata derived in the pre-encoding processing and signaled in the bitstream.
  • Figure 17 illustrates a block diagram of an example of a system in which various aspects and embodiments are implemented.
  • System 1000 can be embodied as a device including the various components described below and is configured to perform one or more of the aspects described in this document. Examples of such devices include, but are not limited to, various electronic devices such as personal computers, laptop computers, smartphones, tablet computers, digital multimedia set top boxes, digital television receivers, personal video recording systems, connected home appliances, and servers. Elements of system 1000, singly or in combination, can be embodied in a single integrated circuit (IC), multiple ICs, and/or discrete components. For example, in at least one embodiment, the processing and encoder/decoder elements of system 1000 are distributed across multiple ICs and/or discrete components.
  • IC integrated circuit
  • system 1000 is communicatively coupled to one or more other systems, or other electronic devices, via, for example, a communications bus or through dedicated input and/or output ports.
  • system 1000 is configured to implement one or more of the aspects described in this document.
  • the system 1000 includes at least one processor 1010 configured to execute instructions loaded therein for implementing, for example, the various aspects described in this document.
  • Processor 1010 can include embedded memory, input output interface, and various other circuitries as known in the art.
  • the system 1000 includes at least one memory 1020 (e.g., a volatile memory device, and/or a non-volatile memory device).
  • System 1000 includes a storage device 1040, which can include non-volatile memory and/or volatile memory, including, but not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, magnetic disk drive, and/or optical disk drive.
  • the storage device 1040 can include an internal storage device, an attached storage device (including detachable and non-detachable storage devices), and/or a network accessible storage device, as non-limiting examples.
  • System 1000 includes an encoder/decoder module 1030 configured, for example, to process data to provide an encoded video or decoded video, and the encoder/decoder module 1030 can include its own processor and memory.
  • the encoder/decoder module 1030 represents module(s) that can be included in a device to perform the encoding and/or decoding functions. As is known, a device can include one or both of the encoding and decoding modules. Additionally, encoder/decoder module 1030 can be implemented as a separate element of system 1000 or can be incorporated within processor 1010 as a combination of hardware and software as known to those skilled in the art.
  • processor 1010 Program code to be loaded onto processor 1010 or encoder/decoder 1030 to perform the various aspects described in this document can be stored in storage device 1040 and subsequently loaded onto memory 1020 for execution by processor 1010.
  • processor 1010, memory 1020, storage device 1040, and encoder/decoder module 1030 can store one or more of various items during the performance of the processes described in this document.
  • Such stored items can include, but are not limited to, the input video, the decoded video or portions of the decoded video, the bitstream, matrices, variables, and intermediate or final results from the processing of equations, formulas, operations, and operational logic.
  • memory inside of the processor 1010 and/or the encoder/decoder module 1030 is used to store instructions and to provide working memory for processing that is needed during encoding or decoding.
  • a memory external to the processing device (for example, the processing device can be either the processor 1010 or the encoder/decoder module 1030) is used for one or more of these functions.
  • the external memory can be the memory 1020 and/or the storage device 1040, for example, a dynamic volatile memory and/or a non-volatile flash memory.
  • an external non-volatile flash memory is used to store the operating system of, for example, a television.
  • a fast external dynamic volatile memory such as a RAM is used as working memory for video coding and decoding operations, such as for MPEG-2 (MPEG refers to the Moving Picture Experts Group, MPEG-2 is also referred to as ISO/IEC 13818, and 13818-1 is also known as H.222, and 13818-2 is also known as H.262), HEVC (HEVC refers to High Efficiency Video Coding, also known as H.265 and MPEG-H Part 2), or VVC (Versatile Video Coding, a new standard being developed by JVET, the Joint Video Experts Team).
  • the input to the elements of system 1000 can be provided through various input devices as indicated in block 1130.
  • Such input devices include, but are not limited to, (i) a radio frequency (RF) portion that receives an RF signal transmitted, for example, over the air by a broadcaster, (ii) a Component (COMP) input terminal (or a set of COMP input terminals), (iii) a Universal Serial Bus (USB) input terminal, and/or (iv) a High Definition Multimedia Interface (HDMI) input terminal.
  • the input devices of block 1130 have associated respective input processing elements as known in the art.
  • the RF portion can be associated with elements suitable for (i) selecting a desired frequency (also referred to as selecting a signal, or band-limiting a signal to a band of frequencies), (ii) downconverting the selected signal, (iii) band-limiting again to a narrower band of frequencies to select (for example) a signal frequency band which can be referred to as a channel in certain embodiments, (iv) demodulating the downconverted and band-limited signal, (v) performing error correction, and (vi) demultiplexing to select the desired stream of data packets.
  • the RF portion of various embodiments includes one or more elements to perform these functions, for example, frequency selectors, signal selectors, bandlimiters, channel selectors, filters, downconverters, demodulators, error correctors, and demultiplexers.
  • the RF portion can include a tuner that performs various of these functions, including, for example, downconverting the received signal to a lower frequency (for example, an intermediate frequency or a near-baseband frequency) or to baseband.
  • the RF portion and its associated input processing element receives an RF signal transmitted over a wired (for example, cable) medium, and performs frequency selection by filtering, downconverting, and filtering again to a desired frequency band.
  • Adding elements can include inserting elements in between existing elements, such as, for example, inserting amplifiers and an analog-to-digital converter.
  • the RF portion includes an antenna.
  • the USB and/or HDMI terminals can include respective interface processors for connecting system 1000 to other electronic devices across USB and/or HDMI connections. It is to be understood that various aspects of input processing, for example, Reed-Solomon error correction, can be implemented, for example, within a separate input processing IC or within processor 1010 as necessary. Similarly, aspects of USB or HDMI interface processing can be implemented within separate interface ICs or within processor 1010 as necessary.
  • the demodulated, error corrected, and demultiplexed stream is provided to various processing elements, including, for example, processor 1010, and encoder/decoder 1030 operating in combination with the memory and storage elements to process the datastream as necessary for presentation on an output device.
  • Various elements of system 1000 can be provided within an integrated housing. Within the integrated housing, the various elements can be interconnected and transmit data therebetween using a suitable connection arrangement, for example, an internal bus as known in the art, including the Inter-IC (I2C) bus, wiring, and printed circuit boards.
  • the system 1000 includes communication interface 1050 that enables communication with other devices via communication channel 1060.
  • the communication interface 1050 can include, but is not limited to, a transceiver configured to transmit and to receive data over communication channel 1060.
  • the communication interface 1050 can include, but is not limited to, a modem or network card and the communication channel 1060 can be implemented, for example, within a wired and/or a wireless medium.
  • in various embodiments, data is streamed to the system 1000 using a wireless network such as Wi-Fi (Wireless Fidelity), for example IEEE 802.11 (IEEE refers to the Institute of Electrical and Electronics Engineers).
  • the Wi-Fi signal of these embodiments is received over the communications channel 1060 and the communications interface 1050 which are adapted for Wi-Fi communications.
  • the communications channel 1060 of these embodiments is typically connected to an access point or router that provides access to external networks including the Internet for allowing streaming applications and other over-the-top communications.
  • Other embodiments provide streamed data to the system 1000 using a set-top box that delivers the data over the HDMI connection of the input block 1130.
  • Still other embodiments provide streamed data to the system 1000 using the RF connection of the input block 1130.
  • various embodiments provide data in a non-streaming manner.
  • various embodiments use wireless networks other than Wi-Fi, for example a cellular network or a Bluetooth network.
  • the system 1000 can provide an output signal to various output devices, including a display 1100, speakers 1110, and other peripheral devices 1120.
  • the display 1100 of various embodiments includes one or more of, for example, a touchscreen display, an organic light-emitting diode (OLED) display, a curved display, and/or a foldable display.
  • the display 1100 can be for a television, a tablet, a laptop, a cell phone (mobile phone), or another device.
  • the display 1100 can also be integrated with other components (for example, as in a smart phone), or separate (for example, an external monitor for a laptop).
  • the other peripheral devices 1120 include, in various examples of embodiments, one or more of a stand-alone digital video disc (or digital versatile disc) (DVD, for both terms) player, a disk player, a stereo system, and/or a lighting system.
  • Various embodiments use one or more peripheral devices 1120 that provide a function based on the output of the system 1000. For example, a disk player performs the function of playing the output of the system 1000.
  • control signals are communicated between the system 1000 and the display 1100, speakers 1110, or other peripheral devices 1120 using signaling such as AV.Link, Consumer Electronics Control (CEC), or other communications protocols that enable device-to-device control with or without user intervention.
  • the output devices can be communicatively coupled to system 1000 via dedicated connections through respective interfaces 1070, 1080, and 1090. Alternatively, the output devices can be connected to system 1000 using the communications channel 1060 via the communications interface 1050.
  • the display 1100 and speakers 1110 can be integrated in a single unit with the other components of system 1000 in an electronic device such as, for example, a television.
  • the display interface 1070 includes a display driver, such as, for example, a timing controller (T-Con) chip.
  • the display 1100 and speakers 1110 can alternatively be separate from one or more of the other components, for example, if the RF portion of input 1130 is part of a separate set-top box.
  • the output signal can be provided via dedicated output connections, including, for example, HDMI ports, USB ports, or COMP outputs.
  • the embodiments can be carried out by computer software implemented by the processor 1010 or by hardware, or by a combination of hardware and software. As a nonlimiting example, the embodiments can be implemented by one or more integrated circuits.
  • the memory 1020 can be of any type appropriate to the technical environment and can be implemented using any appropriate data storage technology, such as optical memory devices, magnetic memory devices, semiconductor-based memory devices, fixed memory, and removable memory, as non-limiting examples.
  • the processor 1010 can be of any type appropriate to the technical environment, and can encompass one or more of microprocessors, general purpose computers, special purpose computers, and processors based on a multi-core architecture, as non-limiting examples.
  • Decoding can encompass all or part of the processes performed, for example, on a received encoded sequence to produce a final output suitable for display.
  • processes include one or more of the processes typically performed by a decoder, for example, entropy decoding, inverse quantization, inverse transformation, and differential decoding.
  • processes also, or alternatively, include processes performed by a decoder of various implementations described in this application.
  • decoding refers only to entropy decoding
  • decoding refers only to differential decoding
  • decoding refers to a combination of entropy decoding and differential decoding.
  • encoding can encompass all or part of the processes performed, for example, on an input video sequence to produce an encoded bitstream.
  • processes include one or more of the processes typically performed by an encoder, for example, partitioning, differential encoding, transformation, quantization, and entropy encoding.
  • processes also, or alternatively, include processes performed by an encoder of various implementations described in this application.
  • encoding refers only to entropy encoding
  • encoding refers only to differential encoding
  • encoding refers to a combination of differential encoding and entropy encoding.
  • syntax elements as used herein are descriptive terms. As such, they do not preclude the use of other syntax element names.
  • Various embodiments may refer to parametric models or rate distortion optimization.
  • the balance or trade-off between the rate and distortion is usually considered, often given the constraints of computational complexity. It can be measured through a Rate Distortion Optimization (RDO) metric, or through Least Mean Square (LMS), Mean of Absolute Errors (MAE), or other such measurements.
  • Rate distortion optimization is usually formulated as minimizing a rate distortion function, which is a weighted sum of the rate and of the distortion. There are different approaches to solve the rate distortion optimization problem.
  • the approaches may be based on an extensive testing of all encoding options, including all considered modes or coding parameters values, with a complete evaluation of their coding cost and related distortion of the reconstructed signal after coding and decoding.
  • Faster approaches may also be used, to save encoding complexity, in particular with computation of an approximated distortion based on the prediction or the prediction residual signal, not the reconstructed one.
  • Mix of these two approaches can also be used, such as by using an approximated distortion for only some of the possible encoding options, and a complete distortion for other encoding options.
  • Other approaches only evaluate a subset of the possible encoding options. More generally, many approaches employ any of a variety of techniques to perform the optimization, but the optimization is not necessarily a complete evaluation of both the coding cost and related distortion.
  • the implementations and aspects described herein can be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed can also be implemented in other forms (for example, an apparatus or program).
  • An apparatus can be implemented in, for example, appropriate hardware, software, and firmware.
  • the methods can be implemented in, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between endusers.
  • references to “one embodiment” or “an embodiment” or “one implementation” or “an implementation”, as well as other variations thereof, mean that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment.
  • the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well any other variations, appearing in various places throughout this application are not necessarily all referring to the same embodiment.
  • Determining the information can include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
  • Accessing the information can include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, moving the information, copying the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • this application may refer to “receiving” various pieces of information.
  • Receiving is, as with “accessing”, intended to be a broad term.
  • Receiving the information can include one or more of, for example, accessing the information, or retrieving the information (for example, from memory).
  • “receiving” is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
  • any of the following “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B).
  • such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C).
  • This may be extended, as is clear to one of ordinary skill in this and related arts, for as many items as are listed.
  • the word “signal” refers to, among other things, indicating something to a corresponding decoder.
  • the encoder signals a particular one of a plurality of transforms, coding modes or flags.
  • the same transform, parameter, or mode is used at both the encoder side and the decoder side.
  • an encoder can transmit (explicit signaling) a particular parameter to the decoder so that the decoder can use the same particular parameter.
  • signaling can be used without transmitting (implicit signaling) to simply allow the decoder to know and select the particular parameter.
  • signaling can be accomplished in a variety of ways. For example, one or more syntax elements, flags, and so forth are used to signal information to a corresponding decoder in various embodiments. While the preceding relates to the verb form of the word “signal”, the word “signal” can also be used herein as a noun.
  • implementations can produce a variety of signals formatted to carry information that can be, for example, stored or transmitted.
  • the information can include, for example, instructions for performing a method, or data produced by one of the described implementations.
  • a signal can be formatted to carry the bitstream of a described embodiment.
  • Such a signal can be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
  • the formatting can include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
  • the information that the signal carries can be, for example, analog or digital information.
  • the signal can be transmitted over a variety of different wired or wireless links, as is known.
  • the signal can be stored on a processor-readable medium.
  • One embodiment comprises predicting a video block using an intra coding mode and reference samples from a temporally collocated reference frame.
  • One embodiment comprises using an intra prediction motion model to predict samples from a reference frame.
  • One embodiment comprises the above method wherein an intra coding mode is determined at an encoder and the mode and/or the reference frame is signaled to a corresponding decoder.
  • One embodiment comprises the above method wherein different motion models are used to determine a reference frame or a motion vector.
  • One embodiment comprises any of the above methods wherein an index is signaled to indicate a motion model, reference frame, or motion vector to be used for encoding/decoding.
  • One embodiment comprises any of the above methods using reference samples around a motion compensated block from a reference frame.
  • One embodiment comprises a bitstream or signal that includes one or more syntax elements to perform the above functions, or variations thereof.
  • One embodiment comprises a bitstream or signal that includes syntax conveying information generated according to any of the embodiments described.
  • One embodiment comprises creating and/or transmitting and/or receiving and/or decoding according to any of the embodiments described.
  • One embodiment comprises a method, process, apparatus, medium storing instructions, medium storing data, or signal according to any of the embodiments described.
  • One embodiment comprises inserting in the signaling syntax elements that enable the decoder to determine decoding information in a manner corresponding to that used by an encoder.
  • One embodiment comprises creating and/or transmitting and/or receiving and/or decoding a bitstream or signal that includes one or more of the described syntax elements, or variations thereof.
  • One embodiment comprises a TV, set-top box, cell phone, tablet, or other electronic device that performs transform method(s) according to any of the embodiments described.
  • One embodiment comprises a TV, set-top box, cell phone, tablet, or other electronic device that performs transform method(s) determination according to any of the embodiments described, and that displays (e.g. using a monitor, screen, or other type of display) a resulting image.
  • One embodiment comprises a TV, set-top box, cell phone, tablet, or other electronic device that selects, bandlimits, or tunes (e.g. using a tuner) a channel to receive a signal including an encoded image, and performs transform method(s) according to any of the embodiments described.
  • One embodiment comprises a TV, set-top box, cell phone, tablet, or other electronic device that receives (e.g. using an antenna) a signal over the air that includes an encoded image, and performs transform method(s).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Intra prediction samples are determined for encoding in video encoders and decoding in video decoders using one of several embodiments. In at least one embodiment, motion information is derived to determine reference samples to use from a reference frame. The reference samples are used in an intra prediction mode to determine prediction samples. The reference samples are used in encoding or decoding. In at least one embodiment, one of several motion models can be used to extract the motion information. The motion model, the intra mode, or the reference frame can be signaled from an encoder to a decoder. The signaling can use an index. The global motion model is computed at an encoder and sent in a slice header or picture header to a corresponding decoder. In at least one embodiment, a different reference is used for a reference frame.
PCT/EP2023/057363 2022-04-07 2023-03-22 Temporal intra mode prediction WO2023194104A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22305482.6 2022-04-07
EP22305482 2022-04-07

Publications (1)

Publication Number Publication Date
WO2023194104A1 (fr) 2023-10-12

Family

ID=81388865

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/057363 WO2023194104A1 (fr) 2023-10-12 2023-03-22 Temporal intra mode prediction

Country Status (1)

Country Link
WO (1) WO2023194104A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014205339A2 (fr) * 2013-06-21 2014-12-24 Qualcomm Incorporated Intra prediction from a predictive block

Similar Documents

Publication Publication Date Title
US20220078405A1 (en) Simplifications of coding modes based on neighboring samples dependent parametric models
AU2019354653B2 (en) Generalized bi-prediction and weighted prediction
US20220191474A1 (en) Wide angle intra prediction with sub-partitions
  • WO2021058383A1 (fr) Method and apparatus using homogeneous syntax with coding tools
  • EP4218240A1 (fr) Template matching prediction for versatile video coding
  • EP3963882A1 (fr) High level syntax simplified video coding tool set for small blocks
  • EP3861749A1 (fr) Directions for wide angle intra prediction
AU2022216783A1 (en) Spatial local illumination compensation
AU2022216783A9 (en) Spatial local illumination compensation
  • EP3709643A1 (fr) Intra prediction mode partitioning
US20230023837A1 (en) Subblock merge candidates in triangle merge mode
  • WO2022214361A1 (fr) Geometric partitions with switchable interpolation filter
  • WO2020167763A1 (fr) Intra prediction mode extension
  • WO2023194104A1 (fr) Temporal intra mode prediction
US20230336721A1 (en) Combining abt with vvc sub-block-based coding tools
US20230262268A1 (en) Chroma format dependent quantization matrices for video encoding and decoding
  • WO2024002699A1 (fr) Intra sub-partition improvements
  • WO2023194105A1 (fr) Intra mode derivation for inter-predicted coding units
  • WO2023194103A1 (fr) Temporal intra mode derivation
  • EP3994893A1 (fr) Signaling of merge indices for triangle partitions
  • EP4070547A1 (fr) Scaling method for joint chroma coded blocks
  • WO2021058408A1 (fr) Most probable mode signaling with multiple reference line intra prediction
  • WO2020102000A1 (fr) Most probable mode candidate selection adaptation depending on block shape

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23713647

Country of ref document: EP

Kind code of ref document: A1