WO2017064370A1 - Video coding with helper data for spatial intra-prediction - Google Patents

Video coding with helper data for spatial intra-prediction Download PDF

Info

Publication number
WO2017064370A1
WO2017064370A1 (PCT/FI2016/050715)
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
helper
video stream
values
value
Prior art date
Application number
PCT/FI2016/050715
Other languages
French (fr)
Inventor
Jani Lainema
Alireza Aminlou
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to EP16855007.7A priority Critical patent/EP3363201A4/en
Priority to JP2018518710A priority patent/JP2018530968A/en
Priority to CN201680059906.8A priority patent/CN108353186A/en
Priority to KR1020187013483A priority patent/KR20180069850A/en
Publication of WO2017064370A1 publication Critical patent/WO2017064370A1/en
Priority to PH12018500776A priority patent/PH12018500776A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • the described invention relates to processing of digital images and videos, and particularly to encoding and decoding of such images and video for telecommunications and storage thereof.
  • the recommended specifications ITU-T H.263 and H.264 (04/2015) provide typical hybrid video codecs in that they encode the video information in two phases. Firstly pixel values in a certain picture area (termed a "block") are predicted for example by motion compensation means or by spatial means. Motion compensation generally includes finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded; spatial means generally includes using the pixel values around the block to be coded in a specified manner. Secondly the prediction error is coded; the prediction error is the difference between the predicted block of pixels and the original block of pixels.
  • Coding the prediction error is typically done by transforming the difference in pixel values using a specified transform such as for example a Discrete Cosine Transform (DCT) or some variant of it, quantizing the coefficients and entropy-coding the quantized coefficients.
  • the encoder can control the balance between the accuracy of the pixel representation (the picture quality) and the size of the resulting coded video representation (the file size or transmission bitrate).
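  • As a concrete illustration of this two-phase structure, the following Python sketch transforms a prediction residual with a 2-D DCT and scalar-quantizes the coefficients. The block size, the orthonormal DCT construction and the single quantization step are illustrative assumptions for this sketch, not the coding chain of any particular standard.

```python
import numpy as np

def dct2_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (n x n)."""
    j = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    basis[0, :] /= np.sqrt(2.0)
    return basis

def encode_block(original, predicted, qstep):
    """Phase 2: transform the prediction error and quantize the coefficients."""
    residual = original.astype(np.int32) - predicted.astype(np.int32)
    T = dct2_matrix(residual.shape[0])
    coeffs = T @ residual @ T.T          # separable 2-D DCT of the residual
    return np.round(coeffs / qstep)      # coarser qstep: fewer bits, more distortion

def decode_block(qcoeffs, predicted, qstep):
    """Inverse path: de-quantize, inverse-transform, add back the prediction."""
    T = dct2_matrix(qcoeffs.shape[0])
    residual = T.T @ (qcoeffs * qstep) @ T
    return np.clip(predicted + np.round(residual), 0, 255).astype(np.uint8)
```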
  • another image/video coding standard is ITU-T H.265, also referred to as High Efficiency Video Coding (HEVC).
  • This approach builds intra frame sample prediction blocks using directional filtering, and projects the sample location of the sample to be predicted onto the reference row using a selected prediction direction, and also applies a 1-dimensional linear filter to interpolate a predicted value for the sample. For the case of directly horizontal or directly vertical prediction directions, one of the block boundaries is additionally filtered with a sample gradient based filter.
  • HEVC also defines direct current (DC) and planar prediction modes.
  • DC prediction calculates the DC component of the reference samples and uses that as a prediction for the samples within a block, whereas planar prediction calculates an average of two linear predictions to predict blocks with smooth sample surface.
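  • A minimal sketch of these two modes follows, assuming an NxN block with N a power of two and integer reference sample arrays; the planar weighting shown follows the form used in H.265/HEVC (vertical and horizontal linear interpolations averaged with rounding).

```python
import numpy as np

def dc_prediction(top, left):
    """DC mode: every sample in the block gets the mean of the reference samples."""
    n = len(top)
    dc = (int(np.sum(top)) + int(np.sum(left)) + n) // (2 * n)  # rounded mean
    return np.full((n, n), dc, dtype=np.int32)

def planar_prediction(top, left, top_right, bottom_left):
    """Planar mode: average of a vertical and a horizontal linear interpolation."""
    n = len(top)
    shift = n.bit_length()               # equals log2(n) + 1 for power-of-two n
    y, x = np.mgrid[0:n, 0:n]
    vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
    horz = (n - 1 - x) * left[y] + (x + 1) * top_right
    return (vert + horz + n) >> shift
```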
  • a method for decoding a video stream comprising: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
  • a computer readable memory storing computer program instructions that, when executed by one or more processors, cause a host decoder device to perform actions directed to decoding a video stream.
  • the actions include: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
  • the apparatus comprises at least one computer readable memory storing computer program instructions and at least one processor.
  • the computer readable memory with the computer program instructions is configured, with the at least one processor, to cause the apparatus to perform actions comprising: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
  • FIG. 1 is a schematic block diagram of a video encoder that uses the two-phase process of pixel prediction and prediction error as is known in the video coding arts.
  • FIG. 2 is a schematic block diagram of a generic video decoder for decoding the video encoded by the process of FIG. 1 , as is known in the video coding arts.
  • FIG. 3A illustrates a convention used in FIGs. 3B-D for sample locations of a prediction block corresponding to a reference sample in a coding unit using a vertical-only prediction direction.
  • FIG. 3B is a coding unit with one prediction unit using the convention of FIG. 3A for illustrating a conventional approach to predicting sample values.
  • FIG. 3C is a coding unit with one prediction unit using the convention of FIG. 3A for illustrating one example for linearly predicting sample values according to these teachings.
  • FIG. 3D is a coding unit with one prediction unit using the convention of FIG. 3A for illustrating one example for non-linearly predicting sample values according to these teachings.
  • FIG. 4 is a process flow diagram according to an example embodiment of these teachings.
  • FIG. 5 is a high level schematic block diagram illustrating certain apparatus/devices that are suitable for encoding and decoding a video stream practicing according to certain aspects of these teachings.
  • Video coding/decoding uses both inter-frame prediction which predicts for the subject frame from one or more other frames, and intra-frame prediction which predicts for one portion of the subject frame from one or more other portions of the subject frame.
  • FIG. 1 is a schematic block diagram of a generic video encoder that uses the two-phase process of pixel prediction and prediction error as mentioned in the background above and explained in further detail below.
  • FIG. 1 uses the following representations:
  • the source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
  • Green, Blue and Red (GBR, also known as RGB)
  • these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use.
  • the actual color representation method in use can be indicated e.g. in a coded bitstream e.g. using the Video Usability Information (VUI) syntax of H.264/AVC and/or HEVC.
  • VUI Video Usability Information
  • a component may be defined as an array or single sample from one of the three sample arrays (luma and two chroma), or as the array or a single sample of the array that composes a picture in monochrome format.
  • a picture may either be a frame or a field.
  • a frame comprises a matrix of luma samples and possibly the corresponding chroma samples.
  • a field is a set of alternate sample rows of a frame and may be used as encoder input, when the source signal is interlaced.
  • Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or chroma sample arrays may be subsampled when compared to luma sample arrays.
  • Chroma formats may be summarized as follows:
  • in 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.
  • in 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
  • in 4:4:4 sampling, each of the two chroma arrays has the same height and width as the luma array.
  • the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as preprocessing step or as part of encoding).
  • the chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of VUI of H.264/AVC or HEVC.
  • a partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
  • a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component.
  • a picture is partitioned to one or more slice groups, and a slice group contains one or more slices.
  • a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.
  • a coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning.
  • a coding tree block may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning.
  • a coding tree unit may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • a coding unit may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • video pictures are divided into coding units (CU) covering the area of the picture.
  • a CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in that CU.
  • a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes.
  • a CU with the maximum allowed size is typically referred to as a LCU (largest coding unit) or a CTU (coding tree unit), and the video picture is divided into non-overlapping CTUs.
  • a CTU can be further split into a combination of smaller CUs, for example by recursively splitting the CTU and resultant CUs.
  • Each resulting CU typically has at least one PU and at least one TU associated with it.
  • Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively.
  • Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU; for example motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs.
  • each TU is associated with information describing the prediction error decoding process for the samples within the said TU, and this information may for example include DCT coefficient information.
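  • The recursive CTU splitting described above can be sketched as a simple quadtree, where the split-decision callback stands in for whatever rate-distortion criterion the encoder actually applies (the callback name and signature are illustrative assumptions):

```python
def split_ctu(x, y, size, min_cu_size, needs_split):
    """Recursively divide a CTU at (x, y) into non-overlapping CUs.

    needs_split(x, y, size) -> bool is an encoder decision, e.g. RD-based.
    Returns a list of (x, y, size) tuples, one per resulting CU.
    """
    if size > min_cu_size and needs_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):             # visit the four quadrants
            for dx in (0, half):
                cus += split_ctu(x + dx, y + dy, half, min_cu_size, needs_split)
        return cus
    return [(x, y, size)]

# Example: keep splitting any CU larger than 16 samples -> sixteen 16x16 CUs.
print(split_ctu(0, 0, 64, 8, lambda x, y, s: s > 16))
```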
  • FIG. 2 is a schematic block diagram of a generic video decoder for decoding the video encoded by the FIG. 1 process, and FIG. 2 uses the following representations:
  • the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks using the motion or spatial information created by the encoder and stored in the compressed representation, and using prediction error decoding which is the inverse operation of the prediction error coding to recover the quantized prediction error signal in the spatial pixel domain.
  • the decoder sums up the prediction and prediction error signals (the pixel values) to form the output video frame.
  • the decoder of FIG. 2 as well as the encoder of FIG. 1 can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as a prediction reference for the forthcoming frames in the video sequence.
  • a color palette based coding can be used.
  • Palette based coding refers to a family of approaches for which a palette is defined, typically as a set of colors and associated indexes, and the value for each sample within a coding unit is expressed by indicating its index in the palette.
  • Palette based coding can achieve good coding efficiency in coding units with a relatively small number of colors, such as for example image areas which are representing computer screen content that includes only text and/or simple graphics.
  • palette index prediction approaches can be utilized, or the palette indexes can be run-length coded to be able to represent larger homogenous image areas efficiently.
  • escape coding can be utilized for the case in which the CU contains sample values that are not recurring within the CU. Escape coded samples are transmitted without referring to any of the palette indexes; instead their values are indicated individually for each escape coded sample.
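  • A toy illustration of palette coding with run-length coding of indexes and escape coding for non-palette values follows; the token format ((index, run) pairs, with escapes carrying the raw value) is an assumption made for this sketch rather than any standardized syntax.

```python
def palette_encode(samples, palette):
    """Run-length code palette indexes; values outside the palette are escape-coded."""
    lookup = {color: i for i, color in enumerate(palette)}
    runs = []                                     # list of [token, run_length]
    for value in samples:
        # An in-palette value is coded as its index; an escape carries the value.
        token = lookup.get(value, ('escape', value))
        if runs and runs[-1][0] == token:
            runs[-1][1] += 1
        else:
            runs.append([token, 1])
    return runs

def palette_decode(runs, palette):
    out = []
    for token, length in runs:
        value = token[1] if isinstance(token, tuple) else palette[token]
        out.extend([value] * length)
    return out

pal = [(255, 255, 255), (0, 0, 0)]                # e.g. white text on black
pixels = [pal[0]] * 6 + [(10, 200, 30)] + [pal[1]] * 3
assert palette_decode(palette_encode(pixels, pal), pal) == pixels
```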
  • the motion information is indicated with motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement between the image block in the picture to be coded or decoded (at the respective encoder and decoder sides) and the prediction source block in one of the previously coded or decoded pictures.
  • the predicted motion vectors are created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
  • Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and to signal the chosen candidate as the motion vector predictor.
  • the reference index of a previously coded/decoded picture can be predicted. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture.
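  • Both prediction styles can be sketched briefly: a component-wise median over adjacent blocks' motion vectors, and a candidate list from which the encoder signals an index. The candidate ordering and the cost function used here are illustrative assumptions.

```python
def median_mv(neighbor_mvs):
    """Component-wise median of adjacent blocks' motion vectors."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    return (xs[len(xs) // 2], ys[len(ys) // 2])

def choose_mv_candidate(candidates, actual_mv):
    """Pick the candidate minimizing the motion-vector-difference magnitude;
    the returned list index is what would be signaled in the bitstream."""
    def cost(i):
        cx, cy = candidates[i]
        return abs(actual_mv[0] - cx) + abs(actual_mv[1] - cy)
    return min(range(len(candidates)), key=cost)

print(median_mv([(4, 0), (6, -2), (5, 1)]))           # -> (5, 0)
print(choose_mv_candidate([(4, 0), (0, 0)], (5, 1)))  # -> 0
```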
  • typical high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging (sometimes referred to as merge mode) where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification or correction.
  • predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures and the used motion field information is signaled among a list of motion field candidates filled with motion field information of available adjacent/co-located blocks.
  • video codecs support motion compensated prediction from one source image (uni-prediction) or from two sources (bi-prediction).
  • in the case of uni-prediction a single motion vector is applied, whereas in the case of bi-prediction two motion vectors are signaled and the motion compensated predictions from the two sources are averaged to create the final sample prediction.
  • with weighted prediction, the relative weights of the two predictions can be adjusted, or a signaled offset can be added to the prediction signal.
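  • A sketch of plain and weighted bi-prediction; the 6-bit weight precision and the rounding used here are assumptions for illustration, not a standard-mandated scheme.

```python
import numpy as np

def bi_prediction(pred0, pred1):
    """Plain bi-prediction: rounded average of the two motion-compensated blocks."""
    return (pred0.astype(np.int32) + pred1.astype(np.int32) + 1) >> 1

def weighted_bi_prediction(pred0, pred1, w0, w1, offset, shift=6):
    """Weighted bi-prediction with a signaled additive offset.

    Weights are in 1/64 units (shift=6), so w0 + w1 == 64 keeps unit gain."""
    acc = w0 * pred0.astype(np.int32) + w1 * pred1.astype(np.int32)
    return ((acc + (1 << (shift - 1))) >> shift) + offset
```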
  • a similar approach can be applied to intra picture prediction.
  • in that case a displacement vector indicates where within the same picture a block of samples can be copied from to form a prediction of the block to be coded or decoded. This kind of intra block copying method can improve the coding efficiency substantially in the presence of repeating structures within the frame, such as text or other graphics.
  • the prediction residual after motion compensation or intra prediction is first transformed with a transform kernel such as DCT and then coded.
  • Typical video encoders utilize Lagrangian cost functions to find optimal coding modes, for example the desired Macroblock mode and its associated motion vectors.
  • This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
  • C = D + λR (Eq. 1), where C is the Lagrangian cost to be minimized;
  • D is the image distortion (for example, Mean Squared Error) with the mode and motion vectors considered; and
  • R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
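  • In code form the mode decision of Eq. 1 is a simple minimization. Using the sum of squared errors for D is one common choice, taken here as an assumption (as the text notes, D may be any exact or estimated distortion measure).

```python
import numpy as np

def rd_cost(original, reconstruction, bits, lam):
    """Lagrangian cost C = D + lambda * R, with D as the sum of squared errors."""
    diff = original.astype(np.int64) - reconstruction.astype(np.int64)
    return np.sum(diff * diff) + lam * bits

def best_mode(original, candidates, lam):
    """candidates: iterable of (mode_id, reconstruction, bit_count) triples.
    Returns the mode_id with the lowest Lagrangian cost."""
    return min(candidates, key=lambda c: rd_cost(original, c[1], c[2], lam))[0]
```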
  • Scalable video coding refers to a coding structure where one bitstream can contain multiple representations of the content at different bitrates, resolutions or frame rates. In these cases the receiver can extract the desired representation depending on its characteristics (for example, a resolution that best matches the capability of the display device). Alternatively, a server or a network element can extract the portions of the bitstream to be transmitted to the receiver, depending for example on the network characteristics or processing capabilities of the receiver.
  • a scalable bitstream typically consists of a "base layer" providing the lowest quality video available and one or more enhancement layers that enhance the video quality when received and decoded together with the lower layers.
  • the coded representation of that layer typically depends on the lower layers; for example the motion and mode information of the enhancement layer can be predicted from lower layers. Similarly the pixel data of the lower layers can be used to create prediction for the enhancement layer.
  • a scalable video codec for quality scalability (also known as Signal-to-Noise Ratio or SNR scalability) and/or spatial scalability may be implemented as follows.
  • for a base layer, a conventional non-scalable video encoder and decoder is used.
  • the reconstructed/decoded pictures of the base layer are included in the reference picture buffer for an enhancement layer.
  • the base layer decoded pictures may be inserted into a reference picture list(s) for coding/decoding of an enhancement layer picture similarly to the decoded reference pictures of the enhancement layer.
  • the encoder may choose a base-layer reference picture as an inter prediction reference and indicate its use, typically with a reference picture index in the coded bitstream.
  • the decoder decodes from the bitstream, for example from a reference picture index, that a base-layer picture is used as an inter prediction reference for the enhancement layer.
  • when a decoded base-layer picture is used as a prediction reference for an enhancement layer, it is referred to as an inter-layer reference picture.
  • Spatial scalability: Base layer pictures are coded at a lower resolution than enhancement layer pictures.
  • Bit-depth scalability: Base layer pictures are coded at lower bit-depth (e.g. 8 bits) than enhancement layer pictures (e.g. 10 or 12 bits).
  • Chroma format scalability: Enhancement layer pictures provide higher fidelity in chroma (e.g. coded in 4:4:4 chroma format) than base layer pictures (e.g. 4:2:0 format).
  • the base layer information could be used to code the enhancement layer in order to minimize the additional bitrate overhead.
  • Scalability can be enabled in two basic ways; either by introducing new coding modes for performing prediction of pixel values or syntax from lower layers of the scalable representation, or by placing the lower layer pictures to the reference picture buffer (which is termed a decoded picture buffer, DPB) of the higher layer.
  • the first approach is more flexible and thus can provide better coding efficiency in most cases.
  • the second approach, reference frame based scalability, can be implemented very efficiently with minimal changes to single layer codecs while still achieving a majority of the coding efficiency gains available.
  • a reference frame based scalability codec can be implemented by utilizing the same hardware or software implementation for all the layers, just taking care of the DPB management by external means.
  • according to embodiments of these teachings, the predicted sample values are modified in the direction of prediction, which allows the codec to compensate for more complex textures than traditional methods.
  • This modification is enabled by what we term a "helper indication", and from it can be determined for example a linear change in the predicted sample values when moving one sample row or column further away from the reference sample row or column.
  • FIGs. 3C-D illustrate some example prediction helpers.
  • the decoder reconstructs the intended prediction signal (or a prediction sample block) by performing the following steps:
  • the decoder can also receive an indication of prediction error and apply prediction error coding or decoding means based on said indication of the one or more helper values.
  • in an example embodiment there is one helper value h(x) for each column in the prediction block (in the case of vertical prediction modes) and one helper value h(y) for each row in the prediction block (in the case of horizontal prediction modes).
  • the helper values h(x) or h(y) are quantized and coded in the bitstream using entropy coding means.
  • in one embodiment the helper values are applied linearly, and the helper values are accumulated to the reference samples r(x) and r(y) in the direction of prediction.
  • a_y describes the fractional part of the reference sample position when predicting sample row y (defined based on the selected prediction direction, for example as specified in the H.265/HEVC standard).
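  • For the directly vertical case with integer reference positions (i.e. a_y = 0), the linear accumulation reduces to p(x, y) = r(x) + (y + 1) · h(x), sketched below; handling of fractional reference positions and the exact rounding are omitted, as they depend on the chosen prediction direction.

```python
import numpy as np

def vertical_prediction_with_helpers(ref_row, helpers, n_rows):
    """p(x, y) = r(x) + (y + 1) * h(x): each row sits one helper step further
    from the reference row, so h(x) accumulates in the direction of prediction."""
    ref = np.asarray(ref_row, dtype=np.int32)
    h = np.asarray(helpers, dtype=np.int32)
    steps = np.arange(1, n_rows + 1)[:, None]   # 1, 2, ..., n_rows
    return ref[None, :] + steps * h[None, :]
```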
  • FIGs. 3A-D illustrate reference samples r1, r2, r3 and r4 along the topmost (shaded) row of each, and a prediction block that includes prediction samples in the remaining unshaded rows apart from the reference samples.
  • these examples assume a prediction direction that is directly vertical; that is, each prediction sample references only a reference sample in the same column.
  • more generally, a given prediction sample can reference a reference sample in a different row, in a different column, or multiple reference samples in the same or different rows and/or columns.
  • the shaded lefthand column also includes reference samples apart from r0 in the un-labelled shaded boxes, but in these examples they are not used as a reference for any sample of the prediction block and so are un-labeled.
  • the heavily-lined box of Fig. 3A may be considered a CU, and the dashed-line box within it may be considered a PU for that same CU.
  • FIGs. 3B-D assume the same reference samples rl-r4, CU and PU as is shown for FIG. 3A.
  • the individual boxes/samples within the PU are each uniquely identified in FIGs. 3B-D using the same designations p11, p12, p21, etc. that are shown in FIG. 3A.
  • the designations p11, p12, p21, etc. shown in FIG. 3A may be considered to represent the prediction vectors, which will take on different values for FIGs. 3B-D.
  • the predicted samples within the prediction block/PU are generated by directionally copying the reference samples that are indicated by the prediction vectors into the box that bears the prediction vector value.
  • FIG. 3B represents the prior art vertical directional intra prediction while FIGs. 3C-D illustrate two simple embodiments of these teachings for vertical directional intra prediction.
  • the prediction vectors can reference one or more reference samples in a different CU of this same image frame (still intra prediction), or in a CU of a different frame (inter prediction).
  • each and every prediction vector p11, p21, p31 and p41 points to the box immediately above it in the same vertical column, as they must given the constraint of this simple example for directly vertical directional prediction.
  • the prediction vector p11 points to reference sample r1 and so r1 is copied to the box designated as p11.
  • the prediction vector p21 points to the box directly above it designated as p11, which now in the decoding process has a copy of the reference sample r1, and so reference sample r1 is again copied into the box designated as p21.
  • the prediction vector p31 points to the box directly above it and similarly copies the reference sample r1 into the box designated as p31, and so forth for the column under reference sample r1.
  • the other columns are similar for their respective reference samples r2, r3 and r4, such that each row of the prediction block/PU (the unshaded portion) in FIG. 3B becomes a copy of the reference samples r1-r4 in the topmost row. Because this example is restricted to a vertical-only prediction direction, the necessary result of the conventional HEVC coding/decoding is that the reference samples are repeated exactly in the same column across all rows of the prediction block as FIG. 3B shows. This illustrates the directional bias in the prediction processes of conventional video coding/decoding.
  • FIG. 3C illustrates an example where a linear prediction helper (generally, helper information 302) of magnitude +2 is signaled for the column aligned with reference sample r3 (third prediction column).
  • there is no non-zero prediction helper value in the other columns of the helper information 302 for FIG. 3C, so those columns after decoding are identical to the corresponding columns of FIG. 3B.
  • the non-zero helper value for the third prediction column adds +2 to the value of the block immediately above.
  • if the prediction vectors refer to samples above the block and the relevant value at reference sample r3 is 87, then the third prediction column for FIG. 3C will be decoded as [89, 91, 93, 95] from top to bottom.
  • Each subsequent row in the third prediction column for FIG. 3C has a value that diverges by the helper value +2 from the corresponding value in the row immediately above.
  • the sample values are pixel colors or indices of a color palette, but in other implementations these sample values can represent any of a variety of parameters that define a digital image.
  • FIG. 3D is similar to FIG. 3C in that the only non-zero helper value 302 is for the third prediction column. But in the case of FIG. 3D the helper information 302 has a magnitude +8, and more distinctively from FIG. 3C the helper information 302 for FIG. 3D was generated by a step function 304 [0, 0, 1, 1].
  • the step function is used to modulate the helper signal 302, with the result that active helpers are present only for the last two samples in the selected column. Specifically, the first two zeros of the step function 304 negate the +8 helper value for the locations p13 and p23 of the prediction block/PU at FIG. 3D.
  • the second two values of the step function 304 are unity multipliers of the helper value +8 for the locations p33 and p43 of the prediction block/PU at FIG. 3D, and so in each of these locations the magnitude +8 of the helper value is added to the magnitude of the sample this vector refers to.
  • with reference sample r3 again valued at 87, the third prediction column for FIG. 3D will be decoded as [87, 87, 95, 103] from top to bottom.
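  • The two worked columns can be checked numerically. The reference values other than r3 = 87 are arbitrary placeholders for this sketch, since only the third column carries a non-zero helper in these examples.

```python
import numpy as np

ref_row = np.array([80, 84, 87, 90])     # r1..r4; only r3 = 87 matters below
helper_c = np.array([0, 0, 2, 0])        # FIG. 3C: +2 helper on the third column
helper_d = np.array([0, 0, 8, 0])        # FIG. 3D: +8 helper on the third column
step = np.array([0, 0, 1, 1])            # FIG. 3D step function 304

# FIG. 3C: the helper accumulates linearly down each column.
fig3c = ref_row[None, :] + np.arange(1, 5)[:, None] * helper_c[None, :]
print(fig3c[:, 2])                       # [89 91 93 95]

# FIG. 3D: each row adds step[y] * helper to the value of the row above it.
fig3d = np.empty((4, 4), dtype=np.int64)
prev = ref_row.copy()
for y in range(4):
    prev = prev + step[y] * helper_d
    fig3d[y] = prev
print(fig3d[:, 2])                       # [87 87 95 103]
```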
  • the encoder decides the helper values, and in doing so may balance the sample prediction improvement against the increased bitrate needed to signal the helper values.
  • the step function can be decided by the encoder also, or it can be defined external of the encoder/decoder apparatuses (for example, a given codec can always use a pre-defined step function, which the encoder can override by signaling a different step function to be applied by the decoder for only a given PU, a given frame or for the entire video stream). Anytime the encoder decides a step function it should be signaled in the encoded bitstream so the decoder can properly decode the image frames.
  • the step function may differ from the examples above so long as both encoder and decoder have a common understanding of how it is to be applied.
  • in the examples above each box/prediction vector referred to the prediction block or reference sample immediately above it, and so for prediction block p43 the step function [0, 0, 1, 1] for FIG. 3D added the helper value +8 to the value of prediction block p33.
  • alternatively, the encoder and decoder can understand the prediction vector as pointing to the original reference sample, in which case the same step function and helper value used for FIG. 3D would yield [87, 87, 95, 95] for column 3, since for p43 the +8 helper value would be added to the value 87 of reference sample r3, as opposed to being added to the value 95 of prediction block p33.
  • whereas the conventional coding of FIG. 3B allows no color gradient within a column of a PU, the helper information alone enables a continuous color gradient within a given column, and the helper information in combination with the step function enables a discontinuous color gradient within a given column of a PU. The same holds true for a horizontal-only prediction direction.
  • while the conventional practice can choose for a given PU location only one reference pixel of the CU, so that there remains a directional bias in some respects, the benefits of coding using helper information (with or without step functions) greatly mitigate the directional bias in conventional video coding. This is true even without the artificial constraint of vertical-only prediction direction that was used in the above examples for simplicity of explanation.
  • while FIGs. 3C-D used helper values in sets of 4 of which only one helper value was non-zero, that was for simplicity of explanation and to match the 4×4 PU size; in other embodiments the number of helper values can vary from those examples and can depend on different aspects such as the prediction direction (e.g., 2 helper values can be used for one sample if that sample vector points to a horizontal reference sample and to a vertical reference sample, or if that sample vector points to a reference location in between two reference samples and the predicted sample value is generated using values of both those reference samples) and the prediction block/PU size.
  • the helper values can be indicated in various ways. As some non-limiting examples, they can be indicated as absolute values, or as differential values with respect to predicted sample values as in the FIG. 3C-D examples above, or as other helper values such as a variable the encoder and decoder use in a predetermined algorithm (e.g., a mod function on the indicated helper value).
  • the absolute or differential helper values can be coded separately, or they can be coded jointly.
  • the absolute or differential helper values can be transform coded using one or more transforms, such as for example DCT, DST or Haar (wavelet) transform.
  • the absolute or differential helper values can be quantized at the encoder and de-quantized at the decoder using various approaches, for example scalar quantization of transformed or non-transformed values.
  • the granularity of the quantization and de-quantization can be indicated explicitly with the video stream, or it can be derived from other coding parameters such as for example quantization parameters used in prediction error coding. And of course combinations of the above can also be used in still different implementations.
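  • A minimal scalar quantizer for helper values, assuming the step size qstep is either signaled explicitly or derived from the prediction-error quantization parameter as described above:

```python
def quantize_helpers(helpers, qstep):
    """Encoder side: scalar-quantize helper values to integer levels."""
    return [int(round(h / qstep)) for h in helpers]

def dequantize_helpers(levels, qstep):
    """Decoder side: scale the levels back before applying them to the prediction."""
    return [level * qstep for level in levels]

# Quantization is lossy: with qstep = 2, the helper -3 comes back as -4.
assert dequantize_helpers(quantize_helpers([2, 8, -3], 2), 2) == [2, 8, -4]
```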
  • Prediction helper values can be applied either linearly, adding a constantly increasing value for the predicted samples in the direction of prediction as shown in the FIG. 3C example, or non-linearly, adding a value that can be different from one processing line to another as in the FIG. 3D example. In the case of non-linear operation the values for each processing line can be pre-determined or signaled in the bitstream carrying the encoded video.
  • Prediction helper values can indicate how much the predicted sample value on the last row of a block should be modified, or they can indicate how much the predicted sample value on some other row (such as the first row) should be modified.
  • in the examples above the third column of a prediction block/PU was modified by a prediction helper value, but if for example the same prediction helper set 302 were signaled with an indication of horizontal prediction direction mode then the +2 or +8 values would be applied to the third row of the illustrated PU.
  • Prediction helper values may be indicated for some or all reference samples used in the prediction process, or prediction helper values may be indicated for only some pixels within the prediction block, as was the case for the FIG. 3C and 3D examples above. In the case where prediction helper values are indicated for only some pixels within the prediction block, those helper values may be projected to the reference sample row and used as if they were indicated for the reference samples.
  • Statistics of the neighboring pixels can also be used to code the prediction helper values, and/or to code how the prediction helper values are to be applied to improve the prediction (e.g., to encode the step function).
  • the neighboring pixels or prediction helper values associated with neighboring pixels can be used to define the context used in context adaptive arithmetic coding of the prediction helper values.
  • the prediction helper values can also be inferred from the neighboring pixel values, or calculated using the neighboring pixel values (e.g. considering the local changes in the sample values in the pixel neighborhood).
  • when prediction helper values are in use, the transform coefficient coding can be adapted.
  • One possibility to adapt the transform coding is to scan the coefficients in a different order or to switch positions of some or all the transform coefficients within the transform block. For example, if a DST type of transform is used in prediction error coding and horizontal prediction with prediction helper values is selected, the magnitudes of the coefficients in the first column of the forward transform output are likely to become small, and in this case that column of coefficients may be moved to the last column prior to entropy coding the coefficients.
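  • The column swap described for the DST-plus-horizontal-helper case can be sketched in a few lines; which column is moved, and when, would in practice be tied to the signaled mode (an assumption in this sketch).

```python
import numpy as np

def reorder_for_helper_mode(coeffs):
    """Move the (expected near-zero) first coefficient column to the end so that
    the small values cluster at the tail of the entropy-coding scan."""
    return np.concatenate([coeffs[:, 1:], coeffs[:, :1]], axis=1)

def undo_reorder(coeffs):
    """Decoder-side inverse: move the last column back to the front."""
    return np.concatenate([coeffs[:, -1:], coeffs[:, :-1]], axis=1)
```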
  • Another example of adapting the transform coding is to specify dedicated contexts used in arithmetic coding of the transform coefficients when intra prediction helper values are present.
  • One particularly advantageous way to indicate the prediction helper values is to embed them into the prediction error signaling.
  • some of the transform coefficients can be used to store the prediction helper values instead of a weight of a certain basis function.
  • for example, one or more of the actual transform coefficients (e.g. transform coefficients associated with the first basis functions of DST in either the horizontal or vertical direction) can be repurposed: the decoded values typically associated with those transform coefficients can be assigned as values for the intra prediction helpers, whereas the related transform coefficients can be set to zero.
  • the number of coefficients encoded or decoded can be increased from the number required by the transform itself so as to include also the intra prediction helper values as additional coefficients.
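  • A decoder-side sketch of the embedding idea: designated coefficient slots (which slots to use is an assumption here; the text suggests those of the first DST basis functions) are read out as helper values and zeroed before the inverse transform.

```python
import numpy as np

def extract_embedded_helpers(qcoeffs, helper_slots):
    """Read intra prediction helper values out of designated coefficient slots,
    then zero those slots so the inverse transform sees no extra energy."""
    coeffs = qcoeffs.copy()
    helpers = [int(coeffs[slot]) for slot in helper_slots]
    for slot in helper_slots:
        coeffs[slot] = 0
    return helpers, coeffs

# Example: helpers ride in the first column (first horizontal basis function).
block = np.arange(16).reshape(4, 4)
helpers, cleaned = extract_embedded_helpers(block, [(0, 0), (1, 0), (2, 0), (3, 0)])
print(helpers)        # [0, 4, 8, 12]
```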
  • Embodiments of these teachings can improve the accuracy of the spatial intra prediction in video and image processing.
  • Some of the implementation alternatives may require a small amount of additional processing as compared to traditional intra prediction techniques, but for most implementations this should not be substantially limiting; dedicated image/video processors have become quite common in consumer devices and are even present in many mobile devices such as the latest smartphones, for which battery consumption has become a less significant problem despite greater capabilities and higher volumes of data processing (including image/video processing).
  • FIG. 4 is a process flow diagram that summarizes some of the above aspects for decoding a video stream from the perspective of the decoder.
  • these same teachings can be used for decoding individual images that are encoded when stored or transmitted, but greater advantages are seen in utilizing these teachings for the images in a video stream rather than individual images/pictures.
  • at block 402, the decoder performing the method of FIG. 4 receives an encoded video stream along with an indication of a prediction mode and an indication of one or more prediction helper values.
  • at block 404, while decoding the encoded video stream, the decoder calculates a predicted value for each of at least one sample, and this predicted value is calculated based on the received indication of the prediction mode and on the received one or more prediction helper values.
  • at block 406, the decoder tangibly outputs the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
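  • In pseudocode form the flow of FIG. 4 is short. Every callable passed in below is a hypothetical stand-in for the codec's actual entropy decoder, helper-aware predictor and residual path; the sketch only shows where blocks 402-406 sit relative to one another.

```python
def decode_stream(parse_blocks, predict, reconstruct, output_sink):
    """Blocks 402-406 of FIG. 4 in miniature.

    parse_blocks() yields ((mode, helpers), coded_block) pairs     -> block 402
    predict(coded_block, mode, helpers) builds the prediction      -> block 404
    reconstruct(...) adds the decoded residual; output_sink stores -> block 406
    """
    for (mode, helpers), coded_block in parse_blocks():
        predicted = predict(coded_block, mode, helpers)
        output_sink.append(reconstruct(coded_block, predicted))
```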
  • the prediction mode that is indicated at block 402 could be a vertical prediction mode as in the FIG. 3C-D examples, a horizontal prediction mode, a combined vertical and horizontal prediction mode such as where a single sample refers to two different reference samples, a merge mode (such as at the macroblock level) in which there is motion information, a scalable mode such as where the encoded video stream includes a base layer and at least one enhancement layer, or a direct current mode or a planar mode as in the HEVC standard.
  • decoding the encoded video stream as in block 404 may include, for each said sample having a corresponding non-zero prediction helper value, calculating the predicted value by applying the prediction helper value to a corresponding value of a reference sample that is located along a prediction direction relative to said sample, where the prediction direction is given by the prediction mode.
  • in the FIG. 3C-D examples, r3 was in the position of such a reference sample.
  • the indication of the one or more prediction helper values from block 402 can be indicated in sets of prediction helper values as explained with respect to FIGs. 3C-D, which each had only one such set, where each such set corresponds to one prediction unit PU of a coding unit CU within the encoded video stream.
  • the CU and the PU it includes were within one image/frame of that video stream.
  • the example from FIG. 3D showed that for at least one of the sets in such a video stream the predicted values for at least two samples of the set in a same row or same column of the prediction unit are non-linear due to applying a step function to a common prediction helper value when calculating the predicted values for the same row or same column in which lay the at least two samples.
  • in FIG. 3D, +8 was the common prediction helper value.
  • the example from FIG. 3C showed that for each other set for which no step function is used when calculating the predicted values, for a given row or column of the prediction block corresponding to a non-zero prediction helper value the calculated predicted values accumulate linearly in the direction of prediction by the corresponding non-zero prediction helper value.
  • the amount of the linear accumulation was +2 per each sequential row of the PU.
  • the FIG. 3C-D examples were such that the prediction helper values were restricted to only intra prediction, in which the sample and a reference sample are in a same image frame of the decoded video stream. And also for the FIG. 3C-D examples it was stated that each of the samples p11, p12, p21, etc. was a pixel and the respective prediction helper value indicated an offset value modifying the samples predicted using a reference sample (which was r3 in FIGs. 3C-D). However, a prediction helper value can also indicate a respective index of a color palette for coloring the respective pixel as compared to another index of the color palette for coloring a corresponding reference pixel.
  • this does not mean the helper value itself needs to be a palette color index; while it is a possible implementation that the indicated helper value is an absolute value corresponding to a palette index, the value of the prediction helper can be used as an offset to the palette index actually used in the reference sample r3, and in this way the offset 'indicated' the respective index when taken relative to the index of the reference sample r3. It is noted that while the description above assumes a transmission of the video stream, since streaming video is becoming more ubiquitous and transmitted video sometimes requires a greater level of compression (and thus more encoding) than might be used when only storing the video on a computer readable memory, when video is encoded according to these teachings it would be stored with the indications mentioned at block 402 of FIG. 4.
  • the decoder can use those same indications regardless of whether the decoder receives the encoded video over a transmission channel (wired or wireless) or reads it from a memory stick that a user plugs into some host device (laptop computer; camera/media playback device; smartphone; etc.) that includes the decoder.
  • An encoder operating according to these teachings may perform the same steps shown at FIG. 4 but essentially in reverse: it receives a video stream such as a live-recorded stream from a camera in place of block 406, then while encoding that video stream it calculates for one or more prediction samples a prediction value using a prediction direction and one or more prediction helper values in place of block 404, and then it can output the encoded video stream to a memory or to a radio for transmission along with an indication of the prediction mode that defines the prediction direction and one or more indications of the prediction helper value(s) in place of block 402.
  • Fig. 5 is a schematic diagram illustrating some components of generic host devices for encoding and decoding a video stream according to these teachings, named an encoder device 10 and a decoder device 20. While a wireless channel 11 is shown between them for transmitting and receiving the encoded video stream, that encoded video stream can be transferred from encoder device 10 to decoder device 20 via a physical memory such as a removable memory card or stick, or there may be one device such as a camera with a graphical display interface that records the video, encodes it when it saves the recorded video to the memory, and decodes it when accessing the encoded video from the memory for playback on the graphical display.
  • there may also be an intermediate device, such that the encoder device 10 records and encodes the video and provides it to the intermediate device, such as a server or other central database that may store and index the encoded video within a larger video library, and the intermediate device provides the encoded video to the decoder device 20, such as an individual's laptop computer, smartphone, vehicle-mounted video display device, or the like.
  • the encoder device 10 includes a controller, such as a computer or a data processor (DP) 10A, a computer-readable memory medium embodied as a memory (MEM) 10B that stores a program of computer instructions (PROG) 10C, and it may also have a suitable wireless interface such as radio frequency (RF) transmitter/receiver combination 10D for bidirectional wireless communications with the decoder device 20 or the intermediate device via one or more antennas.
  • the encoder processing may be done by a separate DP as shown, or by the main central processing DP 10A, or some combination of both processing chips or more than only those two.
  • the wireless link between the encoder device 10 and the decoder device 20 can be direct as shown, or through an intermediate device such as a server on the Internet as described above, or as also mentioned there may be no wireless link for communicating the encoded video if either the encoded video is transferred between different devices 10, 20 via a removable memory or if the encoder device 10 and the decoder device 20 are the same device in which the encoded video is stored in the memory until decoded when it is to be played back at a graphical user interface such as a graphical display or a projector.
  • the decoder device 20 also includes a controller, such as a computer or a data processor (DP) 20A, a computer-readable memory medium embodied as a memory (MEM) 20B that stores a program of computer instructions (PROG) 20C, and it also may have a suitable wireless interface, such as RF transmitter/receiver combination 20D for communication with the encoder device 10 via one or more antennas. Similar to the encoder processing, the decoder processing may be done by a separate DP as shown, or by the main central processing DP 20A, or some combination of both processing chips or more than only those two.
  • DP data processor
  • PROG program of computer instructions
  • At least one of the PROGs 10C/20C is assumed to include program instructions that, when executed by the associated DP 10A/20A, enable the device to operate in accordance with exemplary embodiments of this invention as detailed above. That is, various exemplary embodiments of this invention may be implemented at least in part by computer software executable by the DP 10A of the encoder device 10 or by the DP 20A of the decoder device 20, or by hardware, or by a combination of software and hardware (and firmware).
  • the computer readable MEMs 10B/20B may be of any type suitable to the local technical environment and may be implemented using any one or more suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, electromagnetic, infrared, or semiconductor systems.
  • specific examples of the computer readable storage medium/memory include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the DPs 10A/20A may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multicore processor architecture, as non-limiting examples.
  • the wireless interfaces (e.g., the radios 10D/20D) may be of any type suitable to the local technical environment and may be implemented using any suitable communication technology such as individual transmitters, receivers, transceivers or a combination of such components.
  • the various embodiments of the encoder device 10 and/or the decoder device 20 can include, but are not limited to, smart phones with cameras and/or graphical displays, machine -to- machine (M2M) communication devices, cellular telephones, personal digital assistants (PDAs) having video recording and/or playback capabilities, portable computers having video recording and/or playback capabilities, image capture devices such as digital cameras having video recording and/or playback capabilities, gaming devices having video recording and/or playback capabilities, music storage and playback appliances having video recording and/or playback capabilities, Internet appliances permitting video recording and/or playback capabilities, as well as portable units or terminals that incorporate combinations of such functions. Any of these may be embodied as a hand-portable device, a wearable device, a device that is implanted in whole or in part, a vehicle-mounted communication device, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

For decoding a video stream, an encoded video stream is received with an indication of a prediction mode and an indication of one or more prediction helper values. While decoding this encoded video stream, a predicted value is calculated for each of at least one sample based on the indicated prediction mode and on the prediction helper value(s). The decoded video stream, incorporating each of the at least one sample as decoded using the respective calculated predicted value, is then output to at least one of a computer readable memory and a graphical display. An optional step function can be utilized to aid the helper values and enable non-linear prediction values in a given row or column of a prediction unit. At the encoder side the encoder decides the prediction helper values that it uses to encode the video stream that is stored or transmitted to the decoder.

Description

VIDEO CODING WITH HELPER DATA FOR SPATIAL INTRA-PREDICTION
TECHNOLOGICAL FIELD:
The described invention relates to processing of digital images and videos, and particularly to encoding and decoding of such images and video for telecommunications and storage thereof.
BACKGROUND:
Acronyms used herein are listed below following the detailed description. The recommended specifications ITU-T H.263 and H.264 (04/2015) provide typical hybrid video codecs in that they encode the video information in two phases. Firstly pixel values in a certain picture area (termed a "block") are predicted for example by motion compensation means or by spatial means. Motion compensation generally includes finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded; spatial means generally includes using the pixel values around the block to be coded in a specified manner. Secondly the prediction error is coded; the prediction error is the difference between the predicted block of pixels and the original block of pixels. Coding the prediction error is typically done by transforming the difference in pixel values using a specified transform such as for example a Discrete Cosine Transform (DCT) or some variant of it, quantizing the coefficients and entropy-coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (the picture quality) and the size of the resulting coded video representation (the file size or transmission bitrate).
Another image/video coding standard is ITU-T H.265, also referred to as High Efficiency Video Coding (HEVC). This approach builds intra frame sample prediction blocks using directional filtering: it projects the sample location of the sample to be predicted onto the reference row using a selected prediction direction, and applies a one-dimensional linear filter to interpolate a predicted value for the sample. For the case of directly horizontal or directly vertical prediction directions, one of the block boundaries is additionally filtered with a sample gradient based filter. HEVC also defines direct current (DC) and planar prediction modes. DC prediction calculates the DC component of the reference samples and uses that as a prediction for the samples within a block, whereas planar prediction calculates an average of two linear predictions to predict blocks with a smooth sample surface.
From the above review it is clear that spatial intra prediction typically creates a sample prediction block based on decoded samples around the block. That approach is able to model certain kinds of structures in the block very well, but at the same time it fails to predict some common classes of textures. For example, directional sample prediction is able to accurately model shapes that match the supported prediction directions, but when moving further away from the reference samples the prediction tends to become less reliable, and often some prediction error aligned with the selected prediction direction begins to appear. Embodiments of these teachings, detailed more particularly below, address this shortfall of the prior art and others.
SUMMARY:
According to a first aspect of these teachings there is a method for decoding a video stream, the method comprising: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
According to a second aspect of these teachings there is a computer readable memory storing computer program instructions that, when executed by one or more processors, cause a host decoder device to perform actions directed to decoding a video stream. In this regard the actions include: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
According to a third aspect of these teachings there is an apparatus for decoding a video stream. The apparatus comprises at least one computer readable memory storing computer program instructions and at least one processor. The computer readable memory with the computer program instructions is configured, with the at least one processor, to cause the apparatus to perform actions comprising: receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values; while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
These and other aspects of the invention are detailed below with more particularity.
BRIEF DESCRIPTION OF THE DRAWINGS:
FIG. 1 is a schematic block diagram of a video encoder that uses the two-phase process of pixel prediction and prediction error as is known in the video coding arts.
FIG. 2 is a schematic block diagram of a generic video decoder for decoding the video encoded by the process of FIG. 1, as is known in the video coding arts.
FIG. 3A illustrates a convention used in FIGs. 3B-D for sample locations of a prediction block corresponding to a reference sample in a control unit using a vertical-only prediction direction.
FIG. 3B is a control unit with one prediction unit using the convention of FIG. 3A for illustrating a conventional approach to predicting sample values.
FIG. 3C is a control unit with one prediction unit using the convention of FIG. 3A for illustrating one example for linearly predicting sample values according to these teachings.
FIG. 3D is a control unit with one prediction unit using the convention of FIG. 3A for illustrating one example for non-linearly predicting sample values according to these teachings.
FIG. 4 is a process flow diagram according to an example embodiment of these teachings.
FIG. 5 is a high level schematic block diagram illustrating certain apparatus/devices that are suitable for encoding and decoding a video stream practicing according to certain aspects of these teachings.
DETAILED DESCRIPTION:
Videos are streams of individual pictures or images in sequence, and video coding exploits the fact that among discrete sets of images often much of the image remains unchanged or changes very little. As such, one image can be constructed with reference to other images that are near in sequence around the picture of interest, as well as with reference to other sections of the same image; often individual images of a video are referred to as frames. Video coding/decoding thus uses both inter-frame prediction, which predicts for the subject frame from one or more other frames, and intra-frame prediction, which predicts for one portion of the subject frame from one or more other portions of the subject frame. A codec refers typically to the software that executes a specific coding and decoding procedure, though in principle nothing prevents a codec from being embodied in hardware (circuitry) or a combination of hardware and software.

To better appreciate the advance these teachings provide over conventional codecs, FIG. 1 is a schematic block diagram of a generic video encoder that uses the two-phase process of pixel prediction and prediction error as mentioned in the background above and explained in further detail below. FIG. 1 uses the following representations:
In: Image to be encoded
P'n: Predicted representation of an image block
Dn: Prediction error signal
D'n: Reconstructed prediction error signal
I'n: Preliminary reconstructed image
R'n: Final reconstructed image
T, T^-1: Transform and inverse transform
Q, Q^-1: Quantization and inverse quantization
E: Entropy encoding
RFM: Reference frame memory
Pinter: Inter prediction
Pintra: Intra prediction
MS: Mode selection
F: Filtering
The source and decoded pictures are each comprised of one or more sample arrays, such as one of the following sets of sample arrays:
• Luma (Y) only (monochrome).
• Luma and two chroma (YCbCr or YCgCo).
• Green, Blue and Red (GBR, also known as RGB).
• Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).
In the following, these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use. The actual color representation method in use can be indicated e.g. in a coded bitstream, e.g. using the Video Usability Information (VUI) syntax of H.264/AVC and/or HEVC. A component may be defined as an array or a single sample from one of the three sample arrays (luma and two chroma), or the array or a single sample of the array that composes a picture in monochrome format. In H.264/AVC and HEVC, a picture may either be a frame or a field. A frame comprises a matrix of luma samples and possibly the corresponding chroma samples. A field is a set of alternate sample rows of a frame and may be used as encoder input when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or chroma sample arrays may be subsampled when compared to luma sample arrays. Chroma formats may be summarized as follows:
  • In monochrome sampling there is only one sample array, which may be nominally considered the luma array.
  • In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array.
  • In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
  • In 4:4:4 sampling when no separate color planes are in use, each of the two chroma arrays has the same height and width as the luma array.
• In H.264/AVC and HEVC, it is possible to code sample arrays as separate color planes into the bitstream and respectively decode separately coded color planes from the bitstream. When separate color planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.
When chroma subsampling is in use (e.g. 4:2:0 or 4:2:2 chroma sampling), the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as a preprocessing step or as part of encoding). The chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of VUI of H.264/AVC or HEVC.
A partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
In H.264/AVC, a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component. In H.264/AVC, a picture is partitioned into one or more slice groups, and a slice group contains one or more slices. In H.264/AVC, a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.

When describing the operation of HEVC encoding and/or decoding, the following terms may be used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
In some video codecs such as HEVC, video pictures are divided into coding units (CU) covering the area of the picture. A CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in that CU. Typically, a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes. A CU with the maximum allowed size is typically referred to as an LCU (largest coding unit) or a CTU (coding tree unit), and the video picture is divided into non-overlapping CTUs. A CTU can be further split into a combination of smaller CUs, for example by recursively splitting the CTU and resultant CUs. Each resulting CU typically has at least one PU and at least one TU associated with it. Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively. Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU, for example motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs. Similarly, each TU is associated with information describing the prediction error decoding process for the samples within that TU, and this information may for example include DCT coefficient information. In the storage and/or transmission of encoded video it is typically signaled at the CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for that same CU. The division of the image into CUs, and the division of CUs into PUs and TUs, is typically signaled in the bitstream transmitting the video (or stored with the video if encoded video is put into a computer readable memory), which allows the decoder to reproduce the intended structure of these units.
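By way of a non-limiting illustration, the following Python sketch (not part of the patent text) shows the kind of recursive quadtree splitting just described, by which a CTU is partitioned into CUs; the split decision should_split is a hypothetical stand-in for the encoder's actual mode decision logic.

MIN_CU_SIZE = 8

def split_ctu(x, y, size, should_split, cus):
    # Recursively split the square block at (x, y); each leaf becomes one CU.
    if size > MIN_CU_SIZE and should_split(x, y, size):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                split_ctu(x + dx, y + dy, half, should_split, cus)
    else:
        cus.append((x, y, size))  # one resulting coding unit

cus = []
split_ctu(0, 0, 64, lambda x, y, s: s > 32, cus)  # splits the 64x64 CTU once
print(cus)  # four 32x32 CUs: [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]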
FIG. 2 is a schematic block diagram of a generic video decoder for decoding the video encoded by the FIG. 1 process, and FIG. 2 uses the following representations:
P'n: Predicted representation of an image block
D'n: Reconstructed prediction error signal
I'n: Preliminary reconstructed image
R'n: Final reconstructed image
T^-1: Inverse transform
Q^-1: Inverse quantization
E: Entropy decoding
RFM: Reference frame memory
P: Prediction (either inter or intra)
F: Filtering
The decoder reconstructs the output video by applying prediction means similar to those of the encoder, to form a predicted representation of the pixel blocks using the motion or spatial information created by the encoder and stored in the compressed representation, and by applying prediction error decoding, which is the inverse operation of the prediction error coding, to recover the quantized prediction error signal in the spatial pixel domain. After applying the prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (the pixel values) to form the output video frame. The decoder of FIG. 2, as well as the encoder of FIG. 1, can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as a prediction reference for the forthcoming frames in the video sequence.

Instead of or in addition to approaches utilizing sample value prediction and transform coding for indicating the coded sample values, color palette based coding can be used. Palette based coding refers to a family of approaches for which a palette is defined, typically as a set of colors and associated indexes, and the value for each sample within a coding unit is expressed by indicating its index in the palette. Palette based coding can achieve good coding efficiency in coding units with a relatively small number of colors, such as for example image areas representing computer screen content that includes only text and/or simple graphics. In order to improve the coding efficiency of palette coding, different kinds of palette index prediction approaches can be utilized, or the palette indexes can be run-length coded to be able to represent larger homogeneous image areas efficiently. Also, escape coding can be utilized for the case in which the CU contains sample values that are not recurring within the CU. Escape coded samples are transmitted without referring to any of the palette indexes; instead their values are indicated individually for each escape coded sample.

In typical video codecs the motion information is indicated with motion vectors associated with each motion compensated image block. Each of these motion vectors represents the displacement between the image block in the picture to be coded or decoded (at the respective encoder and decoder sides) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, those are typically coded differentially with respect to block specific predicted motion vectors. In typical video codecs the predicted motion vectors are created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signal the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, the reference index of a previously coded/decoded picture can be predicted. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture.
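As a non-limiting illustration of the median based motion vector prediction just mentioned, the following Python sketch (not from the patent text; the neighboring motion vectors are hypothetical) forms a predicted motion vector from three neighboring blocks and codes only the difference to the actual motion vector:

def median_mv_predictor(mv_left, mv_above, mv_above_right):
    # Component-wise median of the three candidate motion vectors.
    def median3(a, b, c):
        return sorted((a, b, c))[1]
    return (median3(mv_left[0], mv_above[0], mv_above_right[0]),
            median3(mv_left[1], mv_above[1], mv_above_right[1]))

mvp = median_mv_predictor((4, 0), (6, -2), (5, 1))
mvd = (5 - mvp[0], 0 - mvp[1])  # only the difference to the actual MV (5, 0) is coded
print(mvp, mvd)                 # (5, 0) (0, 0)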
Moreover, typical high efficiency video codecs can employ an additional motion information coding/decoding mechanism, often called merging (sometimes referred to as merge mode) where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification or correction. Similarly, predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures and the used motion field information is signaled among a list of motion field candidates filled with motion field information of available adjacent/co-located blocks.
Typically video codecs support motion compensated prediction from one source image (uni-prediction) or from two sources (bi-prediction). In the case of uni-prediction a single motion vector is applied whereas in the case of bi-prediction two motion vectors are signaled and the motion compensated predictions from the two sources are averaged to create the final sample prediction. In the case of weighted prediction the relative weights of the two predictions can be adjusted, or a signaled offset can be added to the prediction signal. In addition to applying motion compensation for inter picture prediction, a similar approach can be applied to intra picture prediction. In this case the displacement vector indicates where from the same picture a block of samples can be copied to form a prediction of the block to be coded or decoded. This kind of intra block copying method can improve the coding efficiency substantially in the presence of repeating structures within the frame such as text or other graphics.
In typical video codecs the prediction residual after motion compensation or intra prediction is first transformed with a transform kernel such as the DCT and then coded. The reason for this is that often there still exists some correlation within the residual, and the transform kernel can in many cases help reduce this correlation and provide more efficient coding.
Typical video encoders utilize Lagrangian cost functions to find optimal coding modes, for example the desired macroblock mode and its associated motion vectors. This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:

C = D + λR (Eq. 1)

where C is the Lagrangian cost to be minimized, D is the image distortion (for example, Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors).
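By way of a non-limiting illustration, the following Python sketch (not from the patent text; the candidate modes and their distortion/rate figures are hypothetical) applies Eq. 1 to select the mode with the lowest Lagrangian cost:

def best_mode(candidates, lam):
    # Each candidate is (mode_name, distortion D, rate R in bits); cost per Eq. 1.
    return min(candidates, key=lambda c: c[1] + lam * c[2])

candidates = [("intra_vertical", 120.0, 40),
              ("intra_planar", 90.0, 75),
              ("merge", 150.0, 12)]
print(best_mode(candidates, lam=0.1))  # ('intra_planar', 90.0, 75): cost 97.5
print(best_mode(candidates, lam=5.0))  # ('merge', 150.0, 12): cost 210, rate now dominates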
Scalable video coding refers to a coding structure where one bitstream can contain multiple representations of the content at different bitrates, resolutions or frame rates. In these cases the receiver can extract the desired representation depending on its characteristics, for example a resolution that best matches the capability of the display device. Alternatively, a server or a network element can extract the portions of the bitstream to be transmitted to the receiver, depending for example on the network characteristics or the processing capabilities of the receiver. A scalable bitstream typically consists of a "base layer" providing the lowest quality video available and one or more enhancement layers that enhance the video quality when received and decoded together with the lower layers. In order to improve coding efficiency for the enhancement layers, the coded representation of a given layer typically depends on the lower layers; for example the motion and mode information of the enhancement layer can be predicted from lower layers. Similarly the pixel data of the lower layers can be used to create a prediction for the enhancement layer.
A scalable video codec for quality scalability (also known as Signal-to-Noise Ratio or SNR) and/or spatial scalability may be implemented as follows. For a base layer, a conventional non-scalable video encoder and decoder is used. The reconstructed/decoded pictures of the base layer are included in the reference picture buffer for an enhancement layer. In H.264/AVC, HEVC, and similar codecs using reference picture list(s) for inter prediction, the base layer decoded pictures may be inserted into a reference picture list(s) for coding/decoding of an enhancement layer picture similarly to the decoded reference pictures of the enhancement layer. Consequently, the encoder may choose a base-layer reference picture as an inter prediction reference and indicate its use, typically with a reference picture index in the coded bitstream. The decoder decodes from the bitstream, for example from a reference picture index, that a base-layer picture is used as an inter prediction reference for the enhancement layer. When a decoded base-layer picture is used as a prediction reference for an enhancement layer, it is referred to as an inter-layer reference picture.
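A minimal Python sketch (not from the patent text; the picture names and index values are hypothetical) of the reference picture list handling just described, in which the decoded base-layer picture is placed among the enhancement layer's references and selected with an ordinary reference picture index:

# Enhancement-layer reference picture list, as plain string placeholders.
enh_ref_list = ["enh_poc_8", "enh_poc_4"]        # enhancement-layer references
enh_ref_list.append("base_poc_12_upsampled")     # inter-layer reference picture

ref_idx = 2                                      # reference index signaled in the bitstream
prediction_source = enh_ref_list[ref_idx]        # -> the base-layer picture
print(prediction_source)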
In addition to quality scalability, the following scalability modes exist:
  • Spatial scalability: Base layer pictures are coded at a lower resolution than enhancement layer pictures.
  • Bit-depth scalability: Base layer pictures are coded at a lower bit-depth (e.g. 8 bits) than enhancement layer pictures (e.g. 10 or 12 bits).
  • Chroma format scalability: Base layer pictures provide lower fidelity in chroma (e.g. coded in 4:2:0 chroma format) than enhancement layer pictures (e.g. 4:4:4 format).
In all of the above scalability cases, the base layer information could be used to code the enhancement layer in order to minimize the additional bitrate overhead.
Scalability can be enabled in two basic ways; either by introducing new coding modes for performing prediction of pixel values or syntax from lower layers of the scalable representation, or by placing the lower layer pictures to the reference picture buffer (which is termed a decoded picture buffer, DPB) of the higher layer. The first approach is more flexible and thus can provide better coding efficiency in most cases. However the second approach, reference frame based scalability, can be implemented very efficiently with minimal changes to single layer codecs while still achieving a majority of the coding efficiency gains available. Essentially a reference frame based scalability codec can be implemented by utilizing the same hardware or software implementation for all the layers, just taking care of the DPB management by external means.
But as mentioned in the background section above, conventional spatial intra prediction does not predict some common classes of textures very well, and directional sample prediction sometimes aligns the prediction error with the selected prediction direction. It is in these aspects that the advantages of the encoding/decoding techniques described herein are most pronounced.
In one aspect of these teachings the predicted sample values are modified in the direction of prediction, which allows the codec to compensate for more complex textures than traditional methods. This modification is enabled by what we term a "helper indication", and from it can be determined, for example, a linear change in the predicted sample values when moving one sample row or column further away from the reference sample row or column. FIGs. 3C-D illustrate some example prediction helpers.
In another aspect of these teachings there is a modification to the prediction error coding to optimize for the improved prediction performance, while also avoiding redundancy in the signaling of prediction error information. This can be achieved, for example, by switching positions of one or more lines of transform coefficients in the residual coding phase. In one particular embodiment below the intra prediction helper information can be embedded in the coded prediction error signal.
Consider one particular embodiment from the perspective of the decoder side. In this case the decoder reconstructs the intended prediction signal (or a prediction sample block) by performing the following steps:
1. Receive an indication of a prediction mode;
2. Receive an indication of one or more intra prediction helper values; and
3. Calculate a predicted value for a sample based on said indication of the prediction mode and said indication of the one or more helper values.
4. Optionally the decoder can also receive an indication of prediction error and apply prediction error coding or decoding means based on said indication of the one or more helper values.
In this particular embodiment there can be one helper value h(x) for each column in the prediction block (in the case of vertical prediction modes) and one helper value h(y) for each row in the prediction block (in the case of horizontal prediction modes). The helper values h(x) or h(y) are quantized and coded in the bitstream using entropy coding means. The helper values are applied linearly, and they are accumulated onto the reference samples r(x) and r(y) in the direction of prediction.
In the case of directly vertical prediction the predicted sample values p(x, y) can be represented as:

p(x, y) = r(x) + y*h(x) (Eq. 2)
In the more generic case the predicted sample values p(x, y) can be represented as:

p(x, y) = ay*r(x+ny) + (1-ay)*r(x+ny+1) + y*(ay*h(x+ny) + (1-ay)*h(x+ny+1)) (Eq. 3)

where ny is an integer and ay describes the fractional part of the reference sample position when predicting sample row y (both defined based on the selected prediction direction, for example as specified in the H.265/HEVC standard).
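By way of a non-limiting illustration, a minimal Python sketch (not from the patent text) of Eq. 2, directly vertical prediction with one linear helper value per column; with y counted from 1 on the first predicted row, the values below reproduce the FIG. 3C example discussed further down:

def predict_vertical(r, h, height):
    # Eq. 2: p(x, y) = r(x) + y * h(x), with y = 1 on the first predicted row.
    return [[r[x] + y * h[x] for x in range(len(r))]
            for y in range(1, height + 1)]

block = predict_vertical(r=[87, 87, 87, 87], h=[0, 0, 2, 0], height=4)
print([row[2] for row in block])  # third column: [89, 91, 93, 95]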
FIGs. 3A-D illustrate reference samples r1, r2, r3 and r4 along the topmost (shaded) row of each, and a prediction block that includes prediction samples in the remaining unshaded rows apart from the reference samples. For simplicity of explanation these examples assume a prediction direction that is directly vertical; that is, each sample of the prediction block references only a reference sample in the same column. In other embodiments a given prediction sample can reference a reference sample in a different row, in a different column, or multiple reference samples in the same or different rows and/or columns. The shaded left-hand column also includes reference samples apart from r0 in the un-labeled shaded boxes, but in these examples they are not used as a reference for any of the prediction block samples and so are un-labeled.
Generally the heavily-lined box of FIG. 3A may be considered a CU, and the dashed-line box within it may be considered a PU for that same CU. For FIGs. 3B-D assume the same reference samples r1-r4, CU and PU as shown for FIG. 3A. The individual boxes/samples within the PU are each uniquely identified in FIGs. 3B-D using the same designations p11, p12, p21, etc. that are shown in FIG. 3A.

The designations p11, p12, p21, etc. shown in FIG. 3A may be considered to represent the prediction vectors, which will take on different values for FIGs. 3B-D. The predicted samples within the prediction block/PU are generated by directionally copying the reference samples that are indicated by the prediction vectors into the box that bears the prediction vector value. FIG. 3B represents the prior art vertical directional intra prediction while FIGs. 3C-D illustrate two simple embodiments of these teachings for vertical directional intra prediction. In other embodiments the prediction vectors can reference one or more reference samples in a different CU of this same image frame (still intra prediction), or in a CU of a different frame (inter prediction).

First consider the prior art approach that FIG. 3B represents. In the column in which reference sample r1 lies, each and every prediction vector p11, p21, p31 and p41 points to the box immediately above it in the same vertical column, as they must given the constraint of this simple example of directly vertical directional prediction. The prediction vector p11 points to reference sample r1 and so r1 is copied to the box designated as p11. The prediction vector p21 points to the box directly above it designated as p11, which now in the decoding process has a copy of the reference sample r1, and so reference sample r1 is again copied into the box designated as p21. The prediction vector p31 points to the box directly above it and similarly copies the reference sample r1 into the box designated as p31, and so forth for the column under reference sample r1. The other columns are similar for their respective reference samples r2, r3 and r4, such that each row of the prediction block/PU (the unshaded portion) in FIG. 3B becomes a copy of the reference samples r1-r4 in the topmost row. Because this example is restricted to a vertical-only prediction direction, the necessary result of the conventional HEVC coding/decoding is that the reference samples are repeated exactly in the same column across all rows of the prediction block, as FIG. 3B shows. This illustrates the directional bias in the prediction processes of conventional video coding/decoding.
FIG. 3C illustrates an example where a linear prediction helper (generally, helper information 302) of magnitude +2 is signaled for the column aligned with reference sample r3 (the third prediction column). There is no non-zero prediction helper value in the other columns of the helper information 302 for FIG. 3C, so those columns after decoding are identical to the corresponding columns of FIG. 3B. But the non-zero helper value for the third prediction column adds +2 to the value of the block immediately above. Assuming as one specific example that the prediction vectors refer to samples above the block and the relevant value at reference sample r3 is 87, then the third prediction column for FIG. 3C will be decoded as follows:
Location             | Value of sample the vector references | Value for current location
Reference sample r3  | N/A                                   | 87
Prediction block p13 | 87                                    | 89
Prediction block p23 | 89                                    | 91
Prediction block p33 | 91                                    | 93
Prediction block p43 | 93                                    | 95
Each subsequent row in the third prediction column for FIG. 3C has a value that diverges by the helper value +2 from the corresponding value in the row immediately above. In one specific example the sample values are pixel colors or indices of a color palette, but in other implementations these sample values can represent any of a variety of parameters that define a digital image.
Now consider the FIG. 3D example, similar to FIG. 3C in that the only non-zero helper value 302 is for the third prediction column. But in the case of FIG. 3D the helper information 302 has a magnitude +8, and more distinctively from FIG. 3C the helper information 302 for FIG. 3D was generated with a step function 304 [0, 0, 1, 1]. In this specific example the step function is used to modulate the helper signal 302, with the result that active helpers are only present for the last two samples in the selected column. Specifically, the first two zeros of the step function 304 negate the +8 helper value for the locations p13 and p23 of the prediction block/PU at FIG. 3D, and so in these locations the reference sample r3 is copied without alteration. The second two values of the step function 304 are unity multipliers of the helper value +8 for the locations p33 and p43 of the prediction block/PU at FIG. 3D, and so in each of these locations the magnitude +8 of the helper value is added to the magnitude of the sample this vector refers to. Continuing the assumption of FIG. 3C that the prediction vectors refer to samples above the block and the relevant value at reference sample r3 is 87, the third prediction column for FIG. 3D will be decoded as follows:
Location             | Value of sample the vector references | Value for current location
Reference sample r3  | N/A                                   | 87
Prediction block p13 | 87                                    | 87
Prediction block p23 | 87                                    | 87
Prediction block p33 | 87                                    | 95
Prediction block p43 | 95                                    | 103
The encoder decides the helper values, and in doing so may balance the sample prediction improvement against the increased bitrate needed to signal the helper values. The step function can be decided by the encoder also, or it can be defined external to the encoder/decoder apparatuses (for example, a given codec can always use a pre-defined step function, which the encoder can override by signaling a different step function to be applied by the decoder only for a given PU, a given frame, or for the entire video stream). Anytime the encoder decides a step function it should be signaled in the encoded bitstream so the decoder can properly decode the image frames.

In practice the manner in which the step function is applied may differ from the examples above so long as both encoder and decoder have a common understanding of how it is to be applied. In the above examples each box of the prediction block referred to the prediction sample or reference sample immediately above it, and so for prediction block p43 the step function [0, 0, 1, 1] for FIG. 3D added the helper value +8 to the value of prediction block p33. But in other conventions the encoder and decoder can understand the prediction vector as pointing to the original reference sample, in which case the same step function and helper value used for FIG. 3D would yield [87, 87, 95, 95] for column 3, since for p43 the +8 helper value would be added to the value 87 of reference sample r3, as opposed to being added to the value 95 of prediction block p33.

Comparing FIGs. 3B, 3C and 3D for this example and assuming a vertical-only prediction direction, the conventional coding of FIG. 3B allows no color gradient within a column of a PU, the helper information enables a continuous color gradient within a given column, and the helper information in combination with the step function enables a discontinuous color gradient within a given column of a PU. The same holds true for a horizontal-only prediction direction. For combinations of vertical and horizontal, the conventional practice can choose for a given PU location only one reference pixel of the CU, which is still a directional bias in some respects, and so the benefits of coding using helper information (with or without step functions) greatly mitigate the directional bias in conventional video coding. This is true even without the artificial constraint of vertical-only prediction direction that was used in the above examples for simplicity of explanation.
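A minimal Python sketch (not from the patent text) of the step-modulated helper accumulation under the first convention above, in which each predicted sample refers to the sample immediately above it; a step function of all ones reproduces the linear FIG. 3C behavior:

def predict_column(ref, helper, step):
    # Accumulate the helper into the running prediction only where the
    # step function is 1.
    out, value = [], ref
    for s in step:
        value += s * helper
        out.append(value)
    return out

print(predict_column(ref=87, helper=8, step=[0, 0, 1, 1]))  # [87, 87, 95, 103] (FIG. 3D)
print(predict_column(ref=87, helper=2, step=[1, 1, 1, 1]))  # [89, 91, 93, 95]  (FIG. 3C)
# Under the alternative convention (vectors point to the reference sample itself)
# the FIG. 3D case would instead yield [87, 87, 95, 95].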
There are various implementations for embodying these teachings. For example, there are various ways to indicate the prediction mode which gives the prediction direction (e.g., vertical, horizontal, or combined/merge mode if two reference samples are used for one prediction sample). While the examples at FIGs. 3C-D used helper values in sets of 4 of which only one helper value was non-zero, that was for simplicity of explanation and to match the 4x4 PU size; in other embodiments the number of helper values can vary from those examples and can depend on different aspects such as the prediction direction (e.g., 2 helper values can be used for one sample if that sample's vector points to a horizontal reference sample and to a vertical reference sample, or if that sample's vector points to a reference location in between two reference samples and the predicted sample value is generated using values of both those reference samples) and the prediction block/PU size.
There are also a wide variety of ways in which the helper values can be indicated. As some non-limiting examples, they can be indicated as absolute values, or as differential values with respect to predicted sample values as in the FIG. 3C-D examples above, or as other helper values such as a variable the encoder and decoder use in a predetermined algorithm (e.g., a mod function on the indicated helper value). In any of these examples the absolute or differential helper values can be coded separately, or they can be coded jointly. In the case of joint coding the absolute or differential helper values can be transform coded using one or more transforms, such as for example the DCT, DST or Haar (wavelet) transform. The absolute or differential helper values can be quantized at the encoder and de-quantized at the decoder using various approaches, for example scalar quantization of transformed or non-transformed values. The granularity of the quantization and de-quantization can be indicated explicitly with the video stream, or it can be derived from other coding parameters such as for example quantization parameters used in prediction error coding. And of course combinations of the above can also be used in still different implementations.

Prediction helper values can be applied either linearly, adding a constantly increasing value to the predicted samples in the direction of prediction as shown in the FIG. 3C example, or non-linearly, adding a value that can differ from one processing line to another as in the FIG. 3D example. In the case of non-linear operation the values for each processing line can be pre-determined or signaled in the bitstream carrying the encoded video.
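By way of a non-limiting illustration of the scalar quantization option just mentioned, a Python sketch (not from the patent text; the step size q is a hypothetical parameter that would be indicated with the stream or derived from the prediction error quantization parameters):

def quantize_helpers(h, q):
    # Encoder side: quantized levels are what get entropy coded into the bitstream.
    return [round(v / q) for v in h]

def dequantize_helpers(levels, q):
    # Decoder side: reconstructed helper values used in the prediction process.
    return [lvl * q for lvl in levels]

levels = quantize_helpers([0.0, 0.0, 2.3, 0.0], q=2)
print(levels, dequantize_helpers(levels, q=2))  # [0, 0, 1, 0] [0, 0, 2, 0]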
Prediction helper values can indicate how much the predicted sample value on the last row of a block should be modified, or they can indicate how much the predicted sample value on some other row (such as the first row) should be modified. In the above examples only the third column of a prediction block/PU was modified by a prediction helper value, but if for example the same prediction helper set 302 were signaled with an indication of horizontal prediction direction mode, then the +2 or +8 values would be applied to the third row of the illustrated PU.
Prediction helper values may be indicated for some or all reference samples used in the prediction process, or prediction helper values may be indicated for only some pixels within the prediction block, as was the case for the FIG. 3C and 3D examples above. In the case where prediction helper values are indicated for some pixels within the prediction block, those helper values may be projected to the reference sample row and used as if they were indicated for the reference samples.
Statistics of the neighboring pixels can also be used to code the prediction helper values, and/or to code how the prediction helper values are to be applied to improve the prediction (e.g., to encode the step function). As an example, the neighboring pixels or prediction helper values associated with neighboring pixels can be used to define the context used in context adaptive arithmetic coding of the prediction helper values. In some cases the prediction helper values can also be inferred from the neighboring pixel values, or calculated using the neighboring pixel values (e.g. considering the local changes in the sample values in the pixel neighborhood).
In a practical deployment of these teachings it may be that in the presence of prediction helper values the residual coding operations may be different from the residual coding operations that are used when there are no prediction helper values associated with the block of samples. In this regard one example of such a difference is that the transform coefficient coding can be adapted. One possibility to adapt the transform coding is to scan the coefficients in a different order or to switch positions of some or all the transform coefficients within the transform block. For example, if a DST type of transform is used in prediction error coding and horizontal prediction with prediction helper values is selected, the magnitudes of the coefficients in the first column of the forward transform output are likely to become small, and in this case that column of coefficients may be moved to the last column prior to entropy coding the coefficients. Another example of adapting the transform coding is to specify dedicated contexts used in arithmetic coding of the transform coefficients when intra prediction helper values are present.
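A minimal Python sketch (not from the patent text) of the coefficient position switch described above, moving the first column of the forward transform output to the last position prior to entropy coding; the coefficient values shown are hypothetical, and the decoder would apply the inverse move before the inverse transform:

def rotate_first_column_last(coeffs):
    # coeffs is a 2-D list of transform coefficients (rows of the transform block).
    return [row[1:] + row[:1] for row in coeffs]

coeffs = [[1, 9, 9, 9],
          [0, 9, 9, 9],
          [0, 9, 9, 9],
          [0, 9, 9, 9]]
print(rotate_first_column_last(coeffs))  # the small first-column values now come last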
One particularly advantageous way to indicate the prediction helper values is to embed them into the prediction error signaling. For example, some of the transform coefficients can be used to store the prediction helper values instead of a weight of a certain basis function. In this case one or more of the actual transform coefficients (e.g. transform coefficients associated with the first basis functions of DST in either the horizontal or vertical direction) can be set to zero and the values of the prediction helpers in the related direction can be passed to the coefficient coding process. Similarly on the decoder side the decoded values typically associated with certain transform coefficients can be assigned as values for the intra prediction helpers whereas the related transform coefficients can be set to zero. In an alternative example the number of coefficients encoded or decoded can be increased from the number required by the transform itself so as to include also the intra prediction helper values as additional coefficients.
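The following Python sketch (not from the patent text; the array contents are hypothetical) illustrates one way such embedding could work, with the coefficients of the first basis functions carrying the helper values to the coefficient coding process and the decoder assigning them back while zeroing the related transform coefficients:

def embed_helpers(coeffs, helpers):
    # Encoder side: the first row of coefficients carries helpers, not basis weights.
    coeffs[0] = list(helpers)
    return coeffs

def extract_helpers(coeffs):
    # Decoder side: recover the helpers and zero the related transform coefficients.
    helpers = list(coeffs[0])
    coeffs[0] = [0] * len(helpers)
    return helpers, coeffs

coeffs = embed_helpers([[5, 1, 0, 2], [0, 0, 0, 0]], helpers=[0, 0, 2, 0])
print(extract_helpers(coeffs))  # ([0, 0, 2, 0], [[0, 0, 0, 0], [0, 0, 0, 0]])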
Embodiments of these teachings can improve the accuracy of the spatial intra prediction in video and image processing. Some of the implementation alternatives may require a small amount of additional processing as compared to traditional intra prediction techniques, but for most implementations this should not be substantially limiting; dedicated image/video processors have become quite common in consumer devices and are even present in many mobile devices such as the latest smartphones, for which battery consumption has become a less significant problem despite greater capabilities and higher volumes of data processing (including image/video processing).

FIG. 4 is a process flow diagram that summarizes some of the above aspects for decoding a video stream from the perspective of the decoder. In other embodiments these same teachings can be used for decoding individual images that are encoded when stored or transmitted, but greater advantages are seen in utilizing these teachings for the images in a video stream rather than individual images/pictures. Firstly, at block 402 the decoder performing the method of FIG. 4 receives an encoded video stream together with an indication of a prediction mode and an indication of one or more prediction helper values. While decoding the encoded video stream at block 404, the decoder calculates a predicted value for each of at least one sample, and this predicted value is calculated based on the received indication of the prediction mode and on the received one or more prediction helper values. Then at block 406 the decoder tangibly outputs the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.

In the above examples the prediction mode that is indicated at block 402 could be a vertical prediction mode as in the FIG. 3C-D examples, a horizontal prediction mode, a combined vertical and horizontal prediction mode such as where a single sample refers to two different reference samples, a merge mode such as Macroblock in which there is motion information, a scalable mode such as where the encoded video stream includes a base layer and at least one enhancement layer, and also a direct current mode and a planar mode as in the HEVC standard.
In any of these above embodiments, decoding the encoded video stream as in block 404 may include, for each said sample having a corresponding non-zero prediction helper value, calculating the predicted value by applying the prediction helper value to a corresponding value of a reference sample that is located along a prediction direction relative to said sample, where the prediction direction is given by the prediction mode. In the FIG. 3C-D examples r3 was in the position of such a reference sample. Further, for any of the above embodiments, the indication of the one or more prediction helper values from block 402 can be indicated in sets of prediction helper values as explained with respect to FIGs. 3C-D, which each had only one such set, where each such set corresponds to one prediction unit (PU) of a coding unit (CU) within the encoded video stream. In FIGs. 3C-D the CU and the PU it includes were within one image/frame of that video stream.
The example from FIG. 3D showed that for at least one of the sets in such a video stream the predicted values for at least two samples of the set in a same row or same column of the prediction unit are non-linear due to applying a step function to a common prediction helper value when calculating the predicted values for the same row or same column in which lay the at least two samples. In this case +8 was the common prediction helper value.
The example from FIG. 3C showed that for each other set for which no step function is used when calculating the predicted values, for a given row or column of the prediction block corresponding to a non-zero prediction helper value the calculated predicted values accumulate linearly in the direction of prediction by the corresponding non-zero prediction helper value. In the FIG. 3C example the amount of the linear accumulation was +2 per each sequential row of the PU.
While these techniques can also be used for inter prediction, the FIG. 3C-D examples were such that the prediction helper values were restricted to only intra prediction, in which the sample and a reference sample are in a same image frame of the decoded video stream. And also for the FIG. 3C-D examples it was stated that each of the samples p11, p12, p21, etc. was a pixel and the respective prediction helper value indicated an offset value modifying the samples predicted using a reference sample (which was r3 in FIGs. 3C-D). However, a prediction helper value can also indicate a respective index of a color palette for coloring the respective pixel as compared to another index of the color palette for coloring a corresponding reference pixel. This is not to say the helper value itself needs to be a palette color index; while it is a possible implementation that the indicated helper value is an absolute value corresponding to a palette index, the value of the prediction helper can be used as an offset to the palette index actually used in the reference sample r3, and in this way the offset 'indicated' the respective index when taken relative to the index of the reference sample r3.

It is noted that the description above assumes a transmission of the video stream, since streaming video is becoming more ubiquitous and transmitted video sometimes requires a greater level of compression (and thus more encoding) than might be used when only storing the video on a computer readable memory. However, when video is encoded according to these teachings it would be stored with the indications mentioned at block 402 of FIG. 4 the same as if the video were transmitted with them. This is so the decoder can use those same indications regardless of whether the decoder receives the encoded video over a transmission channel (wired or wireless) or reads it from a memory stick that a user plugs into some host device (laptop computer; camera/media playback device; smartphone; etc.) that includes the decoder.

An encoder operating according to these teachings may perform the same steps shown at FIG. 4 essentially in reverse. Namely, it could obtain a video stream, such as a live-recorded stream from a camera, in place of block 406; then while encoding that video stream it calculates for one or more prediction samples a prediction value using a prediction direction and one or more prediction helper values, in place of block 404; and then it can output the encoded video stream to a memory or to a radio for transmission, along with an indication of the prediction mode that defines the prediction direction and one or more indications of the prediction helper value(s), in place of block 402.
FIG. 5 is a schematic diagram illustrating some components of generic host devices for encoding and decoding a video stream according to these teachings, named an encoder device 10 and a decoder device 20. While a wireless channel 11 is shown between them for transmitting and receiving the encoded video stream, that encoded video stream can be transferred from encoder device 10 to decoder device 20 via a physical memory such as a removable memory card or stick, or there may be one device such as a camera with a graphical display interface that records the video, encodes it when it saves the recorded video to the memory, and decodes it when accessing the encoded video from the memory for playback on the graphical display. There may also be an intermediate device, such that the encoder device 10 records and encodes the video and provides it to the intermediate device, such as a server or other central database that may store and index the encoded video within a larger video library, and the intermediate device provides the encoded video to the decoder device 20, such as an individual's laptop computer, smartphone, vehicle-mounted video display device, or the like.

The encoder device 10 includes a controller, such as a computer or a data processor (DP) 10A, a computer-readable memory medium embodied as a memory (MEM) 10B that stores a program of computer instructions (PROG) 10C, and it may also have a suitable wireless interface such as a radio frequency (RF) transmitter/receiver combination 10D for bidirectional wireless communications with the decoder device 20 or the intermediate device via one or more antennas. The encoder processing may be done by a separate DP as shown, or by the main central processing DP 10A, or some combination of both processing chips or more than only those two.
The wireless link between the encoder device 10 and the decoder device 20 can be direct as shown, or through an intermediate device such as a server on the Internet as described above, or as also mentioned there may be no wireless link for communicating the encoded video if either the encoded video is transferred between different devices 10, 20 via a removable memory or if the encoder device 10 and the decoder device 20 are the same device in which the encoded video is stored in the memory until decoded when it is to be played back at a graphical user interface such as a graphical display or a projector.
The decoder device 20 also includes a controller, such as a computer or a data processor (DP) 20A, a computer-readable memory medium embodied as a memory (MEM) 20B that stores a program of computer instructions (PROG) 20C, and it also may have a suitable wireless interface, such as RF transmitter/receiver combination 20D for communication with the encoder device 10 via one or more antennas. Similar to the encoder processing, the decoder processing may be done by a separate DP as shown, or by the main central processing DP 20A, or some combination of both processing chips or more than only those two.
At least one of the PROGs 10C/20C is assumed to include program instructions that, when executed by the associated DP 10A/20A, enable the device to operate in accordance with exemplary embodiments of this invention as detailed above. That is, various exemplary embodiments of this invention may be implemented at least in part by computer software executable by the DP 10A of the encoder device 10, by the DP 20A of the decoder device 20, or by hardware, or by a combination of software and hardware (and firmware).
The computer readable MEMs 10B/20B may be of any type suitable to the local technical environment and may be implemented using any one or more suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory, electromagnetic, infrared, or semiconductor systems. Following is a non-exhaustive list of more specific examples of the computer readable storage medium memory: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The DPs 10A/20A may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multicore processor architecture, as non-limiting examples. The wireless interfaces (e.g., the radios 10D/20D) may be of any type suitable to the local technical environment and may be implemented using any suitable communication technology such as individual transmitters, receivers, transceivers or a combination of such components.
In general, the various embodiments of the encoder device 10 and/or the decoder device 20 can include, but are not limited to, smart phones with cameras and/or graphical displays, machine -to- machine (M2M) communication devices, cellular telephones, personal digital assistants (PDAs) having video recording and/or playback capabilities, portable computers having video recording and/or playback capabilities, image capture devices such as digital cameras having video recording and/or playback capabilities, gaming devices having video recording and/or playback capabilities, music storage and playback appliances having video recording and/or playback capabilities, Internet appliances permitting video recording and/or playback capabilities, as well as portable units or terminals that incorporate combinations of such functions. Any of these may be embodied as a hand-portable device, a wearable device, a device that is implanted in whole or in part, a vehicle-mounted communication device, and the like.
It should be understood that the foregoing description is only illustrative. Various alternatives and modifications can be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into an embodiment that is not specifically detailed herein as separate from the others. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims. The following abbreviations that may be found in the specification and/or the drawing figures are defined as follows. These terms are used consistent with their ordinary meaning, as set forth in the H.265/HEVC standard of ITU-T.
AC Alternating Current (coefficient of the DCT)
AVC Advanced Video Coding (H.264/AVC standard)
CTU Coding Tree Unit
CU Coding Unit
DCT Discrete Cosine Transform
DPB Decoded Picture Buffer
DST Discrete Sine Transform
DC Direct Current (coefficient of the DCT)
HEVC High Efficiency Video Coding (H.265/HEVC standard)
LCU Largest Coding Unit
ITU-T International Telecommunication Union - Telecommunication Standardization Sector
MVC Multi-view Video Coding
MVP Motion Vector Prediction
PU Prediction Unit
SNR Signal to Noise ratio
SVC Scalable Video Coding
TU Transform Unit

Claims

CLAIMS:
1. A method for decoding a video stream, the method comprising:
receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values;
while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and
tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
2. The method according to claim 1, wherein the indication of the prediction mode indicates at least one of: vertical prediction mode, horizontal prediction mode, combined vertical and horizontal prediction mode, merge mode, scalable mode in which the encoded video stream comprises a base layer and at least one enhancement layer, direct current mode, and planar mode.
3. The method according to any of claims 1-2, wherein decoding the encoded video stream comprises, for each said sample having a corresponding non-zero prediction helper value:
calculating the predicted value by applying the prediction helper value to a corresponding value of a reference sample that is located along a prediction direction relative to said sample, where the prediction direction is given by the prediction mode.
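By way of illustration only, and not as claim language, the calculation of claims 1-3 can be sketched for a purely vertical prediction mode as follows. This is a minimal sketch under assumptions: the function name, the height parameter, and the additive application of the helper value to the reference sample are illustrative choices, since the claims do not fix how the helper value is applied.

import numpy as np

def intra_predict_vertical(ref_row, helpers, height):
    # Sketch of claims 1-3 for a vertical prediction mode.
    # ref_row: reference samples from the row above the block, one per column.
    # helpers: one prediction helper value per column; a zero helper leaves a
    #          plain copy of the reference sample.
    # Each predicted sample takes the reference sample located along the
    # (vertical) prediction direction, with the column's helper value applied.
    # Additive application is an assumption made purely for illustration.
    ref_row = np.asarray(ref_row, dtype=np.int32)
    helpers = np.asarray(helpers, dtype=np.int32)
    # every row of the block repeats the helper-adjusted reference row
    return np.tile(ref_row + helpers, (height, 1))

# e.g. intra_predict_vertical([128, 130, 131, 129], [0, 2, 0, -1], height=4)
# propagates 128, 132, 131 and 128 down their respective columns.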
4. The method according to any of claims 1-3, wherein the one or more prediction helper values are indicated in sets of prediction helper values, each set corresponding to one prediction unit of a coding unit within the encoded video stream.
5. The method according to claim 4, wherein for at least one of the sets the predicted values for at least two samples of the set in a same row or same column of the prediction unit are non-linear due to applying a step function to a common prediction helper value when calculating the predicted values for the same row or same column in which the at least two samples lie.
6. The method according to claim 5, wherein for each other set for which no step function is used when calculating the predicted values, for a given row or column of the prediction block corresponding to a non-zero prediction helper value, the calculated predicted values accumulate linearly in the direction of prediction by the corresponding non-zero prediction helper value.
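As an illustrative contrast between claims 5 and 6 (a sketch under assumptions: the unit step below is only one possible step function, additive application of the helper value is assumed, and all names are hypothetical):

def column_linear(ref_value, helper, height):
    # Claim 6 behaviour (sketch): with no step function, the predicted
    # values accumulate linearly down the column by the helper value.
    return [ref_value + helper * (y + 1) for y in range(height)]

def column_step(ref_value, helper, height, step_row):
    # Claim 5 behaviour (sketch): a step function applied to the common
    # helper value makes the predicted values non-linear in the row index.
    return [ref_value + (helper if y >= step_row else 0) for y in range(height)]

# With ref_value=100, helper=3, height=4:
#   column_linear(100, 3, 4)           -> [103, 106, 109, 112]  (linear ramp)
#   column_step(100, 3, 4, step_row=2) -> [100, 100, 103, 103]  (non-linear)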
7. The method according to any of claims 1-6, wherein the prediction helper values are restricted to only intra prediction in which the sample and a reference sample are in a same image frame of the decoded video stream.
8. The method according to any of claims 1-7, wherein each of the at least one sample is a pixel and the respective prediction helper value indicates a respective index of a color palette for coloring the respective pixel as compared to another index of the color palette for coloring a corresponding reference pixel.
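For the palette case of claim 8, one hypothetical reading is that the helper value is an offset between the reference pixel's palette index and the current pixel's index; the additive offset and the wrap-around below are assumptions made for illustration, not something the claim specifies:

def palette_decode_pixel(palette, ref_index, helper):
    # Sketch of claim 8: the helper value relates this pixel's palette index
    # to the reference pixel's index; wrap-around keeps the index valid.
    index = (ref_index + helper) % len(palette)
    return palette[index]  # the colour entry used for this pixel

# e.g. palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0)]
# palette_decode_pixel(palette, ref_index=1, helper=1) -> (0, 255, 0)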
9. An apparatus for decoding a video stream, the apparatus comprising:
at least one computer readable memory storing computer program instructions; and at least one processor;
wherein the computer readable memory with the computer program instructions is configured, with the at least one processor, to cause the apparatus to at least:
receive with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values;
while decoding the encoded video stream, calculate a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and
output the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
10. The apparatus according to claim 9, wherein the indication of the prediction mode indicates at least one of: vertical prediction mode, horizontal prediction mode, combined vertical and horizontal prediction mode, merge mode, scalable mode in which the encoded video stream comprises a base layer and at least one enhancement layer, direct current mode, and planar mode.
11. The apparatus according to any of claims 9-10, wherein the computer readable memory with the computer program instructions is configured with the at least one processor to cause the apparatus to decode the encoded video stream by, for each said sample having a corresponding non-zero prediction helper value:
calculating the predicted value by applying the prediction helper value to a corresponding value of a reference sample that is located along a prediction direction relative to said sample, where the prediction direction is given by the prediction mode.
12. The apparatus according to any of claims 9-11, wherein the one or more prediction helper values are indicated in sets of prediction helper values, each set corresponding to one prediction unit of a coding unit within the encoded video stream.
13. The apparatus according to claim 12, wherein for at least one of the sets the predicted values for at least two samples of the set in a same row or same column of the prediction unit are non-linear due to applying a step function to a common prediction helper value when calculating the predicted values for the same row or same column in which the at least two samples lie.
14. The apparatus according to claim 13, wherein for each other set for which no step function is used when calculating the predicted values, for a given row or column of the prediction block corresponding to a non-zero prediction helper value, the calculated predicted values accumulate linearly in the direction of prediction by the corresponding non-zero prediction helper value.
15. The apparatus according to any of claims 9-14, wherein the prediction helper values are restricted to only intra prediction in which the sample and a reference sample are in a same image frame of the decoded video stream.
16. The apparatus according to any of claims 9-15, wherein each of the at least one sample is a pixel and the respective prediction helper value indicates a respective index of a color palette for coloring the respective pixel as compared to another index of the color palette for coloring a corresponding reference pixel.
17. A computer readable memory storing computer program instructions that, when executed by one or more processors, cause a host decoder device to perform actions directed to decoding a video stream, the actions comprising:
receiving with an encoded video stream an indication of a prediction mode and an indication of one or more prediction helper values;
while decoding the encoded video stream, calculating a predicted value for each of at least one sample based on the received indication of the prediction mode and on the received one or more prediction helper values; and
tangibly outputting the decoded video stream to at least one of a computer readable memory and a graphical display, such that the decoded video stream that is output incorporates each of the at least one sample as decoded using the respective calculated predicted value.
18. The computer readable memory according to claim 17, wherein decoding the encoded video stream comprises, for each said sample having a corresponding non-zero prediction helper value:
calculating the predicted value by applying the prediction helper value to a corresponding value of a reference sample that is located along a prediction direction relative to said sample, where the prediction direction is given by the prediction mode.
19. The computer readable memory according to any of claims 17-18, wherein the one or more prediction helper values are indicated in sets of prediction helper values, each set corresponding to one prediction unit of a coding unit within the encoded video stream.
20. The computer readable memory according to claim 19, wherein for at least one of the sets the predicted values for at least two samples of the set in a same row or same column of the prediction unit are non-linear due to applying a step function to a common prediction helper value when calculating the predicted values for the same row or same column in which the at least two samples lie.
PCT/FI2016/050715 2015-10-13 2016-10-13 Video coding with helper data for spatial intra-prediction WO2017064370A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP16855007.7A EP3363201A4 (en) 2015-10-13 2016-10-13 Video coding with helper data for spatial intra-prediction
JP2018518710A JP2018530968A (en) 2015-10-13 2016-10-13 Video coding using helper data for spatial intra prediction
CN201680059906.8A CN108353186A (en) 2015-10-13 2016-10-13 Video coding with helper data for spatial intra-prediction
KR1020187013483A KR20180069850A (en) 2015-10-13 2016-10-13 Video coding using helper data for spatial intra prediction
PH12018500776A PH12018500776A1 (en) 2015-10-13 2018-04-10 Video coding with helper data for spatial intra-prediction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/881,493 US9743092B2 (en) 2015-10-13 2015-10-13 Video coding with helper data for spatial intra-prediction
US14/881,493 2015-10-13

Publications (1)

Publication Number Publication Date
WO2017064370A1 (en) 2017-04-20

Family

ID=58500286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2016/050715 WO2017064370A1 (en) 2015-10-13 2016-10-13 Video coding with helper data for spatial intra-prediction

Country Status (7)

Country Link
US (1) US9743092B2 (en)
EP (1) EP3363201A4 (en)
JP (1) JP2018530968A (en)
KR (1) KR20180069850A (en)
CN (1) CN108353186A (en)
PH (1) PH12018500776A1 (en)
WO (1) WO2017064370A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10021418B2 (en) * 2014-06-19 2018-07-10 Hfi Innovation Inc. Method and apparatus of candidate generation for single sample mode in video coding
CN108810531B (en) * 2017-05-03 2019-11-19 腾讯科技(深圳)有限公司 Video coding processing method, device and electronic equipment
CN111713110B (en) * 2017-12-08 2024-08-23 松下电器(美国)知识产权公司 Image encoding device, image decoding device, image encoding method, and image decoding method
US10904555B2 (en) * 2018-07-11 2021-01-26 Tencent America LLC Method and apparatus for video coding
JP7084808B2 (en) * 2018-07-12 2022-06-15 日本放送協会 Encoding device, decoding device, and program
WO2020048361A1 (en) * 2018-09-05 2020-03-12 华为技术有限公司 Video decoding method and video decoder
CN118175342A (en) 2018-09-05 2024-06-11 华为技术有限公司 Video decoding method and video decoder
EP3850840A1 (en) * 2018-09-13 2021-07-21 FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. Affine linear weighted intra predictions
CN113039780B (en) 2018-11-17 2023-07-28 北京字节跳动网络技术有限公司 Merge with motion vector difference in video processing
WO2020125751A1 (en) * 2018-12-21 2020-06-25 Beijing Bytedance Network Technology Co., Ltd. Information signaling in current picture referencing mode
CN113709496B (en) * 2021-08-24 2024-07-09 天津津航计算技术研究所 Multi-channel video decoding method based on fault tolerance mechanism

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7266247B2 (en) * 2002-09-30 2007-09-04 Samsung Electronics Co., Ltd. Image coding method and apparatus using spatial predictive coding of chrominance and image decoding method and apparatus
CN101822062B (en) * 2007-10-15 2013-02-06 日本电信电话株式会社 Image encoding device and decoding device, image encoding method and decoding method
JP2013034162A (en) * 2011-06-03 2013-02-14 Sony Corp Image processing device and image processing method
ES2869204T3 (en) * 2011-06-24 2021-10-25 Mitsubishi Electric Corp Image coding device, image decoding device, image coding method, image decoding method
WO2013002589A2 (en) * 2011-06-28 2013-01-03 삼성전자 주식회사 Prediction method and apparatus for chroma component of image using luma component of image
CA3017176C (en) * 2011-06-28 2020-04-28 Samsung Electronics Co., Ltd. Method and apparatus for image encoding and decoding using intra prediction
WO2014050971A1 (en) * 2012-09-28 2014-04-03 日本電信電話株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
CN103248895B (en) * 2013-05-14 2016-06-08 芯原微电子(北京)有限公司 A kind of quick mode method of estimation for HEVC intraframe coding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140301462A1 (en) * 2011-10-31 2014-10-09 Nanyang Technological University Lossless image and video compression
US20140301474A1 (en) * 2013-04-05 2014-10-09 Qualcomm Incorporated Determining palettes in palette-based video coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BLASI, SG ET AL.: "Frequency-Domain Intra Prediction Analysis and Processing for High-Quality Video Coding.", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 25, no. 5, May 2015 (2015-05-01), XP011580035, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/document/6905757> [retrieved on 20160109] *
MATSUO, S ET AL.: "Intra Prediction with Spatial Gradients and Multiple Reference Lines.", PICTURE CODING SYMPOSIUM (PCS 2009), 21 July 2009 (2009-07-21), XP031491705, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/document/5167430> [retrieved on 20160930] *
See also references of EP3363201A4 *

Also Published As

Publication number Publication date
JP2018530968A (en) 2018-10-18
EP3363201A4 (en) 2019-03-20
US9743092B2 (en) 2017-08-22
US20170105003A1 (en) 2017-04-13
CN108353186A (en) 2018-07-31
KR20180069850A (en) 2018-06-25
EP3363201A1 (en) 2018-08-22
PH12018500776A1 (en) 2018-10-15

Similar Documents

Publication Publication Date Title
US9743092B2 (en) Video coding with helper data for spatial intra-prediction
KR102431537B1 (en) Encoders, decoders and corresponding methods using IBC dedicated buffers and default value refreshing for luma and chroma components
US10440396B2 (en) Filter information sharing among color components
CN112235572B (en) Video decoding method and apparatus, computer device, and storage medium
KR102712842B1 (en) Encoder, decoder and method thereof
KR20220162871A (en) Method and apparatus for video coding
TW202005399A (en) Block-based adaptive loop filter (ALF) design and signaling
CN114600466A (en) Image encoding apparatus and method based on cross component filtering
CA2909601A1 (en) Method and apparatus for processing video signal
JP2023508060A (en) Cross-component adaptive loop filtering for video coding
KR20210122854A (en) Encoders, decoders and corresponding methods for inter prediction
KR20220080738A (en) Image encoding/decoding method, apparatus, and bitstream transmission method using lossless color conversion
CN113615173A (en) Method and device for carrying out optical flow prediction correction on affine decoding block
AU2024201345A1 (en) Method and apparatus for intra smoothing
JP2023500644A (en) Image encoding/decoding method, apparatus and method for bitstream transmission using color space conversion
KR20220100991A (en) Video filtering method and device
KR20210129736A (en) Optical flow-based video inter prediction
US20230050376A1 (en) Method and Apparatus of Sample Fetching and Padding for Downsampling Filtering for Cross-Component Linear Model Prediction
US20240187594A1 (en) Method And An Apparatus for Encoding and Decoding of Digital Image/Video Material
JP2022547293A (en) High-level signaling method and apparatus for weighted prediction
US20240244250A1 (en) Bilateral matching based scaling factor derivation for jmvd
KR20220160082A (en) Video filtering method and apparatus
RU2809192C2 (en) Encoder, decoder and related methods of interframe prediction
KR20110087871A (en) Method and apparatus for image interpolation having quarter pixel accuracy using intra prediction modes
KR20230140456A (en) Interpolation filters for adaptive motion vector difference resolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16855007
    Country of ref document: EP
    Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 12018500776
    Country of ref document: PH
WWE Wipo information: entry into national phase
    Ref document number: 2018518710
    Country of ref document: JP
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 20187013483
    Country of ref document: KR
    Kind code of ref document: A
WWE Wipo information: entry into national phase
    Ref document number: 2016855007
    Country of ref document: EP