GB2498234A - Image encoding and decoding methods based on comparison of current prediction modes with reference prediction modes


Info

Publication number: GB2498234A
Authority: GB (United Kingdom)
Prior art keywords: prediction mode, image portion, mode value, current image, value
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: GB1206592.6A
Other versions: GB201206592D0 (en)
Inventors: Edouard Francois, Christophe Gisquet, Patrice Onno
Current assignee: Canon Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Canon Inc
Application filed by Canon Inc

Classifications

    • H04N19/11 — Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/157 — Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/103 — Selection of coding mode or of prediction mode
    • H04N19/17 — Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 — Adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
    • H04N19/463 — Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/593 — Predictive coding involving spatial prediction techniques


Abstract

Methods and devices are disclosed for encoding or decoding mode information representing a prediction mode for a current image portion by an intra mode coding process, the mode being one of plural prediction modes. In one aspect, the current image portion prediction mode value to be encoded is compared S1003 with reference prediction mode values (e.g., most probable mode (MPM) or a planar mode) derived from prediction modes of respective image portions, in order to determine (select) an encoding process from among plural encoding processes; if the current mode value is different from the reference prediction mode values, the current image portion prediction mode value is compared S1004 with a further predefined prediction mode value, e.g. a horizontal, vertical or DC prediction mode. In another aspect the method includes deriving a first reference prediction mode based on the prediction mode value of a single reference image portion, or of at least two reference image portions. Alternatively, the mode information is represented by an angular index of the angular direction of the current prediction mode; where the current angular index value is different to the reference angular indexes, the angular index value of the current image portion is coded. Codeword flags may be used.

Description

METHOD AND DEVICE FOR ENCODING OR DECODING INFORMATION
REPRESENTING PREDICTION MODES
Field of the invention
The invention relates to a method and a device for encoding or decoding mode information representative of a prediction mode. Particularly, but not exclusively, the invention relates to intra mode coding in the High Efficiency Video Coding (HEVC) standard under development.
Description of the prior art
Video applications are continuously moving towards higher resolution. A large quantity of video material is distributed in digital form over broadcast channels, digital networks and packaged media, with a continuous evolution towards higher quality and resolution (e.g. higher number of pixels per frame, higher frame rate, higher bit-depth or extended color gamut). This technology evolution puts higher pressure on the distribution networks that are already facing difficulties in bringing HDTV resolution and high data rates economically to the end user. Consequently, any further data rate increases will put additional pressure on the networks. To handle this challenge, ITU-T and ISO/MPEG decided to launch in January 2010 a new video coding standard project, named High Efficiency Video Coding (HEVC).
The HEVC codec design is similar to that of previous so-called block-based hybrid transform codecs such as H.263, H.264, MPEG-1, MPEG-2, MPEG-4, SVC. Video compression algorithms such as those standardized by standardization bodies ITU, ISO and SMPTE use the spatial and temporal redundancies of the images in order to generate data bit streams of reduced size compared with the video sequences. Such compression techniques render the transmission and/or storage of the video sequences more effective.
Figure 1 shows an example of an image coding structure used in HEVC.
A video sequence is made up of a sequence of digital images 101 represented by one or more matrices the coefficients of which represent pixels.
An image 101 is made up of one or more slices 102. A slice may be part of the image or, in some cases, the entire image. Slices are divided into non-overlapping blocks, typically referred to as Largest Coding Units (LCUs) 103; LCUs are generally blocks of size 64 pixels x 64 pixels. Each LCU may in turn be iteratively divided into smaller variable-size Coding Units (CUs) 104 using a quadtree decomposition.
During video compression in HEVC, each block of an image being processed is predicted spatially by an "Intra" predictor (so-called "Intra" coding mode), or temporally by an "Inter" predictor (so-called "Inter" coding mode).
In Intra coding mode, the predictor (Intra predictor) used for the current block being coded is a block of pixels issued from the same image, constructed from information already encoded of the current image. By virtue of the identification of the predictor block and the coding of the residual, it is possible to reduce the quantity of information actually to be encoded.
A CU is thus coded according to an intra coding mode, (samples of the CU are spatially predicted from neighboring samples of the CU) or to an inter coding mode (samples of the CU are temporally predicted from samples of previously coded slices).
Once the CU samples have been predicted, the residual signal between the original CU samples and the prediction CU samples is generated. This residual is then coded after having applied transform and quantization processes.
In the current HEVC design, as well as in previous designs such as MPEG-4 AVC/H.264, intra coding involves deriving an intra prediction block from reconstructed neighboring samples 201 of the block to be encoded (decoded), as illustrated schematically in Figure 2A. Referring to Figure 2B, when coding a current CU 202, Intra mode coding makes use of two neighbouring CUs that have already been coded, namely the Top and Left CUs 203 and 204.
In intra mode coding multiple prediction modes are supported, including directional or non-directional intra prediction modes. When a CU is intra coded, its related intra prediction mode is coded in order to inform a decoder how to decode the coded CU.
Figure 3 illustrates the intra prediction modes 'intraPredMode' supported in the current HEVC design, along with their related mode values used to identify the corresponding intra prediction mode. The number of supported modes depends on the size of a coding unit (CU). As at the filing date of the present application the HEVC specification is still subject to change, but at present the following supported modes are contemplated: 18 modes for 4x4 CUs (modes 0 to 17), and 35 modes for CUs of other sizes (8x8 to 64x64).
The intra prediction modes include prediction modes which are not directional including a planar prediction mode and a DC mode.
The other modes are directional, which means that the samples are predicted according to a given angular direction. In Figure 3 (i), intra prediction modes not supported by 4x4 CUs are indicated by shaded boxes.
It can be noticed in Figure 3 (i) that the intra prediction modes are numbered in a specific order, more or less reflecting the probabilities of occurrence of the different intra prediction modes in the initial design of the standard specification. For instance, modes 0 (Planar), 1 (Vertical), 2 (Horizontal) and 3 (DC) are statistically the four most commonly used modes.
This specific order requires the use of a look-up table 304 that gives, for the angular modes, the link between a mode number ('intraPredMode') and its corresponding angular index 302 (noted 'intraPredOrder' in table 304). An additional look-up table 305 is also used to associate an angular value 303 (noted 'intraPredAngle' in table 305) with the angular index. As depicted in Figure 3(iv), the intraPredAngle actually indicates the side opposite the angle when the adjacent side length is equal to 32.
The following definitions summarize intra mode representation:
* intraPredMode (301) represents the different possible intra prediction modes defined in HEVC, and corresponds to the values that are actually coded/decoded.
* intraPredOrder (302) corresponds to the index of the angular mode, when angular prediction applies.
* intraPredAngle (303) corresponds to a displacement value (306), directly linked to the angular value, to be applied when angular intra prediction applies.
* Look-up table (304) establishes the link between intraPredMode and intraPredOrder.
* Look-up table (305) establishes the link between intraPredOrder and intraPredAngle.
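The three-level representation above can be sketched in a few lines of Python. The table contents below are hypothetical excerpts for illustration only, not the actual HEVC look-up tables 304 and 305; the mode numbering (0 = Planar, 3 = DC) follows Figure 3.

```python
# Illustrative mapping: intraPredMode -> intraPredOrder (angular index).
# Non-directional modes (Planar = 0, DC = 3) map to -1 (no angular index).
# Values here are hypothetical, not the real HEVC tables.
INTRA_PRED_ORDER = {0: -1, 1: 16, 2: 8, 3: -1, 4: 12}

# Illustrative mapping: intraPredOrder -> intraPredAngle (displacement).
INTRA_PRED_ANGLE = {8: 0, 12: 13, 16: 0}

def angular_displacement(intra_pred_mode):
    """Return the displacement to apply for an angular mode, or None
    for non-directional modes (Planar, DC)."""
    order = INTRA_PRED_ORDER.get(intra_pred_mode, -1)
    if order < 0:
        return None
    return INTRA_PRED_ANGLE[order]
```

In other words, decoding a directional mode involves two chained table look-ups, first to an angular index and then to a displacement, exactly as the document's tables 304 and 305 describe.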
Figure 4 is a flowchart illustrating steps of a known method of Intra mode coding performed in the current HEVC design. In a first step 401 the Intra prediction modes of the neighbouring Top and Left CUs 203 and 204, as illustrated in Figure 2B, are identified. The two CUs may share the same Intra prediction mode or may have different Intra prediction modes. Accordingly, in step 401 one or two different intra prediction modes can be identified. In step 402, two so-called 'Most Probable Modes' (MPMs) are derived from the identified neighbouring Top and Left intra prediction modes. In step 403 the prediction mode of the current coding unit is then compared to the two MPMs. If the prediction mode of the current coding unit is equal to either of the MPMs then in step 404 a first coding process (process 1) is applied.
This first coding process involves coding a flag signaling that the mode of the current block is equal to one of the MPMs, and then, coding the index of the MPM concerned.
If in step 403 it is determined that the prediction mode of the current block is not equal to one of the two MPMs, then in step 405 a second coding process (process 2) is applied.
The second coding process involves coding the mode value of the current block using a longer code word compared to coding of the MPM index.
Using MPMs makes the coding process more efficient. The mode of the CU is often equal to one of the MPMs. As fewer bits are used to signal the MPM than to signal the remaining mode, the overall coding cost is reduced.
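The two-process selection of Figure 4 can be sketched as follows. The codewords are symbolic tuples rather than actual bitstream syntax, and the remaining-mode computation assumes the two MPM values are simply removed from the ordered mode list.

```python
def encode_intra_mode(current_mode, mpms):
    """Select between the two known coding processes:
    process 1 if the mode is an MPM, process 2 otherwise."""
    if current_mode in mpms:
        # Process 1: a flag signals an MPM match, then the MPM index.
        return ("mpm_flag", 1, "mpm_idx", mpms.index(current_mode))
    # Process 2: flag = 0, then the remaining mode value, i.e. the mode
    # re-indexed after removing the MPMs from the possible-modes list.
    remaining = current_mode - sum(1 for m in mpms if m < current_mode)
    return ("mpm_flag", 0, "remaining_mode", remaining)
```

For example, with MPMs {2, 5}, mode 5 costs only a flag plus a one-bit index, while mode 7 falls through to the longer remaining-mode codeword, which is why the MPM hit rate drives the coding efficiency described above.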
A drawback of the current design, however, appears when the mode of the current CU is not one of the MPMs. The remaining mode may then need to be coded using codewords that are not fixed-length codes (FLC), which renders the decoding process design more complex. For instance, when 35 prediction modes are initially supported, 33 remaining modes are possible (once the 2 MPMs have been removed from the possible modes list). Coding a value among 33 possible values requires 5 or 6 bits. If only 32 modes had remained, fixed-length coding with 5 bits could have been used.
Another drawback of the current design is that in all cases of the decoding process, the full MPM derivation process, involving the identification of the top and left CU modes (401) and the derivation of the MPMs (402), is required leading to added complexity.
When the current CU is 4x4 size, and a neighboring CU is of a larger size, it is possible that the prediction mode derived from the neighboring CU is not supported by the 4x4 CU. When deriving the mode values from the neighboring CUs, a mapping process is therefore required. In the current HEVC design, the following process applies: If a neighboring mode is supported by the 4x4 CU, it is not modified; otherwise it is enforced to planar mode.
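The mapping rule just described is a minimal sketch away from code. The snippet below assumes the mode numbering of Figure 3 (0 = Planar) and the 18-mode limit for 4x4 CUs stated earlier; both are taken from this document, not from the final standard.

```python
PLANAR = 0
NUM_MODES_4X4 = 18  # modes 0..17 supported for 4x4 CUs (per Figure 3)

def map_neighbour_mode(neighbour_mode, current_cu_is_4x4):
    """Map a neighbouring CU's mode for use by the current CU: a mode
    not supported by a 4x4 CU is enforced to planar, otherwise it is
    left unmodified."""
    if current_cu_is_4x4 and neighbour_mode >= NUM_MODES_4X4:
        return PLANAR
    return neighbour_mode
```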
Figure 5 schematically illustrates a known decoding tree of the intra mode. In this figure, italic bold font is used to indicate syntax elements that are decoded. Blocks correspond to operations performed during the decoding process.
Firstly the flag MPM_flag indicating whether or not the prediction mode is one of the MPMs (MPM0 505 and MPM1 506) is decoded in step 501. If MPM_flag is equal to 1, the MPM index mpm_idx is decoded in step 502 to identify whether the mode is equal to MPM0 or to MPM1. If MPM_flag is equal to 0, remaining_mode is decoded in step 503. Variable length code (VLC) decoding is used (4 to 5 bits for 4x4 CUs, 5 to 6 bits for other CUs). The prediction mode is finally deduced from the remaining mode value and from the MPM values in step 504.
To summarize, the decoding process works as follows:
* If MPM_flag = 1 is decoded, the prediction mode is one of the MPMs:
o If mpm_idx = 0 is decoded, mode = MPM0
o Else, mode = MPM1
* Else, remaining_mode is decoded using a VLC codeword; the prediction mode is deduced from remaining_mode and the MPM values.
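The decoding tree of Figure 5 can be mirrored in code. The bit-reading callables below are hypothetical stand-ins for the entropy decoder; the final step reconstructs the mode by re-inserting the two MPM values that the encoder removed from the mode list (assuming distinct MPMs).

```python
def decode_intra_mode(read_bit, read_remaining, mpm0, mpm1):
    """Decode an intra mode following the Figure 5 tree:
    MPM_flag selects the MPM branch or the remaining-mode branch."""
    if read_bit() == 1:                      # MPM_flag = 1
        return mpm0 if read_bit() == 0 else mpm1   # mpm_idx
    mode = read_remaining()                  # VLC-coded remaining_mode
    # Deduce the prediction mode from remaining_mode and the MPM values:
    # skip over the two MPM positions removed on the encoder side.
    if mode >= min(mpm0, mpm1):
        mode += 1
    if mode >= max(mpm0, mpm1):
        mode += 1
    return mode
```

With MPMs {2, 5}, a decoded remaining_mode of 5 yields prediction mode 7, the inverse of the re-indexing performed at the encoder.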
A drawback of the above described methods is their complexity and heavy use of processing resources.
Summary of the Invention
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention there is provided a method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the method comprising: comparing a prediction mode value of the current image portion to be encoded with reference prediction mode values derived from prediction modes of respective image portions in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; wherein if the prediction mode value of the current image portion is different from the reference prediction mode values, the prediction mode value of the current image portion is compared with a further predefined prediction mode value, the method further comprising selecting, based on the further comparison, an encoding process for encoding the mode information.
Since there is an added opportunity to encode the mode information by means of a simple flag due to the comparison with the further predefined prediction mode value, the coding process is simplified. At the decoder side, the process is also simplified. After having decoded the first flag signalling that the mode is not one of the MPMs, then the second flag signalling that the mode value is the further predefined prediction mode value, the mode value is directly obtained without additional process.
In an embodiment in the case where the prediction mode value of the current image portion is equal to the further predefined prediction mode value, the selected encoding process comprises encoding information indicating a predefined relationship between the prediction mode value of the current image portion and the further predefined prediction mode value; otherwise, if the prediction mode value of the current image portion is different from the further predefined prediction mode value, the selected encoding process comprises encoding information representative of the prediction mode value of the current image portion.
In an embodiment the further predefined prediction mode value is set to a prediction mode value corresponding to a planar prediction mode.
In an embodiment the further predefined prediction mode value is set to a mode value corresponding to a horizontal prediction mode, a vertical prediction mode or a DC prediction mode.
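The first aspect can be sketched as an extension of the known two-process selection: when the current mode matches neither reference value, it is compared with a further predefined mode before falling back to full mode-value coding. The codeword tuples are symbolic, and planar is used as the further predefined mode per one of the embodiments above.

```python
PLANAR = 0  # further predefined mode in the planar embodiment

def select_encoding_process(current_mode, reference_modes, predefined=PLANAR):
    """Three-way process selection of the first aspect."""
    if current_mode in reference_modes:
        # Mode matches a reference (MPM) value: code flag + index.
        return ("mpm_flag", 1, "mpm_idx", reference_modes.index(current_mode))
    if current_mode == predefined:
        # Added opportunity: a single flag signals the predefined mode,
        # so no further derivation is needed at the decoder.
        return ("mpm_flag", 0, "predefined_flag", 1)
    # Otherwise, code the full mode value.
    return ("mpm_flag", 0, "predefined_flag", 0, "mode_value", current_mode)
```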
In an embodiment the further predefined prediction mode value is dependent upon the content of the image being encoded.
In an embodiment the further predefined prediction mode value can be signalled in the bitstream, at image level, or at image portion level.
In an embodiment the further predefined prediction mode value depends on mode probabilities representative of the probability of occurrence of respective prediction modes.
In an embodiment the mode probabilities are regularly computed and the further predefined prediction mode value is adaptively derived based on said mode probabilities.
In an embodiment the reference prediction mode values comprise a first reference prediction mode value based on the prediction mode of a single reference image portion and a second reference prediction mode value based on the respective prediction modes of at least two reference image portions.
In an embodiment the single reference image portion comprises the left neighbouring image portion of the current image portion.
In an embodiment in the case where the prediction mode value of the single reference image portion corresponds to the further predefined prediction mode value the first reference prediction mode value is set to a second predefined prediction mode value, otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.
In an embodiment in the case where the further predefined prediction mode value is planar mode and the prediction mode value of the single reference image portion corresponds to a planar prediction mode, the first reference prediction mode value is set to a DC prediction mode value (i.e. the second predefined prediction mode value is a DC prediction mode value), otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.
In an embodiment the two reference image portions comprise a first neighbouring image portion and a second neighbouring image portion.
In an embodiment the two reference image portions comprise the left neighbouring image portion as the first neighbouring image portion and the top neighbouring image portion as the second neighbouring image portion of the current image portion.
In an embodiment in the case where the prediction mode value of the second neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, the second reference prediction mode value is set to a prediction mode value corresponding to an angular direction adjacent and/or superior to the angular direction of the first neighbouring image portion, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
In an embodiment in the case where the further predefined prediction mode value is planar mode and the prediction mode value of the second neighbouring image portion corresponds to a planar prediction mode or to the prediction mode value of the left neighbouring image portion of the current image portion, the second reference prediction mode value is set to a prediction mode value corresponding to an angular direction superior to the angular direction of the first neighbouring image portion, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
In an embodiment in the case where the prediction mode value of the first neighbouring image portion corresponds to a non directional prediction mode, the second reference prediction mode value is set to a prediction mode value corresponding to a third predefined prediction mode.
In an embodiment the third predefined prediction mode is the vertical prediction mode.
In an embodiment in the case where the prediction mode value of the second neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
In an embodiment in the case where the further predefined prediction mode value is planar mode and the prediction mode value of the second neighbouring image portion corresponds to a planar prediction mode or to the prediction mode value of the first neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
In an embodiment the fourth predefined prediction mode value corresponds to a DC prediction mode.
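One combination of the embodiments above can be sketched as follows: the further predefined mode is planar, the first reference value comes from the left neighbour (with DC substituted for planar), and the second reference value comes from the top neighbour with the vertical and adjacent-angle fallbacks. The `next_angular` helper is a hypothetical stand-in for the adjacent/superior angular-direction mapping, and the mode numbering follows Figure 3 (0 = Planar, 1 = Vertical, 2 = Horizontal, 3 = DC).

```python
PLANAR, VERTICAL, HORIZONTAL, DC = 0, 1, 2, 3

def first_reference_mode(left_mode):
    """First reference value from the single (left) reference portion:
    planar is already covered by the predefined-mode flag, so DC (the
    second predefined mode value) is substituted for it."""
    return DC if left_mode == PLANAR else left_mode

def second_reference_mode(left_mode, top_mode, next_angular):
    """Second reference value from the two neighbouring portions."""
    if top_mode == PLANAR or top_mode == left_mode:
        if left_mode in (PLANAR, DC):
            # Non-directional left neighbour: third predefined mode.
            return VERTICAL
        # Directional left neighbour: adjacent/superior angular direction.
        return next_angular(left_mode)
    return top_mode
```

This construction guarantees that the predefined mode, the first reference value and the second reference value are all distinct, so each comparison in the selection process is informative.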
According to a further aspect of the invention, there is provided an encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes each prediction mode being represented by a prediction mode value, the encoder comprising: comparison means for comparing a prediction mode value of the current image portion to be encoded with reference prediction mode values, each derived from one or more prediction modes of respective image portions in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; the comparison means being configured to compare the prediction mode value of the current image portion with a further predefined prediction mode value in the case where the prediction mode value of the current image portion is different from the reference prediction mode values; and selection means for selecting, based on the further comparison, an encoding process for encoding the mode information; and encoding means for encoding the mode information using the selected encoding process.
According to another aspect there is provided a method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding the current image portion using the determined prediction mode wherein in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is equal to a further predefined prediction mode value, the codeword comprises a flag indicative that the prediction mode is the further predefined prediction mode and the decoding step comprises decoding the current image portion using the predefined prediction mode; otherwise in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is different from the further predefined prediction mode value, the codeword comprises information representative of the prediction mode value of the current image portion and the decoding step comprises decoding the current image portion using the prediction mode represented by the prediction mode value.
A further aspect provides a decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion; determining means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding means for decoding the current image portion using the determined prediction mode wherein in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is equal to a further predefined prediction mode value, the codeword comprises a flag indicative that the prediction mode is the further predefined prediction mode and the decoding means is configured to decode the current image portion using the predefined prediction mode; otherwise in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is different from the further predefined prediction mode value, the codeword comprises information representative of the prediction mode value of the current image portion, and the decoding means is configured to decode the current image portion using the prediction mode represented by the prediction mode value.
According to a second aspect of the invention there is provided a method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, the method comprising: deriving a first reference prediction mode value based on the prediction mode value of a single reference image portion; and comparing the prediction mode value of the current image portion with at least the first reference prediction mode value in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information of the current image portion.
Since for derivation of the first reference prediction mode value, only one neighbouring image portion is accessed, the coding process is simplified.
Moreover, if the prediction mode of the current image portion is equal to the first reference prediction mode value, the encoding process for the first reference prediction mode value is invoked. It is not necessary, either at the encoder side or at the decoder side, to derive the second reference prediction mode value.
In an embodiment if the prediction mode of the current image portion is not equal to the first reference prediction mode value, the prediction mode of the current portion is compared with a second reference prediction mode value.
The second reference prediction mode value may be derived based on the respective prediction modes of at least two reference image portions.
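The comparison flow described above can be sketched as follows; the process names and the lazily invoked `derive_second_ref` callable are hypothetical, introduced only for illustration:

```python
def select_encoding_process(current_mode, first_ref, derive_second_ref):
    """Choose how to signal the prediction mode of the current portion.

    The second reference value is derived lazily: when the current mode
    already matches the first reference value, it is never computed,
    which is the simplification described above.
    """
    if current_mode == first_ref:
        return "signal_first_reference"
    if current_mode == derive_second_ref():
        return "signal_second_reference"
    return "signal_explicit_mode"
```

In the first branch the callable is never invoked, so the cost of deriving the second reference value from two neighbouring portions is avoided at both encoder and decoder.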
In an embodiment, the single reference image portion comprises a neighbouring image portion, for example the left neighbouring image portion of the current image portion.
In an embodiment, in the case where the prediction mode value of the single reference image portion corresponds to the further predefined prediction mode value, the first reference prediction mode value is set to a second predefined prediction mode value; otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.
In an embodiment in the case where the further predefined prediction mode value is the planar mode and the prediction mode value of the single reference image portion corresponds to a planar prediction mode, the first reference prediction mode value is set to a DC prediction mode value (i.e. the second predefined prediction mode value is a DC prediction mode value), otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.
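A minimal sketch of this substitution rule, assuming (hypothetically) that the Planar and DC modes are numbered 0 and 1; the actual mode numbering is defined by the codec:

```python
PLANAR = 0  # assumed value of the further predefined (Planar) mode
DC = 1      # assumed value of the second predefined (DC) mode

def first_reference_mode(single_reference_mode):
    """Derive the first reference prediction mode value from the single
    (e.g. left) reference image portion: substitute DC when the
    neighbour itself uses Planar, otherwise reuse its mode value."""
    if single_reference_mode == PLANAR:
        return DC
    return single_reference_mode
```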
In an embodiment, the two reference image portions comprise a first neighbouring image portion and a second neighbouring image portion of the current image portion. The first neighbouring image portion may be the left neighbouring image portion, and the second neighbouring image portion may be the top neighbouring image portion.
In an embodiment, in the case where the prediction mode value of the second neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, the second reference prediction mode value is set to a prediction mode value corresponding to an angular direction adjacent, for example superior, to the angular direction of the first neighbouring image portion, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
In an embodiment, in the case where the prediction mode value of the first neighbouring image portion corresponds to a non directional prediction mode, the second reference prediction mode value is set to a third predefined prediction mode value, for example a mode value corresponding to a vertical prediction mode.
In an embodiment, in the case where the prediction mode value of the second neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.
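The fallback rule just described can be sketched as follows; the mode numbering (Planar = 0) and the choice of DC = 1 as the fourth predefined value are assumptions for illustration:

```python
PLANAR = 0  # assumed Planar mode value
DC = 1      # assumed fourth predefined (DC) mode value

def second_reference_mode(second_neighbour_mode, first_reference_value):
    """Derive the second reference prediction mode value from the second
    (e.g. top) neighbour: fall back to the fourth predefined value when
    the top neighbour's mode duplicates Planar or the first reference
    value, otherwise reuse the top neighbour's mode."""
    if second_neighbour_mode in (PLANAR, first_reference_value):
        return DC
    return second_neighbour_mode
```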
In an embodiment, the fourth predefined prediction mode value corresponds to a DC prediction mode.
In an embodiment, each prediction mode value comprises an angular index representative of the angular direction of the corresponding prediction mode; the angular index of the current image portion is compared with first and second reference angular indexes to determine the encoding process; and in the case where the angular index of the current image portion is different from the reference angular indexes, the determined encoding process comprises encoding the angular index of the current image portion.
In an embodiment, angular indexes with even numbered values correspond to prediction modes supported by image portions of a specific size, for example of 4x4 pixels.
In an embodiment, non-directional prediction modes are attributed angular index values greater than the angular index values of directional prediction modes.
In an embodiment, a DC prediction mode has an even numbered angular index. In an embodiment, a planar prediction mode has an odd angular index. In an embodiment, a planar prediction mode has an odd angular index of 33.
In an embodiment, in the case where the prediction mode of the reference image portion is not supported by the image portion to be encoded, the angular index of the current image portion is set to the closest even number to the angular index of the reference image portion.
In an embodiment, in the case where all the authorized angular indexes of the reference image portion, except for a further predefined prediction mode such as for example the planar mode, are even, the angular index is divided by two or, equivalently, right-shifted by one bit, prior to encoding.
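These two index manipulations can be sketched as follows; the even-only support set and the downward tie-break for "closest even" are assumptions made for illustration:

```python
def snap_to_supported(angular_index):
    """Snap an angular index to the even-only index set supported by
    the current portion (e.g. 4x4 blocks). For an odd index the two
    even neighbours are equally close; rounding down is an assumed
    tie-break."""
    return angular_index if angular_index % 2 == 0 else angular_index - 1

def compact_for_encoding(angular_index):
    """When all authorized indexes are even, halve the index
    (right-shift by one bit) so that fewer bits are needed to
    encode it."""
    return angular_index >> 1
```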
A further aspect of the invention provides an encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the encoder comprising: means for deriving a first reference prediction mode value based on the prediction mode of a single reference image portion; and means for comparing the prediction mode value of the current image portion with at least the first reference prediction mode value in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information of the current image portion.
The encoder may further comprise means for deriving a second reference prediction mode value based on the respective prediction modes of at least two reference image portions, and the means for comparing may be configured to further compare the prediction mode value of the current image portion with the second reference prediction mode value.
A further aspect of the invention provides a method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, whether the prediction mode value of the current image portion corresponds to reference prediction mode values; wherein in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a first reference prediction mode value, the prediction mode value of the current image portion is derived from the prediction mode value of a single reference image portion, and in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a second reference prediction mode value the prediction mode value of the current portion is derived from the prediction mode values of at least two reference image portions, otherwise it is determined that the codeword comprises data representative of the prediction mode value of the current image portion.

Another aspect of the invention provides a decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion; determining means for determining, based on the codeword, whether the prediction mode value of the current image portion corresponds to reference prediction mode values; deriving means for deriving the prediction mode value wherein the deriving means is 
configured to derive the prediction mode value of the current image portion from the prediction mode value of a single reference image portion in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a first reference prediction mode value; to derive the prediction mode value of the current portion from the prediction mode values of at least two reference image portions in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a second reference prediction mode value; and to derive the prediction mode value from the codeword in the case where it is determined that the codeword comprises data representative of the prediction mode value of the current image portion.
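The three decoding branches can be sketched as below; the codeword representation (a branch tag plus an optional explicit value) is a hypothetical stand-in for the actual bitstream syntax:

```python
def derive_mode(codeword, first_ref, second_ref):
    """Map a parsed codeword onto a prediction mode value.

    branch 'first':    mode comes from the single reference portion;
    branch 'second':   mode comes from at least two reference portions;
    branch 'explicit': the codeword itself carries the mode value.
    """
    branch, value = codeword
    if branch == "first":
        return first_ref
    if branch == "second":
        return second_ref
    return value  # explicit mode value carried in the codeword
```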
According to a third aspect of the invention there is provided a method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, the method comprising: representing each prediction mode by an angular index representative of the angular direction of the corresponding prediction mode, comparing the angular index of the current image portion with reference angular indexes in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; wherein in the case where the angular index of the current image portion is different from the reference angular indexes, the determined encoding process comprises encoding the angular index of the current image portion.
In an embodiment, angular indexes with even numbered values correspond to prediction modes supported by image portions of predetermined size and/or shape, for example square blocks of 4x4 pixels.
In an embodiment, non-directional prediction modes are attributed angular index values greater than the angular index values of directional prediction modes.
In an embodiment, a DC prediction mode has an even numbered angular index.
In an embodiment, a planar prediction mode has an odd angular index.
In an embodiment, a planar prediction mode has an angular index of 33.
In an embodiment, in the case where the prediction mode of the reference image portion is not supported by the image portion to be encoded, the angular index of the current image portion is set to the closest even numbered angular index to the angular index value of the reference image portion.
In an embodiment, in the case where all the authorized angular indexes of the reference image portion, except for a further predefined prediction mode such as the planar mode, are even, the angular index is divided by two or, equivalently, right-shifted by one bit, prior to encoding.
A further aspect of the invention relates to an encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, wherein each prediction mode is represented by an angular index representative of the angular direction of the corresponding prediction mode; the encoder comprising comparison means for comparing the angular index of the current image portion with reference angular indexes in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; encoding means for encoding the angular index of the current image portion in the case where the angular index of the current image portion is different from the reference angular indexes.
A further aspect of the invention provides a method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the method comprising: receiving a codeword related to the prediction mode of the current image portion; wherein in the case where the angular index value of the current image portion is different from reference angular index values derived from one or more reference image portions, the codeword is an angular index value of the current image portion and the method further comprises decoding the angular index value of the current image portion.
Another aspect of the invention provides a decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion; wherein in the case where the angular index value of the current image portion is different from reference angular index values derived from one or more reference image portions, the codeword is an angular index value of the current image portion; and decoding means for decoding the angular index value of the current image portion.
According to another aspect of the present invention there is provided a method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding the current image portion using the determined prediction mode wherein in the case where the prediction mode value of the current image portion is different from each of a plurality of reference prediction mode values, one of which is a Planar mode value, an adjustment operation is performed on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value except for the Planar mode value.
Because the Planar mode value is the highest ranked mode value, it is known a priori that the prediction mode value of the current image portion will not be lower than the Planar mode value, so there is no need to compare the prediction mode value of the current image portion with a reference mode (most probable mode or MPM) equal to the Planar mode value. Thus, processing time can be saved.
According to another aspect of the present invention there is provided a method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding the current image portion using the determined prediction mode; wherein in the case where the prediction mode value of the current image portion is different from one or more reference prediction mode values and is different from a further predefined mode value, which is a Planar mode value, an adjustment operation is performed on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value but not with the further predefined mode value.
This provides the same advantage as the preceding aspect of the invention but in this case the Planar mode is a further predefined mode value, rather than a reference mode value (MPM).
Preferably, the adjustment operation involves always incrementing the prediction mode value to take account of the Planar mode value and selectively incrementing the prediction mode value depending on the comparison results with the reference prediction mode value(s).
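A sketch of this adjustment, assuming Planar is the lowest-numbered mode (0 here) so that an unconditional increment replaces the comparison with it; the numbering is an assumption for illustration:

```python
def adjust_parsed_mode(parsed_value, reference_modes, planar=0):
    """Reconstruct the actual prediction mode value from the parsed
    'remaining mode' value.

    The value is always incremented once to step over the Planar mode,
    then selectively incremented for each remaining reference mode
    (MPM) it has reached; Planar itself is excluded from the
    comparisons, saving one comparison per block.
    """
    mode = parsed_value + 1  # always account for the Planar mode value
    for ref in sorted(m for m in reference_modes if m != planar):
        if mode >= ref:
            mode += 1
    return mode
```

For example, with reference modes {0, 5}, the parsed values 0, 1, 2, 3, 4 map to the remaining modes 1, 2, 3, 4, 6, skipping both Planar (0) and the MPM 5.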
According to another aspect of the present invention there is provided a device for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the device comprising: means for receiving a codeword related to the prediction mode of the current image portion; means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; means for decoding the current image portion using the determined prediction mode; and means operable, in the case where the prediction mode value of the current image portion is different from each of a plurality of reference prediction mode values, one of which is a Planar mode value, to perform an adjustment operation on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value except for the Planar mode value.
According to another aspect of the present invention there is provided a device for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the device comprising: means for receiving a codeword related to the prediction mode of the current image portion; means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; means for decoding the current image portion using the determined prediction mode; and means operable, in the case where the prediction mode value of the current image portion is different from one or more reference prediction mode values and is different from a further predefined mode value, which is a Planar mode value, to perform an adjustment operation on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value but not with the further predefined mode value.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system". Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings, in which:
Figure 1 schematically illustrates an example of a HEVC coding structure;
Figures 2A and 2B are schematic diagrams for use in explaining how an intra prediction block is derived in a known HEVC design;
Figure 3 schematically illustrates intra prediction modes in a known HEVC design;
Figure 4 is a flowchart for use in explaining intra mode coding in a known HEVC design;
Figure 5 is a flowchart for use in explaining an intra mode decoding tree in a known HEVC design;
Figure 6 is a schematic block diagram illustrating a data communication system in which one or more embodiments of the invention may be implemented;
Figure 7 is a schematic block diagram illustrating a processing device configured to implement at least one embodiment of the present invention;
Figure 8 is a schematic block diagram of an encoder according to at least one embodiment of the invention;
Figure 9 is a schematic block diagram of a decoder according to at least one embodiment of the invention;
Figure 10 is a flow chart illustrating steps of a method according to a first embodiment of the invention for encoding mode information representing a prediction mode;
Figure 11A is a flow chart illustrating steps of a method according to a second embodiment of the invention for encoding mode information representing a prediction mode;
Figure 11B is a flow chart illustrating steps of a method according to the second embodiment of the invention for encoding mode information representing a prediction mode;
Figure 12 is a flow chart illustrating steps of a method according to a third embodiment of the invention for encoding mode information representing a prediction mode;
Figure 13 schematically illustrates numbering of prediction modes according to the third embodiment of the invention;
Figure 14 is a flow chart illustrating steps of a method according to a further embodiment of the invention for decoding mode information representing a prediction mode;
Figures 15a and 15b are flow charts illustrating steps of a method according to an embodiment of the invention for processing of the 'Most Probable Modes', MPMs;
Figures 16a and 16b are flow charts illustrating steps of a method according to a further embodiment of the invention for processing of the 'Most Probable Modes', MPMs;
Figures 17a and 17b are flow charts illustrating steps of a method according to a further embodiment of the invention for processing of the 'Most Probable Modes', MPMs;
Figure 18 is a flow chart illustrating steps of a decoding method according to the invention; and
Figure 19 is a schematic block diagram of parts of an encoder according to an embodiment of the invention.
Figure 6 illustrates a data communication system in which one or more embodiments of the invention may be implemented. The data communication system comprises a sending device, in this case a server 601, which is operable to transmit data packets of a data stream to a receiving device, in this case a client terminal 602, via a data communication network 600. The data communication network 600 may be a Wide Area Network (WAN) or a Local Area Network (LAN). Such a network may be for example a wireless network (WiFi / 802.11a, b or g), an Ethernet network, an Internet network or a mixed network composed of several different networks. In a particular embodiment of the invention the data communication system may be, for example, a digital television broadcast system in which the server 601 sends the same data content to multiple clients.
The data stream 604 provided by the server 601 may be composed of multimedia data representing video and audio data. Audio and video data streams may, in some embodiments, be captured by the server 601 using a microphone and a camera respectively. In some embodiments data streams may be stored on the server 601 or received by the server 601 from another data provider. The video and audio streams are coded by an encoder of the server 601 in particular for them to be compressed for transmission.
In order to obtain a better ratio of the quality of transmitted data to quantity of transmitted data, the compression of the video data may be of motion compensation type, for example in accordance with the HEVC format or H.264/AVC format.
A decoder of the client 602 decodes the data stream received via the network 600. The reconstructed images may be displayed by a display device and received audio data may be reproduced by a loudspeaker.
Figure 7 schematically illustrates a processing device 700 configured to implement at least one embodiment of the present invention. The processing device 700 may be a device such as a micro-computer, a workstation or a light portable device such as a smart phone or portable computer. The device 700 comprises a communication bus 713 connected to:
-a central processing unit 711, such as a microprocessor, denoted CPU;
-a read only memory 707, denoted ROM, for storing computer programs for implementing embodiments of the invention;
-a random access memory 712, denoted RAM, which may be used for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to embodiments of the invention; and
-a communication interface 702 connected to a communication network 703 over which data to be processed is transmitted or received.
Optionally, the apparatus 700 may also include the following components:
-a data storage means 704 such as a hard disk, for storing computer programs for implementing methods of one or more embodiments of the invention and data used or produced during the implementation of one or more embodiments of the invention;
-a disk drive 705 for a disk 706, the disk drive being adapted to read data from the disk 706 or to write data onto said disk;
-a screen 709 for displaying data and/or serving as a graphical interface with the user, by means of a keyboard 710 or any other pointing means.
The apparatus 700 can be connected to various peripherals, such as for example a digital camera 720 or a microphone 708, each being connected to an input/output card (not shown) so as to supply multimedia data to the apparatus 700.
The communication bus 713 provides communication and interoperability between the various elements included in the apparatus 700 or connected to it. The representation of the communication bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the apparatus 700 directly or by means of another element of the apparatus 700.
The disk 706 can be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to the invention to be implemented.
The executable code may be stored either in read only memory 707, on the hard disk 704 or on a removable digital medium such as for example a disk 706 as described previously. Moreover in some embodiments, the executable code of the programs can be received by means of the communication network 703, via the interface 702, in order to be stored in one of the storage means of the apparatus 700 before being executed, such as the hard disk 704.
The central processing unit 711 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, instructions that are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 704 or in the read only memory 707, are transferred into the random access memory 712, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
Figure 8 illustrates a block diagram of an encoder according to at least one embodiment of the invention. The encoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 711 of device 700, at least one corresponding step of a method implementing at least one embodiment of encoding an image of a sequence of images according to one or more embodiments of the invention.
An original sequence of digital images iO to in 801 is received as an input by the encoder 80. Each digital image is represented by a set of samples, known as pixels. A bitstream 810 is output by the encoder 80 after implementation of the encoding process.
The bitstream 810 comprises a plurality of encoding units or slices, each slice comprising a slice header for transmitting encoding values of encoding parameters used to encode the slice, such as prediction mode information, and a slice body, comprising encoded video data.
The input digital images i0 to in 801 are divided into blocks of pixels by module 802. The blocks correspond to image portions and may be of variable sizes (e.g. 4x4, 8x8, 16x16, 32x32 pixels). A coding mode is selected for each input block or coding unit. Two families of coding modes are provided: coding modes based on spatial prediction coding (Intra prediction), and coding modes based on temporal prediction (Inter coding, Merge, SKIP). The possible coding modes are tested.
Module 803 implements Intra prediction, in which a given block to be encoded is predicted by a predictor computed from pixels of the neighbourhood of said block to be encoded. An indication of the selected Intra predictor and the difference between the given block and its predictor is encoded to provide a residual if the Intra coding is selected.
Temporal prediction is implemented by motion estimation module 804 and motion compensation module 805. Firstly a reference image from among a set of reference images 816 is selected, and a portion of the reference image, also called reference area or image portion, which is the closest area to the given block to be encoded, is selected by the motion estimation module 804.
Motion compensation module 805 then predicts the block to be encoded using the selected area. The difference between the selected reference area and the given block, also called a residual block, is computed by the motion compensation module 805. The selected reference area is indicated by a motion vector.
Thus in both cases (spatial and temporal prediction), a residual is computed by subtracting the prediction from the original block.
In the INTRA prediction implemented by module 803, a prediction direction is encoded. In the temporal prediction, at least one motion vector is encoded. Information relative to the motion vector and the residual block is encoded if the inter prediction is selected. The encoding of mode information representing a prediction mode will be explained in more detail hereafter with reference to any one of Figures 10 to 14.
To further reduce the bitrate, assuming that motion is homogeneous, the motion vector is encoded by difference with respect to a motion vector predictor.
A set of motion vector predictors (motion information predictors) is obtained from the motion vector field 818 by a motion vector prediction and coding module 817.
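The differential coding of motion vectors described above can be sketched as follows; the tuple representation of a vector is a simplification for illustration:

```python
def encode_mv_difference(mv, predictor):
    """Encoder side: only the difference between the motion vector and
    its predictor is signalled in the bitstream."""
    return (mv[0] - predictor[0], mv[1] - predictor[1])

def decode_mv(mvd, predictor):
    """Decoder side: the motion vector is rebuilt by adding the
    signalled difference back to the same predictor."""
    return (mvd[0] + predictor[0], mvd[1] + predictor[1])
```

Under the homogeneous-motion assumption the difference is typically small, so it costs fewer bits to encode than the vector itself.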
The encoder 80 further comprises a selection module 806 for selection of the coding mode by applying an encoding cost criterion, such as a rate-distortion criterion.
In order to further reduce redundancies a transform is applied by transform module 807 to the residual block; the transformed data obtained is then quantized by quantization module 808 and entropy encoded by entropy encoding module 809. Finally, the encoded residual block of the current block being encoded is inserted into the bitstream 810, along with the information relative to the predictor used, such as the index of the selected motion vector predictor. For the blocks encoded in 'SKIP' mode, only an index to the predictor is encoded in the bitstream, without any residual block.
The encoder 80 also performs decoding of the encoded image in order to produce a reference image for the motion estimation of the subsequent images. This enables the encoder and the decoder receiving the bitstream to have the same reference frames. The inverse quantization module 811 performs inverse quantization of the quantized data, followed by an inverse transform by reverse transform module 812. The reverse intra prediction module 813 uses the prediction information to determine which predictor to use for a given block and the reverse motion compensation module 814 actually adds the residual obtained by module 812 to the reference area obtained from the set of reference images 816. Optionally, a deblocking filter 815 is applied to remove the blocking effects and enhance the visual quality of the decoded image. The same deblocking filter is applied at the decoder, so that, if there is no transmission loss, the encoder and the decoder apply the same processing.
Figure 9 illustrates a block diagram of a decoder according to at least one embodiment of the invention. The decoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 311 of device 300, a corresponding step of a decoding method.
The decoder 90 receives a bitstream 901 comprising encoding units, each one being composed of a header containing information on encoding parameters and a body containing the encoded video data. As explained with respect to Figure 8, the encoded video data is entropy encoded, and the motion vector predictors' indexes are encoded, for a given block, on a predefined number of bits. The received encoded video data is entropy decoded by module 902. The residual data are then dequantized by module 903 and then a reverse transform is applied by module 904 to obtain pixel values.
The mode data indicating the coding mode are also entropy decoded and based on the mode, an INTRA type decoding or an INTER type decoding is performed on the encoded blocks of image data.
In the case of INTRA mode, an INTRA predictor is determined by intra reverse prediction module 905 based on the intra prediction mode specified in the bitstream.
If the mode is INTER, the motion prediction information is extracted from the bitstream so as to find the reference area used by the encoder. The motion prediction information is composed of the reference frame index and the motion vector residual. The motion vector predictor is added to the motion vector residual in order to obtain the motion vector by motion vector decoding module 910.
Motion vector decoding module 910 applies motion vector decoding for each current block encoded by motion prediction. Once an index of the motion vector predictor for the current block has been obtained, the actual value of the motion vector associated with the current block can be decoded and used to apply reverse motion compensation by module 906. The reference image portion indicated by the decoded motion vector is extracted from a reference image 908 to apply the reverse motion compensation 906. The motion vector field data 911 is updated with the decoded motion vector in order to be used for the inverse prediction of subsequent decoded motion vectors.
Finally, a decoded block is obtained. A deblocking filter 907 is applied, similarly to the deblocking filter 815 applied at the encoder. A decoded video signal 909 is finally provided by the decoder 90.
Figure 10 is a flow chart illustrating steps of a method according to a first embodiment of the invention for encoding mode information representing a prediction mode for encoding a current coding unit with respect to reference coding units by an intra coding process.
In an initial step S1001 the intra prediction modes of the neighboring Top and Left CUs of the current CU to be encoded are identified. In step S1002, two reference prediction mode values, referred to as 'Most Probable Modes' (MPMs), are derived from the identified intra prediction modes. In step S1003 the prediction mode value of the current coding unit is then compared to the two MPMs. If the prediction mode value of the current coding unit is equal to either of the MPMs then in step S1005 a first coding process (process 1) is applied.
This first coding process involves coding a flag signaling that the mode of the current block is equal to one of the MPMs, and then coding the index of the MPM concerned.
If in step S1003 it is determined that the prediction mode value of the current coding block is not equal to one of the two MPMs, then in step S1004 the prediction mode value of the current coding block is compared with a predefined prediction mode value in order to determine whether or not the prediction mode value is equal to the predefined prediction mode value. If it is determined that the prediction mode value of the current coding block is equal to the predefined prediction mode value then in step S1006 a second coding process (process 2) is applied.
The second coding process involves coding a flag signaling that the prediction mode of the current block is equal to the predefined prediction mode value.
Otherwise, if it is determined in step S1004 that the prediction mode value of the current coding block is not equal to the predefined prediction mode value then in step S1007 a third coding process (process 3) is applied.
The third coding process involves coding the actual prediction mode value of the current block.
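The three branches above can be sketched as follows. This is an illustrative sketch, not the exact bitstream syntax of the embodiment: the mode numbering (planar assumed to be mode 33, as in a later embodiment of the document) and the `(process, payload)` representation are assumptions made for clarity.

```python
PLANAR = 33  # predefined preferred mode in this sketch (assumed numbering)

def encode_mode(current_mode, mpm0, mpm1, predefined_mode=PLANAR):
    """Return (process, payload) describing how the mode would be signalled.

    process 1: current mode equals one of the two MPMs -> flag + MPM index
    process 2: current mode equals the predefined mode -> short flag only
    process 3: otherwise -> the actual mode value is coded
    """
    if current_mode == mpm0:
        return (1, 0)            # MPM flag set, index of first MPM
    if current_mode == mpm1:
        return (1, 1)            # MPM flag set, index of second MPM
    if current_mode == predefined_mode:
        return (2, None)         # single flag, no further data needed
    return (3, current_mode)     # remaining-mode value coded explicitly
```

The decoder mirrors these tests: a short flag identifies process 2, so the predefined (statistically likely) mode costs almost nothing to signal.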
Thus, in the case where the prediction mode value of the current coding block is equal to the predefined prediction mode, only a short flag code needs to be transmitted to the decoder to inform the decoder of the prediction mode of the current coding block, thereby leading to a reduction in transmission and processing overhead. Once the decoder has decoded the codeword indicative of the prediction mode, it is directly informed of the mode value without any additional processing. Decoding processing is therefore simplified.
In order to use fewer bits for signalling the predefined mode, a coding flag may be set to 0 to indicate that the prediction mode corresponds to the predefined prediction mode.
The predefined prediction mode may be selected such that it corresponds to a prediction mode which is statistically more likely to occur. For example, the predefined prediction mode may correspond to a planar prediction mode.
In alternative embodiments, the predefined prediction mode value may be set to a mode value corresponding to a horizontal prediction mode, a vertical prediction mode or a DC prediction mode.
In some embodiments the predefined prediction mode may be set according to the context of the application. For example, the predefined prediction mode value may be dependent upon the content of the image being encoded. In some particular embodiments the predefined prediction mode value may be adaptively derived based on mode probabilities representative of the probability of occurrence of respective prediction modes, said mode probabilities being regularly computed.
The predefined prediction mode value can be signalled in the bitstream, at image level, or at image portion level.
In the case of a decoder, such as that of Figure 9, receiving the codeword resulting from the method of Figure 10, the codeword being indicative of the prediction mode of the current image portion, the following steps are implemented.
In an initial step the decoder receives the codeword related to the prediction mode of the current image portion and then determines, based on the codeword, the prediction mode from among the potential prediction modes for decoding the current image portion.
In the case where the prediction mode value of the current coding unit is different from reference prediction mode values MPMO and MPM1 derived from prediction modes of the neighbouring top and left coding units and is equal to the predefined prediction mode value, the codeword is a flag indicative that the prediction mode is the predefined prediction mode (process 2 of Figure 10).
The decoder in this case decodes the current coding unit using the predefined prediction mode e.g. a planar prediction mode.
Otherwise, in the case where the prediction mode value of the current coding unit is different from reference prediction mode values MPMO and MPM1 derived from prediction modes of the neighbouring top and left coding units and is also different from the predefined prediction mode value, the codeword is a coded value derived from the prediction mode value of the current coding unit (process 3 of Figure 10) and from the reference prediction mode values, and the decoding step comprises decoding the current coding unit using the prediction mode represented by the prediction mode value.
In the case where the codeword indicates that the prediction mode value of the current coding unit is equal to the first reference prediction mode value MPMO (process 1 of Figure 10), the decoder decodes the current coding unit using the prediction mode associated with MPMO.
In the case where the codeword indicates that the prediction mode value of the current coding unit is equal to the second reference prediction mode value MPM1 (process 1 of Figure 10), the decoder decodes the current coding unit using the prediction mode associated with MPM1.
Figure 11 is a flow chart illustrating steps of a method according to a second embodiment of the invention for encoding mode information representing a prediction mode for encoding a current coding unit with respect to reference coding units by an intra coding process.
In an initial step S1101 a first reference prediction mode value, referred to as MPMO, is derived from the intra prediction mode of a single neighbouring coding unit of the current coding unit to be encoded. In step S1102 a second reference prediction mode value, referred to as MPM1, is derived from intra prediction modes of the top and left neighbouring coding units of the current coding unit to be encoded.
In a particular embodiment, the left coding unit is used as the single neighbouring coding unit for the derivation of MPMO, because it is generally more efficient in practical hardware or software implementations, in terms of access bandwidth, to access left data than top data.
In step S1103 the prediction mode value of the current coding unit is then compared to the two reference prediction mode values MPMO and MPM1. If the prediction mode value of the current coding unit is equal to either of the MPMs then in step S1104 a first coding process (process 1) is applied.
This first coding process involves coding a flag signaling that the mode of the current block is equal to one of the MPMs, and then, coding the index of the MPM concerned.
If in step S1103 it is determined that the prediction mode of the current block is not equal to one of the two MPMs, then in step S1105 a second coding process (process 2) is applied.
The second coding process involves coding the actual prediction mode value of the current block.
Figure 11B is a flow chart illustrating steps of another possible implementation method according to the second embodiment of the invention for encoding mode information representing a prediction mode for encoding a current coding unit with respect to reference coding units by an intra coding process.
In an initial step S1501 a first reference prediction mode value, referred to as MPMO, is derived from the intra prediction mode of a single neighbouring coding unit of the current coding unit to be encoded. In step S1502 the prediction mode value of the current coding unit is then compared to the first reference prediction mode value MPMO. If the prediction mode value of the current coding unit is equal to MPMO then in step S1503 a first coding process (process 1a) is applied.
If in step S1502 it is determined that the prediction mode of the current block is not equal to MPMO, then in step S1504 a second reference prediction mode value, referred to as MPM1, is derived from intra prediction modes of the top and left neighbouring coding units of the current coding unit to be encoded.
In step S1505 the prediction mode value of the current coding unit is then compared to the second reference prediction mode value MPM1. If the prediction mode value of the current coding unit is equal to MPM1 then in step S1506 a second coding process (process 1b) is applied.
If in step S1505 it is determined that the prediction mode of the current block is not equal to MPM1, then a third coding process (process 2) is applied.
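The sequential flow of Figure 11B can be sketched as follows; the key point it illustrates is that MPM1 (which needs the top CU) is only derived when the comparison against MPMO fails. The derivation steps S1501 and S1504 are abstracted as callables here, which is a presentation choice, not part of the described method.

```python
def encode_mode_sequential(current_mode, derive_mpm0, derive_mpm1):
    """Sketch of the Figure 11B flow: MPM1 is only derived when needed.

    derive_mpm0 / derive_mpm1 stand in for derivation steps S1501 and
    S1504; their exact behaviour is described separately in the document.
    """
    mpm0 = derive_mpm0()                 # uses the left CU only (S1501)
    if current_mode == mpm0:
        return "process 1a"              # no access to the top CU required
    mpm1 = derive_mpm1()                 # uses both top and left CUs (S1504)
    if current_mode == mpm1:
        return "process 1b"
    return "process 2"                   # actual mode value is coded
```

Since MPMO matches frequently in practice, the common path touches only one neighbouring coding unit.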
A method for deriving the first and second reference prediction mode values MPMO and MPM1 according to an embodiment of the invention will now be described.
In determining first reference mode prediction value MPMO, in the case where the neighbouring left coding unit of the current coding unit has a first predefined mode value, MPMO is set to a mode prediction value corresponding to a second predefined mode value. Otherwise MPMO is set to a mode prediction value corresponding to that of the mode prediction value of the neighbouring left coding unit.
In an embodiment the first predefined mode value is planar prediction mode and the second predefined mode value is DC prediction mode.
In determining second reference prediction value MPM1, in the case where the neighbouring top coding unit of the current coding unit has the first predefined mode value or the prediction mode value of the neighbouring top coding unit is equal to that of the neighbouring left coding unit, the second reference prediction mode value MPM1 is set to a prediction mode value corresponding to an angular direction adjacent to the angular direction of the left neighbouring image portion; otherwise the second reference prediction mode value MPM1 is set to the prediction mode value of the top neighbouring image portion.
In one particular embodiment the angular direction adjacent to the angular direction of the left neighbouring image portion is the superior angular direction of the left neighbouring image portion.
In the case where the prediction mode value of the left neighbouring image portion corresponds to a non directional prediction mode then the second reference prediction mode value MPM1 is set to a prediction mode value corresponding to a third predefined mode value.
In one particular embodiment, the third predefined mode value is set to vertical prediction mode.
In one particular embodiment, in the case where the prediction mode value of the top neighbouring image portion corresponds to the first predefined mode value or to the prediction mode value of the left neighbouring image portion of the current image portion, then the second reference prediction mode value MPM1 is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value MPM1 is set to the prediction mode value of the top neighbouring image portion. For example the fourth predefined prediction mode value may correspond to a DC prediction mode.
The method is summarised as follows:
- For MPMO (no access to top CU):
  o if (left == Planar), MPMO = DC
  o else MPMO = left
- For MPM1:
  o if (top == Planar or top == left), MPM1 = left++
  o else MPM1 = top
* In one embodiment left++ is the nearest authorized superior angular direction to the direction of left. If left is not an angular mode (that is, DC mode), left++ is set to vertical mode.
* In another embodiment, a given predefined mode is used instead of the left++ mode (for instance DC mode).
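Assuming, for illustration, the angular-index numbering introduced later with reference to Figure 13 (angular modes 0 to 32, Planar as 33, DC as 34) and an assumed index for the vertical direction, the summarized rules might be implemented as:

```python
# Sketch of the MPM derivation summarized above. The concrete numbering
# (PLANAR, DC, VERTICAL, angular range) is an assumption for illustration.
PLANAR, DC = 33, 34
VERTICAL = 16        # assumed angular index for the vertical direction
ANGULAR_MAX = 32     # angular modes assumed to span indexes 0..32

def derive_mpm0(left):
    """No access to the top CU is needed to derive MPMO."""
    return DC if left == PLANAR else left

def derive_mpm1(left, top):
    if top == PLANAR or top == left:
        # left++: nearest superior angular direction (clamped here as a
        # simplification), or vertical if left is not angular (DC).
        if left <= ANGULAR_MAX:
            return min(left + 1, ANGULAR_MAX)
        return VERTICAL
    return top
```

Note that the common path (the current mode equals MPMO) never reads the top coding unit, which is the access-bandwidth advantage claimed above.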
While in this embodiment the mode prediction value of the left neighbouring coding unit is used to determine the first reference prediction mode value it will be appreciated that in alternative embodiments another coding unit may be used such as the neighbouring top coding unit.
Since the mode derivation process accesses only one neighbouring coding unit to derive the first reference prediction mode value, the coding and corresponding decoding processes are simplified.
In the case of a decoder, such as that of Figure 9, receiving the codeword resulting from the method of Figure 11, the codeword being indicative of the prediction mode of the current image portion, the following steps are implemented.
In an initial step the decoder receives the codeword related to the prediction mode of the current image portion and then determines, based on the codeword, the prediction mode from among the potential prediction modes for decoding the current image portion.
The decoder is configured to determine based on the codeword, whether the prediction mode value of the current coding unit corresponds to reference prediction mode values MPMO or MPM1.
In the case where the codeword indicates that the prediction mode value of the current coding unit is equal to the first reference prediction mode value MPMO (process 1 of Figure 11), the decoder decodes the current coding unit using the prediction mode associated with MPMO. In this case only the derivation process of MPMO needs to be applied, which means access to only one single neighboring image portion.
In the case where the codeword indicates that the prediction mode value of the current coding unit is equal to the second reference prediction mode value MPM1 (process 1 of Figure 11), the decoder decodes the current coding unit using the prediction mode associated with MPM1.
Otherwise the decoder determines that the codeword is coded data representative of the prediction mode value of the current coding unit (process 2 of Figure 11) and decodes the coding unit using the corresponding prediction mode.
Figure 12 is a flow chart illustrating steps of a method according to a third embodiment of the invention for encoding mode information representing a prediction mode for encoding a current coding unit with respect to reference coding units by an intra coding process.
In an initial step S1201 prediction mode information is represented by angular indexes representative of the angular direction of the corresponding prediction mode. In step S1202 the angular indexes of the prediction modes of the neighboring Top and Left CUs of the current CU to be encoded are identified. In step S1203, two reference angular index values, referred to as 'Most Probable Modes' (MPMs), are derived from the identified angular indexes.
In step S1204 the angular index of the current coding unit is then compared to the two reference angular indexes. If the angular index of the current coding unit is equal to either of the reference angular indexes then in step S1205 a first coding process (process 1) is applied.
This first coding process involves coding a flag signaling that the mode of the current block is equal to one of the MPMs, and then, coding the index of the MPM concerned.
Otherwise, if it is determined in step S1204 that the angular index of the prediction mode of the current coding block is not equal to either of the reference angular indexes, then in step S1206 a second coding process (process 2) is applied.
The second coding process involves coding the angular index of the prediction mode of the current block.
Replacing the current prediction mode numbering by a direct angular index numbering enables a look-up table associating prediction mode values with angular indexes to be omitted.
As illustrated in Figure 13, in this embodiment of the invention the prediction modes are represented as angular indexes ordered in increasing order.
In Figure 13, the prediction modes represented by the shaded boxes correspond to prediction modes that are only supported for coding units which are larger than 4x4 pixels. Accordingly, it can be noted that angular modes supported by coding units of size 4x4 pixels are represented by even numbered angular index values.
Non-directional prediction modes such as planar prediction mode and DC prediction mode are attributed values 33 and 34, greater than the highest angular index of the directional modes. The DC prediction mode is attributed an even numbered value. It may be noted that DC and planar modes are also supported by coding units of size 4x4 pixels.
As a consequence in this particular embodiment of the invention:
- Modes 0 to 32 correspond to angular indexes 0 to 32
- Mode 33 corresponds to PLANAR prediction mode
- Mode 34 corresponds to DC prediction mode
Therefore, with the exception of the planar mode, all modes supported by 4x4 pixel sized CUs have even numbered angular index values. When a planar mode is considered as a preferred mode, as in the first embodiment of the invention, this enables further modifications and simplifications for the 4x4 CU case, as will be described in what follows.
Since 4x4 sized CUs only support prediction modes represented by even numbered values (except in the case of a planar prediction mode), the mapping process, invoked to map the mode of a neighbouring coding unit to the authorized modes of 4x4 coding units, can thus be improved as follows.
The new mapping process for attributing an angular index value includes the following operation: mapped_mode = (neighbor_mode >> 1) << 1 which means that the closest even numbered angular index to the angular index of the neighboring coding unit is used, in the case where the prediction mode of the neighbouring coding unit is not supported by the current coding unit.
Moreover, since 4x4 sized CUs only support prediction modes represented by even numbered values (except in the case of a planar prediction mode), in the case where process 2 applies and the angular index is coded, the angular index value is divided by 2 prior to being encoded as the remaining mode. At the decoder side, the decoded mode inversely has to be multiplied by 2 to obtain the actual 4x4 prediction mode.
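Under this even-index convention, the mapping and the halving of the remaining mode can be sketched as follows. The bitwise operations come directly from the text; the function names are illustrative only.

```python
def map_to_4x4_mode(neighbour_mode):
    """Snap a neighbour's angular index to the nearest (lower) even value,
    since 4x4 CUs only support even-numbered angular indexes
    (planar mode excepted)."""
    return (neighbour_mode >> 1) << 1

def code_remaining_4x4(angular_index):
    """Encoder side: halve the even angular index before coding it
    as the remaining mode, saving one bit."""
    return angular_index >> 1

def decode_remaining_4x4(coded_value):
    """Decoder side: multiply back by 2 to recover the actual 4x4 mode."""
    return coded_value << 1
```

The shift pair `(x >> 1) << 1` simply clears the least significant bit, i.e. rounds an odd index down to the adjacent even one.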
The use of angular indexes in place of the previous mode numbering provides a number of advantages:
* Simplification, by virtue of removing the need to use a look-up table associating mode numbers with angle indexes;
* simple derivation of mode 'left++', which simply involves incrementing by one the value of the left mode. In previous designs it was required first to access the angle index corresponding to the left mode, then to increase this angle index, then to identify to which mode the incremented index corresponded;
* Coding efficiency, by virtue of an improved straightforward mapping to 4x4 modes, more logical than the HEVC mapping to planar mode.
In the case of a decoder, such as that of Figure 9, receiving the codeword resulting from the method of Figure 12, the codeword being indicative of the prediction mode of the current image portion, the following steps are implemented.
In an initial step the decoder receives the codeword related to the prediction mode of the current image portion and then determines, based on the codeword, the prediction mode from among the potential prediction modes for decoding the current image portion.
The decoder is configured to determine based on the codeword, whether the prediction mode value of the current coding unit corresponds to reference prediction mode values MPMO or MPM1.
In the case where the codeword indicates that the angular index of the prediction mode of the current coding unit is equal to the first reference prediction mode value MPMO (process 1 of Figure 12), the decoder decodes the current coding unit using the prediction mode associated with the angular index of MPMO. MPMO is built by only accessing the left coding unit.
In the case where the codeword indicates that the angular index of the prediction mode of the current coding unit is equal to the second reference prediction mode value MPM1 (process 1 of Figure 12), the decoder decodes the current coding unit using the prediction mode associated with the angular index of MPM1.
Otherwise the decoder determines that the codeword is coded data representative of the angular index of the prediction mode of the current coding unit (process 2 of Figure 12) and decodes the coding unit using the corresponding prediction mode.
Figure 14 is a flow chart illustrating steps of a method according to a fourth embodiment of the invention for decoding mode information representing a prediction mode for encoding a current coding unit with respect to reference coding units by an intra coding process. This embodiment combines features of the previous embodiments.
In step S1401 a first syntax element "MPM_flag" is decoded. If this first syntax element indicates that the prediction mode of the current coding unit to be decoded is equal to one of the reference angular indexes, a second syntax element "mpm_idx" is decoded in step S1402. If this second syntax element indicates that the prediction mode of the current coding unit to be decoded is equal to the first reference angular index, this first reference angular index MPMO is derived in step S1407 from the angular index of the prediction mode of a neighbouring left coding unit of the current coding unit and the mode of the current coding unit to be decoded is set equal to MPMO in step S1407.
Otherwise, if the second syntax element "mpm_idx" indicates that the prediction mode of the current coding unit to be decoded is equal to the second reference angular index, this second reference angular index MPM1 is derived in step S1408 from angular indexes of neighbouring top and left coding units of the current coding unit and the mode of the current coding unit to be decoded is set equal to MPM1 in step S1408.
Otherwise, if the first syntax element "MPM_flag" indicates that the prediction mode of the current coding unit to be decoded is not equal to one of the reference angular indexes, a third syntax element "remaining_flag" is decoded in step S1403. If this third syntax element indicates that the prediction mode of the current coding unit to be decoded is equal to planar mode, the mode of the current coding unit to be decoded is set equal to planar mode in step S1405. Otherwise, if the third syntax element "remaining_flag" indicates that the prediction mode of the current coding unit to be decoded is not equal to planar mode, a fourth syntax element "remaining_mode" is decoded in step S1404. In a particular embodiment, in the case of 16 possible remaining mode values for 4x4 CUs the codeword is composed of 4 bits, while for other sized CUs the codeword is composed of 5 bits since 32 remaining mode values are possible. In step S1406, the final mode value has to be derived from the decoded remaining mode value. Step S1406 involves first deriving the two MPM values. Then the remaining mode value is incremented if it is larger than the MPM values. The mode of the current coding unit to be decoded is set equal to the resulting remaining mode value.
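The decoding flow of Figure 14 can be sketched as follows. The entropy decoder is abstracted as a `read(name)` callable and the MPM derivations as callables; the mode numbering (planar as 33) and the exact comparison used when re-inserting the MPM values are assumptions made for the sketch.

```python
def decode_mode(read, derive_mpm0, derive_mpm1, planar_mode=33):
    """Sketch of the Figure 14 decoding flow. `read(name)` stands in for
    entropy-decoding the named syntax element."""
    if read("MPM_flag"):
        if read("mpm_idx") == 0:
            return derive_mpm0()         # left CU only (step S1407)
        return derive_mpm1()             # top and left CUs (step S1408)
    if read("remaining_flag"):
        return planar_mode               # step S1405
    remaining = read("remaining_mode")   # step S1404
    # Step S1406: re-insert the MPM values that the encoder skipped when
    # numbering the remaining modes (the >= comparison is an assumption).
    for mpm in sorted((derive_mpm0(), derive_mpm1())):
        if remaining >= mpm:
            remaining += 1
    return remaining
```

For example, with MPMs 2 and 6, a decoded remaining value of 5 skips over both and yields final mode 7.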
In further embodiments of the method illustrated in Figure 14 the mode information may be represented by a mode prediction value instead of angular index. In other embodiments of the method of Figure 14, the first reference MPMO may be derived from the prediction modes of neighbouring top and left coding units instead of a single neighbouring coding unit, or from a single top neighbouring coding unit.
The following figures describe further embodiments for intra mode coding process. More particularly, these further embodiments focus on the processing of the Most Probable Modes', MPMs.
In the following embodiments (unless otherwise indicated), the pre-defined intra prediction modes are numbered as follows:
-the non-angular modes (the Planar and the DC prediction modes) are the two first modes, numbered "0" and "1"; preferably the Planar prediction mode is set to "0" and the DC prediction mode to "1"; however the invention is not limited to this case, and could also apply when Planar is set to 1 and DC to 0;
-the angular modes are then numbered from 2 to 2+N-1, where N is an integer corresponding to the number of angular modes (in the recent implementation N is equal to 33).
As a variation, the numbering can be adaptively modified, especially the modes set to the numbers "0" and "1". For instance, the prediction modes can be adaptively re-ordered according to their statistics. These statistics can be computed for instance from previous pictures, slices, or regions of the current picture. Another possibility is to pre-define different possible mode numberings and to signal in the stream which numbering is used among the different possible prediction mode numberings. The signalling can be made at sequence, picture or slice level.
Figure 15a describes an example of a new embodiment for the derivation process of the two first MPMs, MPMO and MPM1. Figure 15b is a variation of the embodiment illustrated in figure 15a.
In this embodiment, MPMO and MPM1 are considered as reference prediction mode values, that is to say that they correspond to prediction mode values of neighbouring portions of the current image portion, if possible. In this example, the neighbouring portions are the top and the left neighbouring portions (CUs) of the current image portion. However as a variation, the first reference prediction mode value could correspond to the prediction mode value of the left neighbouring portion of the current image portion, and the second reference prediction mode value could be based on both values of the top and the left neighbouring portions of the current image portion. For example, to determine the second reference prediction mode value the value of the top neighbouring portion can be compared with the value of the left neighbouring portion (already chosen as the first reference prediction mode value) and, if the two values are different, the top value is chosen as the second reference prediction mode value, whereas if the two values are the same a mode value different from the top value is chosen, such as a predetermined value or a value derived from the first reference prediction mode value. This embodiment (and the following ones) is not limited to those examples.
In the embodiment in figure 15a, for both left and top neighbouring portions of the current image portion (or CUs), their modes (respectively signalled as "candL" and "candT" for left and top CUs) are set in steps 155a and 159a to the DC mode value if the top and left neighbouring image portions are not available (negative outcomes N in steps 151a and 156a) or if the used coding mode is not intra (negative outcomes N in steps 152a and 157a).
Otherwise, in steps 153a and 158a, their real coding modes (respectively referred to as "Left" and "Top") are used.
Then, in step 154a, the first reference prediction mode value MPMO is derived from a value of only a single neighbouring reference image portion ("Left" in this example). Following this derivation, the coding modes candL and candT are compared in step 1510a. If they are different, the second reference prediction mode value MPM1 is set to the value of candT in step 1513a (which corresponds to the "Top" or DC prediction mode value according to the situation).
If the coding modes candL and candT are equal, their common value is compared to two in step 1511a. If it is lower than two (meaning that the common value corresponds to either the value of the Planar or the DC mode), MPM1 is set to the value of "1-MPMO" in step 1512a. Otherwise, MPM1 is derived from the value of MPMO in step 1514a according to a pre-defined function. For example, if MPMO is an angular mode, MPM1 = MPMO + k (k being a chosen integer). In this embodiment, MPMO only depends on a single reference image portion (here, the left CU). This enables reducing the number of operations when the prediction mode value of the current image portion effectively corresponds to MPMO, which is a frequent situation. Practical experiments over a set of representative video sequences show that MPMO is used around 30% of the time.
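The figure 15a flow can be sketched as follows, using the numbering of these embodiments (Planar as 0, DC as 1). The offset `k` of step 1514a is taken as 1 here, which is an assumption; unavailable or non-intra neighbours are represented by `None`.

```python
PLANAR, DC = 0, 1   # numbering used in these embodiments

def derive_mpms_15a(left, top, k=1):
    """Sketch of the figure 15a derivation. `left`/`top` are the modes of
    the neighbouring CUs, or None when unavailable or not intra-coded;
    the angular offset k stands in for the pre-defined function of
    step 1514a and is an assumption."""
    cand_l = DC if left is None else left   # steps 151a-155a
    cand_t = DC if top is None else top     # steps 156a-159a
    mpm0 = cand_l                           # step 154a: left CU only
    if cand_l != cand_t:
        mpm1 = cand_t                       # step 1513a
    elif cand_l < 2:
        mpm1 = 1 - mpm0                     # step 1512a: other non-angular mode
    else:
        mpm1 = mpm0 + k                     # step 1514a: pre-defined function
    return mpm0, mpm1
```

Note how `1 - MPMO` selects DC when MPMO is Planar and vice versa, so the two non-angular modes always occupy both MPM slots in that case.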
More precisely, when MPMO is selected, in the worst case, only two operations are required in the corresponding decoding phase.
In this embodiment (as for the embodiment depicted in figure 15b), the second reference prediction mode value MPM1 is determined from the value of a single neighbouring reference image portion. In another embodiment (not illustrated here), N_neighbour neighbouring reference image portions can be used where N_neighbour is greater than "1", and N MPMs are used, and some of the MPMs are derived from different parts of the N_neighbour neighbouring reference image portions.
For example, N_neighbour and N may each be set to 3 (i.e. 3 neighbouring reference image portions are used, and 3 MPMs are used); MPM0 may be derived from only a first neighbouring reference image portion, MPM1 may be derived from both the first and a second neighbouring reference image portion, and MPM2 may be derived from all three neighbouring reference image portions.
N_neighbour and N could be higher than 3, but the higher they are, the less relevant the additional MPMs become.
In another embodiment, a first MPM (MPM0) can be derived from only a first neighbouring reference image portion, and a second MPM (MPM1) can be derived from N_neighbour neighbouring reference image portions.
Figure 15b illustrates a variation of the previous embodiment. Here the variable "candL" is not used. MPM0 is directly set to the prediction mode value "Left" (or the "DC" mode value if not possible) in step 153b. This embodiment allows the number of variables used to be reduced.
The other steps are the same as for figure 15a, the references "XXXXa" being replaced by "XXXXb".
Some further MPMs set to predefined prediction mode values can also be added to the embodiments illustrated in figures 15a and 15b.
Figure 16a depicts a new embodiment for the derivation process of the first two MPMs, MPM0 and MPM1. Steps which are the same as those in the embodiment illustrated in figure 15a are not described again.
Moreover, in this case a third MPM, MPM2, is used, and is set to a further predefined prediction mode value, namely mode value "0" corresponding to the Planar prediction mode value, in step 1616a. In an embodiment, the further predefined prediction mode value can be adaptively modified, per sequence, picture or slice, or per group of CUs. This value can also be modified on-the-fly, by taking into account the statistics of the intra mode.
In this case, an additional step is introduced in the embodiment, before deriving the first reference prediction mode value. If the "Left" and/or "Top" prediction mode values correspond to the Planar prediction mode value (negative outcome N in steps 163a and 168a), then they are forced to "1" (DC mode) in steps 165a and 1610a. As a variation, the test could be to check whether the "Left" and/or "Top" prediction mode values are lower than two.
Then, as for the previous embodiments in figures 15a and 15b, MPM0 is set to the mode derived from a single reference image portion in step 1611a (preferably the left neighbouring image portion, as illustrated here).
As previously seen, if "candL" and "candT" are equal (positive outcome Y in step 1612a), MPM0 is compared to the DC mode in step 1613a (or to the number "1", the DC mode value). If it is equal to the DC mode (or "1"), then MPM1 is set to another pre-defined mode, namely the Vertical mode, in step 1614a; otherwise MPM1 is derived from MPM0 in step 1615a (for instance, MPM1 = MPM0 + k if MPM0 is an angular mode, or MPM1 is set to a pre-defined mode if MPM0 is not an angular mode).
If in step 1612a the prediction mode values derived from Left and Top are different, then MPM1 is set to "candT" in step 1617a.
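The derivation of figure 16a can be sketched as follows. The function name `derive_mpms_16a` is hypothetical; Planar = 0 and DC = 1 follow the text, while Vertical = 26 is an assumed HEVC-style numbering, not stated in this passage.

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # VERTICAL=26 is an assumption

def derive_mpms_16a(left, top, k=1):
    """Sketch of the MPM0/MPM1/MPM2 derivation of figure 16a."""
    # steps 163a-1610a: a Planar value (any value < 2) is forced to DC
    cand_l = DC if left < 2 else left
    cand_t = DC if top < 2 else top
    mpm0 = cand_l                                     # step 1611a
    if cand_l == cand_t:                              # step 1612a
        mpm1 = VERTICAL if mpm0 == DC else mpm0 + k   # steps 1614a / 1615a
    else:
        mpm1 = cand_t                                 # step 1617a
    mpm2 = PLANAR                                     # step 1616a: fixed value
    return mpm0, mpm1, mpm2
```

Note that MPM2 needs no computation at all, which is the point made below about the decoding phase.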
Figure 16b illustrates a variation of the embodiment in figure 16a. As for figure 15b, the variable "candL" is not used. MPM0 is directly set to the prediction mode value "Left" (or the "DC" mode value if not possible) in step 164b. This embodiment allows the number of variables used to be reduced.
The other steps are the same as for figure 16a, the references "XXXXa" being replaced by "XXXXb".
For both embodiments illustrated in figures 16a and 16b (and the following embodiments), the Planar and the DC modes can be switched (the DC mode value being set to "0", and the Planar mode value to "1"). In this case, MPM2 is set to the DC mode value. "CandL" and "candT" are set to the Planar mode value if the top and left neighbouring image portions are not available or if the used coding mode is not intra.
Other prediction modes can be preferred instead of the Planar and DC modes, such as, for example, the Vertical mode.
In the embodiments illustrated in figures 16a and 16b, the derivation of MPM2 does not need any computation, given that it is set to a pre-defined mode value. This enables reducing the number of operations in the corresponding decoding phase, when MPM2 is selected. Indeed in the worst case when MPM2 is selected, no check operation is required, contrary to existing methods.
Figure 17a depicts another embodiment of the invention based on another implementation of the intra mode coding process. This embodiment enables the number of checking operations during the corresponding decoding phase to be reduced, when MPM2 is set to a further predefined prediction mode value.
More precisely, in step 171a a default mode value "default" is initially set to the DC prediction mode value.
If the left neighbouring image portion (CU) is not available (negative outcome N in step 172a) or if the used coding mode is not intra (negative outcome in step 173a) or if the "Left" value is not greater than "1" (negative outcome in step 174a, meaning that the "Left" value corresponds to the DC or Planar mode value), then in step 176a "candL" is set to "default" and "default" itself is set to another preferred mode value (in this example, the Vertical mode value).
Otherwise, in step 175a, "candL" is set to "Left", the prediction mode value of the left neighbouring image portion.
Then MPM0 is set to "candL" in step 1712a.
If the top neighbouring image portion (CU) is not available (negative outcome N in step 177a) or if the used coding mode is not intra (negative outcome N in step 178a) or if the "Top" value is not greater than "1" (negative outcome N in step 179a, meaning that the "Top" value corresponds to the DC or Planar mode value), then in step 1711a "candT" is set to "default", which can have the value of the DC mode or the Vertical mode according to what occurred previously. Otherwise, in step 1710a, "candT" is set to "Top".
Then "candL" and "candT" are compared to one another in step 1713a, and if "candL" is equal to "candT", MPM1 is derived from MPM0 in step 1714a.
Otherwise MPM1 is set to "candT" in step 1715a.
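The "default"-promotion scheme of figure 17a can be sketched as follows. The function name `derive_mpms_17a` and the `left_ok`/`top_ok` availability flags are illustrative, and the derivation "MPM1 from MPM0" is assumed to be an adjacent angular mode (MPM0 + k), as in the earlier embodiments.

```python
DC, VERTICAL = 1, 26  # assumed numbering (VERTICAL=26 is HEVC-style)

def derive_mpms_17a(left, top, left_ok=True, top_ok=True, k=1):
    """Sketch of the MPM0/MPM1 derivation of figure 17a."""
    default = DC                                # step 171a
    if not left_ok or left <= 1:                # steps 172a-174a (N outcomes)
        cand_l, default = default, VERTICAL     # step 176a: promote "default"
    else:
        cand_l = left                           # step 175a
    mpm0 = cand_l                               # step 1712a
    if not top_ok or top <= 1:                  # steps 177a-179a (N outcomes)
        cand_t = default                        # step 1711a: DC or Vertical
    else:
        cand_t = top                            # step 1710a
    if cand_l == cand_t:                        # step 1713a
        mpm1 = mpm0 + k                         # step 1714a: derived from MPM0
    else:
        mpm1 = cand_t                           # step 1715a
    return mpm0, mpm1
```

Because "default" changes value after its first use, candL and candT can only be equal when both neighbours carry the same angular mode, so the single equality test at step 1713a suffices.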
Figure 17b illustrates a variation of the embodiment in figure 17a. As for figures 15b and 16b, the variable "candL" is not used. MPMO is then directly set to the prediction mode value "Left" (or the DC mode value if not possible) in step 175b. This embodiment allows reducing the number of used syntax elements.
The other steps are the same as for figure 17a, the references "XXXXa" being replaced by "XXXXb".
In the embodiments illustrated in figures 17a and 17b, the only test to be done is checking the equality between "candL" and "candT", which involves fewer operations than existing methods.
Moreover, the two embodiments described above significantly simplify the intra mode decoding tree, as shown in figure 18.
Figure 18 depicts the intra mode decoding tree of the embodiments in figures 16a to 17b. In those embodiments MPM2 has been set to a further predefined prediction mode value (for instance the Planar mode value). First, in step 180, a flag MPM-flag is decoded. If it is equal to "1" (Y), the mode value is equal to one of the MPMs: MPM0, MPM1 or MPM2. Using the index of the MPM, specifically the least significant bit Mpm-idx0 and the next least significant bit Mpm-idx1, it is determined in steps 181, 182, 183, 184, 185 which of the 3 MPMs is indicated by the index. An example of a decoding tree for the MPMs is provided in figure 18. However, other implementations are possible.
If the first MPM-flag is not equal to "1" (N), a code related to the remaining mode is decoded and the remaining mode decoding process applies (not shown).
Classically, for the decoding step, the prediction mode indexes are re-numbered so as to exclude the indexes attributed to the MPMs (here, MPM0, MPM1 and MPM2). Thus, the decoding step includes several steps to determine where the index of the prediction mode of the current image portion is situated relative to MPM0, MPM1 and MPM2, in order to obtain its real index.
First the MPMs must be ordered. Since MPM2 is equal to mode 0 (Planar), it is in the first rank of the MPMs. Then, if MPM1 is lower than MPM0 (positive outcome Y in step 186), MPM0 and MPM1 must be swapped in step 187. Otherwise they are not modified. Once the MPMs have been ordered, an increment process is applied in step 188. Since MPM2 is equal to mode 0 (Planar), the current index of the prediction mode of the current image portion must be incremented by one.
If the current index of the prediction mode of the current image portion is greater than or equal to MPM0 (positive outcome Y in step 189), then the current index must be incremented in step 1810. Once the current index has been incremented, or if MPM0 is greater than the current index, the current index is then compared to MPM1 in step 1811.
If MPM1 is lower than or equal to the current index, then the current index is incremented once again in step 1812.
For the remaining mode decoding process, only one checking operation is required to re-number the MPMs. In particular, when MPM2 is set to "0" (Planar prediction mode), it is obviously the first MPM in the ordered list of MPMs. It does not need to be ordered with the other MPMs. No checking operation is required for this MPM. Only two checking operations are required to increment the mode value if the MPMs are of lower values. In total, in the worst case, four checking operations are required.
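The re-numbering of figure 18 can be sketched as follows. The helper name `remaining_to_mode` is hypothetical; MPM2 is assumed fixed to the Planar mode (value 0), as in the embodiments above.

```python
def remaining_to_mode(rem_idx, mpm0, mpm1):
    """Sketch of the remaining-mode re-numbering of figure 18.

    rem_idx is the decoded remaining-mode index; MPM2 is assumed to be
    mode 0 (Planar), so it always occupies the first rank and needs no
    ordering check.
    """
    lo, hi = sorted((mpm0, mpm1))  # steps 186/187: order MPM0 and MPM1
    mode = rem_idx + 1             # step 188: skip mode 0 (MPM2 = Planar)
    if mode >= lo:                 # steps 189/1810
        mode += 1
    if mode >= hi:                 # steps 1811/1812
        mode += 1
    return mode
```

For example, with MPM0 = 3, MPM1 = 7 and MPM2 = 0, the remaining indexes map onto the modes 1, 2, 4, 5, 6, 8, 9, ..., skipping exactly the three MPM values.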
Figure 19 illustrates a new embodiment of an improved rate-distortion optimization unit (RDO) according to the invention. Such a rate-distortion optimization unit is provided on the encoder side, and the rate-distortion optimization process carried out by the unit is applied Coding Unit by Coding Unit.
It allows determining the intra prediction mode that applies to the coding unit.
As illustrated in figure 19, a set of K1 modes is first selected by a first selection section FASTSEL. This first selection section FASTSEL allows the RDO to be improved.
Then the K1 modes selected by the first selection section FASTSEL are tested by a second selection section RDO and a best mode m_opt is selected from among the K1 modes.
The first selection section FASTSEL operates as follows. For each mode m<M, a prediction unit 1911 is able to predict the current Coding Unit according to the considered mode. M predicted current Coding Units are then delivered to the following units.
A distortion Dp(m) is evaluated by a distortion calculation unit 1912 on each received predicted Coding Unit. The distortion can be for instance the mean square error, the mean absolute difference, or the mean absolute value of the Hadamard transform of the difference between the predicted Coding Unit and the original one. A coding cost Rp(m) of the mode (evaluated as R(m) (explained below) but without counting the cost of coding a prediction residual) is also evaluated by a coding cost calculation unit 1913, on each received predicted Coding Unit. An estimated rate-distortion cost Cp(m) is then evaluated by a rate-distortion cost calculation unit 1914 using Cp(m) = Dp(m) + λp.Rp(m), where λp is a pre-defined parameter possibly depending on the coding parameters (such as quantization parameter, picture or slice type, ...). K modes among the M possible modes are selected to form a candidate set by a selection unit 1915. Those K modes are the modes associated with the K smallest values Cp(m) evaluated by the rate-distortion cost calculation unit 1914. They are delivered by the selection unit 1915 after being sorted in increasing order of coding cost Cp(m).
K is a pre-defined number with K<M (in a preferred embodiment, K=8 for 4x4 and 8x8 Coding Units and K=3 for the other Coding Unit sizes).
The preceding embodiments call for the encoder to set one of the MPMs, for example MPM2, to the Planar mode. However, it is possible that the Planar mode will not be included among the K modes selected by the selection unit 1915. According to the present embodiment, in the event that the K modes do not include the Planar mode, it is possible to substitute the Planar mode for a candidate, for instance the Kth candidate, in the list delivered by SELb. This substitution keeps exactly the same number K1 of candidates at the end, in order not to increase the encoding time. By enforcing the inclusion of the Planar mode in the K1 modes delivered to the RDO, a better coding efficiency is achieved.
A set changing unit 1916 is able to change the candidate set taking account of N MPMs (currently N=3). If one or more of the N MPMs is not present in the candidate set formed by the selection unit 1915, the set changing unit 1916 can add them to the candidate set. Alternatively, the set changing unit 1916 is able to replace one or more of the last candidate modes of the candidate set by the corresponding N MPMs. For example, in a preferred embodiment, MPM2 replaces the last candidate if it is not already present in the candidate set. In this case, it is not necessary to substitute the Planar mode for one of the existing K modes prior to delivering the K modes to the set changing unit 1916.
In an embodiment, if the prediction modes of the left and the top neighbouring Coding Units of the current Coding Unit are the same, then MPM0 is added to the candidate set if it is not present already.
If the prediction modes of the left and the top neighbouring Coding Units of the current Coding Unit are not the same, MPM0 and MPM1 are added to the candidate set, if they are not already in it. Therefore the final number K1 of candidates can be K, K+1 or K+2.
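The candidate-set extension of unit 1916 can be sketched as follows. The function name `extend_candidates` is illustrative; the MPM list is assumed ordered (MPM0, MPM1, MPM2).

```python
def extend_candidates(candidates, mpms, same_left_top):
    """Sketch of the set changing unit 1916 of figure 19.

    candidates is the list of K modes from FASTSEL; mpms is
    [MPM0, MPM1, MPM2]. When the left and top neighbour modes are equal,
    only MPM0 is considered; otherwise MPM0 and MPM1 are considered.
    Missing MPMs are appended, giving K, K+1 or K+2 final candidates.
    """
    extra = mpms[:1] if same_left_top else mpms[:2]
    out = list(candidates)
    for m in extra:
        if m not in out:
            out.append(m)
    return out
```

This keeps the candidate list short while guaranteeing that the cheaply-signalled MPM modes are always evaluated by the full RDO stage.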
These K1 modes are then evaluated by the second selection section RDO, to select the optimal mode m_opt among these K1 candidates, as explained above.
The mode decision is finally made by a rate-distortion optimization unit or RDO (referenced 1900). As illustrated in figure 19, for each mode m<K1, the coding process is applied by an encoder 1901 which provides the reconstructed signal, Coding Unit by Coding Unit. The resulting distortion D(m) is evaluated by a distortion calculation unit 1902. The distortion can be for instance the mean square error or the mean absolute difference between the reconstructed signal and the original one. The coding cost R(m) of the mode (including the cost of coding the mode and the cost of coding a prediction residual) is also evaluated by a coding cost calculation unit 1903.
R(m) allows the cost of the mode, which evolves over time, to be estimated.
Indeed, the more frequent a mode is, the shorter its index is. Consequently, the coding cost R(m) of the mode evolves over time, given that a longer index implies a higher cost.
The rate-distortion cost C(m) is then evaluated by a rate-distortion cost calculation unit 1904 as C(m) = D(m) + λ.R(m), where λ is a pre-defined parameter possibly depending on the coding parameters (such as quantization parameter, picture or slice type, ...). The mode m_opt minimizing the rate-distortion cost C(m) is selected by a selection unit 1905 as the best intra mode.
The embodiment illustrated in figure 19 allows the encoding time to be reduced for encoders implementing intra coding methods. Thanks to the first selection section, the M modes are processed more quickly than with the second selection section used alone.
Embodiments of the invention thus provide ways of reducing the computational complexity for the encoding and decoding of a prediction mode in an HEVC encoder.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art. In particular the different features from different embodiments may be interchanged or combined, where appropriate.
For example, in the embodiment of Figure 10, angular indexes may be used to represent the prediction modes and/or the first MPM may be derived using a single coding unit. Similarly, in the embodiments of Figures 11A or 11B, angular indexes may be used to represent the prediction modes and/or an extra step may be added comparing the prediction mode of the current coding unit with a predefined prediction mode in the case where the prediction mode of the current coding unit does not correspond to the first or second MPM.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (1)

  1. <claim-text>CLAIMS1. A method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes each prediction mode being represented by a prediction mode value, the method comprising: comparing a prediction mode value of the current image portion to be encoded with reference prediction mode values, each derived from one or more prediction modes of respective image portions in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; wherein if the prediction mode value of the current image portion is different from the reference prediction mode values, the prediction mode value of the current image portion is compared with a further predefined prediction mode value, the method further comprising selecting, based on the further comparison, an encoding process for encoding the mode information.</claim-text> <claim-text>2. A method according to claim 1, wherein if the prediction mode value of the current image portion is equal to the further predefined prediction mode value the selected encoding process comprises encoding information indicating a predefined relationship between the prediction mode value of the current image portion and the further predefined prediction mode value, otherwise if the prediction mode value of the current image portion is different from the further predefined prediction mode value the selected encoding process comprises encoding information representative of the prediction mode value of the current image portion 3. A method according to claim 1 or 2, wherein the further predefined prediction mode value is set to a prediction mode value corresponding to a planar prediction mode 4. 
A method according to claim 1 or 2, wherein the further predefined prediction mode value is set to a mode value corresponding to a horizontal prediction mode, a vertical prediction mode or a DC prediction mode.5. A method according to claim 1 or 2, wherein the further predefined prediction mode value is dependent upon the content of the image being encoded.6. A method according to claim 1 or 2, wherein the further predefined prediction mode value depends on mode probabilities representative of the probability of occurrence of respective prediction modes.7. A method according to claim 6, wherein the mode probabilities are regularly computed and the further predefined prediction mode value is adaptively derived based on said mode probabilities.8. A method according to any preceding claim wherein the reference prediction mode values comprise a first reference prediction mode value based on the prediction mode of a single reference image portion and a second reference prediction mode value based on the respective prediction modes of at least two reference image portions.9. A method according to claim 8 wherein the single reference image portion comprises a neighbouring portion such as the left neighbouring image portion of the current image portion.10. A method according to claim 8 or9 wherein if the prediction mode value of the single reference image portion corresponds to the further predefined prediction mode value then the first reference prediction mode value is set to a second predefined prediction mode value, otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.11. A method according to any one of claims 8 to 10, wherein the two reference image portions comprise neighbouring image portions such as the left neighbouring image portion and the top neighbouring image portion of the current image portion.12. 
A method according to claim 11, wherein if the prediction mode value of the top neighbouring image portion corresponds to the further predefined prediction mode or to the prediction mode value of the left neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a prediction mode value corresponding to an angular direction adjacent to the angular direction of the left neighbouring image portion, otherwise the second reference prediction mode value is set to the prediction mode value of the top neighbouring image portion.13. A method according to claim 12, wherein if the prediction mode value of the left neighbouring image portion corresponds to a non directional prediction mode then the second reference prediction mode value is set to a prediction mode value corresponding to a third predefined prediction mode such as a vertical prediction mode.14. A method according to claim 11, wherein if the prediction mode value of the top neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the left neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value is set to the prediction mode value of the top neighbouring image portion.15. A method according to claim 14, wherein the fourth predefined prediction mode value corresponds to a DC prediction mode.16. 
A method according to any preceding claim, wherein prediction mode values are each represented by an angular index representative of the angular direction of the corresponding prediction mode; the angular index of the current image portion is compared with reference angular indexes in order to determine the encoding process; and in the case where the angular index of the current image portion is different from the reference angular indexes, the determined encoding process comprises encoding the angular index value of the current image portion.17. A method according to claim 16 wherein angular indexes with even numbered values correspond to prediction modes supported by image portions of predetermined size and/or shape.18. A method according to claim 16 or 17, wherein non-directional prediction modes are attributed an angular index having a greater value than the angular index values of directional prediction modes.19. A method according to any one of claims 16 to 18 wherein a DC prediction mode has an even number angular index.20. A method according to any one of claims 16 to 19 wherein a planar prediction mode has an odd numbered angular index such as 33.21. A method according to any one of claims 16 to 20 wherein if the prediction mode of the reference image portion is not supported by the image portion to be encoded, the angular index of the current image portion is set to the lowest closest even number to the angular index value of the reference image portion.22. A method according to any one of claims 16 to 21 wherein the angular index is divided by an integer such as two, prior to encoding.23. 
A method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding the current image portion using the determined prediction mode wherein in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is equal to a further predefined prediction mode value, the codeword comprises a flag indicative that the prediction mode is the further predefined prediction mode and the decoding step comprises decoding the current image portion using the predefined prediction mode; otherwise in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is different from the further predefined prediction mode value, the codeword comprises information representative of the prediction mode value of the current image portion and the decoding step comprises decoding the current image portion using the prediction mode represented by the prediction mode value.24. 
An encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes each prediction mode being represented by a prediction mode value, the encoder comprising: comparison means for comparing a prediction mode value of the current image portion to be encoded with reference prediction mode values, each derived from one or more prediction modes of respective image portions in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; the comparison means being configured to compare the prediction mode value of the current image portion with a further predefined prediction mode value in the case where the prediction mode value of the current image portion is different from the reference prediction mode values; and selection means for selecting, based on the further comparison, an encoding process for encoding the mode information; and encoding means for encoding the mode information using the selected encoding process.25. 
A decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion; determining means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; decoding means for decoding the current image portion using the determined prediction mode wherein in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is equal to a further predefined prediction mode value, the codeword comprises a flag indicative that the prediction mode is the further predefined prediction mode and the decoding means is configured to decode the current image portion using the predefined prediction mode; otherwise in the case where the prediction mode value of the current image portion is different from reference prediction mode values derived from prediction modes of respective image portions and is different from the further predefined prediction mode value, the codeword comprises information representative of the prediction mode value of the current image portion, and the decoding means is configured to decode the current image portion using the prediction mode represented by the prediction mode value.26. 
A method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value the method comprising: deriving a first reference prediction mode value based on the prediction mode of a single reference image portion; and comparing the prediction mode value of the current image portion with at least the first reference prediction mode value in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information of the current image portion.27. A method according to claim 26, wherein in the case where the prediction mode value of the current image portion is not equal to the first reference prediction mode value, the prediction mode value of the current image portion is compared with a second reference prediction mode value based on the respective prediction modes of at least two reference image portions.28. A method according to claim 26 or 27 wherein the single reference image portion comprises a neighbouring image portion such as the left neighbouring image portion of the current image portion.29. A method according to any one of claims 26 to 28 wherein if the prediction mode value of the single reference image portion corresponds to the further predefined prediction mode value then the first reference prediction mode value is set to a second predefined prediction mode value, otherwise the first reference prediction mode value is set to the prediction mode value of the single reference image portion.30. A method according to claim 29, wherein the second predefined prediction mode value corresponds to a DC prediction mode.31. 
A method according to any one of claims 26 to 30, wherein the two reference image portions comprise a first neighbouring image portion and a second neighbouring image portion of the current image portion.32. A method according to claim 31 wherein the first neighbouring image portion is the left neighbouring image portion and the second neighbouring image portion is the top neighbouring image portion.33. A method according to claim 31 or 32, wherein if the prediction mode value of the second neighbouring image portion corresponds to the further predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a prediction mode value corresponding to an angular direction adjacent to the angular direction of the first neighbouring image portion, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.34. A method according to claim 31 or 32, wherein if the prediction mode value of the first neighbouring image portion corresponds to a non directional prediction mode then the second reference prediction mode value is set to a prediction mode value corresponding to a third predefined prediction mode such as vertical prediction mode.35. A method according to claim 31 or 32, wherein if the prediction mode value of the second neighbouring image portion corresponds to the predefined prediction mode value or to the prediction mode value of the first neighbouring image portion of the current image portion, then the second reference prediction mode value is set to a fourth predefined prediction mode value, otherwise the second reference prediction mode value is set to the prediction mode value of the second neighbouring image portion.36. A method according to claim 35, wherein the fourth predefined prediction mode value corresponds to a DC prediction mode.37. 
A method according to any one of claims 26 to 36, wherein prediction mode values are represented by respective angular indexes, each representative of the angular direction of the corresponding prediction mode; the angular index of the current image portion is compared with first and second reference angular indexes to determine the encoding process; and in the case where the angular index of the current image portion is different from the reference angular indexes, the determined encoding process comprises encoding the angular index of the current image portion.

38. A method according to claim 37, wherein angular indexes with even-numbered values correspond to prediction modes supported by image portions of a predetermined size and/or shape.

39. A method according to claim 37 or 38, wherein non-directional prediction modes are attributed angular index values greater than the angular index values of directional prediction modes.

40. A method according to any one of claims 37 to 39, wherein a DC prediction mode has an even-numbered angular index.

41. A method according to any one of claims 37 to 40, wherein a planar prediction mode has an odd-numbered angular index such as 33.

42. A method according to any one of claims 37 to 41, wherein if the prediction mode of the reference image portion is not supported by the image portion to be encoded, the angular index of the current image portion is set to the closest lower even number to the angular index of the reference image portion.

43. A method according to any one of claims 37 to 42, wherein the angular index is divided by an integer, such as two, prior to encoding.

44.
A method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the method comprising: receiving a codeword related to the prediction mode of the current image portion; and determining, based on the codeword, whether the prediction mode value of the current image portion corresponds to reference prediction mode values; wherein in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a first reference prediction mode value, the prediction mode value of the current image portion is derived from the prediction mode value of a single reference image portion; in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a second reference prediction mode value, the prediction mode value of the current portion is derived from the prediction mode values of at least two reference image portions; otherwise it is determined that the codeword comprises data representative of the prediction mode value of the current image portion.

45. An encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the encoder comprising: means for deriving a first reference prediction mode value based on the prediction mode of a single reference image portion; and means for comparing the prediction mode value of the current image portion with at least the first reference prediction mode value in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information of the current image portion.

46.
An encoder according to claim 45, further comprising means for deriving a second reference prediction mode value based on the respective prediction modes of at least two reference image portions; wherein the means for comparing is configured to further compare the prediction mode value of the current image portion with the second reference prediction mode value.

47. A decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion; determining means for determining, based on the codeword, whether the prediction mode value of the current image portion corresponds to reference prediction mode values; and deriving means for deriving the prediction mode value, wherein the deriving means is configured: to derive the prediction mode value of the current image portion from the prediction mode value of a single reference image portion in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a first reference prediction mode value; to derive the prediction mode value of the current portion from the prediction mode values of at least two reference image portions in the case where the codeword indicates that the prediction mode value of the current image portion is equal to a second reference prediction mode value; and to derive the prediction mode value from the codeword in the case where it is determined that the codeword comprises data representative of the prediction mode value of the current image portion.

48.
A method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, the method comprising: representing each prediction mode by an angular index representative of the angular direction of the corresponding prediction mode; and comparing the angular index of the current image portion with reference angular indexes in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; wherein in the case where the angular index of the current image portion is different from the reference angular indexes, the determined encoding process comprises encoding the angular index of the current image portion.

49. A method according to claim 48, wherein angular indexes with even-numbered values correspond to prediction modes supported by image portions of predetermined shape and/or size.

50. A method according to claim 48 or 49, wherein non-directional prediction modes are attributed an angular index value greater than the angular index values of directional prediction modes.

51. A method according to any one of claims 48 to 50, wherein a DC prediction mode has an even-numbered angular index.

52. A method according to any one of claims 48 to 51, wherein a planar prediction mode has an odd-numbered angular index, such as an angular index of 33.

53. A method according to any one of claims 48 to 52, wherein if the prediction mode of the reference image portion is not supported by the image portion to be encoded, the angular index of the current image portion is set to the closest lower even-numbered angular index to the angular index value of the reference image portion.

54. A method according to any one of claims 48 to 53, wherein the angular index is divided by an integer, such as two, prior to encoding.

55.
A method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the method comprising: receiving a codeword related to the prediction mode of the current image portion; wherein in the case where the angular index value of the current image portion is different from reference angular index values derived from one or more reference image portions, the codeword is an angular index value of the current image portion and the method further comprises decoding the angular index value of the current image portion.

56. An encoder for encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, the prediction mode being one of a plurality of prediction modes, wherein each prediction mode is represented by an angular index representative of the angular direction of the corresponding prediction mode, the encoder comprising: comparison means for comparing the angular index of the current image portion with reference angular indexes in order to determine an encoding process, from among a plurality of encoding processes, to encode the mode information; and encoding means for encoding the angular index of the current image portion in the case where the angular index of the current image portion is different from the reference angular indexes.

57.
A decoder for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, each prediction mode being represented by a prediction mode value, the decoder comprising: reception means for receiving a codeword related to the prediction mode of the current image portion, wherein in the case where the angular index value of the current image portion is different from reference angular index values derived from one or more reference image portions, the codeword is an angular index value of the current image portion; and decoding means for decoding the angular index value of the current image portion.

58. A computer program product for a programmable apparatus, the computer program product comprising a sequence of instructions for implementing a method according to any one of claims 1 to 23, 26 to 44, or 48 to 55 when loaded into and executed by the programmable apparatus.

59. A computer-readable storage medium storing instructions of a computer program for implementing a method according to any one of claims 1 to 23, 26 to 44, or 48 to 55.

60. A method of encoding mode information representing a prediction mode for encoding of a current image portion by an intra mode coding process, substantially as hereinbefore described with reference to, and as shown in, Figure 10, 11A, 11B, 12 or 14.

61. A method according to claim 17, 38 or 49, wherein the predetermined size or shape corresponds to a square block of 4x4 pixels.

62.
A method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; and decoding the current image portion using the determined prediction mode; wherein in the case where the prediction mode value of the current image portion is different from each of a plurality of reference prediction mode values, one of which is a Planar mode value, an adjustment operation is performed on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value except for the Planar mode value.

63. A method of decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the method comprising: receiving a codeword related to the prediction mode of the current image portion; determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; and decoding the current image portion using the determined prediction mode; wherein in the case where the prediction mode value of the current image portion is different from one or more reference prediction mode values and is different from a further predefined mode value, which is a Planar mode value, an adjustment operation is performed on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value but not with the further predefined mode value.

64. A method according to claim 62 or 63, wherein the adjustment operation involves always incrementing the prediction mode value to take account of the Planar mode value and selectively incrementing the prediction mode value depending on the comparison results with the reference prediction mode value(s).

65. A device for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the device comprising: means for receiving a codeword related to the prediction mode of the current image portion; means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; means for decoding the current image portion using the determined prediction mode; and means operable, in the case where the prediction mode value of the current image portion is different from each of a plurality of reference prediction mode values, one of which is a Planar mode value, to perform an adjustment operation on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value except for the Planar mode value.

66.
A device for decoding mode information representing a prediction mode for decoding of a current image portion by an intra mode decoding process, the prediction mode being one of a plurality of prediction modes, the device comprising: means for receiving a codeword related to the prediction mode of the current image portion; means for determining, based on the codeword, the prediction mode from among the plurality of prediction modes for decoding the current image portion; means for decoding the current image portion using the determined prediction mode; and means operable, in the case where the prediction mode value of the current image portion is different from one or more reference prediction mode values and is different from a further predefined mode value, which is a Planar mode value, to perform an adjustment operation on the prediction mode value of the current image portion which involves comparing the prediction mode value of the current image portion with the or each reference prediction mode value but not with the further predefined mode value.

67. A device according to claim 65 or 66, wherein the adjustment operation involves always incrementing the prediction mode value to take account of the Planar mode value and selectively incrementing the prediction mode value depending on the comparison results with the reference prediction mode value(s).

68. A program which, when executed by a processor or computer, causes the processor or computer to carry out the method of any one of claims 62 to 64.

69. A computer-readable storage medium storing the program of claim 68.
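The two-stage reference-mode derivation recited in claims 26 to 36 and the decoder-side adjustment of claims 62 to 64 can be sketched as follows. This is an illustrative reconstruction, not the claimed method as such: the mode numbering (Planar = 0, DC = 1, vertical = 2), the use of left/top neighbours, and the function names are assumptions made for the sketch; the claims themselves elsewhere attach different index values (for example an odd angular index such as 33 for the planar mode).

```python
# Illustrative sketch of deriving two reference ("most probable") prediction
# modes from the left and top neighbouring image portions (claims 26-36),
# plus the decoder-side adjustment of a remaining-mode codeword (claims
# 62-64) and the angular-index snapping/halving of claims 42-43 and 53-54.
# All numeric mode values here are assumptions, not the claimed numbering.

PLANAR = 0    # assumed value of the "further predefined" (Planar) mode
DC = 1        # assumed "second/fourth predefined" mode value
VERTICAL = 2  # assumed "third predefined" mode value

def derive_reference_modes(left_mode, top_mode):
    """First reference from the left neighbour alone; second reference from
    both neighbours, with substitutes when it would duplicate the first."""
    # Claims 29/30: a Planar left neighbour is replaced by DC.
    mpm0 = DC if left_mode == PLANAR else left_mode
    if top_mode == PLANAR or top_mode == mpm0:
        # Claim 34: non-directional first reference -> vertical mode;
        # claim 33: otherwise an angular direction adjacent to the first
        # (modelled here, as an assumption, by incrementing the index).
        mpm1 = VERTICAL if mpm0 == DC else mpm0 + 1
    else:
        # Claim 33 (else branch): use the top neighbour's mode directly.
        mpm1 = top_mode
    return mpm0, mpm1

def adjust_decoded_mode(remaining, reference_modes):
    """Claims 62-64: recover the actual mode when the codeword signals
    'neither reference'. Always step over Planar, then selectively step
    over each reference value (Planar excluded from the comparisons)."""
    mode = remaining + 1                 # always increment past Planar
    for ref in sorted(reference_modes):  # compare against references only
        if mode >= ref:
            mode += 1                    # selective increment per reference
    return mode

def snap_to_supported(index):
    """Claims 42/53: if the reference mode is not supported by the portion
    to be encoded, snap to the closest lower even-numbered angular index
    (even indexes being the modes supported by small portions, claim 38)."""
    return index & ~1

def halve_for_encoding(index):
    """Claims 43/54: the angular index may be divided by an integer such
    as two prior to encoding."""
    return index // 2
```

For example, with both neighbours in Planar mode the sketch yields DC and vertical as the two reference values, matching the substitution pattern recited in claims 29, 30 and 34; and the adjustment function maps consecutive remaining-mode values onto the mode values not occupied by Planar or the references, mirroring the "always increment, then selectively increment" wording of claim 64.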
GB1206592.6A 2012-01-09 2012-04-13 Image encoding and decoding methods based on comparison of current prediction modes with reference prediction modes Withdrawn GB2498234A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1200285.3A GB2498225B (en) 2012-01-09 2012-01-09 Method and device for encoding or decoding information representing prediction modes

Publications (2)

Publication Number Publication Date
GB201206592D0 (en) 2012-05-30
GB2498234A true GB2498234A (en) 2013-07-10

Family

ID=45788656

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1200285.3A Active GB2498225B (en) 2012-01-09 2012-01-09 Method and device for encoding or decoding information representing prediction modes
GB1206592.6A Withdrawn GB2498234A (en) 2012-01-09 2012-04-13 Image encoding and decoding methods based on comparison of current prediction modes with reference prediction modes

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB1200285.3A Active GB2498225B (en) 2012-01-09 2012-01-09 Method and device for encoding or decoding information representing prediction modes

Country Status (1)

Country Link
GB (2) GB2498225B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11233990B2 (en) 2016-02-08 2022-01-25 Sharp Kabushiki Kaisha Systems and methods for intra prediction coding

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090161757A1 (en) * 2007-12-21 2009-06-25 General Instrument Corporation Method and Apparatus for Selecting a Coding Mode for a Block
US20090296813A1 (en) * 2008-05-28 2009-12-03 Nvidia Corporation Intra prediction mode search scheme

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8588303B2 (en) * 2010-03-31 2013-11-19 Futurewei Technologies, Inc. Multiple predictor sets for intra-frame coding

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2769840C1 (en) * 2018-05-10 2022-04-07 Samsung Electronics Co., Ltd. Video encoding method and device and video decoding method and device
US11350089B2 (en) 2018-05-10 2022-05-31 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
RU2788967C2 (en) * 2018-05-10 2023-01-26 Samsung Electronics Co., Ltd. Method and device for video encoding and method and device for video decoding
US11902513B2 (en) 2018-05-10 2024-02-13 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
US11917139B2 (en) 2018-05-10 2024-02-27 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
US11917138B2 (en) 2018-05-10 2024-02-27 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
US11973941B2 (en) 2018-05-10 2024-04-30 Samsung Electronics Co., Ltd. Video encoding method and apparatus, and video decoding method and apparatus
RU2819421C1 (en) * 2018-05-10 2024-05-21 Samsung Electronics Co., Ltd. Video encoding and decoding method
WO2020068668A1 (en) * 2018-09-24 2020-04-02 Qualcomm Incorporated Improved most probable modes (mpms) construction
WO2020133380A1 (en) * 2018-12-29 2020-07-02 Fujitsu Limited Image intra-block coding or decoding method, data processing device, and electronic apparatus
CN112106368A (en) * 2018-12-29 2020-12-18 富士通株式会社 Image intra-block coding or decoding method, data processing device and electronic equipment

Also Published As

Publication number Publication date
GB2498225B (en) 2015-03-18
GB2498225A (en) 2013-07-10
GB201206592D0 (en) 2012-05-30
GB201200285D0 (en) 2012-02-22

Similar Documents

Publication Publication Date Title
US11601687B2 (en) Method and device for providing compensation offsets for a set of reconstructed samples of an image
US10687057B2 (en) Deriving reference mode values and encoding and decoding information representing prediction modes
GB2509563A (en) Encoding or decoding a scalable video sequence using inferred SAO parameters
GB2498234A (en) Image encoding and decoding methods based on comparison of current prediction modes with reference prediction modes
JP7305810B2 (en) Video or image coding based on luma mapping and chroma scaling
WO2023023174A1 (en) Coding enhancement in cross-component sample adaptive offset
WO2023091729A1 (en) Cross-component sample adaptive offset
WO2023038964A1 (en) Coding enhancement in cross-component sample adaptive offset

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)