US9930332B2 - Deriving reference mode values and encoding and decoding information representing prediction modes - Google Patents
- Publication number
- US9930332B2
- Authority
- US
- United States
- Prior art keywords
- prediction mode
- intra prediction
- mode value
- values
- mode
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
- H04N19/196—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Definitions
- the invention relates to a method and a device for encoding or decoding mode values representing prediction modes. Particularly, but not exclusively the invention relates more specifically to intra mode coding in the High Efficiency Video Coding (HEVC) standard under development.
- Video applications are continuously moving towards higher resolution.
- a large quantity of video material is already distributed in digital form over broadcast channels, digital networks and packaged media, with a continuous evolution towards higher quality and resolution (e.g. higher number of pixels per frame, higher frame rate, higher bit-depth or extended color gamut).
- This technological evolution puts greater pressure on distribution networks that already struggle to carry HDTV resolutions and data rates to the end user economically. Any further increase in data rate will add to that pressure.
- ITU-T and ISO/MPEG decided to launch in January 2010 a new video coding standard project, named High Efficiency Video Coding (HEVC).
- the HEVC codec design is similar to that of most previous so-called block-based hybrid transform codecs such as H.263, H.264, MPEG-1, MPEG-2, MPEG-4, SVC.
- Video compression algorithms such as those standardized by the standardization bodies ITU, ISO and SMPTE use the spatial and temporal redundancies of the images in order to generate data bit streams of reduced size compared with the original video sequences. Such compression makes the transmission and/or storage of the video sequences more effective.
- each block of an image being processed is predicted spatially by an “Intra” predictor (so-called “Intra” coding mode), or temporally by an “Inter” predictor (so-called “Inter” coding mode).
- Each predictor is a block of pixels issued from the same image or another image, from which a difference block (or “residual”) is derived.
- the predictor (Intra predictor) used for the current block is a block of pixels constructed from the information already encoded of the current image.
- the encoded frames are of two types: temporally predicted frames (P-frames, predicted from one reference frame, and B-frames, predicted from two reference frames) and non-temporally predicted frames (called Intra frames or I-frames).
- In I-frames, only Intra prediction is considered for coding blocks.
- In P-frames and B-frames, both Intra and Inter prediction are considered for coding blocks.
- intra coding involves deriving an intra prediction block from reconstructed neighboring samples 101 of the block to be encoded (decoded), as illustrated schematically in FIGS. 1A and 1B
- Multiple prediction modes are supported, either directional or non-directional.
- the number of supported modes depends on the size of a coding unit (CU).
- the HEVC specification is still subject to change but presently the following supported modes are contemplated: 4 modes for 64×64 CUs, 18 modes for 4×4 CUs, and 35 modes for CUs of other sizes (8×8 to 32×32).
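The size-dependent mode count described above can be expressed as a simple lookup. This is an illustrative sketch (the function name and the error handling are not part of the draft standard):

```python
def num_intra_modes(cu_size):
    """Number of supported intra prediction modes for a CU of the given
    width, per the draft HEVC design described here (illustrative)."""
    if cu_size == 64:
        return 4
    if cu_size == 4:
        return 18
    if cu_size in (8, 16, 32):
        return 35
    raise ValueError("unsupported CU size: %d" % cu_size)
```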
- Intra mode coding makes use of two neighbouring CUs that have already been coded, namely the Top and Left CUs 103 and 104 .
- FIG. 2 illustrates intra prediction modes considered in HEVC.
- the intra prediction modes include a planar prediction mode identified by a mode prediction value 0, a DC mode having a mode prediction value 3 and a number of directional prediction modes identified by mode prediction values 4 to 34 for predicting directional structures in an image corresponding to different angles. Also included are horizontal prediction mode 2 and vertical prediction mode 1.
- FIG. 3 is a flowchart for use in explaining how Intra mode coding is performed in the current HEVC design.
- In a first step S201, the Intra prediction modes of the neighboring Top and Left CUs 103 and 104, as illustrated in FIG. 1B, are identified.
- the two CUs may share the same Intra prediction mode or may have different Intra prediction modes. Accordingly, in step S 201 one or two different intra prediction modes can be identified.
- In step S202, two ‘Most Probable Modes’ (MPMs), MPM0 and MPM1, are derived from the identified intra prediction modes.
- If the prediction modes of the Top and Left CUs 103 and 104 differ, MPM0 and MPM1 are set respectively to the minimum and maximum of the two prediction mode values. If the prediction modes of the Top and Left CUs are equal and do not correspond to the Planar prediction mode, then MPM0 is set to the Planar mode and MPM1 is set to the shared prediction mode of the Top and Left CUs. If the prediction modes of the Top and Left CUs 103 and 104 both correspond to the Planar mode, then MPM0 is set to the Planar mode and MPM1 is set to the DC mode.
- MPM0 and MPM1 are thus ordered according to their prediction mode values, the prediction mode having the smaller mode value being referred to as MPM0 and the prediction mode having the greater mode value being referred to as MPM1.
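The MPM0/MPM1 derivation rules above can be sketched as follows. This is an illustrative sketch, not the patent's or the standard's code; the mode numbering (Planar 0, DC 3) is taken from the description of FIG. 2 above:

```python
PLANAR, DC = 0, 3  # mode values as given in the description of FIG. 2

def derive_mpm01(top_mode, left_mode):
    """Derive the two Most Probable Modes from the Top and Left CU
    prediction modes, following the current-design rules described above."""
    if top_mode != left_mode:
        # Different modes: MPM0/MPM1 are the min and max mode values.
        return min(top_mode, left_mode), max(top_mode, left_mode)
    if top_mode != PLANAR:
        # Equal and not Planar: insert Planar as MPM0.
        return PLANAR, top_mode
    # Both Planar: use Planar and DC.
    return PLANAR, DC
```

Note that in every branch the pair comes out ordered by mode value, matching the MPM0/MPM1 ordering described above (Planar has the smallest mode value).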
- In step S203, the prediction mode of the current coding unit is then compared to the two MPMs. If the prediction mode of the current coding unit is equal to either MPM0 or MPM1, then in step S204 a first coding process (process 1) is applied.
- This first coding process involves coding a flag signaling that the mode of the current block is equal to one of the MPMs, and then, coding the index of the MPM concerned (0 if MPM0, 1 if MPM1).
- If in step S203 it is determined that the prediction mode of the current block is not equal to either of the two MPMs, then in step S205 a second coding process (process 2) is applied.
- the second coding process involves coding the mode value of the current block.
- Statistically, process 1 is used more often than process 2, because a block's prediction mode is more often equal to one of its MPMs than different from all of them.
- the entropy coding engine benefits from this property, either by using shorter codewords in process 1 than in process 2, or by exploiting the higher probability of being equal to one of the MPMs (arithmetic coding as used in CABAC efficiently exploits the probability to improve the encoding and reduce the coding cost).
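The choice between the two processes can be sketched as follows. This is a simplified illustration: the dictionary keys are invented names, and the real design entropy-codes these symbols (e.g. with CABAC) and remaps the remaining mode value so that MPM values are skipped:

```python
def encode_intra_mode(mode, mpms):
    """Return a symbolic description of what is signalled for the current
    block's intra prediction mode (entropy coding omitted)."""
    if mode in mpms:
        # Process 1: a flag saying "equal to an MPM", then the MPM index.
        return {"mpm_flag": 1, "mpm_idx": mpms.index(mode)}
    # Process 2: a flag saying "not an MPM", then the mode value itself
    # (remapped in the real codec; kept verbatim in this sketch).
    return {"mpm_flag": 0, "rem_mode": mode}
```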
- the present invention has been devised to address one or more of the foregoing concerns and desires. It is desirable to improve the coding efficiency of methods for encoding prediction mode information.
- According to one aspect there is provided a method of encoding a mode value representing a prediction mode related to a current coding unit to be encoded, the method comprising: deriving first and second reference prediction mode values from respective prediction modes of at least two neighbouring coding units of the current coding unit, the first and second reference prediction modes being different from one another; comparing the prediction mode value to be encoded with one or more of the reference prediction mode values; and selecting, based on the comparison, an encoding process, from among at least first and second encoding processes, to apply to the mode value to be encoded; wherein the method further comprises deriving a third reference prediction mode value from the first and second reference prediction mode values, the third reference prediction mode being different from each of said first and second reference prediction mode values; and said comparison comprises comparing the prediction mode value to be encoded with at least one of the first, second and third reference prediction mode values.
- the coding efficiency is improved. This is due to the increase in the probability that the prediction mode of the current coding block corresponds to one of the derived most probable modes. Since this enables a more economical encoding process to be used to encode the prediction mode of the current coding block, the overall coding cost is reduced.
- a device for encoding mode information representing a prediction mode related to a current coding unit comprising: derivation means for deriving first and second reference prediction mode values from respective prediction modes of at least two neighbouring coding units of the current coding unit, the first and second reference prediction modes being different from one another; and comparison means for comparing the prediction mode value to be encoded with one or more of the reference prediction mode values; and selection means for selecting, based on the comparison, an encoding process, from among at least first and second encoding processes, to apply to the mode value to be encoded; wherein, the derivation means is operable to derive a third reference prediction mode value from the first and second reference prediction mode values, the third reference prediction mode being different from each of said first and second reference prediction mode values; and said comparison means is operable to compare the prediction mode value to be encoded with at least one of the first, second and third reference prediction mode values.
- a method of decoding a mode value representing a prediction mode related to a current decoding unit to be decoded comprising: deriving first and second reference prediction mode values from respective prediction modes of at least two neighbouring decoding units of the current decoding unit, the first and second reference prediction modes being different from one another; and comparing the prediction mode value to be decoded with one or more of the reference prediction mode values; and selecting, based on the comparison, a decoding process, from among at least first and second decoding processes, to apply to the mode value to be decoded; wherein, the method further comprises deriving a third reference prediction mode value from the first and second reference prediction mode values, the third reference prediction mode being different from each of said first and second reference prediction mode values; and said comparison comprises comparing the prediction mode value to be decoded with at least one of the first, second and third reference prediction mode values.
- a device for decoding a mode value representing a prediction mode related to a current decoding unit to be decoded comprising: derivation means for deriving first and second reference prediction mode values from respective prediction modes of at least two neighbouring decoding units of the current decoding unit, the first and second reference prediction modes being different from one another; comparison means for comparing the prediction mode value to be decoded with one or more of the reference prediction mode values; and selection means for selecting, based on the comparison, a decoding process, from among at least first and second decoding processes, to apply to the mode value to be decoded; wherein the derivation means is operable to derive a third reference prediction mode value from the first and second reference prediction mode values, the third reference prediction mode being different from each of said first and second reference prediction mode values; and said comparison means is operable to compare the prediction mode value to be decoded with at least one of the first, second and third reference prediction mode values.
- a method of deriving reference prediction mode values for encoding or decoding of a prediction mode related to a current coding unit comprising: deriving first and second reference prediction mode values from respective prediction modes of at least two neighbouring coding units of the current coding unit, the first and second reference prediction modes being different from one another; and deriving a third reference prediction mode value from the first and second reference prediction mode values, the third reference prediction mode being different from each of said first and second reference prediction mode values; wherein the first, second and third reference prediction mode values are usable for comparison with the prediction mode value to be encoded or decoded.
- the third reference prediction mode value is set to a mode value corresponding to a planar prediction mode if none of said first and second reference prediction mode values corresponds to the planar prediction mode.
- the third reference prediction mode value is set to a predefined prediction mode value if one of the first and second reference prediction mode values corresponds to a DC prediction mode and the other of the first and second reference prediction mode values corresponds to a planar prediction mode.
- the predefined prediction mode value is signalled in a slice or picture header.
- the predefined prediction mode value has a small prediction mode value, such as for example a prediction mode value less than 5.
- the predefined prediction mode value corresponds to a horizontal prediction mode or a vertical prediction mode.
- the predefined prediction mode value is dependent upon the content of the image being encoded.
- the predefined prediction mode value is adaptively derived based on mode probabilities representative of the probability of occurrence of respective prediction modes, said mode probabilities being regularly computed.
- the third prediction mode value is set at a prediction mode value corresponding to the prediction mode with the next angular direction superior to the direction of the reference prediction mode value concerned.
- the at least two neighbouring encoding or decoding units comprise the left neighbouring encoding or decoding unit and the top neighbouring encoding or decoding unit of the current encoding or decoding unit.
- the first encoding or decoding process comprises encoding or decoding first information indicating a predetermined relationship between the mode value to be encoded or decoded and at least one of the first, second and third reference prediction mode values.
- the second encoding or decoding process comprises encoding or decoding second information representing the mode value to be encoded or decoded.
- the first encoding or decoding process is selected when the mode value to be encoded or decoded is equal to at least one of the three reference prediction mode values, and the second encoding or decoding process is selected when the mode value to be encoded or decoded differs from each of the three reference prediction mode values.
- the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit”, “module” or “system”.
- the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
- a tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like.
- a transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
- FIGS. 1A and 1B are schematic diagrams for use in explaining how an intra prediction block is derived in the current HEVC design
- FIG. 2, also discussed hereinbefore, schematically illustrates intra prediction modes in the current HEVC design
- FIG. 3 is a flowchart for use in explaining intra mode coding in the current HEVC design
- FIG. 4 is a flowchart for use in explaining a principle of intra mode coding according to at least one embodiment of the present invention
- FIG. 5 is a flowchart of steps of a method for deriving a reference prediction mode value according to an embodiment of the present invention
- FIG. 6 is a flowchart illustrating steps related to the method of FIG. 5 ;
- FIG. 7 shows a sequence of images
- FIG. 8 shows parts of apparatus suitable for implementing an encoder according to an embodiment of the present invention
- FIG. 9 shows a block diagram of parts of an encoder according to at least one embodiment of the present invention.
- FIG. 10 illustrates a block diagram of parts of a decoder.
- FIG. 4 is a flowchart for use in explaining a principle of an intra mode coding method embodying the present invention.
- the intra mode coding method according to this flowchart is applicable to any entropy coding engines such as CABAC or CAVLC.
- Steps S401 and S402 are the same as steps S201 and S202, respectively, in FIG. 3, and the description of these steps is not repeated here.
- In step S403, a third most probable mode is derived from the first and second most probable modes MPM0 and MPM1, themselves derived in step S402 from the prediction modes of the neighbouring top and left CUs.
- FIG. 5 is a flow chart illustrating in more detail the steps for deriving the third most probable mode MPM2 according to a first embodiment of the invention.
- In step S501, the first and second most probable mode values MPM0 and MPM1, as derived in step S402, are identified.
- In step S502, it is checked whether one of the most probable mode values MPM0 and MPM1 corresponds to a planar prediction mode. This step may involve checking both most probable mode values. However, since MPM0 and MPM1 have been ordered according to their prediction mode values, it may be sufficient to check only MPM0, which corresponds to the lower mode value.
- If neither most probable mode value corresponds to a planar prediction mode, the third most probable mode MPM2 is set to the mode value of the planar prediction mode in step S506. Since the planar mode is statistically the most frequently used prediction mode, it is beneficial to insert it into the set of MPMs for the later comparison step, as it is more likely to correspond to the prediction mode of the current block.
- If, however, it is determined in step S502 that one of the first and second MPMs, MPM0 or MPM1, corresponds to a planar mode, it is then checked in step S503 whether the other of MPM0 and MPM1 corresponds to a DC prediction mode. If one of the first and second MPMs corresponds to a planar prediction mode and the other corresponds to a DC prediction mode, the third MPM, MPM2, is set to a pre-defined mode value.
- In this case MPM2 is set to the prediction mode value corresponding to the vertical prediction mode.
- The prediction mode value corresponding to the horizontal prediction mode could also be chosen, but vertical structures are statistically more frequent in natural images than horizontal ones, so the vertical mode is more likely to correspond to the prediction mode of the current block.
- the pre-defined prediction mode may be signaled in the slice or picture header, since it may be dependent on the picture content, for instance depending on the statistics of the modes distribution in the image.
- the pre-defined prediction mode can be adaptively derived, based on mode probabilities representative of the probability of occurrence of respective prediction modes that are regularly computed
- probability tables are defined. Each time a mode is coded, its probability is updated.
- When MPM0 and MPM1 are Planar and DC, MPM2 is computed as the mode, different from Planar and DC, which has the highest probability value. Thus, in this specific case of Planar and DC being the first two MPMs, MPM2 is adaptively computed depending on the image content.
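One way to realize this adaptive choice is a running frequency table, sketched below. This is an assumption-laden illustration: the class name, the update rule (simple counting) and the tie-breaking (smaller mode value wins) are not specified by the text above:

```python
from collections import Counter

PLANAR, DC = 0, 3  # mode values as given in the description of FIG. 2

class ModeStats:
    """Track how often each intra mode is coded, so that when MPM0/MPM1
    are Planar and DC, MPM2 can be the most probable remaining mode."""

    def __init__(self, num_modes=35):
        self.counts = Counter({m: 0 for m in range(num_modes)})

    def update(self, coded_mode):
        # Called each time a mode is coded, updating its probability.
        self.counts[coded_mode] += 1

    def most_probable_excluding(self, excluded=(PLANAR, DC)):
        # Highest count wins; ties broken by smaller mode value (assumption).
        candidates = [(n, m) for m, n in self.counts.items() if m not in excluded]
        n, m = max(candidates, key=lambda t: (t[0], -t[1]))
        return m
```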
- If, however, it is determined in step S503 that neither MPM0 nor MPM1 corresponds to a DC prediction mode, so that one of the first and second MPMs corresponds to a directional prediction mode MPM_dir, then in step S505 the third MPM, MPM2, is set to the directional prediction mode with the nearest authorized angular direction superior to the direction of MPM_dir.
- FIG. 6 illustrates this process in more detail.
- In step S601, the prediction mode m of the neighbouring coding units which is not a planar mode is identified.
- In step S602, it is determined whether the identified prediction mode is DC. If so, MPM2 is set to a vertical prediction mode; otherwise, MPM2 is set in step S604 to the nearest authorized angular direction superior to the direction (MPM_dir) of mode m.
- MPM2 is set to 24 if the current coding unit is of size 8×8 to 32×32, or 6 if the current coding unit is of size 4×4 (in the current HEVC design, modes with values higher than 17 are forbidden in a 4×4 CU).
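The MPM2 derivation of FIGS. 5 and 6 can be sketched as follows. This is a deliberately simplified illustration: the "nearest authorized superior angular direction" is approximated by incrementing the mode value, clipped to the largest value allowed for the CU size, and the mode numbering (planar 0, vertical 1, DC 3) is taken from the description of FIG. 2 above:

```python
PLANAR, VERTICAL, DC = 0, 1, 3  # mode values per the description of FIG. 2

def derive_mpm2(mpm0, mpm1, cu_size):
    """Sketch of the third-MPM derivation (FIGS. 5 and 6)."""
    if PLANAR not in (mpm0, mpm1):
        # Neither MPM is planar: planar is statistically the most
        # frequent mode, so insert it as MPM2 (step S506).
        return PLANAR
    other = mpm1 if mpm0 == PLANAR else mpm0
    if other == DC:
        # Planar + DC: fall back to a pre-defined mode (vertical here).
        return VERTICAL
    # Planar + a directional mode MPM_dir: take the next angular
    # direction (approximation of the rule in step S604), clipped to
    # the maximum mode value authorized for this CU size.
    max_mode = 17 if cu_size == 4 else 34
    return min(other + 1, max_mode)
```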
- Step S402 may not include the process of reordering MPM0 and MPM1 according to their prediction mode values; in that case MPM0, MPM1 and MPM2 may be ordered according to their prediction mode values after MPM2 has been derived.
- In step S404 it is verified whether the prediction mode related to the current coding block is equal to the first MPM MPM0, the second MPM MPM1 or the third MPM MPM2 derived in steps S402 and S403, in order to determine whether encoding Process 1 or encoding Process 2 will be applied to encode the prediction mode value of the current coding block.
- Process 1, which is carried out when the mode of the current block is equal to one of the three MPMs MPM0, MPM1 or MPM2, is implemented in step S405.
- step S 405 can be the same as step S 204 in FIG. 3 and will not be described in detail here.
- Step S 406 is the same as the corresponding step S 205 in FIG. 3 , and will not be described in detail here.
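The selection between the two encoding processes in step S 404 may be sketched as follows. The plain list standing in for the bit stream is an illustrative simplification, not the actual entropy-coding interface:

```python
def encode_intra_mode(mode, mpms, bits):
    """Sketch of step S404: Process 1 if the mode equals one of the three
    MPMs, Process 2 otherwise. `bits` is a plain list standing in for the
    entropy-coded bit stream (an illustrative simplification)."""
    if mode in mpms:
        bits.append(1)                  # flag: mode is most probable (Process 1)
        bits.append(mpms.index(mode))   # index among MPM0/MPM1/MPM2 (step S405)
    else:
        bits.append(0)                  # flag: mode is not an MPM (Process 2)
        bits.append(mode)               # code the mode value itself (step S406)
    return bits
```

The decoder mirrors this choice: a set flag means an MPM index follows, a cleared flag means the mode value itself follows.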
- FIG. 7 shows the image coding structure 100 used in HEVC.
- the original video sequence 1001 is a succession of digital images “images i”.
- As is known per se, a digital image is represented by one or more matrices, the coefficients of which represent pixels.
- the images 1002 are divided into slices 1003 .
- a slice is a part of the image or the entire image.
- these slices are divided into non-overlapping Largest Coding Units (LCUs) 1004 , generally blocks of size 64×64 pixels.
- Each LCU may in its turn be iteratively divided into smaller variable size Coding Units (CUs) 1005 using a quadtree decomposition.
- Each CU can be further partitioned into a maximum of 2 symmetric rectangular Partition Units 1006 .
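The iterative LCU decomposition may be sketched as a recursive quadtree split; `should_split` below is a hypothetical encoder decision function, not part of the described design:

```python
def quadtree_split(x, y, size, min_size, should_split):
    """Illustrative recursive quadtree decomposition of an LCU (e.g. 64x64)
    into variable-size CUs. Returns the leaf CUs as (x, y, size) tuples."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):            # visit the four quadrants
            for dx in (0, half):
                cus += quadtree_split(x + dx, y + dy, half, min_size, should_split)
        return cus
    return [(x, y, size)]               # a leaf coding unit
```

Splitting a 64×64 LCU once, for instance, yields four 32×32 CUs; declining every split leaves the LCU as a single CU.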
- FIG. 8 illustrates a diagram of apparatus 1000 adapted to implement an encoder according to an embodiment of the present invention or to implement a decoder.
- the apparatus 1000 is for example a micro-computer, a workstation or a light portable device.
- the apparatus 1000 comprises a communication bus 1113 to which there are preferably connected:
- the apparatus 1000 may also have the following components:
- the apparatus 1000 can be connected to various peripherals, such as for example a digital camera 1100 or a microphone 1108 , each being connected to an input/output card (not shown) so as to supply multimedia data to the apparatus 1000 .
- the communication bus affords communication and interoperability between the various elements included in the apparatus 1000 or connected to it.
- the representation of the bus is not limiting and in particular the central processing unit is able to communicate instructions to any element of the apparatus 1000 directly or by means of another element of the apparatus 1000 .
- the disk 1106 can be replaced by any information medium such as for example a compact disk (CD-ROM), rewritable or not, a ZIP disk or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables the method of encoding a sequence of digital images and/or the method of decoding a bitstream according to the invention to be implemented.
- the executable code may be stored either in read only memory 1107 , on the hard disk 1104 or on a removable digital medium such as for example a disk 1106 as described previously.
- the executable code of the programs can be received by means of the communication network 1103 , via the interface 1102 , in order to be stored in one of the storage means of the apparatus 1000 before being executed, such as the hard disk 1104 .
- the central processing unit 1111 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, instructions that are stored in one of the aforementioned storage means.
- the program or programs that are stored in a non-volatile memory for example on the hard disk 1104 or in the read only memory 1107 , are transferred into the random access memory 1112 , which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
- the apparatus is a programmable apparatus which uses software to implement the invention.
- the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
- FIG. 9 illustrates a block diagram of an encoder 1200 according to an embodiment of the invention.
- the encoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 1111 of apparatus 1000 , a corresponding step of a method implementing an embodiment of the invention.
- An original sequence of digital images i 0 to i n 1001 is received as an input by the encoder 1200 .
- Each digital image is represented by a set of samples, known as pixels.
- a bitstream 1210 is output by the encoder 1200 .
- a CU or PU is a block of pixels.
- the input digital images i are divided into blocks by module 1202 . These blocks are image portions and may be of variable sizes (e.g. 4×4, 8×8, 16×16, 32×32, 64×64).
- each block of an image being processed is predicted spatially by an “Intra” predictor module 1203 , or temporally by an “Inter” predictor module comprising a motion estimation module 1204 and a motion compensation module 1205 .
- Each predictor is a block of pixels issued from the same image or another image, from which a difference block (or “residual”) is derived.
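The residual derivation is a sample-wise difference between the current block and its predictor; a minimal sketch:

```python
def residual_block(current, predictor):
    """Difference block ("residual") between the current block of pixels
    and its Intra or Inter predictor, computed sample by sample."""
    return [[c - p for c, p in zip(row_c, row_p)]
            for row_c, row_p in zip(current, predictor)]
```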
- the encoded frames are of two types: temporally predicted frames (P-frames, predicted from one reference frame, or B-frames, predicted from two reference frames) and non-temporally predicted frames (called Intra frames or I-frames).
- In I-frames, only Intra prediction is considered for coding CUs/PUs.
- In P-frames and B-frames, Intra and Inter prediction are considered for coding CUs/PUs.
- the current block is predicted by means of an “Intra” predictor, a block of pixels constructed from the already encoded information of the current image.
- Mono-prediction consists of predicting the block by referring to one reference block from one reference picture.
- Biprediction consists of predicting the block by referring to two reference blocks from one or two reference pictures.
- An estimation of motion is carried out by module 1204 between the current CU or PU and reference images 1216 . This motion estimation is made in order to identify, in one or several of these reference images, one (P-type) or several (B-type) blocks of pixels to use as predictors of this current block. In the case where several block predictors are used (B-type), they are merged to generate one single prediction block.
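Merging two B-type block predictors into a single prediction block may be sketched as a sample average; the plain rounded average below is an illustrative assumption (the actual merge may be weighted):

```python
def merge_bi_predictors(pred_a, pred_b):
    """B-type prediction: merge two block predictors (taken from one or two
    reference pictures) into one prediction block by a rounded average."""
    return [[(a + b + 1) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(pred_a, pred_b)]
```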
- the reference images used consist of images in the video sequence that have already been coded and then reconstructed (by decoding).
- the motion estimation carried out by module 1204 is a block matching algorithm (BMA).
- the predictor obtained by the algorithm is then subtracted from the current data block to be processed so as to obtain a difference block (block residual). This processing is called “motion compensation” and is carried out by module 1205 .
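A block matching algorithm of the kind carried out by module 1204 may be sketched as an exhaustive search minimising the sum of absolute differences; the SAD criterion, the full search and the window size are illustrative choices, not the method prescribed by the design:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block_matching(cur, ref, bx, by, bsize, search):
    """Toy full-search block matching: return the motion vector (dx, dy)
    whose candidate block in the reference image best matches the current
    block located at (bx, by)."""
    cur_blk = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and y + bsize <= len(ref) and x + bsize <= len(ref[0]):
                cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
                cost = sad(cur_blk, cand)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv
```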
- These two types of coding thus supply several texture residuals (the difference between the current block and the predictor block), which are compared in a module 1206 for selecting the best coding mode.
- an item of information for describing the “Intra” predictor used is coded by an entropic encoding module 1209 before being inserted in the bit stream 1210 .
- Embodiments of the present invention described hereinbefore with reference to FIGS. 4 to 6 are applicable to the entropic encoding module 1209 in FIG. 9 .
- motion information is coded by the entropic encoding module 1209 and inserted in the bit stream 1210 .
- This motion information is in particular composed of one or several motion vectors (indicating the position of the predictor block in the reference images relative to the position of the block to be predicted) and an image index among the reference images.
- the residual obtained according to the coding mode selected by the module 1206 is then transformed by module 1207 .
- the transform applies to a Transform Unit (TU) that is included in a CU.
- a TU can be further split into smaller TUs 1006 using a so-called Residual QuadTree (RQT) decomposition.
- In HEVC, generally 2 or 3 levels of decomposition are used and the authorized transform sizes are 32×32, 16×16, 8×8 and 4×4.
- the transform basis is derived from a discrete cosine transform DCT.
- the residual transformed coefficients are then quantized by a quantization module 1208 .
- the coefficients of the quantized transformed residual are then coded by means of the entropic coding module 1209 and then inserted in the compressed bit stream 1210 .
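The transform and quantization stages (modules 1207 and 1208 ) may be sketched in one dimension. The orthonormal DCT-II and the uniform scalar quantizer below are illustrative: HEVC itself uses integer approximations of the DCT and a more elaborate quantizer.

```python
import math

def dct2_1d(x):
    """Orthonormal 1-D DCT-II, the basis the HEVC transforms approximate."""
    n = len(x)
    return [math.sqrt((1 if k == 0 else 2) / n)
            * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
            for k in range(n)]

def quantize(coeffs, qstep):
    """Uniform scalar quantization of the transformed residual (module 1208)."""
    return [round(c / qstep) for c in coeffs]
```

A flat residual, for instance, concentrates all its energy in the DC coefficient, which is what makes the subsequent entropy coding of the quantized coefficients effective.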
- the encoder performs a decoding of the blocks already encoded by means of a so-called “decoding” loop 1211 - 1215 .
- This decoding loop makes it possible to reconstruct the blocks and images from the quantized transformed residuals.
- the quantized transformed residual is dequantized in module 1211 by applying the reverse quantization to that provided by module 1208 and reconstructed in module 1212 by applying the reverse transform to that of the module 1207 .
- In module 1213 , the “Intra” predictor used is added to this residual in order to recover a reconstructed block corresponding to the original block modified by the losses resulting from a lossy transformation, here the quantization operations.
- the blocks pointed to by the current motion vectors are merged then added to this decoded residual in module 1214 .
- the original block, modified by the losses resulting from the quantization operations, is obtained.
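The decoding loop (modules 1211 to 1214 ) may be sketched as dequantization, inverse transform and addition of the predictor; the identity `inverse_transform` used in the test is purely illustrative:

```python
def reconstruct_block(qcoeffs, qstep, inverse_transform, predictor):
    """Sketch of the decoding loop: reverse the quantization (module 1211),
    reverse the transform (module 1212), then add the predictor used
    (modules 1213/1214) to recover the reconstructed block."""
    dequantized = [q * qstep for q in qcoeffs]
    residual = inverse_transform(dequantized)
    return [p + r for p, r in zip(predictor, residual)]
```

The reconstruction is lossy only through the quantization: the dequantized coefficients differ from the originals by at most half a quantization step per coefficient.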
- a final loop filter 1215 is applied to the reconstructed signal in order to reduce the effects created by heavy quantization of the residuals obtained and to improve the signal quality.
- the loop filter comprises two steps, a “deblocking” filter and a linear filtering.
- the deblocking filtering smoothes the borders between the blocks in order to visually attenuate these high frequencies created by the coding.
- the linear filtering further improves the signal using filter coefficients adaptively determined at the encoder.
- the filtering by module 1215 is thus applied to an image when all the blocks of pixels of this image have been decoded.
- the filtered images also called reconstructed images, are then stored as reference images 1216 in order to allow the subsequent “Inter” predictions taking place during the compression of the following images of the current video sequence.
- the motion estimation is carried out on N images.
- the best “Inter” predictors of the current block, for the motion compensation are selected in some of the multiple reference images. Consequently two adjoining blocks may have two predictor blocks that come from two distinct reference images. This is in particular the reason why, in the compressed bit stream, the index of the reference image (in addition to the motion vector) used for the predictor block is indicated.
- the use of multiple reference images is both a tool for resisting errors and a tool for improving the compression efficacy.
- the VCEG group recommends limiting the number of reference images to four.
- FIG. 10 illustrates a block diagram of a decoder 1300 according to an embodiment of the invention.
- the decoder is represented by connected modules, each module being adapted to implement, for example in the form of programming instructions to be executed by the CPU 1111 of apparatus 1000 , a corresponding step of a method implementing an embodiment of the invention.
- the decoder 1300 receives as an input a bit stream 1301 corresponding to a video sequence 1210 compressed by an encoder of the HEVC type, such as the one shown in FIG. 9 .
- bit stream 1301 is first of all decoded entropically by a module 1302 .
- the residual of the current block is then dequantized by a dequantization module 1303 . This reverses the quantization carried out by the quantization module 1208 in the encoder 1200 .
- the dequantized data is then reconstructed by a reverse transform module 1304 which performs a transformation the reverse of that carried out by the transform module 1207 in the encoder 1200 .
- the decoding of the data in the video sequence is then carried out image by image and, within an image, block by block.
- the “Inter” or “Intra” coding mode for the current block is extracted from the bit stream 1301 and decoded entropically.
- the number of the predictor is extracted from the bit stream and decoded entropically.
- the Intra predictor block associated with this index is recovered from the data already decoded of the current image.
- the residual associated with the current block is recovered from the bit stream 1301 and then decoded entropically. Finally, the Intra predictor block recovered is added to the residual thus dequantized and reconstructed in a reverse Intra prediction module 1305 in order to obtain the decoded block.
- the motion information is extracted from the bit stream 1301 by the entropic decoding module 1302 and decoded.
- This motion information is used in a reverse motion compensation module 1306 in order to determine the “Inter” predictor block contained in the reference images 1308 of the decoder 1300 .
- these reference images 1308 are composed of images that precede the image currently being decoded and that are reconstructed from the bit stream (and therefore decoded previously).
- the residual associated with the current block is, here also, recovered from the bit stream 1301 and then decoded entropically by module 1302 .
- the determined Inter predictor block is then added to the thus dequantized and reconstructed residual in the reverse motion compensation module 1306 in order to obtain the decoded block.
- the same loop filter 1307 as the filter 1215 provided at the encoder is used to eliminate the block effects and improve the signal quality in order to obtain the reference images 1308 .
- the images thus decoded constitute the output video signal 1309 of the decoder, which can then be displayed and used.
- any type of image portions to encode or decode can be considered, in particular rectangular portions or more generally geometrical portions.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1115739.3 | 2011-09-12 | ||
GB1115739.3A GB2494468B (en) | 2011-09-12 | 2011-09-12 | Method and device for encoding or decoding information representing prediction modes |
PCT/EP2012/003829 WO2013037489A1 (en) | 2011-09-12 | 2012-09-12 | Deriving reference mode values and encoding and decoding information representing prediction modes |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2012/003829 A-371-Of-International WO2013037489A1 (en) | 2011-09-12 | 2012-09-12 | Deriving reference mode values and encoding and decoding information representing prediction modes |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/891,763 Continuation US10264253B2 (en) | 2011-09-12 | 2018-02-08 | Deriving reference mode values and encoding and decoding information representing prediction modes |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150010054A1 US20150010054A1 (en) | 2015-01-08 |
US9930332B2 true US9930332B2 (en) | 2018-03-27 |
Family
ID=44908429
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/344,315 Active 2035-04-24 US9930332B2 (en) | 2011-09-12 | 2012-09-12 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US15/891,763 Active US10264253B2 (en) | 2011-09-12 | 2018-02-08 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/296,069 Active US10687057B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/295,937 Active US10687056B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/296,079 Active US10666938B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/891,763 Active US10264253B2 (en) | 2011-09-12 | 2018-02-08 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/296,069 Active US10687057B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/295,937 Active US10687056B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
US16/296,079 Active US10666938B2 (en) | 2011-09-12 | 2019-03-07 | Deriving reference mode values and encoding and decoding information representing prediction modes |
Country Status (16)
Country | Link |
---|---|
US (5) | US9930332B2 (ja) |
EP (6) | EP4228260A1 (ja) |
JP (5) | JP6129178B2 (ja) |
KR (6) | KR20140062509A (ja) |
CN (6) | CN108848387B (ja) |
BR (5) | BR122020002124B1 (ja) |
ES (1) | ES2963368T3 (ja) |
GB (1) | GB2494468B (ja) |
HR (1) | HRP20240151T1 (ja) |
HU (1) | HUE065000T2 (ja) |
IN (1) | IN2014CN02461A (ja) |
PL (1) | PL3518540T3 (ja) |
PT (1) | PT3518540T (ja) |
RS (1) | RS65122B1 (ja) |
RU (7) | RU2575992C2 (ja) |
WO (1) | WO2013037489A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10264253B2 (en) * | 2011-09-12 | 2019-04-16 | Canon Kabushiki Kaisha | Deriving reference mode values and encoding and decoding information representing prediction modes |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3139596B1 (en) | 2011-09-13 | 2019-09-25 | HFI Innovation Inc. | Method and apparatus for intra mode coding in hevc |
CN107197309B (zh) | 2011-10-07 | 2020-02-18 | 英迪股份有限公司 | 对视频信号进行解码的方法 |
CN104918056B (zh) * | 2011-10-24 | 2017-08-18 | 英孚布瑞智有限私人贸易公司 | 对帧内预测模式进行解码的方法 |
CN103931186B (zh) | 2011-10-24 | 2018-03-09 | 英孚布瑞智有限私人贸易公司 | 图像解码装置 |
HUE051689T2 (hu) * | 2011-10-24 | 2021-03-29 | Innotive Ltd | Képdekódoló készülék |
CN105338345B (zh) | 2011-10-24 | 2019-01-04 | 英孚布瑞智有限私人贸易公司 | 用于图像解码的方法和装置 |
KR20130049526A (ko) * | 2011-11-04 | 2013-05-14 | 오수미 | 복원 블록 생성 방법 |
KR20130049522A (ko) | 2011-11-04 | 2013-05-14 | 오수미 | 인트라 예측 블록 생성 방법 |
KR20130049524A (ko) | 2011-11-04 | 2013-05-14 | 오수미 | 인트라 예측 블록 생성 방법 |
US10390016B2 (en) | 2011-11-04 | 2019-08-20 | Infobridge Pte. Ltd. | Apparatus of encoding an image |
KR20130049525A (ko) | 2011-11-04 | 2013-05-14 | 오수미 | 잔차 블록 복원을 위한 역변환 방법 |
US9154796B2 (en) * | 2011-11-04 | 2015-10-06 | Qualcomm Incorporated | Intra-mode video coding |
KR20130049523A (ko) * | 2011-11-04 | 2013-05-14 | 오수미 | 인트라 예측 블록 생성 장치 |
KR20130058524A (ko) | 2011-11-25 | 2013-06-04 | 오수미 | 색차 인트라 예측 블록 생성 방법 |
CN103220506B (zh) * | 2012-01-19 | 2015-11-25 | 华为技术有限公司 | 一种编解码方法和设备 |
EP2806642B1 (en) * | 2012-01-20 | 2019-04-10 | Intellectual Discovery Co., Ltd. | Intra prediction mode mapping method and device using the method |
US9210438B2 (en) * | 2012-01-20 | 2015-12-08 | Sony Corporation | Logical intra mode naming in HEVC video coding |
CN105338363B (zh) * | 2014-07-30 | 2018-12-14 | 联想(北京)有限公司 | 一种视频帧的编码及解码方法和装置 |
CN113438477A (zh) * | 2016-04-06 | 2021-09-24 | 株式会社Kt | 对视频进行编码、解码的方法及存储压缩视频数据的设备 |
KR20180040319A (ko) * | 2016-10-12 | 2018-04-20 | 가온미디어 주식회사 | 영상 처리 방법, 그를 이용한 영상 복호화 및 부호화 방법 |
US20190289301A1 (en) * | 2016-05-23 | 2019-09-19 | Kaonmedia Co., Ltd. | Image processing method, and image encoding and decoding method using same |
CN117336475A (zh) | 2017-01-02 | 2024-01-02 | Lx 半导体科技有限公司 | 图像编码/解码方法、图像数据的发送方法以及存储介质 |
KR102252323B1 (ko) | 2018-05-10 | 2021-05-14 | 삼성전자주식회사 | 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치 |
CN112272950A (zh) * | 2018-06-18 | 2021-01-26 | 交互数字Vc控股公司 | 帧内预测中的平面和dc模式的边界滤波 |
WO2020050697A1 (ko) * | 2018-09-06 | 2020-03-12 | 엘지전자 주식회사 | 인트라 예측 모드 기반 영상 처리 방법 및 이를 위한 장치 |
US11394965B2 (en) | 2018-09-11 | 2022-07-19 | Lg Electronics Inc. | Method for processing image on basis of intra prediction mode, and device therefor |
CN113347438B (zh) | 2019-01-02 | 2023-09-29 | Oppo广东移动通信有限公司 | 帧内预测方法及装置、视频编码设备、存储介质 |
CN116527893A (zh) * | 2019-01-08 | 2023-08-01 | Lg电子株式会社 | 解码设备、编码设备和数据发送设备 |
WO2020209671A1 (ko) * | 2019-04-10 | 2020-10-15 | 한국전자통신연구원 | 화면 내 예측에서 예측 모드 관련 신호를 시그널링하는 방법 및 장치 |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003105070A1 (en) | 2002-06-01 | 2003-12-18 | Nokia Corporation | Spatial prediction based intra coding |
US20050157784A1 (en) * | 2003-12-24 | 2005-07-21 | Kabushiki Kaisha Toshiba | Moving picture coding method and moving picture coding apparatus |
US20060133486A1 (en) | 2002-10-01 | 2006-06-22 | Thomson Licensing S.A. | Implicit weighting of reference pictures in a video decoder |
RU2314656C2 (ru) | 2002-06-11 | 2008-01-10 | Нокиа Корпорейшн | Внутреннее кодирование, основанное на пространственном прогнозировании |
US20090087110A1 (en) * | 2007-09-28 | 2009-04-02 | Dolby Laboratories Licensing Corporation | Multimedia coding and decoding with additional information capability |
US20090274213A1 (en) | 2008-04-30 | 2009-11-05 | Omnivision Technologies, Inc. | Apparatus and method for computationally efficient intra prediction in a video coder |
WO2010090749A1 (en) | 2009-02-06 | 2010-08-12 | Thomson Licensing | Methods and apparatus for implicit and semi-implicit intra mode signaling for video encoders and decoders |
RU2010101115A (ru) | 2007-06-15 | 2011-07-20 | Квэлкомм Инкорпорейтед (US) | Адаптивное кодирование режима прогнозирования видеоблоков |
US20110243226A1 (en) * | 2010-04-05 | 2011-10-06 | Samsung Electronics Co., Ltd. | Low complexity entropy-encoding/decoding method and apparatus |
EP2391130A2 (en) | 2010-05-30 | 2011-11-30 | LG Electronics Inc. | Enhanced intra mode signaling |
WO2012170812A1 (en) | 2011-06-09 | 2012-12-13 | Qualcomm Incorporated | Enhanced intra-prediction mode signaling for video coding using neighboring mode |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9055298B2 (en) * | 2005-07-15 | 2015-06-09 | Qualcomm Incorporated | Video encoding method enabling highly efficient partial decoding of H.264 and other transform coded information |
JP2007243391A (ja) * | 2006-03-07 | 2007-09-20 | Victor Co Of Japan Ltd | 動画像符号化装置 |
KR101365569B1 (ko) * | 2007-01-18 | 2014-02-21 | 삼성전자주식회사 | 인트라 예측 부호화, 복호화 방법 및 장치 |
KR101365575B1 (ko) * | 2007-02-05 | 2014-02-25 | 삼성전자주식회사 | 인터 예측 부호화, 복호화 방법 및 장치 |
TWI364222B (en) * | 2007-09-05 | 2012-05-11 | Via Tech Inc | Method and device for generating prediction mode parameter |
JP2009218742A (ja) * | 2008-03-07 | 2009-09-24 | Canon Inc | 画像符号化装置 |
US8897359B2 (en) * | 2008-06-03 | 2014-11-25 | Microsoft Corporation | Adaptive quantization for enhancement layer video coding |
CN101330617B (zh) * | 2008-07-31 | 2010-11-17 | 上海交通大学 | 基于模式映射的多标准帧内预测器的硬件实现方法及装置 |
KR20100027384A (ko) * | 2008-09-02 | 2010-03-11 | 삼성전자주식회사 | 예측 모드 결정 방법 및 장치 |
KR101590500B1 (ko) * | 2008-10-23 | 2016-02-01 | 에스케이텔레콤 주식회사 | 동영상 부호화/복호화 장치, 이를 위한 인트라 예측 방향에기반한 디블록킹 필터링 장치 및 필터링 방법, 및 기록 매체 |
JP2010258738A (ja) * | 2009-04-24 | 2010-11-11 | Sony Corp | 画像処理装置および方法、並びにプログラム |
CN101605263B (zh) * | 2009-07-09 | 2012-06-27 | 杭州士兰微电子股份有限公司 | 帧内预测的方法和装置 |
RS57112B1 (sr) * | 2010-08-17 | 2018-06-29 | M&K Holdings Inc | Postupak za obnavljanje intra prediktivnog moda |
PT2887670T (pt) * | 2011-06-28 | 2017-09-18 | Samsung Electronics Co Ltd | Método e aparelho para codificar vídeo e método e aparelho para descodificar vídeo, acompanhados com intra previsão |
GB2494468B (en) | 2011-09-12 | 2014-01-15 | Canon Kk | Method and device for encoding or decoding information representing prediction modes |
2011
- 2011-09-12 GB GB1115739.3A patent/GB2494468B/en active Active
2012
- 2012-09-12 CN CN201810653729.7A patent/CN108848387B/zh active Active
- 2012-09-12 KR KR1020147009016A patent/KR20140062509A/ko not_active Application Discontinuation
- 2012-09-12 EP EP23170178.0A patent/EP4228260A1/en active Pending
- 2012-09-12 PL PL19157988.7T patent/PL3518540T3/pl unknown
- 2012-09-12 RU RU2014114449/08A patent/RU2575992C2/ru active
- 2012-09-12 BR BR122020002124-1A patent/BR122020002124B1/pt active IP Right Grant
- 2012-09-12 RS RS20240121A patent/RS65122B1/sr unknown
- 2012-09-12 KR KR1020187027326A patent/KR102052292B1/ko active IP Right Grant
- 2012-09-12 BR BR112014005323-5A patent/BR112014005323B1/pt active IP Right Grant
- 2012-09-12 CN CN201810652420.6A patent/CN108632626B/zh active Active
- 2012-09-12 KR KR1020187027325A patent/KR102052291B1/ko active IP Right Grant
- 2012-09-12 PT PT191579887T patent/PT3518540T/pt unknown
- 2012-09-12 HR HRP20240151TT patent/HRP20240151T1/hr unknown
- 2012-09-12 IN IN2461CHN2014 patent/IN2014CN02461A/en unknown
- 2012-09-12 KR KR1020187027324A patent/KR102052290B1/ko active IP Right Grant
- 2012-09-12 EP EP12759373.9A patent/EP2756675B1/en active Active
- 2012-09-12 EP EP19157988.7A patent/EP3518540B1/en active Active
- 2012-09-12 EP EP23170181.4A patent/EP4228261A1/en active Pending
- 2012-09-12 WO PCT/EP2012/003829 patent/WO2013037489A1/en active Application Filing
- 2012-09-12 CN CN201810653568.1A patent/CN108632628B/zh active Active
- 2012-09-12 BR BR122020002127-6A patent/BR122020002127B1/pt active IP Right Grant
- 2012-09-12 CN CN201810653727.8A patent/CN108632617B/zh active Active
- 2012-09-12 KR KR1020167005146A patent/KR101903101B1/ko active IP Right Grant
- 2012-09-12 BR BR122020002125-0A patent/BR122020002125B1/pt active IP Right Grant
- 2012-09-12 BR BR122020002126-8A patent/BR122020002126B1/pt active IP Right Grant
- 2012-09-12 CN CN201280044447.8A patent/CN103797800B/zh active Active
- 2012-09-12 EP EP23170176.4A patent/EP4228259A1/en active Pending
- 2012-09-12 US US14/344,315 patent/US9930332B2/en active Active
- 2012-09-12 JP JP2014528903A patent/JP6129178B2/ja active Active
- 2012-09-12 CN CN201810653510.7A patent/CN108632627B/zh active Active
- 2012-09-12 KR KR1020187027327A patent/KR102068921B1/ko active IP Right Grant
- 2012-09-12 HU HUE19157988A patent/HUE065000T2/hu unknown
- 2012-09-12 ES ES19157988T patent/ES2963368T3/es active Active
- 2012-09-12 EP EP23170182.2A patent/EP4224857A1/en active Pending
2016
- 2016-01-25 RU RU2016102177A patent/RU2016102177A/ru not_active Application Discontinuation
2017
- 2017-04-12 JP JP2017079151A patent/JP6513120B2/ja active Active
2018
- 2018-02-08 US US15/891,763 patent/US10264253B2/en active Active
- 2018-04-18 RU RU2018114274A patent/RU2696252C1/ru active
2019
- 2019-02-06 JP JP2019020165A patent/JP6766195B2/ja active Active
- 2019-03-07 US US16/296,069 patent/US10687057B2/en active Active
- 2019-03-07 US US16/295,937 patent/US10687056B2/en active Active
- 2019-03-07 US US16/296,079 patent/US10666938B2/en active Active
- 2019-03-29 JP JP2019066250A patent/JP6766210B2/ja active Active
- 2019-03-29 JP JP2019066251A patent/JP6766211B2/ja active Active
- 2019-07-19 RU RU2019122867A patent/RU2722536C1/ru active
2020
- 2020-05-22 RU RU2020116860A patent/RU2738256C1/ru active
- 2020-05-22 RU RU2020116865A patent/RU2732673C1/ru active
- 2020-12-08 RU RU2020140328A patent/RU2759319C1/ru active
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003105070A1 (en) | 2002-06-01 | 2003-12-18 | Nokia Corporation | Spatial prediction based intra coding |
RU2314656C2 (ru) | 2002-06-11 | 2008-01-10 | Нокиа Корпорейшн | Внутреннее кодирование, основанное на пространственном прогнозировании |
US20060133486A1 (en) | 2002-10-01 | 2006-06-22 | Thomson Licensing S.A. | Implicit weighting of reference pictures in a video decoder |
RU2335860C2 (ru) | 2002-10-01 | 2008-10-10 | Томсон Лайсенсинг С.А. | Неявное взвешивание опорных изображений в видеодекодере |
US20050157784A1 (en) * | 2003-12-24 | 2005-07-21 | Kabushiki Kaisha Toshiba | Moving picture coding method and moving picture coding apparatus |
RU2010101115A (ru) | 2007-06-15 | 2011-07-20 | Квэлкомм Инкорпорейтед (US) | Адаптивное кодирование режима прогнозирования видеоблоков |
US20090087110A1 (en) * | 2007-09-28 | 2009-04-02 | Dolby Laboratories Licensing Corporation | Multimedia coding and decoding with additional information capability |
US20090274213A1 (en) | 2008-04-30 | 2009-11-05 | Omnivision Technologies, Inc. | Apparatus and method for computationally efficient intra prediction in a video coder |
WO2010090749A1 (en) | 2009-02-06 | 2010-08-12 | Thomson Licensing | Methods and apparatus for implicit and semi-implicit intra mode signaling for video encoders and decoders |
US20110243226A1 (en) * | 2010-04-05 | 2011-10-06 | Samsung Electronics Co., Ltd. | Low complexity entropy-encoding/decoding method and apparatus |
EP2391130A2 (en) | 2010-05-30 | 2011-11-30 | LG Electronics Inc. | Enhanced intra mode signaling |
WO2012170812A1 (en) | 2011-06-09 | 2012-12-13 | Qualcomm Incorporated | Enhanced intra-prediction mode signaling for video coding using neighboring mode |
Non-Patent Citations (20)
Title |
---|
Bjontegaard, et al., "Definition of New Coding Elements from Telenor", 10. VCEG Meeting: May 16, 2000-May 19, 2000; Osaka, JP; (Video Codingexperts Group of ITU-T SG.16),, No. q15j28, May 9, 2000 (May 9, 2000), XP030003057, ISSN: 0000-0466 * section 3.4.8 *. |
Bossen, F: "Common test conditions and software reference configurations", 20110322, No. JCTVC-E700, Mar. 22, 2011 (Mar. 22, 2011), XP030009015, ISSN: 0000-0003 * section 1 *. |
Chien, et al., "Parsing friendly intra mode coding", 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG. 16); URL: http://wftp3.itu.int/av-arch/jctvc-site ,, No. JCTVC-F459, Jul. 15, 2011 (Jul. 15, 2011) XP030009482, * sections 2 and 3.1*; p. 2-p. 3. |
Chuang et al., "Luma Intra Prediction Mode Coding", 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16) URL: http://wftp3.itu.int/av-arch/jctvc-site/ ,, No. JCTVC-FO62, Jul. 15, 2011 (Jul. 15, 2011), XP030009085, * sections 1 and 2 * abstract; figures 2,3 tables 1,2. |
Chuang, et al., "CE6b: Intra prediction mode coding", 7. JCT-VC Meeting; 98. MPEG Meeting; Nov. 21, 2011-Nov. 30, 2011; Geneva; (Joint Collaborative Team on Video Coding of ISO/IEC JCT1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/,, No. JCTVC-G203, Nov. 7, 2011 (Nov. 7, 2011), XP030110187, * section 2.1 *. |
Francois, et al., "CE6b: Intra Mode coding with 4 MPMs and mode ranking", 98. MPEG Meeting; Nov. 28, 2011-Dec. 2, 2011; Geneva; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11), No. m21801, Nov. 21, 2011 (Nov. 21, 2011), XP030050364, * section 2 *. |
Francois, et al., "Modified Intra Mode Coding", 6. JCT-VC Meeting; 97. MPEG Meeting; Jul. 14, 2011-Jul. 22, 2011; Torino; (Joint Collaborative Team on Video Coding of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16); URL: http://wftp3.itu.int/av-arch/jctvc-site/, No. JCTVC-F269, Jul. 16, 2011 (Jul. 16, 2011), XP030009292, abstract; * introduction *. |
Guo, et al., "CE14 Subtest 1: Intra Most Probable Mode Coding for Luma", 20110309, No. JCTVC-E088, Mar. 9, 2011 (Mar. 9, 2011), XP030008594, ISSN: 0000-0007, * section 3 *. |
Mei Guo, et al., "Improved Intra Mode Coding", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JCTVC-D166, 4th Meeting: Daegu, Korea, Jan. 20-28, 2011. |
Van Wallendael, et al., "Improved intra mode signaling for HEVC", Multimedia and Expo (ICME), 2011 IEEE International Conference on, IEEE, Jul. 11, 2011 (Jul. 11, 2011), pp. 1-6, XP031964819, DOI: 10.1109/ICME.2011.6012143, ISBN: 978-1-61284-348-3, * sections 2 and 4 *; figure 2; table 1. |
Wiegand, et al., "WD3: Working Draft 3 of High-Efficiency Video Coding", 20110329, No. JCTVC-E603, Mar. 29, 2011 (Mar. 29, 2011), XP030009014, ISSN: 0000-0003, * section 8.3.1 *; figure 8.1. |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10264253B2 (en) * | 2011-09-12 | 2019-04-16 | Canon Kabushiki Kaisha | Deriving reference mode values and encoding and decoding information representing prediction modes |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10687056B2 (en) | | Deriving reference mode values and encoding and decoding information representing prediction modes |
US20120327999A1 (en) | | Encoding mode values representing prediction modes |
GB2494469A (en) | | Intra mode coding using probabilities of occurrence of different prediction modes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: CANON KABUSHIKI KAISHA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: FRANCOIS, EDOUARD; LAROCHE, GUILLAUME; ONNO, PATRICE; REEL/FRAME: 033715/0912. Effective date: 20140606 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |