US20090196342A1 - Adaptive Geometric Partitioning For Video Encoding

Adaptive Geometric Partitioning For Video Encoding

Info

Publication number
US20090196342A1
Authority
US
United States
Prior art keywords
parametric model
partition
curve
encoder
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/309,540
Inventor
Oscar Divorra Escoda
Peng Yin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing DTV SAS
Original Assignee
Oscar Divorra Escoda
Peng Yin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oscar Divorra Escoda, Peng Yin filed Critical Oscar Divorra Escoda
Priority to US12/309,540 priority Critical patent/US20090196342A1/en
Assigned to THOMSON LICENSING reassignment THOMSON LICENSING ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ESCODA, OSCAR DIVORRA, YIN, PENG
Publication of US20090196342A1 publication Critical patent/US20090196342A1/en
Assigned to THOMSON LICENSING DTV reassignment THOMSON LICENSING DTV ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON LICENSING
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/57Motion estimation characterised by a search window with variable size or shape
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation
    • H04N19/126Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/189Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/507Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction using conditional replenishment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/537Motion estimation other than block-based
    • H04N19/543Motion estimation other than block-based using regions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • Prediction is performed on each frame on a partition basis. That is, each frame is partitioned into blocks or sets of nested blocks in a tree structure, and then each block partition is coded by using an intra or inter predictor plus some residual coding.
  • Frame partitioning into blocks is performed by defining a grid of regions, normally blocks (called macroblocks), over the whole frame; each macroblock may then be further partitioned into smaller blocks (also called subblocks or sub-macroblocks).
  • macroblocks on the boundary of objects and/or frame regions with different textures, color, smoothness and/or different motion tend to be further divided into subblocks in order to make the coding of the macroblock as efficient as possible, with as much objective and/or subjective quality as possible.
  • Frame partitioning is a process of key importance in efficient video coding.
  • Recent video compression technologies such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation (hereinafter the “MPEG-4 AVC standard”), use a tree-based frame partition.
  • ISO/IEC International Organization for Standardization/International Electrotechnical Commission
  • MPEG-4 AVC Moving Picture Experts Group-4
  • AVC Advanced Video Coding
  • ITU-T International Telecommunication Union, Telecommunication Sector
  • H.264 recommendation hereinafter the “MPEG-4 AVC standard”
  • This seems to be more efficient than a simple uniform block partition, typically used in older video coding standards and recommendations such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-2 (MPEG-2) standard and the International Telecommunication Union, Telecommunication Sector (ITU-T) H.263 recommendation (hereinafter the “H.263 Recommendation”).
  • ISO/IEC International Organization for Standardization/International Electrotechnical Commission
  • MPEG-2 Moving Picture Experts Group-2
  • ITU-T International Telecommunication Union, Telecommunication Sector
  • MPEG-1 Moving Picture Experts Group-1
  • the ISO/IEC Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/ITU-T H.264 recommendation (hereinafter the “MPEG-4 AVC standard”) simple profile or ITU-T H.263(+) Recommendation support both 16×16 and 8×8 partitions for a 16×16 MB.
  • the MPEG-4 AVC standard supports tree-structured hierarchical macroblock partitions. A 16×16 MB can be partitioned into macroblock partitions of sizes 16×8, 8×16, or 8×8. 8×8 partitions are also known as sub-macroblocks. Sub-macroblocks can be further broken into sub-macroblock partitions of sizes 8×4, 4×8, and 4×4.
  • MPEG-4 AVC standard macroblock division sets are indicated generally by the reference numeral 100 .
  • macroblock partitions are indicated by the reference numeral 110
  • sub-macroblock partitions are indicated by the reference numeral 120 .
  • tree structures have been shown in several studies to be sub-optimal for coding image information. Some of these studies demonstrate that tree-based coding systems are unable to optimally code heterogeneous regions separated by a regular edge or contour.
  • in the first prior art approach, within the framework of an H.263 codec, it is proposed to use two additional diagonal motion compensation modes.
  • concerned macroblocks are partitioned into two similar triangles divided by a diagonal segment. Depending on the coding mode, this segment goes from the lower-left corner to the upper-right corner for one mode, and from the upper-left corner to the lower-right corner for the second mode.
  • FIGS. 2A and 2B additional motion compensation coding modes corresponding to the designated “first prior art approach” described herein are indicated generally by the reference numerals 200 and 250 , respectively.
  • the motion compensation coding mode 200 corresponds to a right-up diagonal edge coding mode
  • the motion compensation coding mode 250 corresponds to a left-up diagonal edge coding mode.
  • the first prior art approach is very limited in the sense that these modes are simple variations of the 16×8 or 8×16 motion compensation modes by a fixed diagonal direction.
  • the edge they define is very coarse and it is not precise enough to fit the rich variety of edges found in video frames.
  • Two modes are introduced in the list of coding modes, which increases the coding overhead of other coding modes located after these two in the list of modes.
  • a direct evolution from the first prior art approach relates to three other prior art approaches, respectively referred to herein as the second, third, and fourth prior art approaches.
  • Collectively, these works introduce a larger set of motion compensation coding modes than that described in the first prior art approach.
  • the systems described with respect to the second, third, and fourth prior art approaches introduce a large collection of additional coding modes including oriented partitions. These modes are different translated versions of the 16×8 and 8×16 modes, as well as different translated versions of the modes proposed in the first prior art approach with a zigzag profile.
  • FIG. 3 motion compensation coding modes relating to the designated “second”, “third”, and “fourth prior art approaches” are indicated generally by the reference numeral 300 . Eighteen motion compensation coding modes are shown.
  • the partitions defined in the second, third, and fourth prior art approaches for motion compensation are very coarse and poorly matched to video frame content. Even though the set of oriented partitions outnumbers those in the first prior art approach, they are still not precise enough for efficient coding of the rich variety of edges found in video frames. In this case, there is no explicit coding of geometric information, which prevents an adapted treatment of the geometric information in the encoder. Moreover, the overhead introduced in order to code the much larger set of modes has an even worse effect on the non-directional modes that follow the oriented modes in the list of modes.
  • a fifth prior art approach proposes the use of intra prediction within the partitions of the oriented modes from the second, third, and fourth prior art approaches, in addition to their former purpose for motion compensation based prediction.
  • the limitations of the fifth prior art approach are inherited from the second, third, and fourth prior art approaches, hence all those stated in previous paragraphs also apply to the fifth prior art approach.
  • a sixth prior art approach proposes the most flexible framework among the works found in the literature.
  • the sixth prior art approach proposes the introduction of only 2 modes where segments connecting two boundary points are used to generate block partitions.
  • the first of the proposed motion compensation coding modes divides a macroblock into two partitions separated by a segment connecting two macroblock boundary points.
  • FIG. 4A macroblock partitioning according to a first motion compensation coding mode of the designated “sixth prior art approach” described herein is indicated generally by the reference numeral 400 .
  • the second proposed mode is based on a primary division of the macroblock into subblocks, and then, each subblock is divided using a segment connecting two points on the boundary of each subblock.
  • FIG. 4B macroblock partitioning according to a second motion compensation coding mode of the designated “sixth prior art approach” described herein is indicated generally by the reference numeral 450 .
  • block partitioning defined as the connection of two boundary points by a segment is not able to handle, efficiently, cases of more complex boundaries or contours.
  • the sixth prior art approach proposes the division of macroblocks into subblocks, and the use of segments connecting boundary points in every subblock, in order to approximate more complex shapes, which is inefficient.
  • partitions are only conceived for motion compensation, disregarding the use of some intra coding technique within the generated partitions. This prevents the proposed technique from handling uncovering effects (situations where new data appears from behind an object during a sequence), or simply from coding information in a non-temporally predictive way in any of the video frames.
  • partition coding by coding boundary points is not efficient enough in terms of distortion and coding cost. This is because boundary points are not able to properly represent the geometric characteristics of the partition boundary; hence, they do not properly capture the geometric characteristics of the data in the video frame. Indeed, data in video frames typically presents different statistics for geometric information such as local orientations and local positions of different video components and/or objects. The simple use of boundary points is unable to reflect such information; thus, such statistics cannot be exploited for coding purposes.
  • the sixth prior art approach does not appear to handle those pixels lying on the boundary of the partitions which are partly on one side of the boundary, and partly on the other side. These pixels should be able, when needed, to mix information from both partition sides.
  • a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC standard is indicated generally by the reference numeral 800 .
  • the video encoder 800 includes a frame ordering buffer 810 having an output in signal communication with a non-inverting input of a combiner 885 .
  • An output of the combiner 885 is connected in signal communication with a first input of a transformer and quantizer 825 .
  • An output of the transformer and quantizer 825 is connected in signal communication with a first input of an entropy coder 845 and a first input of an inverse transformer and inverse quantizer 850 .
  • An output of the entropy coder 845 is connected in signal communication with a first non-inverting input of a combiner 890 .
  • An output of the combiner 890 is connected in signal communication with a first input of an output buffer 835 .
  • a first output of an encoder controller 805 is connected in signal communication with a second input of the frame ordering buffer 810 , a second input of the inverse transformer and inverse quantizer 850 , an input of a picture-type decision module 815 , an input of a macroblock-type (MB-type) decision module 820 , a second input of an intra-prediction module 860 , a second input of a deblocking filter 865 , a first input of a motion compensator 870 , a first input of a motion estimator 875 , and a second input of a reference picture buffer 880 .
  • MB-type macroblock-type
  • a second output of the encoder controller 805 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 830 , a second input of the transformer and quantizer 825 , a second input of the entropy coder 845 , a second input of the output buffer 835 , and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840 .
  • SEI Supplemental Enhancement Information
  • a first output of the picture-type decision module 815 is connected in signal communication with a third input of a frame ordering buffer 810 .
  • a second output of the picture-type decision module 815 is connected in signal communication with a second input of a macroblock-type decision module 820 .
  • SPS Sequence Parameter Set
  • PPS Picture Parameter Set
  • An output of the inverse quantizer and inverse transformer 850 is connected in signal communication with a first non-inverting input of a combiner 825 .
  • An output of the combiner 825 is connected in signal communication with a first input of the intra prediction module 860 and a first input of the deblocking filter 865 .
  • An output of the deblocking filter 865 is connected in signal communication with a first input of a reference picture buffer 880 .
  • An output of the reference picture buffer 880 is connected in signal communication with a second input of the motion estimator 875 .
  • a first output of the motion estimator 875 is connected in signal communication with a second input of the motion compensator 870 .
  • a second output of the motion estimator 875 is connected in signal communication with a third input of the entropy coder 845 .
  • An output of the motion compensator 870 is connected in signal communication with a first input of a switch 897 .
  • An output of the intra prediction module 860 is connected in signal communication with a second input of the switch 897 .
  • An output of the macroblock-type decision module 820 is connected in signal communication with a third input of the switch 897 .
  • An output of the switch 897 is connected in signal communication with a second non-inverting input of the combiner 825 .
  • Inputs of the frame ordering buffer 810 and the encoder controller 805 are available as input of the encoder 800 , for receiving an input picture 801 .
  • an input of the Supplemental Enhancement Information (SEI) inserter 830 is available as an input of the encoder 800 , for receiving metadata.
  • An output of the output buffer 835 is available as an output of the encoder 800 , for outputting a bitstream.
  • SEI Supplemental Enhancement Information
  • a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC standard is indicated generally by the reference numeral 1000 .
  • the video decoder 1000 includes an input buffer 1010 having an output connected in signal communication with a first input of an entropy decoder 1045 .
  • a first output of the entropy decoder 1045 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1050 .
  • An output of the inverse transformer and inverse quantizer 1050 is connected in signal communication with a second non-inverting input of a combiner 1025 .
  • An output of the combiner 1025 is connected in signal communication with a second input of a deblocking filter 1065 and a first input of an intra prediction module 1060 .
  • a second output of the deblocking filter 1065 is connected in signal communication with a first input of a reference picture buffer 1080 .
  • An output of the reference picture buffer 1080 is connected in signal communication with a second input of a motion compensator 1070 .
  • a second output of the entropy decoder 1045 is connected in signal communication with a third input of the motion compensator 1070 and a first input of the deblocking filter 1065 .
  • a third output of the entropy decoder 1045 is connected in signal communication with an input of a decoder controller 1005 .
  • a first output of the decoder controller 1005 is connected in signal communication with a second input of the entropy decoder 1045 .
  • a second output of the decoder controller 1005 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1050 .
  • a third output of the decoder controller 1005 is connected in signal communication with a third input of the deblocking filter 1065 .
  • a fourth output of the decoder controller 1005 is connected in signal communication with a second input of the intra prediction module 1060 , with a first input of the motion compensator 1070 , and with a second input of the reference picture buffer 1080 .
  • An output of the motion compensator 1070 is connected in signal communication with a first input of a switch 1097 .
  • An output of the intra prediction module 1060 is connected in signal communication with a second input of the switch 1097 .
  • An output of the switch 1097 is connected in signal communication with a first non-inverting input of the combiner 1025 .
  • An input of the input buffer 1010 is available as an input of the decoder 1000 , for receiving an input bitstream.
  • a first output of the deblocking filter 1065 is available as an output of the decoder 1000 , for outputting an output picture.
  • an apparatus includes an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model.
  • the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • the method includes encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model.
  • the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • an apparatus includes a decoder for decoding image data corresponding to pictures by reconstructing at least portions of the pictures partitioned using at least one parametric model.
  • the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • the method includes decoding image data corresponding to pictures by reconstructing at least portions of the pictures partitioned using at least one parametric model.
  • the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • FIG. 1 shows a diagram for MPEG-4 AVC standard macroblock division sets to which the present principles may be applied;
  • FIGS. 2A and 2B show diagrams for additional motion compensation coding modes corresponding to the designated “first prior art approach” described herein;
  • FIG. 3 shows a diagram for motion compensation coding modes relating to the designated “second”, “third”, and “fourth prior art approaches” described herein;
  • FIG. 4A shows a diagram for macroblock partitioning according to a first motion compensation coding mode of the designated “sixth prior art approach” described herein;
  • FIG. 4B shows a diagram for macroblock partitioning according to a second motion compensation coding mode of the designated “sixth prior art approach” described herein;
  • FIG. 5 shows a diagram for a smooth boundary partition based on a polynomial model with partitions P 0 and P 1 , according to an embodiment of the present principles
  • FIG. 6 shows a diagram for an example of using a first order polynomial with parameters describing geometry (angle and position) for use as a parametric model, according to an embodiment of the present principles
  • FIG. 7 shows a diagram for a partition mask generated from parametric model f(x,y) using a first degree polynomial, according to an embodiment of the present principles
  • FIG. 8 shows a block diagram for a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC Standard
  • FIG. 9 shows a block diagram for a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC Standard, extended for use with the present principles, according to an embodiment of the present principles;
  • FIG. 10 shows a block diagram for a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC Standard
  • FIG. 11 shows a block diagram for a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC Standard, extended for use with the present principles, according to an embodiment of the present principles;
  • FIG. 12 shows a diagram for a parametric model based partitioned macroblock and its use together with a deblocking procedure, according to an embodiment of the present principles
  • FIG. 13 shows a diagram for an example of partition parameters prediction for the right block from parameters of the left block, according to an embodiment of the present principles
  • FIG. 14 shows a diagram for an example of partition parameters prediction for the lower block from parameters of the upper block, according to an embodiment of the present principles
  • FIG. 15 shows a diagram for an example of partition parameters prediction for the right block from parameters of the upper and left blocks, according to an embodiment of the present principles
  • FIG. 16 shows a diagram for an exemplary method for geometric modes estimation with model-based partition parameters and prediction search, according to an embodiment of the present principles
  • FIG. 17 shows a flow diagram for an exemplary method for coding a geometrically partitioned prediction block, according to an embodiment of the present principles
  • FIG. 18A shows a flow diagram for an exemplary method for coding a geometrically partitioned inter prediction block, according to an embodiment of the present principles
  • FIG. 18B shows a flow diagram for an exemplary method for coding a geometrically partitioned intra prediction block, according to an embodiment of the present principles
  • FIG. 19 shows a flow diagram for an exemplary method for coding with multiple types of models, according to an embodiment of the present principles
  • FIG. 20 shows a flow diagram for an exemplary method for decoding a geometrically partitioned prediction block, according to an embodiment of the present principles
  • FIG. 21A shows a flow diagram for an exemplary method for decoding a geometrically partitioned inter prediction block, according to an embodiment of the present principles
  • FIG. 21B shows a flow diagram for an exemplary method for decoding a geometrically partitioned intra prediction block, according to an embodiment of the present principles
  • FIG. 22 shows a flow diagram for an exemplary method for decoding with multiple types of models, according to an embodiment of the present principles
  • FIG. 23 shows a flow diagram for an exemplary method for slice header syntax coding, according to an embodiment of the present principles
  • FIG. 24 shows a flow diagram for an exemplary method for deriving geometric parameters precision, according to an embodiment of the present principles
  • FIG. 25 shows a flow diagram for an exemplary method for reconstructing geometric blocks, according to an embodiment of the present principles
  • FIG. 26 shows a flow diagram for an exemplary method for searching for the best mode for a current block, according to an embodiment of the present principles.
  • FIG. 27 shows a flow diagram for an exemplary method for slice header syntax decoding, according to an embodiment of the present principles
  • the present principles are directed to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • processor or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
  • DSP digital signal processor
  • ROM read-only memory
  • RAM random access memory
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • the present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • “existing video coding standard” and “video coding recommendation” may refer to any existing video coding standard and recommendation, including those not yet developed, but existing within a time of application of the present principles thereto.
  • standards and recommendations include, but are not limited to, H.261, H.262, H.263, H.263+, H.263++, MPEG-1, MPEG-2, MPEG-4 AVC, and so forth.
  • extended version when used with respect to a video coding standard and/or recommendation, refers to one that is modified, evolved, or otherwise extended.
  • image data is intended to refer to data corresponding to any of still images and moving images (i.e., a sequence of images including motion).
  • high level syntax refers to syntax present in the bitstream that resides hierarchically above the macroblock layer.
  • high level syntax may refer to, but is not limited to, syntax at the slice header level, Supplemental Enhancement Information (SEI) level, picture parameter set level, sequence parameter set level and NAL unit header level.
  • SEI Supplemental Enhancement Information
  • the present principles are directed to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • One or more embodiments of the present principles use parametric models for frame region partitioning that are able to capture and represent local signal geometry, in order to overcome the inefficiencies of tree-based approaches.
  • Parametric modeling, as used in various embodiments of the present principles, is defined as the definition of at least one partition within an image portion (or macroblock) by implicit or explicit formulation of at least one curve (which, in the particular case of a first degree polynomial, becomes a straight line); a particular embodiment of this is to jointly define the partitions and the curve according to the so-called “implicit curve” formulation.
  • Formulation of a general curve as used in accordance with the present principles is distinguished from the sixth prior art approach described above in that the sixth prior art approach defines boundaries between sliced partitions within a block as a straight-line connection between two given points located on the periphery of the block.
  • a geometric partition mode is tested in addition to those based on classic tree partitioning.
  • the concerned block or region is partitioned into several regions described by one or a set of parametric models.
  • a form of this can be the partition of blocks or regions into two partitions where their boundary is described by a parametric model or function f(x, y, p), where x and y represent the coordinate axes, and p represents the set of parameters including the information describing the shape of the partition.
  • An embodiment of the present principles provides a technique for general geometric frame partitioning adapted to the geometry of two dimensional (2D) data. Each one of the generated regions is then encoded by using the most efficient type of prediction, e.g., inter and/or intra prediction types.
  • An embodiment includes the generation of geometric partitions in blocks or frame regions. Partition of blocks or frame regions into geometrically adapted partitions, instead of classic trees, allows for a reduction of the amount of information to be sent, as well as the amount of residue generated by the prediction procedure.
  • a parametric model is used to generate, approximate and/or code the partition boundaries within each block. Such an approach allows for a better capture of the main geometric properties of the 2D data.
  • the model parameters can be defined to independently carry information involving, for example, but not limited to, partition boundary angle, position, discontinuities, and/or even curvature.
  • the use of parametric models for partition coding allows for a very compact partition edge description, which minimizes the number of parameters to code.
  • partition model parameters can be defined such as to decouple independent or different geometric information, in order to best code each of the parameters according to their statistics and nature.
  • Such model-based treatment of geometric information also allows for the selective reducing or increasing of the amount of coding information invested per geometric parameter. In addition to coding efficiency, such a feature is useful to control computational complexity while minimizing the impact on coding efficiency.
  • the f(x, y, p) (also expressed as f(x,y) in the following) parameters can be operated such that they describe geometric information such as local angle, position and/or some curvature magnitude.
  • block partitions can be represented such that they describe angle and distance with respect to a given set of coordinate axes:
  • FIG. 6 an example of using a first order polynomial with parameters describing geometry (angle and position) for use as a parametric model is indicated generally by the reference numeral 600 .
  • pixels lying on the partition boundary may be subject to the influence of the predictor used to describe each one of the partition sides.
  • pixels may be labeled as “partial surface”, with a label different from those of Partition 1 and 0.
  • Partial surface pixels can thus be identified with some value in between, which may also encode how much of the concerned pixel lies in partition 0 (e.g., a value of 1 would indicate completely, 0.5 would indicate half and half, and 0 would indicate not at all).
  • the prediction from the second partition contributes with weight (1 − Label(x,y)) to the value of the “partial surface” pixel.
  • This generic pixel classification is generated under the form of a partition mask.
  • a partition mask generated from parametric model f(x,y) using a first degree polynomial is indicated generally by the reference numeral 700 .
  • the floating point numbers stated herein above are just an example of possible selection values.
  • threshold values other than 0.5 are possible. Every pixel classified as “partial surface” can also be predicted as a function of one or more neighboring pixels within one of the partitions that overlaps it, or as a combination of functions of more than one partition overlapping it. Also, it is to be appreciated by one of ordinary skill in this and related arts that any aspect of the present principles described herein may be adapted for integer implementation, and/or making use of look-up tables.
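  • By way of illustration only, the following sketch shows one way such a soft partition mask could be generated and used to blend the two partition predictions. The line model f(x, y) = x·cos(θ) + y·sin(θ) − ρ, the block-centre origin, the roughly one-pixel-wide “partial surface” band, and the convention that a mask value of 1 means partition 0 are assumptions made for this sketch; they are not prescribed by the text above.

```python
import numpy as np

def geometric_partition_mask(block_size, theta_deg, rho):
    """Soft partition mask from an assumed first-order (line) model.

    f(x, y) = x*cos(theta) + y*sin(theta) - rho, with (x, y) measured from the
    block centre.  Returned values: 1.0 -> partition 0, 0.0 -> partition 1,
    intermediate values -> "partial surface" pixels straddling the boundary.
    """
    theta = np.deg2rad(theta_deg)
    half = (block_size - 1) / 2.0
    ys, xs = np.mgrid[0:block_size, 0:block_size]
    f = (xs - half) * np.cos(theta) + (ys - half) * np.sin(theta) - rho
    # Roughly one-pixel-wide transition band around the boundary (assumed width).
    return np.clip(f + 0.5, 0.0, 1.0)

def blend_partition_predictions(pred0, pred1, mask):
    """Each pixel takes weight Label(x, y) = mask from the partition-0 predictor
    and weight (1 - Label(x, y)) from the partition-1 predictor."""
    return mask * pred0 + (1.0 - mask) * pred1
```

  • For a 16×16 block, for example, geometric_partition_mask(16, 45.0, 2.0) yields a mask of the kind depicted in FIG. 7: ones on one side of an oblique boundary, zeros on the other, and fractional weights along the boundary itself.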
  • Model parameters need to be encoded and transmitted to allow the decoder to determine the partition of the concerned block or region.
  • the precision of partition parameters is limited according to the maximum amount of coding cost one is willing to invest for describing blocks or partition regions.
  • a dictionary of possible partitions is a priori defined by determining the value range and sampling precision for each parameter of f(x,y).
  • this can be defined such that:
  • ρ: ρ ∈ [0, √2·MB_Size/2) and ρ ∈ {0, Δρ, 2Δρ, 3Δρ, ...}, and θ: θ ∈ [0, 180) if ρ = 0, else θ ∈ [0, 360), with θ ∈ {0, Δθ, 2Δθ, 3Δθ, ...},
  • Δρ and Δθ are the selected quantization (parameter precision) steps. Nevertheless, an offset in the selected values can be established.
  • the quantized indices for ρ and θ are the information transmitted to code the partitions shape.
  • the decoder needs to know the parameter precision used by the encoder. This can be sent for every type of partition parameter explicitly or implicitly as a function of some already existing data (e.g., the Quantization Parameter in the MPEG-4 AVC standard). Parameter precision can be adapted according to some high level syntax, such as at the sequence, picture, and/or slice level.
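  • As a concrete illustration of the parameter precision discussed above, the sketch below quantizes (ρ, θ) into transmissible indices with steps Δρ and Δθ and reconstructs them on the decoder side. The value ranges follow the dictionary definition given earlier (as reconstructed above); the rounding rule and the function names are assumptions of the sketch.

```python
import math

def quantize_partition_params(rho, theta_deg, delta_rho, delta_theta, mb_size=16):
    """Map (rho, theta) to the integer indices that would be transmitted."""
    assert 0.0 <= rho < math.sqrt(2.0) * mb_size / 2.0
    theta_range = 180.0 if rho == 0.0 else 360.0  # per the dictionary above
    rho_idx = int(round(rho / delta_rho))
    theta_idx = int(round((theta_deg % theta_range) / delta_theta))
    return rho_idx, theta_idx

def dequantize_partition_params(rho_idx, theta_idx, delta_rho, delta_theta):
    """Decoder-side reconstruction; the decoder must know delta_rho and
    delta_theta, either signalled explicitly or derived implicitly (e.g., from
    the Quantization Parameter)."""
    return rho_idx * delta_rho, theta_idx * delta_theta
```

  • Larger steps shrink the dictionary (fewer bits to spend and fewer candidates to search, hence lower complexity), while smaller steps let the partition boundary fit the data more precisely.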
  • a video communication system using the region partitioning described herein with respect to the present principles should transmit, for every region using it, the set of necessary encoded parameters to describe the shape of the partition.
  • the rest of the transmitted data, for every geometry encoded region, will be of similar kind to that transmitted by tree based partition modes. Indeed, for each model-based partition, prediction information should be transmitted. Additionally, residual prediction error may also eventually be encoded after prediction.
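  • As an illustration of the kind of data such a system transmits per geometry-encoded region, the sketch below gathers it into a single structure. The field names and the grouping are hypothetical; only the three categories (partition-shape parameters, per-partition prediction information, and optional residual) come from the text above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GeometricRegionPayload:
    """Hypothetical per-region payload for a parametric-model partitioned block."""
    rho_idx: int                 # quantized index of the position parameter
    theta_idx: int               # quantized index of the angle parameter
    # One entry per partition: e.g. (reference index, motion vector) for inter
    # prediction, or an intra prediction mode identifier.
    prediction_info: List[Tuple]
    # Residual transform coefficients, present only when the prediction error
    # is actually encoded.
    residual: Optional[List[int]] = None
```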
  • the MPEG-4 AVC Standard relies on tree-based frame partitioning in order to optimize coding performance. Extending the MPEG-4 AVC Standard in accordance with an embodiment of the present principles helps to overcome the limitations inherent to tree-based frame partitioning to which the MPEG-4 AVC Standard is subject.
  • the use of parametric model-based region partitioning can be included in the MPEG-4 AVC Standard under the form of new block coding modes.
  • the MPEG-4 AVC Standard tree-based frame partitioning divides each picture, when and where needed, into 16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 blocks.
  • Each of these partition types is associated with a coding mode that, depending on the mode, can be of inter or intra type.
  • a parametric model f(x,y) is used to describe the partition within the block.
  • Such a block mode partitioned with a parametric model is referred to herein as “Geometric Mode”.
  • the goal is to generate partitions as big as possible; hence, the purpose of the parametric model is to be applied to 16×16 size blocks or to unions of leaves of tree-based partitions.
  • 8×8 “Geometric Mode” blocks are also considered.
  • the use of 8×8 “Geometric Mode” blocks may also be enabled or disabled depending on complexity factors.
  • a high level syntax can be signaled in order to indicate whether 8×8 “Geometric modes” are used or not. This can save coding overhead when such a mode is unused.
  • Particular examples of syntax level include, but are not limited to, a sequence, picture and/or slice level.
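  • Purely as an illustration of such high-level signalling, the sketch below writes and reads a pair of hypothetical enable flags, for example at the slice level. The flag names, their placement, and the bitstream interface (write_bit/read_bit) are invented for the example and are not actual MPEG-4 AVC syntax.

```python
def write_geometric_mode_flags(bs, use_geom_modes, use_geom_8x8):
    """Hypothetical slice-level syntax: signal whether geometric modes are used
    at all and, only if so, whether the costlier 8x8 geometric modes are also
    enabled.  'bs' is assumed to expose write_bit()/read_bit()."""
    bs.write_bit(int(use_geom_modes))
    if use_geom_modes:
        bs.write_bit(int(use_geom_8x8))

def read_geometric_mode_flags(bs):
    use_geom_modes = bool(bs.read_bit())
    use_geom_8x8 = bool(bs.read_bit()) if use_geom_modes else False
    return use_geom_modes, use_geom_8x8
```

  • Gating the second flag on the first means a slice that never uses geometric modes pays only a single bit of overhead, and the per-macroblock mode lists need not reserve entries for the disabled modes.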
  • the encoder and/or decoder can be modified. As depicted in FIGS. 8 , 9 , 10 , and 11 , functionality of the main building blocks in the MPEG-4 AVC Standard can be modified and extended in order to handle the new modes, able to capture and code geometric information.
  • FIG. 9 a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC standard, extended for use with the present principles, is indicated generally by the reference numeral 900 .
  • the video encoder 900 includes a frame ordering buffer 910 having an output in signal communication with a non-inverting input of a combiner 985 .
  • An output of the combiner 985 is connected in signal communication with a first input of a transformer and quantizer with geometric extensions 927 .
  • An output of the transformer and quantizer with geometric extensions 927 is connected in signal communication with a first input of an entropy coder with geometric extensions 945 and a first input of an inverse transformer and inverse quantizer 950 .
  • An output of the entropy coder with geometric extensions 945 is connected in signal communication with a first non-inverting input of a combiner 990 .
  • An output of the combiner 990 is connected in signal communication with a first input of an output buffer 935 .
  • a first output of an encoder controller with geometric extensions 905 is connected in signal communication with a second input of the frame ordering buffer 910 , a second input of the inverse transformer and inverse quantizer 950 , an input of a picture-type decision module 915 , an input of a macroblock-type (MB-type) decision module with geometric extensions 920 , a second input of an intra prediction module with geometric extensions 960 , a second input of a deblocking filter with geometric extensions 965 , a first input of a motion compensator with geometric extensions 970 , a first input of a motion estimator with geometric extensions 975 , and a second input of a reference picture buffer 980 .
  • MB-type macroblock-type
  • a second output of the encoder controller with geometric extensions 905 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 930 , a second input of the transformer and quantizer with geometric extensions 927 , a second input of the entropy coder with geometric extensions 945 , a second input of the output buffer 935 , and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 940 .
  • SEI Supplemental Enhancement Information
  • a first output of the picture-type decision module 915 is connected in signal communication with a third input of a frame ordering buffer 910 .
  • a second output of the picture-type decision module 915 is connected in signal communication with a second input of a macroblock-type decision module with geometric extensions 920 .
  • SPS Sequence Parameter Set
  • PPS Picture Parameter Set
  • An output of the inverse quantizer and inverse transformer 950 is connected in signal communication with a first non-inverting input of a combiner 925 .
  • An output of the combiner 925 is connected in signal communication with a first input of the intra prediction module with geometric extensions 960 and a first input of the deblocking filter with geometric extensions 965 .
  • An output of the deblocking filter with geometric extensions 965 is connected in signal communication with a first input of a reference picture buffer 980 .
  • An output of the reference picture buffer 980 is connected in signal communication with a second input of the motion estimator with geometric extensions 975 .
  • a first output of the motion estimator with geometric extensions 975 is connected in signal communication with a second input of the motion compensator with geometric extensions 970 .
  • a second output of the motion estimator with geometric extensions 975 is connected in signal communication with a third input of the entropy coder with geometric extensions 945 .
  • An output of the motion compensator with geometric extensions 970 is connected in signal communication with a first input of a switch 997 .
  • An output of the intra prediction module with geometric extensions 960 is connected in signal communication with a second input of the switch 997 .
  • An output of the macroblock-type decision module with geometric extensions 920 is connected in signal communication with a third input of the switch 997 .
  • An output of the switch 997 is connected in signal communication with a second non-inverting input of the combiner 925 and with an inverting input of the combiner 985 .
  • Inputs of the frame ordering buffer 910 and the encoder controller with geometric extensions 905 are available as input of the encoder 900 , for receiving an input picture 901 .
  • an input of the Supplemental Enhancement Information (SEI) inserter 930 is available as an input of the encoder 900 , for receiving metadata.
  • An output of the output buffer 935 is available as an output of the encoder 900 , for outputting a bitstream.
  • SEI Supplemental Enhancement Information
  • FIG. 11 a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC standard, extended for use with the present principles, is indicated generally by the reference numeral 1100 .
  • the video decoder 1100 includes an input buffer 1110 having an output connected in signal communication with a first input of an entropy decoder with geometric extensions 1145 .
  • a first output of the entropy decoder with geometric extensions 1145 is connected in signal communication with a first input of an inverse transformer and inverse quantizer with geometric extensions 1150 .
  • An output of the inverse transformer and inverse quantizer with geometric extensions 1150 is connected in signal communication with a second non-inverting input of a combiner 1125 .
  • An output of the combiner 1125 is connected in signal communication with a second input of a deblocking filter with geometric extensions 1165 and a first input of an intra prediction module with geometric extensions 1160 .
  • a second output of the deblocking filter with geometric extensions 1165 is connected in signal communication with a first input of a reference picture buffer 1180 .
  • An output of the reference picture buffer 1180 is connected in signal communication with a second input of a motion compensator with geometric extensions 1170 .
  • a second output of the entropy decoder with geometric extensions 1145 is connected in signal communication with a third input of the motion compensator with geometric extensions 1170 and a first input of the deblocking filter with geometric extensions 1165 .
  • a third output of the entropy decoder with geometric extensions 1145 is connected in signal communication with an input of a decoder controller with geometric extensions 1105 .
  • a first output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the entropy decoder with geometric extensions 1145 .
  • a second output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the inverse transformer and inverse quantizer with geometric extensions 1150 .
  • a third output of the decoder controller with geometric extensions 1105 is connected in signal communication with a third input of the deblocking filter with geometric extensions 1165 .
  • a fourth output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the intra prediction module with geometric extensions 1160 , with a first input of the motion compensator 1170 , and with a second input of the reference picture buffer 1180 .
  • An output of the motion compensator with geometric extensions 1170 is connected in signal communication with a first input of a switch 1197 .
  • An output of the intra prediction module with geometric extensions 1160 is connected in signal communication with a second input of the switch 1197 .
  • An output of the switch 1197 is connected in signal communication with a first non-inverting input of the combiner 1125 .
  • An input of the input buffer 1110 is available as an input of the decoder 1100 , for receiving an input bitstream.
  • a first output of the deblocking filter with geometric extensions 1165 is available as an output of the decoder 1100 , for outputting an output picture.
  • encoder and/or decoder control modules may be modified/extended to include all the decision rules and coding processes structure necessary for “Geometric Modes”.
  • the motion compensation module may be adapted in order to compensate blocks with arbitrary partitions described by f(x,y) and its parameters.
  • the motion estimation module may be adapted in order to test and select the most appropriate motion vectors for the different sorts of partitions available in the parametric model-based coding mode.
  • intra frame prediction may be adapted in order to consider parametric model-based block partitioning with the possibility to select the most appropriate prediction mode in each partition.
  • the deblocking in-loop filter module may be adapted in order to handle the more complicated shape of motion regions within blocks with parametric model-based partitions.
  • entropy coding and/or decoding may be adapted and extended in order to code and/or decode the new data associated with the parametric model-based mode.
  • motion prediction may be adapted in order to handle the more complicated shape of motion regions.
  • Predictors for efficiently coding parametric model-based partition parameters may also be generated and used.
  • the encoder control module may be extended in order to take into account the new modes based on the parametric model-based block partition.
  • These modes (called Geometric Modes) are inserted within the existing ones in the MPEG-4 AVC standard.
  • 16×16 and 8×8 parametric model-based partitioned blocks are, respectively, inserted within the Macroblock-size modes and within the sub-Macroblock-size modes.
  • these modes are logically inserted before, between, or after 16×8 and/or 8×16 for the Geometric 16×16 Mode, and before, between, or after 8×4 and/or 4×8 for the Geometric 8×8 Mode.
  • 16×16 and 8×8 Geometric Modes are inserted right after their MPEG-4 AVC directional homologues. According to their global usage statistics, we can also insert them right before the MPEG-4 AVC directional modes (and sub-modes), as shown in TABLE 1 and TABLE 2.
  • TABLE 1 (Macroblock Modes): 16×16 block; 16×16 Geometric block; 16×8 block; 8×16 block; 8×8 Sub-macroblock; . . .
  • TABLE 2 (Sub-Macroblock Modes): 8×8 block; 8×8 Geometric block; 8×4 block; 4×8 block; 4×4 block; . . .
  • the motion estimation module may be adapted to handle, when needed, geometry adapted block partitions.
  • Geometric Mode motion is described in the same way as for classic tree based partition modes 16 ⁇ 8, 8 ⁇ 16, 8 ⁇ 4 or 4 ⁇ 8. Indeed, these modes may function like some particular instances of the present parametric model-based partition mode. As such, they are excluded from the possible configurations of the parametric model in use. Every partition can be modeled with one or multiple references, depending on the needs, and whether a P or B block is being coded.
  • P-mode example: in a full P-mode parametric model-based partitioned block, each partition is modeled by a matching patch selected from a reference frame. Each patch must have a shape tailored to fit the selected geometric partition.
  • a motion vector is transmitted per partition.
  • motion vectors as well as f(x,y) model parameters are selected such that the information included in the block is best described in terms of some distortion measure (D) and some coding cost measure (R). For this purpose, all parameters are jointly optimized for each block such that D and R are jointly minimized:
  • $\{MV_1, MV_0, \rho, \theta\} = \underset{MV_1 \in \Omega_{MV_1},\, MV_0 \in \Omega_{MV_0},\, \rho \in \Omega_{\rho},\, \theta \in \Omega_{\theta}}{\arg\min} \big[ D(MV_1, MV_0, \rho, \theta) + \lambda \cdot R(MV_1, MV_0, \rho, \theta) \big]$,
  • λ is a multiplying factor
  • MV1 and MV0 stand for the two motion vectors, one per partition
  • ρ and θ represent the partition parameters (distance and angle) for the particular case of the first order polynomial, and each Ω_x represents the set of valid values for each kind of information.
  • An example of the adaptation of a distortion measure for use with one or more embodiments of the present principles is the use of the generated masks for each partition (see mask example in FIG. 7 ). Then, any classic block-based distortion measure can be modified to take partitions into account, such that:
  • $D(MV_1, MV_0, \rho, \theta) = \sum_{\bar{x} \in \mathrm{block}} D\big(I(\bar{x}), \tilde{I}_t(\bar{x} - MV_1)\big) \cdot \mathrm{MASK}_{P1}(x,y) + \sum_{\bar{x} \in \mathrm{block}} D\big(I(\bar{x}), \tilde{I}_t(\bar{x} - MV_0)\big) \cdot \mathrm{MASK}_{P0}(x,y)$
  • MASK_P1(x,y) and MASK_P0(x,y) respectively represent each of the f(x,y) partitions.
  • Fast implementations of this are possible by setting to zero those mask values that are very small (for example, smaller than a given threshold such as 0.5), thereby reducing the number of addition operations.
  • An example of such a simplification can also be to generate a simplified mask where all values equal to or smaller than 0.5 are rounded to zero and all values greater than 0.5 are rounded to one. Then, in an embodiment, only those positions where the mask is 1 are summed to compute the distortion. In such a case, only addition operations are necessary and all positions with zero value in each mask can be ignored.
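  • For illustration only, the following Python sketch (not part of the present principles) shows one way to generate the two partition masks from a first order polynomial with distance and angle parameters, and to accumulate the masked distortion using the binarized-mask simplification described above. The exact polynomial formulation, the padded-reference indexing, and integer-pel motion vectors are assumptions made to keep the sketch short.

```python
import numpy as np

def partition_masks(block_size, rho, theta):
    """Binary masks for the two partitions of a square block, derived from a
    first order polynomial f(x,y) = x*cos(theta) + y*sin(theta) - rho
    (illustrative formulation; any anti-aliased ramp is omitted here)."""
    y, x = np.mgrid[0:block_size, 0:block_size]
    f = x * np.cos(theta) + y * np.sin(theta) - rho
    mask_p1 = (f >= 0).astype(np.uint8)   # mask values > 0.5 rounded to one
    mask_p0 = 1 - mask_p1                 # mask values <= 0.5 rounded to zero
    return mask_p0, mask_p1

def masked_sad(cur, ref, mv0, mv1, mask_p0, mask_p1):
    """Distortion D(MV1, MV0, rho, theta) as a masked SAD: differences are
    accumulated only where each binarized mask equals one.  `ref` is assumed
    to be a padded reference frame and the motion vectors integer-pel offsets
    of the prediction block's top-left corner."""
    h, w = cur.shape
    pred1 = ref[mv1[1]:mv1[1] + h, mv1[0]:mv1[0] + w]
    pred0 = ref[mv0[1]:mv0[1] + h, mv0[0]:mv0[0] + w]
    d1 = np.abs(cur.astype(int) - pred1.astype(int)) * mask_p1
    d0 = np.abs(cur.astype(int) - pred0.astype(int)) * mask_p0
    return int(d1.sum() + d0.sum())
```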
  • partitions themselves should be determined together with the motion information.
  • a search is performed on the f(x,y) parameters as well.
  • Turning to FIG. 16, an exemplary method for geometric mode estimation with model-based partition parameters and prediction search (e.g., motion vector search for motion estimation) is indicated generally by the reference numeral 1600 .
  • the method 1600 includes a start block 1605 that passes control to a loop limit block 1610 .
  • the loop limit block 1610 performs a loop over the total number of possible edges (wherein the number of edges is geometric precision dependent), initializes a variable i, and passes control to a function block 1615 .
  • the function block 1615 generates a partition with a parameter set i, and passes control to a function block 1620 .
  • the function block 1620 searches for the best predictors given partition parameter set i, and passes control to a decision block 1625 .
  • the decision block 1625 determines whether the best partition and the best prediction have been determined. If so, then control is passed to a function block 1630 . Otherwise, control is passed to a loop limit block 1635 .
  • the function block 1630 stores the best geometric parameters and the predictor choice, and passes control to the loop limit block 1635 .
  • the loop limit block 1635 ends the loop for the total number of possible edges, and passes control to an end block 1640 .
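  • As a rough illustration of the loop of FIG. 16, the Python sketch below tries every candidate partition parameter set, runs a prediction search per partition, and keeps the configuration minimizing D + λ·R. The callbacks make_masks, motion_search, and rate_estimate are hypothetical stand-ins for the encoder's own routines (make_masks could be the mask helper sketched earlier); this is not the literal encoder control flow.

```python
def estimate_geometric_mode(cur, ref, lam, rhos, thetas,
                            make_masks, motion_search, rate_estimate):
    """Joint search over partition parameters and per-partition predictors,
    mirroring FIG. 16.  `make_masks`, `motion_search`, and `rate_estimate`
    are hypothetical callbacks standing in for the encoder's own routines."""
    best = None
    for rho in rhos:                       # loop over all candidate edges;
        for theta in thetas:               # granularity is precision dependent
            m0, m1 = make_masks(cur.shape[0], rho, theta)
            mv0, d0 = motion_search(cur, ref, m0)   # best predictor, partition 0
            mv1, d1 = motion_search(cur, ref, m1)   # best predictor, partition 1
            cost = (d0 + d1) + lam * rate_estimate(mv0, mv1, rho, theta)
            if best is None or cost < best[0]:      # keep best partition and
                best = (cost, rho, theta, mv0, mv1) # predictor choice
    return best
```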
  • motion estimation may involve testing the different models in order to find the best model adapted to the data. Selection of the best model at the decoder side may be handled by sending the necessary side information.
  • Entropy Coding may be extended in order to code geometric parameters according to their statistics, as well as according to prediction models from neighboring encoded/decoded blocks, which may themselves include geometric partition information.
  • Motion vector predictors for blocks partitioned with parametric models are adapted to the geometry of their respective partitioned block as well as to that of the neighboring, already encoded blocks.
  • Each geometric partition motion vector is predicted from an adaptively selected set of motion vectors from spatial and/or temporal neighboring blocks. An embodiment of this is the use, depending on the geometry of the current block partition, of 1 or 3 spatial neighbors. When the number of motion vectors is 3, these are median filtered. Then, predicted motion vectors are coded according to the MPEG-4 AVC Standard, using either variable length coding (VLC) or arithmetic coding (AC).
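  • A minimal sketch of such a predictor, assuming integer motion vectors and that the selection of 1 or 3 spatial neighbors has already been performed according to the partition geometry, is given below; the component-wise median follows the MPEG-4 AVC convention.

```python
def predict_partition_mv(neighbor_mvs):
    """Motion vector predictor for one geometric partition, given the 1 or 3
    spatial neighbor motion vectors already selected according to the
    partition geometry (that selection rule is not reproduced here)."""
    if len(neighbor_mvs) == 1:
        return neighbor_mvs[0]
    assert len(neighbor_mvs) == 3, "either 1 or 3 neighbors are expected"
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    return (xs[1], ys[1])   # component-wise median, as in MPEG-4 AVC
```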
  • In a first exemplary coding approach for model-based partition parameters, such parameters are coded without prediction when no neighboring model-based (or geometric) block exists. Then, for the first order polynomial case, in one embodiment of variable length coding, the angle can be coded with uniform codes and the radius can use a Golomb code.
  • such parameters are coded with prediction when at least one neighboring model-based (or geometric) block exists.
  • An embodiment of parameter prediction is performed by projecting the parametric models from previously coded neighboring blocks into the current block. Indeed, for the first degree polynomial case, an example is to predict the parameters by continuing the line of a previous block into the current block. When two neighboring blocks are available, the predicted line is the one connecting the crossing points of the neighboring lines with the macroblock boundaries. A sketch of this line continuation is given below, after the figure references.
  • Turning to FIG. 13, an example of partition parameters prediction for the right block from parameters of the left block is indicated generally by the reference numeral 1300 .
  • Turning to FIG. 14, an example of partition parameters prediction for the lower block from parameters of the upper block is indicated generally by the reference numeral 1400 .
  • Turning to FIG. 15, an example of partition parameters prediction for the right block from parameters of the upper and left blocks is indicated generally by the reference numeral 1500 .
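  • The line-continuation prediction of FIGS. 13 and 14 can be sketched as follows, assuming the partition line of a block is written as x·cos(θ) + y·sin(θ) = ρ in that block's local coordinates (an illustrative convention, not necessarily the one used in practice); continuing the same line into a block offset by (dx, dy) leaves the angle unchanged and shifts only the distance. The two-neighbor case of FIG. 15, which connects the crossing points of both neighboring lines with the macroblock boundary, is not reproduced.

```python
import math

def predict_line_params(rho_n, theta_n, dx, dy):
    """Continue a neighboring block's partition line, x*cos(t) + y*sin(t) = rho,
    into the current block whose origin is offset by (dx, dy) from the
    neighbor's origin (e.g. dx = 16, dy = 0 for the block to the right of a
    16x16 neighbor, as in FIG. 13).  The angle is unchanged; only the
    distance parameter shifts."""
    theta_pred = theta_n
    rho_pred = rho_n - dx * math.cos(theta_n) - dy * math.sin(theta_n)
    return rho_pred, theta_pred
```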
  • Predicted parameters are then coded differentially using Golomb codes.
  • In the particular case of the angle, its periodicity property may be exploited in order to have the best possible statistics for posterior VLC or AC coding.
  • For VLC, one can use Golomb codes.
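  • For illustration, the sketch below codes a quantized angle index differentially, wrapping the difference by the angle periodicity so that the shorter residual is coded, and binarizes it with an order-0 exp-Golomb code (one member of the Golomb family, chosen here purely as an example; the quantization into num_angles indices is also an assumption).

```python
def signed_to_unsigned(v):
    """Standard zig-zag mapping of a signed residual to an unsigned index."""
    return 2 * v - 1 if v > 0 else -2 * v

def exp_golomb(n):
    """Order-0 exp-Golomb codeword (as a bit string) for an unsigned integer."""
    bits = bin(n + 1)[2:]
    return '0' * (len(bits) - 1) + bits

def code_angle_diff(angle_idx, pred_idx, num_angles):
    """Differential coding of a quantized angle index: the difference is
    wrapped by the angle periodicity so that the shorter way around the
    circle is coded, then binarized with an exp-Golomb code."""
    diff = (angle_idx - pred_idx) % num_angles
    if diff > num_angles // 2:
        diff -= num_angles
    return exp_golomb(signed_to_unsigned(diff))
```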
  • FIGS. 17 , 18 , and 19 depict a particular embodiment of coding flowcharts for general parametric model based blocks. Indeed, in order to code parametric model-based blocks, in addition to motion data, at some point of the block coding procedure, partition parameters are to be encoded.
  • Turning to FIG. 17, an exemplary method for coding a geometrically partitioned prediction block is indicated generally by the reference numeral 1700 .
  • the method 1700 includes a start block 1705 that passes control to a decision block 1710 .
  • the decision block 1710 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 1715 . Otherwise, control is passed to an end block 1730 .
  • the function block 1715 codes the geometric mode type, and passes control to a function block 1720 .
  • the function block 1720 codes the geometric partition parameters, and passes control to a function block 1725 .
  • the function block 1725 codes the partitions prediction, and passes control to the end block 1730 .
  • Turning to FIG. 18A, an exemplary method for coding a geometrically partitioned inter prediction block is indicated generally by the reference numeral 1800 .
  • the method 1800 includes a start block 1802 that passes control to a decision block 1804 .
  • the decision block 1804 determines whether or not the current mode type is a geometric inter mode type. If so, then control is passed to a function block 1806 . Otherwise, control is passed to an end block 1812 .
  • the function block 1806 codes the geometric inter mode type, and passes control to a function block 1808 .
  • the function block 1808 codes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 1810 .
  • the function block 1810 codes the partitions inter prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 1812 .
  • Turning to FIG. 18B, an exemplary method for coding a geometrically partitioned intra prediction block is indicated generally by the reference numeral 1850 .
  • the method 1850 includes a start block 1852 that passes control to a decision block 1854 .
  • the decision block 1854 determines whether or not the current mode type is a geometric intra mode type. If so, then control is passed to a function block 1856 . Otherwise, control is passed to an end block 1862 .
  • the function block 1856 codes the geometric intra mode type, and passes control to a function block 1858 .
  • the function block 1858 codes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 1860 .
  • the function block 1860 codes the partitions intra prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 1862 .
  • Turning to FIG. 19, an exemplary method for coding with multiple types of models is indicated generally by the reference numeral 1900 .
  • the method 1900 includes a start block 1905 that passes control to a decision block 1910 .
  • the decision block 1910 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 1915 . Otherwise, control is passed to an end block 1950 .
  • the function block 1915 codes the geometric mode type, and passes control to a preparation block 1920 .
  • the preparation block 1920 selects parametric model A or B for the current partition. If parametric model A is selected, then control is passed to a function block 1935 . Otherwise, if parametric model B is selected, then control is passed to a function block 1925 .
  • the function block 1935 designates the code to correspond to parametric model A, and passes control to a function block 1940 .
  • the function block 1940 codes the geometric partition parameters for parametric model A, and passes control to a function block 1945 .
  • the function block 1925 designates the code to correspond to parametric model B, and passes control to a function block 1930 .
  • the function block 1930 codes the geometric partition parameters for parametric model B, and passes control to the function block 1945 .
  • the function block 1945 codes the partitions prediction, and passes control to the end block 1950 .
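  • The ordering of syntax elements in FIG. 19 can be summarized by the following illustrative sketch; the bitwriter interface, the single model-selection flag, and the block fields are assumptions made for readability and do not reflect a normative syntax.

```python
def code_geometric_block(bitwriter, block):
    """Illustrative ordering of the syntax elements of FIG. 19 for a block
    that may use one of two parametric models; `bitwriter` and the `block`
    fields are hypothetical."""
    if not block.is_geometric:
        return
    bitwriter.write_mode(block.geometric_mode_type)
    if block.model == 'A':
        bitwriter.write_flag(0)                        # model selection
        bitwriter.write_params(block.params_model_a)   # e.g. (rho, theta)
    else:
        bitwriter.write_flag(1)
        bitwriter.write_params(block.params_model_b)   # parameters of model B
    bitwriter.write_partition_prediction(block.prediction)
```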
  • the motion compensation module may be extended in order to compensate the non-squared/non-rectangular partitions in parametric model-based partitioned blocks.
  • Block reconstruction for the motion compensation procedure directly follows from the motion estimation procedure described herein above. Indeed, compensation corresponds to using, as the predictor, the best set of partitions together with the two partition-shaped pixmaps associated with the motion vectors.
  • “Partial Surface” pixels are computed as a combination, according to a given rule, of the pixmaps associated with the motion vectors.
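  • One simple such rule, consistent with the weighted linear average mentioned further below for overlapping pixels, is sketched here: a fractional (anti-aliased) mask in [0, 1] drives a per-pixel blend of the two partition-shaped predictions. The mask representation is an assumption of the sketch.

```python
import numpy as np

def blend_boundary_pixels(pred0, pred1, soft_mask_p1):
    """Weighted linear average of the two partition-shaped predictions, driven
    by a fractional (anti-aliased) mask in [0, 1]: boundary pixels mix both
    sides, while interior pixels (mask exactly 0 or 1) copy one prediction."""
    blended = soft_mask_p1 * pred1 + (1.0 - soft_mask_p1) * pred0
    return np.rint(blended).astype(pred0.dtype)
```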
  • Intra prediction is upgraded in order to predict intra data according to the parametric model based partition of the block.
  • Intra prediction with parametric model-based partition is defined in the same way as motion compensation and motion estimation with parametric model-based partitions, with the basic difference that intra prediction is used, instead, in order to fill each one of the generated partitions.
  • In-loop de-blocking filtering reduces blocking artifacts introduced by the block structure of the prediction, as well as by the residual coding Discrete Cosine Transform (DCT).
  • In-loop de-blocking filtering adapts filter strength depending on the encoded video data, as well as on local intensity differences between pixels across block boundaries.
  • An embodiment of the present principles introduces a new form of video data representation.
  • Blocks including a parametric model-based partition do not necessarily have constant motion vector values, or constant reference frame values, on every 4×4 block. Indeed, with the parametric model-based partition, in such arbitrarily partitioned blocks, the area and block boundaries affected by a given motion vector are defined by the shape enforced by the parametric model.
  • a 4×4 block may appear to be half into one partition, and the other half into another partition, with all the implications this has concerning the motion vector used and the reference frame used at a given location.
  • the in-loop deblocking filter module is thus extended by adapting the filter strength decision process. This process should now be able to decide the filter strength taking into account the particular shape of internal block partitions. Depending on the part of the block boundary to filter, it needs to get the appropriate motion vector and reference frame according to the partition shape, and not according to the 4×4 block, as done for other MPEG-4 AVC modes.
  • Turning to FIG. 12, a parametric model based partitioned macroblock is indicated generally by the reference numeral 1200 .
  • the parametric model based partitioned macroblock includes some examples of de-blocking areas with an indication of how information is selected for a deblocking filtering strength decision. Filtering strength is computed once per each 4×4 block side that is subject to de-blocking filtering.
  • the partition considered for filtering strength computation is selected by choosing the partition that overlaps the most with the block side to filter.
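  • A minimal sketch of this first method is given below: for one 4×4 block side subject to filtering, the partition overlapping that side the most is selected, and its motion vector and reference index then drive the filter strength decision. The mask representation and the majority rule used to break the choice are assumptions of the sketch.

```python
def side_partition(mask_p1, side_pixels):
    """Select, for one 4x4 transform-block side subject to filtering, the
    partition that overlaps it the most; the caller then uses that
    partition's motion vector and reference index for the filter strength
    decision.  `mask_p1` is the binary partition mask of the macroblock and
    `side_pixels` the (x, y) positions along the block side."""
    votes_p1 = sum(mask_p1[y][x] for (x, y) in side_pixels)
    return 1 if 2 * votes_p1 >= len(side_pixels) else 0
```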
  • a second alternative method, in order to simplify computation in corner blocks, is to consider the whole transform block to have the motion and reference frame information from the partition that covers the greater part of both block edges subject to filtering.
  • a third alternative method for combining deblocking in-loop filtering with the use of parametric model-based blocks partitioning is to always allow some degree of filtering through block boundaries whenever and wherever the block boundary is affected by a model-based block partitioned mode (e.g., Geometric Mode).
  • the Geometric Mode can be in any of the blocks affecting/neighboring the boundary.
  • deblocking filtering may or may not be applied to those transform blocks, in a geometric mode, that are not located on the boundary of a macroblock.
  • a fourth alternative for combining deblocking in-loop filtering considers either of the first two methods, but adds the following condition to the set of conditions that trigger the use of some degree of filtering in a transform block: if the block boundary is affected by the transform block that includes the junction between the model-based partition curve and the macroblock boundary, then use some degree of deblocking.
  • the decoder control module may be extended in order to take into account the new modes based on the parametric model-based block partition. These modes (Geometric Modes) are inserted within the existing ones in the MPEG-4 AVC Standard in the same way as performed at the encoder end.
  • the decoder control module may be modified in order to exactly match the structure and the sequence of decoding procedures of the encoder, so as to recover exactly the information encoded at the encoder side.
  • Entropy decoding may be extended for model-based block partitioning usage. According to the entropy coding procedure described above, entropy decoding needs to be extended such that it matches the encoding procedure described above.
  • FIGS. 20, 21, and 22 describe possible particular embodiments of this for decoding the information related to parametric model-based coding modes, once the codeword indicating which block mode is used has already been decoded and is available for decoder control.
  • Turning to FIG. 20, an exemplary method for decoding a geometrically partitioned prediction block is indicated generally by the reference numeral 2000 .
  • the method 2000 includes a start block 2005 that passes control to a function block 2010 .
  • the function block 2010 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2015 . Otherwise, control is passed to an end block 2025 .
  • the function block 2015 decodes the geometric partition parameters, and passes control to a function block 2020 .
  • the function block 2020 decodes the partitions prediction, and passes control to the end block 2025 .
  • Turning to FIG. 21A, an exemplary method for decoding a geometrically partitioned inter prediction block is indicated generally by the reference numeral 2100 .
  • the method 2100 includes a start block 2112 that passes control to a function block 2114 .
  • the function block 2114 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2116 . Otherwise, control is passed to an end block 2120 .
  • the function block 2116 decodes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 2118 .
  • the function block 2118 decodes the partitions inter prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 2120 .
  • Turning to FIG. 21B, an exemplary method for decoding a geometrically partitioned intra prediction block is indicated generally by the reference numeral 2150 .
  • the method 2150 includes a start block 2162 that passes control to a function block 2164 .
  • the function block 2164 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2166 . Otherwise, control is passed to an end block 2170 .
  • the function block 2166 decodes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 2168 .
  • the function block 2168 decodes the partitions intra prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 2170 .
  • Turning to FIG. 22, an exemplary method for decoding with multiple types of models is indicated generally by the reference numeral 2200 .
  • the method 2200 includes a start block 2205 that passes control to a decision block 2210 .
  • the decision block 2210 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2215 . Otherwise, control is passed to an end block 2240 .
  • the function block 2215 decodes the parametric model selection, and passes control to a preparation block 2220 .
  • the preparation block 2220 selects parametric model A or B for the current partition. If parametric model A is selected, then control is passed to a function block 2225 . Otherwise, if parametric model B is selected, then control is passed to a function block 2230 .
  • the function block 2225 decodes the geometric partition parameters for parametric model A, and passes control to a function block 2235 .
  • the function block 2230 decodes the geometric partition parameters for parametric model B, and passes control to the function block 2235 .
  • the function block 2235 decodes the partitions prediction, and passes control to an end block 2240 .
  • an exemplary method for slice header syntax coding is indicated generally by the reference numeral 2300 .
  • the method 2300 includes a start block that passes control to a function block 2310 .
  • the function block 2310 codes slice related information I, and passes control to a function block 2315 .
  • the function block 2315 codes the slice quality (QP) coding information, and passes control to a function block 2320 .
  • the function block 2320 codes the geometric parameters precision information, and passes control to a function block 2325 .
  • the function block 2325 codes the slice related information II, and passes control to an end block 230 .
  • the phrases “slice related information I” and “slice related information II” denote slice header related information, such that the geometric precision parameters are inserted within the existing syntax of the slice header.
  • Turning to FIG. 24, an exemplary method for deriving geometric parameters precision is indicated generally by the reference numeral 2400 .
  • the method 2400 includes a start block 2405 that passes control to a function block 2410 .
  • the function block 2410 gets the QP parameter for the present (i.e., current) macroblock, and passes control to a function block 2415 .
  • the function block 2415 computes the geometric parameter precision, and passes control to an end block 2420 .
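  • The mapping from QP to geometric parameter precision is an encoder choice that is not specified here; the sketch below merely illustrates one plausible monotone rule (finer precision at low QP, coarser at high QP) and should not be read as the actual derivation of FIG. 24.

```python
def geometric_precision_from_qp(qp, max_steps=256, min_steps=16):
    """Hypothetical QP-to-precision rule: the number of quantization steps for
    (rho, theta) is halved every 12 QP units, i.e. finer partitions at high
    quality and coarser ones at low quality."""
    qp = max(0, min(51, qp))          # MPEG-4 AVC QP range
    steps = max_steps >> (qp // 12)
    return max(min_steps, steps)
```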
  • an exemplary method for reconstructing geometric blocks is indicated generally by the reference numeral 2500 .
  • the method 2500 includes a start block 2505 that passes control to a function block 2510 .
  • the function block 2510 determines the geometric partition from the parameters, and passes control to a function block 2515 .
  • the function block 2515 recomposes the partitions prediction, and passes control to a function block 2520 .
  • the function block 2520 applies an anti-aliasing procedure, and passes control to a function block 2525 .
  • the function block 2525 adds the reconstructed residual, and passes control to an end block 2530 .
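  • Putting the steps of FIG. 25 together, a decoder-side reconstruction might look like the following sketch; the soft_mask callback, the padded-reference indexing, and the integer-pel motion vectors are assumptions, and the anti-aliasing is reduced to the fractional-mask blend discussed earlier.

```python
import numpy as np

def reconstruct_geometric_block(ref, rho, theta, mv0, mv1, residual, soft_mask):
    """Decoder-side reconstruction of a geometric block: rebuild the partition
    from the decoded parameters, recompose the two partition predictions,
    blend them with a fractional (anti-aliased) mask, and add the residual.
    `soft_mask` is a hypothetical helper returning the [0, 1] mask of
    partition 1 for the given block size and (rho, theta)."""
    h, w = residual.shape
    m1 = soft_mask(h, w, rho, theta)
    pred1 = ref[mv1[1]:mv1[1] + h, mv1[0]:mv1[0] + w].astype(float)
    pred0 = ref[mv0[1]:mv0[1] + h, mv0[0]:mv0[0] + w].astype(float)
    pred = m1 * pred1 + (1.0 - m1) * pred0        # weighted linear average
    return np.rint(pred).astype(int) + residual
```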
  • an exemplary method for searching for the best mode for a current block is indicated generally by the reference numeral 2600 .
  • the method 2600 includes a start block 2605 that passes control to a function block 2610 , a function block 2615 , a function block 2620 , a function block 2625 , and a function block 2630 .
  • the function block 2610 tests the 16×16 block mode, and passes control to a function block 2635 .
  • the function block 2615 tests the 16×8 block mode, and passes control to the function block 2635 .
  • the function block 2620 tests the 8×16 block mode, and passes control to the function block 2635 .
  • the function block 2625 tests the 16×16 geometric block mode, and passes control to the function block 2635 .
  • the function block 2630 tests the 8×8 block modes, and passes control to the function block 2635 .
  • the function block 2635 selects the best mode for the current block, and passes control to an end block 2640 .
  • an exemplary method for slice header syntax decoding is indicated generally by the reference numeral 2700 .
  • the method 2700 includes a start block 2705 that passes control to a function block 2710 .
  • the function block 2710 decodes the slice related information I, and passes control to a function block 2715 .
  • the function block 2715 decodes the slice quality (QP) coding information, and passes control to a function block 2720 .
  • the function block 2720 decodes the geometric parameters precision information, and passes control to a function block 2725 .
  • the function block 2725 decodes the slice related information II, and passes control to an end block 2730 .
  • one advantage/feature is an apparatus that includes an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model.
  • the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein at least one of the at least one parametric model and the at least one curve are derived from a geometric signal model.
  • Yet another advantage/feature is the apparatus having the encoder as described above, wherein at least one of the at least one parametric model and the at least one curve describe at least one of one or more image contours and one or more motion boundaries.
  • Still another advantage/feature is the apparatus having the encoder as described above, wherein at least one polynomial is used as at least one of the at least one parametric model and the at least one curve.
  • another advantage/feature is the apparatus having the encoder as described above, wherein a first order polynomial model is used as at least one of the at least one parametric model and the at least one curve.
  • Another advantage/feature is the apparatus having the encoder wherein a first order polynomial model is used as described above, wherein the first order polynomial model includes an angle parameter and a distance parameter.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the at least one parametric model for a given image portion is adaptively selected from a set of models when more than one parametric model is available, and the selection is explicitly or implicitly coded.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the encoder performs explicit or implicit coding of a precision of parameters of at least one of the at least one parametric model and the at least one curve using at least one high level syntax element.
  • another advantage/feature is the apparatus having the encoder that uses the at least one high level syntax element as described above, wherein the at least one high level syntax element is placed at at least one of a slice header level, a Supplemental Enhancement Information (SEI) level, a picture parameter set level, a sequence parameter set level, and a network abstraction layer unit header level.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein a precision of parameters of at least one of the at least one parametric model and the at least one curve is adapted in order to control at least one of compression efficiency and encoder complexity.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein the precision of the parameters of at least one of the at least one parametric model and the at least one curve is adapted depending on a compression quality parameter.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein predictor data, associated with at least one partition of at least one of the pictures, is predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein partition model parameters for at least one of the at least one parametric model and the at least one curve are predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the encoder computes prediction values for pixels that, according to at least one of the at least one parametric model and the at least one curve, lie partly in more than one partition, using at least one of an anti-aliasing procedure, a combination of a part of prediction values for corresponding positions of the pixels, a totality of the prediction values for the corresponding positions of the pixels, a neighborhood, predictors of different partitions, from among the more than one partition, where the pixel is deemed to partly lie.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein the encoder is an extended version of an existing hybrid predictive encoder of an existing video coding standard or video coding recommendation.
  • another advantage/feature is the apparatus having the encoder that is the extended version of the existing hybrid predictive encoder of the existing video coding standard or video coding recommendation as described above, wherein the encoder applies parametric model based partitions to at least one of macroblocks and sub-macroblocks of the pictures as coding modes for at least one of the macroblocks and the sub-macroblocks, respectively.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein parametric model-based coding modes are inserted within existing macroblock and sub-macroblock coding modes of an existing video coding standard or video coding recommendation.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder encodes model parameters of at least one of the at least one parametric model and the at least one curve to generate the parametric model-based partitions along with partitions prediction data.
  • another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder selects model parameters of at least one of the at least one parametric model, the at least one curve, and partition predictions in order to jointly minimize at least one of a distortion measure and a coding cost measure.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein pixels of at least one of the pictures that overlap at least two parametric model-based partitions are a weighted linear average from predictions of the at least two parametric model-based partitions.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein partition predictions are of at least one of the type inter and intra.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder selectively uses parameter predictions for at least one of the at least one parametric model and the at least one curve for partition model parameters coding.
  • another advantage/feature is the apparatus having the encoder that selectively uses the parameter predictions as described above, wherein a prediction for a current block of a particular one of the pictures is based on curve extrapolation from neighboring blocks into the current block.
  • another advantage/feature is the apparatus having the encoder that selectively uses the parameter predictions as described above, wherein the encoder uses different contexts or coding tables to encode the image data depending on whether or not parameters of at least one of the at least one parametric model and the at least one curve are predicted.
  • Another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder is an extended version of an encoder for the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation.
  • another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder applies at least one of deblocking filtering and reference frame filtering adapted to handle transform-size blocks affected by at least one parametric model-based partition due to non-tree-based partitioning of the at least one of the macroblocks and the sub-macroblocks when parametric model-based partition modes are used.
  • the teachings of the present principles are implemented as a combination of hardware and software.
  • the software may be implemented as an application program tangibly embodied on a program storage unit.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
  • various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.


Abstract

There are provided methods and apparatus for adaptive geometric partitioning for video encoding and decoding. An apparatus includes an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application Ser. No. 60/834,993, filed 2 Aug. 2006, which is incorporated by reference herein in its entirety. Further, this application is related to the non-provisional application, Attorney Docket No. PU070128, entitled “METHODS AND APPARATUS FOR ADAPTIVE GEOMETRIC PARTITIONING FOR VIDEO DECODING”, which is commonly assigned, incorporated by reference herein, and concurrently filed herewith.
  • TECHNICAL FIELD
  • The present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • BACKGROUND
  • Most video coding techniques use prediction plus residual coding to model video images. Other approaches may also consider prediction as a step into some process of signal transformation, like when lifting schemes are used to generate wavelet transform (with or without motion compensation). Prediction is performed on each frame on a partition basis. That is, each frame is partitioned into blocks or sets of nested blocks in a tree structure, and then each block partition is coded by using an intra or inter predictor plus some residual coding. Frame partitioning into blocks is performed by defining a grid of regions, which are normally blocks (called macroblocks) all over the frame and then each of the macroblocks may also be further partitioned in smaller blocks (also called subblocks or sub-macroblocks). Typically, macroblocks on the boundary of objects and/or frame regions with different textures, color, smoothness and/or different motion, tend to be further divided into subblocks in order to make the coding of the macroblock as efficient as possible, with as much objective and/or subjective quality as possible.
  • In recent studies, tree structures have been shown to be sub-optimal for coding image information. These studies maintain that tree-based coding of images is unable to optimally code heterogeneous regions (here, regions are considered to have a well-defined and uniform characteristic, such as a flat, smooth, or stationary texture) separated by a regular edge or contour. This problem arises from the fact that tree structures are not able to optimally capture the geometrical redundancy existing along edges, contours or oriented textures. This concept implies that adaptive tree partitioning of macroblocks, even if better than simple fixed-size frame partitioning, is still not optimal enough to capture the geometric information included in 2D data for coding purposes in an efficient manner.
  • Frame partitioning is a process of key importance in efficient video coding. Recent video compression technologies such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation (hereinafter the “MPEG-4 AVC standard”), use a tree-based frame partition. This seems to be more efficient than a simple uniform block partition, typically used in older video coding standards and recommendations such as the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-2 (MPEG-2) standard and the International Telecommunication Union, Telecommunication Sector (ITU-T) H.263 recommendation (hereinafter the “H.263 Recommendation”). However, tree based frame partitioning still does not code the video information as efficiently as possible, as it is unable to efficiently capture the geometric structure of two-dimensional (2D) data.
  • Tree-structured macroblock partitioning is adopted in current major video coding standards. The International Telecommunication Union, Telecommunication Sector (ITU-T) H.261 recommendation (hereinafter the “H.261 Recommendation”), the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-1 (MPEG-1) standard, and the ISO/IEC MPEG-2 standard/ITU-T H.262 recommendation (hereinafter the “MPEG-2 Standard”) support only 16×16 macroblock (MB) partition. The ISO/IEC Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/ITU-T H.264 recommendation (hereinafter the “MPEG-4 AVC standard”) simple profile or ITU-T H.263(+) Recommendation support both 16×16 and 8×8 partitions for a 16×16 MB. The MPEG-4 AVC standard supports tree-structured hierarchical macroblock partitions. A 16×16 MB can be partitioned into macroblock partitions of sizes 16×8, 8×16, or 8×8. 8×8 partitions are also known as sub-macroblocks. Sub-macroblocks can be further broken into sub-macroblock partitions of sizes 8×4, 4×8, and 4×4. Turning to FIG. 1, MPEG-4 AVC standard macroblock division sets are indicated generally by the reference numeral 100. In particular, macroblock partitions are indicated by the reference numeral 110, and sub-macroblock partitions are indicated by the reference numeral 120. In recent studies, tree structures have been shown to be sub-optimal for coding image information. Some of these studies demonstrate that tree-based coding systems are unable to optimally code heterogeneous regions separated by a regular edge or contour.
  • Some prior work on the subject experimentally identified the need for other types of block partitioning than that supplied by simple tree based partitioning for motion compensation. These techniques propose, in addition to tree based block partition, the use of some additional macroblock partitions able to better adapt to motion edges for motion estimation and compensation.
  • In one prior art approach (hereinafter “the first prior art approach”) within the framework of a H.263 codec, it is proposed to use two additional diagonal motion compensation modes. When one of these modes is selected, concerned macroblocks are partitioned into two similar triangles divided by a diagonal segment. Depending on the coding mode, this goes from lower left corner to upper right corner for one mode, and from upper-left corner to the lower-right one for the second mode. Turning to FIGS. 2A and 2B, additional motion compensation coding modes corresponding to the designated “first prior art approach” described herein are indicated generally by the reference numerals 200 and 250, respectively. The motion compensation coding mode 200 corresponds to a right-up diagonal edge coding mode, and the motion compensation coding mode 250 corresponds to a left-up diagonal edge coding mode.
  • The first prior art approach is very limited in the sense that these modes are simple variations of the 16×8 or 8×16 motion compensation modes by a fixed diagonal direction. The edge they define is very coarse and is not precise enough to fit the rich variety of edges found in video frames. There is no explicit coding of geometric information, which prevents an adapted treatment of this information in the encoder. Two modes are introduced in the list of coding modes, which increases the coding overhead of other coding modes located after these two in the list of modes.
  • A direct evolution from the first prior art approach relates to three other prior art approaches, respectively referred to herein as the second, third, and fourth prior art approaches. Collectively in these works, a larger set of motion compensation coding modes are introduced than that described in the first prior art approach. The systems described with respect to the second, third, and fourth prior art approaches introduce a large collection of additional coding modes including oriented partitions. These modes are different translated versions of the 16×8, 8x16 modes as well as different translated versions of the modes proposed in the first prior art approach with a zigzag profile. Turning to FIG. 3, motion compensation coding modes relating to the designated “second”, “third”, and “fourth prior art approaches” are indicated generally by the reference numeral 300. Eighteen motion compensation coding modes are shown.
  • As in the case of the first prior art approach, the partitions defined in the second, third, and fourth prior art approaches for motion compensation are very coarse and imprecise with respect to video frame content. Even if the set of oriented partitions outnumbers that of the first prior art approach, they are still not precise enough for efficient coding of the rich variety of edges found in video frames. In this case, there is no explicit coding of geometric information, which prevents an adapted treatment of the geometric information in the encoder. Moreover, the overhead introduced in order to code the much larger set of modes has an even worse effect on the non-directional modes that follow the oriented modes in the list of modes.
  • A fifth prior art approach proposes the use of intra prediction within the partitions of the oriented modes from the second, third, and fourth prior art approaches, in addition to their former purpose for motion compensation based prediction. The limitations of the fifth prior art approach are inherited from the second, third, and fourth prior art approaches, hence all those stated in previous paragraphs also apply to the fifth prior art approach.
  • A sixth prior art approach proposes the most flexible framework among the works found in the literature. The sixth prior art approach introduces only two modes, where segments connecting two boundary points are used to generate block partitions. The first of the proposed motion compensation coding modes divides a macroblock into two partitions separated by a segment connecting two macroblock boundary points. Turning to FIG. 4A, macroblock partitioning according to a first motion compensation coding mode of the designated “sixth prior art approach” described herein is indicated generally by the reference numeral 400.
  • The second proposed mode is based on a primary division of the macroblock into subblocks, and then, each subblock is divided using a segment connecting two points on the boundary of each subblock. Turning to FIG. 4B, macroblock partitioning according to a second motion compensation coding mode of the designated “sixth prior art approach” described herein is indicated generally by the reference numeral 450.
  • Several limitations still exist with respect to the scheme outlined in the sixth prior art approach, and include the following.
  • In a first limitation related to the sixth prior art approach, block partitioning defined as the connection of two boundary points by a segment is not able to efficiently handle cases of more complex boundaries or contours. For this, the sixth prior art approach proposes the division of macroblocks into subblocks, and the use of segments connecting points in every subblock, in order to approximate more complex shapes, which is inefficient.
  • In a second limitation related to the sixth prior art approach, partitions are only conceived for motion compensation, disregarding the use of some intra coding technique within the generated partitions. This prevents the proposed technique from handling uncovering effects (situations where new data appears from behind an object during a sequence), or simply from coding information in a non-temporally predictive way in any of the video frames.
  • In a third limitation related to the sixth prior art approach, partition coding by coding boundary points is not efficient enough in terms of distortion and coding cost. This is because the boundary points are not able to properly represent the geometric characteristics of the partition boundary; hence, they do not properly reflect the geometric characteristics of the data in the video frame. Indeed, data in video frames typically presents different statistics for geometric information, such as local orientations and local positions of different video components and/or objects. The simple use of boundary points is unable to reflect such information. Thus, one cannot exploit such statistics for coding purposes.
  • In a fourth limitation related to the sixth prior art approach, different video compression qualities have different geometric information precision requirements in order to achieve the best distortion versus coding cost trade-off. The sixth prior art approach does not adapt the information sent to encode the block partitions depending on the video compression quality. Moreover, as the sixth prior art approach does not have and/or otherwise describe a proper representation of partition geometric information, the sixth prior art approach cannot favor, if needed, the encoding of some kind of geometric information with higher precision than some other kind of geometric information.
  • In a fifth limitation related to the sixth prior art approach, the sixth prior art approach does not appear to handle those pixels lying on the boundary of the partitions which are partly on one side of the boundary, and partly on the other side. These pixels should be able, when needed, to mix information from both partition sides.
  • Turning to FIG. 8, a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC standard is indicated generally by the reference numeral 800.
  • The video encoder 800 includes a frame ordering buffer 810 having an output in signal communication with a non-inverting input of a combiner 885. An output of the combiner 885 is connected in signal communication with a first input of a transformer and quantizer 825. An output of the transformer and quantizer 825 is connected in signal communication with a first input of an entropy coder 845 and a first input of an inverse transformer and inverse quantizer 850. An output of the entropy coder 845 is connected in signal communication with a first non-inverting input of a combiner 890. An output of the combiner 890 is connected in signal communication with a first input of an output buffer 835.
  • A first output of an encoder controller 805 is connected in signal communication with a second input of the frame ordering buffer 810, a second input of the inverse transformer and inverse quantizer 850, an input of a picture-type decision module 815, an input of a macroblock-type (MB-type) decision module 820, a second input of an intra-prediction module 860, a second input of a deblocking filter 865, a first input of a motion compensator 870, a first input of a motion estimator 875, and a second input of a reference picture buffer 880.
  • A second output of the encoder controller 805 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 830, a second input of the transformer and quantizer 825, a second input of the entropy coder 845, a second input of the output buffer 835, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840.
  • A first output of the picture-type decision module 815 is connected in signal communication with a third input of a frame ordering buffer 810. A second output of the picture-type decision module 815 is connected in signal communication with a second input of a macroblock-type decision module 820.
  • An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 840 is connected in signal communication with a third non-inverting input of the combiner 890.
  • An output of the inverse transformer and inverse quantizer 850 is connected in signal communication with a first non-inverting input of a combiner 825. An output of the combiner 825 is connected in signal communication with a first input of the intra prediction module 860 and a first input of the deblocking filter 865. An output of the deblocking filter 865 is connected in signal communication with a first input of a reference picture buffer 880. An output of the reference picture buffer 880 is connected in signal communication with a second input of the motion estimator 875. A first output of the motion estimator 875 is connected in signal communication with a second input of the motion compensator 870. A second output of the motion estimator 875 is connected in signal communication with a third input of the entropy coder 845.
  • An output of the motion compensator 870 is connected in signal communication with a first input of a switch 897. An output of the intra prediction module 860 is connected in signal communication with a second input of the switch 897. An output of the macroblock-type decision module 820 is connected in signal communication with a third input of the switch 897. An output of the switch 897 is connected in signal communication with a second non-inverting input of the combiner 825.
  • Inputs of the frame ordering buffer 810 and the encoder controller 805 are available as input of the encoder 800, for receiving an input picture 801. Moreover, an input of the Supplemental Enhancement Information (SEI) inserter 830 is available as an input of the encoder 800, for receiving metadata. An output of the output buffer 835 is available as an output of the encoder 800, for outputting a bitstream.
  • Turning to FIG. 10, a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC standard is indicated generally by the reference numeral 1000.
  • The video decoder 1000 includes an input buffer 1010 having an output connected in signal communication with a first input of an entropy decoder 1045. A first output of the entropy decoder 1045 is connected in signal communication with a first input of an inverse transformer and inverse quantizer 1050. An output of the inverse transformer and inverse quantizer 1050 is connected in signal communication with a second non-inverting input of a combiner 1025. An output of the combiner 1025 is connected in signal communication with a second input of a deblocking filter 1065 and a first input of an intra prediction module 1060. A second output of the deblocking filter 1065 is connected in signal communication with a first input of a reference picture buffer 1080. An output of the reference picture buffer 1080 is connected in signal communication with a second input of a motion compensator 1070.
  • A second output of the entropy decoder 1045 is connected in signal communication with a third input of the motion compensator 1070 and a first input of the deblocking filter 1065. A third output of the entropy decoder 1045 is connected in signal communication with an input of a decoder controller 1005. A first output of the decoder controller 1005 is connected in signal communication with a second input of the entropy decoder 1045. A second output of the decoder controller 1005 is connected in signal communication with a second input of the inverse transformer and inverse quantizer 1050. A third output of the decoder controller 1005 is connected in signal communication with a third input of the deblocking filter 1065. A fourth output of the decoder controller 1005 is connected in signal communication with a second input of the intra prediction module 1060, with a first input of the motion compensator 1070, and with a second input of the reference picture buffer 1080.
  • An output of the motion compensator 1070 is connected in signal communication with a first input of a switch 1097. An output of the intra prediction module 1060 is connected in signal communication with a second input of the switch 1097. An output of the switch 1097 is connected in signal communication with a first non-inverting input of the combiner 1025.
  • An input of the input buffer 1010 is available as an input of the decoder 1000, for receiving an input bitstream. A first output of the deblocking filter 1065 is available as an output of the decoder 1000, for outputting an output picture.
  • SUMMARY
  • These and other drawbacks and disadvantages of the prior art are addressed by the present principles, which are directed to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • According to an aspect of the present principles, there is provided an apparatus. The apparatus includes an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • According to another aspect of the present principles, there is provided a method. The method includes encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • According to yet another aspect of the present principles, there is provided an apparatus. The apparatus includes a decoder for decoding image data corresponding to pictures by reconstructing at least portions of the pictures partitioned using at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • According to still another aspect of the present principles, there is provided a method. The method includes decoding image data corresponding to pictures by reconstructing at least portions of the pictures partitioned using at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • These and other aspects, features and advantages of the present principles will become apparent from the following detailed description of exemplary embodiments, which is to be read in connection with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present principles may be better understood in accordance with the following exemplary figures, in which:
  • FIG. 1 shows a diagram for MPEG-4 AVC standard macroblock division sets to which the present principles may be applied;
  • FIGS. 2A and 2B show diagrams for additional motion compensation coding modes corresponding to the designated “first prior art approach” described herein;
  • FIG. 3 shows a diagram for motion compensation coding modes relating to the designated “second”, “third”, and “fourth prior art approaches” described herein;
  • FIG. 4A shows a diagram for macroblock partitioning according to a first motion compensation coding mode of the designated “sixth prior art approach” described herein;
  • FIG. 4B shows a diagram for macroblock partitioning according to a second motion compensation coding mode of the designated “sixth prior art approach” described herein;
  • FIG. 5 shows a diagram for a smooth boundary partition based on a polynomial model with partitions P0 and P1, according to an embodiment of the present principles;
  • FIG. 6 shows a diagram for an example of using a first order polynomial with parameters describing geometry (angle and position) for use as a parametric model, according to an embodiment of the present principles;
  • FIG. 7 shows a diagram for a partition mask generated from parametric model f(x,y) using a first degree polynomial, according to an embodiment of the present principles;
  • FIG. 8 shows a block diagram for a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC Standard;
  • FIG. 9 shows a block diagram for a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC Standard, extended for use with the present principles, according to an embodiment of the present principles;
  • FIG. 10 shows a block diagram for a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC Standard;
  • FIG. 11 shows a block diagram for a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC Standard, extended for use with the present principles, according to an embodiment of the present principles;
  • FIG. 12 shows a diagram for a parametric model based partitioned macroblock and its use together with a deblocking procedure, according to an embodiment of the present principles;
  • FIG. 13 shows a diagram for an example of partition parameters prediction for the right block from parameters of the left block, according to an embodiment of the present principles;
  • FIG. 14 shows a diagram for an example of partition parameters prediction for the lower block from parameters of the upper block, according to an embodiment of the present principles;
  • FIG. 15 shows a diagram for an example of partition parameters prediction for the right block from parameters of the upper and left blocks, according to an embodiment of the present principles;
  • FIG. 16 shows a diagram for an exemplary method for geometric modes estimation with model-based partition parameters and prediction search, according to an embodiment of the present principles;
  • FIG. 17 shows a flow diagram for an exemplary method for coding a geometrically partitioned prediction block, according to an embodiment of the present principles;
  • FIG. 18A shows a flow diagram for an exemplary method for coding a geometrically partitioned inter prediction block, according to an embodiment of the present principles;
  • FIG. 18B shows a flow diagram for an exemplary method for coding a geometrically partitioned intra prediction block, according to an embodiment of the present principles;
  • FIG. 19 shows a flow diagram for an exemplary method for coding with multiple types of models, according to an embodiment of the present principles;
  • FIG. 20 shows a flow diagram for an exemplary method for decoding a geometrically partitioned prediction block, according to an embodiment of the present principles;
  • FIG. 21A shows a flow diagram for an exemplary method for decoding a geometrically partitioned inter prediction block, according to an embodiment of the present principles;
  • FIG. 21B shows a flow diagram for an exemplary method for decoding a geometrically partitioned intra prediction block, according to an embodiment of the present principles;
  • FIG. 22 shows a flow diagram for an exemplary method for decoding with multiple types of models, according to an embodiment of the present principles;
  • FIG. 23 shows a flow diagram for an exemplary method for slice header syntax coding, according to an embodiment of the present principles;
  • FIG. 24 shows a flow diagram for an exemplary method for deriving geometric parameters precision, according to an embodiment of the present principles;
  • FIG. 25 shows a flow diagram for an exemplary method for reconstructing geometric blocks, according to an embodiment of the present principles;
  • FIG. 26 shows a flow diagram for an exemplary method for searching for the best mode for a current block, according to an embodiment of the present principles; and
  • FIG. 27 shows a flow diagram for an exemplary method for slice header syntax decoding, according to an embodiment of the present principles.
  • DETAILED DESCRIPTION
  • The present principles are directed to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • The present description illustrates the present principles. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present principles and are included within its spirit and scope.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present principles and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
  • Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
  • Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
  • Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • Reference in the specification to “one embodiment” or “an embodiment” of the present principles means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
  • It is to be appreciated that the terms “blocks” and “regions” are used interchangeably herein.
  • It is to be further appreciated that the phrase “existing video coding standard” and “video coding recommendation” may refer to any existing video coding standard and recommendation, including those not yet developed, but existing within a time of application of the present principles thereto. Such standards and recommendations include, but are not limited to, H.261, H.262, H.263, H.263+, H.263++, MPEG-1, MPEG-2, MPEG-4 AVC, and so forth.
  • Moreover, the term “extended version” when used with respect to a video coding standard and/or recommendation, refers to one that is modified, evolved, or otherwise extended.
  • Also, it is to be appreciated that the phrase “image data” is intended to refer to data corresponding to any of still images and moving images (i.e., a sequence of images including motion).
  • Additionally, as used herein, “high level syntax” refers to syntax present in the bitstream that resides hierarchically above the macroblock layer. For example, high level syntax, as used herein, may refer to, but is not limited to, syntax at the slice header level, Supplemental Enhancement Information (SEI) level, picture parameter set level, sequence parameter set level and NAL unit header level.
  • It is to be appreciated that the use of the term “and/or”, for example, in the case of “A and/or B”, is intended to encompass the selection of the first listed option (A), the selection of the second listed option (B), or the selection of both options (A and B). As a further example, in the case of “A, B, and/or C”, such phrasing is intended to encompass the selection of the first listed option (A), the selection of the second listed option (B), the selection of the third listed option (C), the selection of the first and the second listed options (A and B), the selection of the first and third listed options (A and C), the selection of the second and third listed options (B and C), or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
  • As noted above, the present principles are directed to methods and apparatus for adaptive geometric partitioning for video encoding and decoding.
  • One or more embodiments of the present principles use parametric models for frame region partitioning that are able to capture and represent local signal geometry, in order to overcome the inefficiencies of tree-based approaches. Parametric modeling, as used in various embodiments of the present principles, refers to defining at least one partition within an image portion (or macroblock) by implicit or explicit formulation of at least one curve (which, in the particular case of a first degree polynomial, becomes a straight line); a particular embodiment jointly defines the partitions and the curve according to the so-called “implicit curve” formulation. Formulation of a general curve as used in accordance with the present principles is distinguished from the sixth prior art approach described above in that the sixth prior art approach defines boundaries between sliced partitions within a block as a straight-line connection between two given points located on the periphery of the block.
  • Given a region or block of a frame to be predicted, a geometric partition mode is tested in addition to those based on classic tree partitioning. The concerned block or region is partitioned into several regions described by one or a set of parametric models. In particular, a form of this is the partition of blocks or regions into two partitions whose boundary is described by a parametric model or function ƒ(x, y, p⃗), where x and y represent the coordinate axes, and p⃗ represents the set of parameters carrying the information describing the shape of the partition. Once the frame block or region is divided into partitions using ƒ(x, y, p⃗), each generated partition is predicted by the most appropriate predictor, based on some distortion and coding cost measure trade-off.
  • Such a partition description is of interest because recent studies have demonstrated tree structures to be sub-optimal for coding image information. These studies maintain that tree-based coding of images is unable to optimally code heterogeneous regions separated by a regular edge or contour. This problem arises from the fact that tree structures are not able to optimally capture the geometrical redundancy existing along edges, contours or oriented textures. In video sequences, situations where edges and/or contours need to be coded are common. One of them arises when intra coded data is encoded: boundaries between different kinds of visual data, e.g., edges and object contours, are among the most relevant kinds of information. In inter coded data, contours around moving objects and between regions of differing motion are also of relevant importance.
  • An embodiment of the present principles provides a technique for general geometric frame partitioning adapted to the geometry of two dimensional (2D) data. Each one of the generated regions is then encoded by using the most efficient type of prediction, e.g., inter and/or intra prediction types. An embodiment includes the generation of geometric partitions in blocks or frame regions. Partition of blocks or frame regions into geometrically adapted partitions, instead of classic trees, allows for a reduction of the amount of information to be sent, as well as the amount of residue generated by the prediction procedure. In accordance with the present principles, a parametric model is used to generate, approximate and/or code the partition boundaries within each block. Such an approach allows for a better capture of the main geometric properties of the 2D data. For example, the model parameters can be defined to independently carry information involving, for example, but not limited to, partition boundary angle, position, discontinuities, and/or even curvature. The use of parametric models, for partition coding, allows for a very compact partition edge description, which minimizes the number of parameters to code. Furthermore, partition model parameters can be defined such as to decouple independent or different geometric information, in order to best code each of the parameters according to their statistics and nature. Such model-based treatment of geometric information also allows for the selective reducing or increasing of the amount of coding information invested per geometric parameter. In addition to coding efficiency, such a feature is useful to control computational complexity while minimizing the impact on coding efficiency.
  • One of the advantages of using parametric model-based partition descriptions is the possibility to efficiently describe smooth partition boundaries between two partitions in a block. Often, boundaries between two different moving objects, or edges in an intra frame, can be modeled and finely approximated by some kind of polynomial ƒp(x, y, p⃗). Turning to FIG. 5, a smooth boundary partition based on a polynomial model with partitions P0 and P1 is indicated generally by the reference numeral 500.
  • For the purpose of geometric image and video coding, the parameters of ƒ(x, y, p⃗) (also expressed as ƒ(x,y) in the following) can be chosen such that they describe geometric information such as local angle, position and/or some curvature magnitude. Hence, in the particular case of a first order polynomial ƒp(x, y, p⃗), block partitions can be represented such that they describe angle and distance with respect to a given set of coordinate axes:

  • ƒ(x,y)=x cos θ+y sin θ−ρ,
  • where the partition boundary is defined over those positions (x,y) such that ƒ(x,y)=0.
  • Turning to FIG. 6, an example of using a first order polynomial with parameters describing geometry (angle and position) for use as a parametric model is indicated generally by the reference numeral 600.
  • In an embodiment directed to the generation of two regions out of every block, an implicit formulation as follows could be used to describe the partitions:
  • GEO_Partition =
        Partition 0,      if ƒ(x,y) > 0
        Line Boundary,    if ƒ(x,y) = 0
        Partition 1,      if ƒ(x,y) < 0
  • All pixels located on one side of the zero line (ƒ(x,y)=0) are classified as belonging to one partition region (e.g., Partition 1). All pixels located on the other side are classified into the alternative region (e.g., Partition 0).
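  • As an illustrative sketch (not part of any standard, using hypothetical helper names), the implicit classification above can be realized for a square block as follows, assuming coordinate axes centred on the block; pixels with ƒ(x,y) = 0 are assigned here, by convention, to Partition 1:

        import numpy as np

        def classify_block(block_size, theta_deg, rho):
            # Evaluate the first order polynomial f(x, y) = x*cos(theta) + y*sin(theta) - rho
            # on a block_size x block_size grid centred on the block, and classify each
            # pixel by the sign of f: 0 -> Partition 0 (f > 0), 1 -> Partition 1 (f <= 0).
            theta = np.deg2rad(theta_deg)
            coords = np.arange(block_size) - (block_size - 1) / 2.0
            x, y = np.meshgrid(coords, coords)   # x varies along columns, y along rows
            f = x * np.cos(theta) + y * np.sin(theta) - rho
            return np.where(f > 0, 0, 1)

        # Example: a 16x16 block split by a line at 30 degrees, at distance 2 from the centre.
        partition_map = classify_block(16, 30.0, 2.0)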
  • Optional Method to Handle Pixels on Partition Boundaries:
  • Considering the discrete nature of the partitions, we find that in the neighborhood of the separation line or curve, some pixels can only be considered to partly belong to one or the other partition. This is due to the fact that the parametric model formulation is continuous while the realization of the partitions is discrete.
  • Such pixels may be subject to the influence of the predictor used to describe each one of the partition sides. Hence, such pixels may be labeled as “partial surface”, with a label different from those of Partitions 1 and 0. For simplicity, we adopt the convention of labeling pixels in one or the other partition with a 1 or 0. “Partial surface” pixels can thus be identified with some value in between, which may also encode how much the concerned pixel lies within partition 0 (e.g., a value of 1 would indicate completely, 0.5 would indicate half-half, and 0 would indicate not at all). Of course, the preceding numbering arrangements hereinbefore and throughout are provided for purposes of illustration and clarity and, given the teachings of the present principles provided herein, one of ordinary skill in this and related arts will contemplate these and various other numbering arrangements for use with the present principles, while maintaining the spirit of the present principles. The preceding is formally expressed by the following definition of labeling for Partition 0:
  • Label(x,y) =
        1,               if ƒ(x,y) >= 0.5
        ƒ(x,y) + 0.5,    if −0.5 < ƒ(x,y) < 0.5
        0,               if ƒ(x,y) <= −0.5
  • Label(x,y)=1 indicates that the pixel is included within the first partition. Label(x,y)=0 indicates that it is in the second partition; the remaining values state, for that particular pixel, that it is partially classified, and also indicate the weight with which the prediction from the first partition contributes to its value. The prediction from the second partition contributes with weight (1−Label(x,y)) to the value of the “partial surface” pixel. This generic pixel classification is generated under the form of a partition mask. Turning to FIG. 7, a partition mask generated from parametric model f(x,y) using a first degree polynomial is indicated generally by the reference numeral 700. As noted above, the floating point numbers stated herein above are just an example of possible selection values. Indeed, depending on f(x,y), threshold values other than 0.5 are possible. Every pixel classified as “partial surface” can also be predicted as a function of one or more neighboring pixels within one of the partitions that overlaps it, or as a combination of functions of more than one partition overlapping it. Also, it is to be appreciated by one of ordinary skill in this and related arts that any aspect of the present principles described herein may be adapted for integer implementation and/or may make use of look-up tables.
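  • The “partial surface” labeling can be sketched in the same setting, assuming the example threshold of 0.5 and the centred coordinate convention used above (again, the helper name is hypothetical):

        import numpy as np

        def partition_mask(block_size, theta_deg, rho):
            # Soft mask for Partition 0 following the Label(x, y) rule:
            #   1 where f >= 0.5, 0 where f <= -0.5, and f + 0.5 for the in-between
            #   "partial surface" pixels. The mask for Partition 1 is simply 1 - mask.
            theta = np.deg2rad(theta_deg)
            coords = np.arange(block_size) - (block_size - 1) / 2.0
            x, y = np.meshgrid(coords, coords)
            f = x * np.cos(theta) + y * np.sin(theta) - rho
            return np.clip(f + 0.5, 0.0, 1.0)

        mask_p0 = partition_mask(16, 30.0, 2.0)   # the kind of mask illustrated in FIG. 7
        mask_p1 = 1.0 - mask_p0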
  • Considerations for Sampling Partition Function, ƒ(x,y), Parameter Space:
  • Model parameters need to be encoded and transmitted to allow the decoder to determine the partition of the concerned block or region. For this purpose, the precision of partition parameters is limited according to the maximum amount of coding cost one is willing to invest for describing blocks or partition regions.
  • Without loss of generality, a dictionary of possible partitions (or geometric models) is a priori defined by determining the value range and sampling precision for each parameter of ƒ(x,y). In the case of the geometric first order polynomial boundary, for example, this can be defined such that:
  • ρ: ρ ∈ [0, (√2/2)·MB_Size) and ρ ∈ {0, Δρ, 2·Δρ, 3·Δρ, ...}, and
    θ: θ ∈ [0, 180) if ρ = 0, θ ∈ [0, 360) otherwise, and θ ∈ {0, Δθ, 2·Δθ, 3·Δθ, ...},
  • where Δρ and Δθ are the selected quantization (parameter precision) steps. Nevertheless, an offset in the selected values can be established. The quantized indices for θ and ρ are the information transmitted to code the partition shape. However, in the case where vertical and horizontal directional modes (as defined for the MPEG-4 AVC standard) are used as separate coding modes, geometric partitions with ρ=0 and angles 0 and 90 are removed from the set of possible partition configurations. This may save bits as well as reduce complexity.
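  • The dictionary of admissible partitions can then be enumerated directly from the quantization steps, as in the following sketch (parameter names and default step values are illustrative only); the ρ = 0 configurations at 0 and 90 degrees are dropped when vertical and horizontal directional modes are kept as separate coding modes:

        import math

        def geometric_dictionary(mb_size=16, delta_rho=1.0, delta_theta=11.25,
                                 exclude_directional=True):
            # Enumerate the quantized (theta, rho) pairs:
            #   rho   in [0, sqrt(2)/2 * mb_size) on the grid {0, d_rho, 2*d_rho, ...}
            #   theta in [0, 180) when rho == 0, otherwise in [0, 360),
            #         on the grid {0, d_theta, 2*d_theta, ...}
            rho_max = math.sqrt(2.0) / 2.0 * mb_size
            entries = []
            rho = 0.0
            while rho < rho_max:
                theta_max = 180.0 if rho == 0.0 else 360.0
                theta = 0.0
                while theta < theta_max:
                    directional = (rho == 0.0 and theta in (0.0, 90.0))
                    if not (exclude_directional and directional):
                        entries.append((theta, rho))
                    theta += delta_theta
                rho += delta_rho
            return entries

        print(len(geometric_dictionary()), "candidate partition shapes")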
  • The decoder needs to know the parameter precision used by the encoder. This can be sent for every type of partition parameter, either explicitly or implicitly as a function of some already existing data (e.g., the Quantization Parameter in the MPEG-4 AVC standard). Parameter precision can be adapted according to some high level syntax, such as at the sequence, picture, and/or slice level.
  • A video communication system using the region partitioning described herein with respect to the present principles should transmit, for every region using it, the set of necessary encoded parameters to describe the shape of the partition. The rest of the transmitted data, for every geometry encoded region, will be of similar kind to that transmitted by tree based partition modes. Indeed, for each model-based partition, prediction information should be transmitted. Additionally, residual prediction error may also eventually be encoded after prediction.
  • The use of parametric, model based, geometric regions partitioning influences all the processes in a video encoder/decoder that depend on the partitioning of the frame. Some of the more common processes/modules in video systems able to profit from the present principles, and that may be adapted to the present principles, include, but are not limited to: general control of the encoder/decoder; region prediction (motion compensation/intra data prediction); motion estimation; entropy coding/decoding; and in-loop filtering for artifacts reduction.
  • Hereinafter, an embodiment is described with respect to the MPEG-4 AVC Standard framework. However, it is to be appreciated that the present principles are not limited solely to the MPEG-4 AVC and may be readily utilized with respect to other video coding standards and recommendations, while maintaining the spirit of the present principles.
  • Extension of the MPEG-4 AVC Standard Video Encoder and Decoder to Consider Parametric Model Partitions in Accordance with the Present Principles:
  • An embodiment will now be described relating to an extension of the MPEG-4 AVC Standard in accordance with the present principles. The MPEG-4 AVC Standard relies on tree-based frame partitioning in order to optimize coding performance. Extending the MPEG-4 AVC Standard in accordance with an embodiment of the present principles helps to overcome the limitations inherent to tree-based frame partitioning to which the MPEG-4 AVC Standard is subject.
  • The use of parametric model-based region partitioning can be included in the MPEG-4 AVC Standard under the form of new block coding modes. The MPEG-4 AVC Standard tree-based frame partitioning divides each picture, when and where needed, into 16×16, 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 blocks. Each of these partition types is associated with a coding mode which, depending on the mode, can be of the inter or intra type. In addition to these block partition modes, we introduce an additional block partition mode in which a parametric model ƒ(x,y) is used to describe the partition within the block. Such a block mode partitioned with a parametric model is referred to herein as “Geometric Mode”. The goal is to generate partitions as big as possible; hence, the parametric model is intended to be applied to 16×16 size blocks or to unions of leaves of tree-based partitions. However, when compression efficiency is of concern, 8×8 “Geometric Mode” blocks are also considered. The use of 8×8 “Geometric Mode” blocks may also be enabled or disabled depending on complexity factors. A high level syntax element can be signaled in order to indicate whether 8×8 “Geometric Modes” are used or not. This can save coding overhead when such a mode is unused. Particular examples of syntax level include, but are not limited to, the sequence, picture and/or slice level.
  • In order to insert such a new family of coding modes, the encoder and/or decoder can be modified. As depicted in FIGS. 8, 9, 10, and 11, functionality of the main building blocks in the MPEG-4 AVC Standard can be modified and extended in order to handle the new modes, able to capture and code geometric information.
  • Turning to FIG. 9, a video encoder capable of performing video encoding in accordance with the MPEG-4 AVC standard, extended for use with the present principles, is indicated generally by the reference numeral 900.
  • The video encoder 900 includes a frame ordering buffer 910 having an output in signal communication with a non-inverting input of a combiner 985. An output of the combiner 985 is connected in signal communication with a first input of a transformer and quantizer with geometric extensions 927. An output of the transformer and quantizer with geometric extensions 927 is connected in signal communication with a first input of an entropy coder with geometric extensions 945 and a first input of an inverse transformer and inverse quantizer 950. An output of the entropy coder with geometric extensions 945 is connected in signal communication with a first non-inverting input of a combiner 990. An output of the combiner 990 is connected in signal communication with a first input of an output buffer 935.
  • A first output of an encoder controller with geometric extensions 905 is connected in signal communication with a second input of the frame ordering buffer 910, a second input of the inverse transformer and inverse quantizer 950, an input of a picture-type decision module 915, an input of a macroblock-type (MB-type) decision module with geometric extensions 920, a second input of an intra prediction module with geometric extensions 960, a second input of a deblocking filter with geometric extensions 965, a first input of a motion compensator with geometric extensions 970, a first input of a motion estimator with geometric extensions 975, and a second input of a reference picture buffer 980.
  • A second output of the encoder controller with geometric extensions 905 is connected in signal communication with a first input of a Supplemental Enhancement Information (SEI) inserter 930, a second input of the transformer and quantizer with geometric extensions 927, a second input of the entropy coder with geometric extensions 945, a second input of the output buffer 935, and an input of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 940.
  • A first output of the picture-type decision module 915 is connected in signal communication with a third input of a frame ordering buffer 910. A second output of the picture-type decision module 915 is connected in signal communication with a second input of a macroblock-type decision module with geometric extensions 920.
  • An output of the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS) inserter 940 is connected in signal communication with a third non-inverting input of the combiner 990.
  • An output of the inverse quantizer and inverse transformer 950 is connected in signal communication with a first non-inverting input of a combiner 925. An output of the combiner 925 is connected in signal communication with a first input of the intra prediction module with geometric extensions 960 and a first input of the deblocking filter with geometric extensions 965. An output of the deblocking filter with geometric extensions 965 is connected in signal communication with a first input of a reference picture buffer 980. An output of the reference picture buffer 980 is connected in signal communication with a second input of the motion estimator with geometric extensions 975. A first output of the motion estimator with geometric extensions 975 is connected in signal communication with a second input of the motion compensator with geometric extensions 970. A second output of the motion estimator with geometric extensions 975 is connected in signal communication with a third input of the entropy coder with geometric extensions 945.
  • An output of the motion compensator with geometric extensions 970 is connected in signal communication with a first input of a switch 997. An output of the intra prediction module with geometric extensions 960 is connected in signal communication with a second input of the switch 997. An output of the macroblock-type decision module with geometric extensions 920 is connected in signal communication with a third input of the switch 997. An output of the switch 997 is connected in signal communication with a second non-inverting input of the combiner 925 and with an inverting input of the combiner 985.
  • Inputs of the frame ordering buffer 910 and the encoder controller with geometric extensions 905 are available as input of the encoder 900, for receiving an input picture 901. Moreover, an input of the Supplemental Enhancement Information (SEI) inserter 930 is available as an input of the encoder 900, for receiving metadata. An output of the output buffer 935 is available as an output of the encoder 900, for outputting a bitstream.
  • Turning to FIG. 11, a video decoder capable of performing video decoding in accordance with the MPEG-4 AVC standard, extended for use with the present principles, is indicated generally by the reference numeral 1100.
  • The video decoder 1100 includes an input buffer 1110 having an output connected in signal communication with a first input of an entropy decoder with geometric extensions 1145. A first output of the entropy decoder with geometric extensions 1145 is connected in signal communication with a first input of an inverse transformer and inverse quantizer with geometric extensions 1150. An output of the inverse transformer and inverse quantizer with geometric extensions 1150 is connected in signal communication with a second non-inverting input of a combiner 1125. An output of the combiner 1125 is connected in signal communication with a second input of a deblocking filter with geometric extensions 1165 and a first input of an intra prediction module with geometric extensions 1160. A second output of the deblocking filter with geometric extensions 1165 is connected in signal communication with a first input of a reference picture buffer 1180. An output of the reference picture buffer 1180 is connected in signal communication with a second input of a motion compensator with geometric extensions 1170.
  • A second output of the entropy decoder with geometric extensions 1145 is connected in signal communication with a third input of the motion compensator with geometric extensions 1170 and a first input of the deblocking filter with geometric extensions 1165. A third output of the entropy decoder with geometric extensions 1145 is connected in signal communication with an input of a decoder controller with geometric extensions 1105. A first output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the entropy decoder with geometric extensions 1145. A second output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the inverse transformer and inverse quantizer with geometric extensions 1150. A third output of the decoder controller with geometric extensions 1105 is connected in signal communication with a third input of the deblocking filter with geometric extensions 1165. A fourth output of the decoder controller with geometric extensions 1105 is connected in signal communication with a second input of the intra prediction module with geometric extensions 1160, with a first input of the motion compensator 1170, and with a second input of the reference picture buffer 1180.
  • An output of the motion compensator with geometric extensions 1170 is connected in signal communication with a first input of a switch 1197. An output of the intra prediction module with geometric extensions 1160 is connected in signal communication with a second input of the switch 1197. An output of the switch 1197 is connected in signal communication with a first non-inverting input of the combiner 1125.
  • An input of the input buffer 1110 is available as an input of the decoder 1100, for receiving an input bitstream. A first output of the deblocking filter with geometric extensions 1165 is available as an output of the decoder 1100, for outputting an output picture.
  • Regarding a possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, encoder and/or decoder control modules may be modified/extended to include all the decision rules and coding processes structure necessary for “Geometric Modes”.
  • Regarding another possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, the motion compensation module may be adapted in order to compensate blocks with arbitrary partitions described by ƒ(x,y) and its parameters.
  • Regarding yet another possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, the motion estimation module may be adapted in order to test and select the most appropriate motion vectors for the different sorts of partitions available in the parametric model-based coding mode.
  • Regarding still another possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, intra frame prediction may be adapted in order to consider parametric model-based block partitioning with the possibility to select the most appropriate prediction mode in each partition.
  • Regarding a further possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, the deblocking in-loop filter module may be adapted in order to handle the more complicated shapes of motion regions within blocks with parametric model-based partitions.
  • Regarding a yet further possible modification/extension relating to the use of the present principles with respect to the MPEG-4 AVC Standard, entropy coding and/or decoding may be adapted and extended in order to code and/or decode the new data associated with the parametric model-based mode. Moreover, motion prediction may be adapted in order to handle the more complicated shapes of motion regions. Predictors for efficiently coding parametric model-based partition parameters may also be generated and used.
  • Encoder Specific Blocks:
  • Encoder Control:
  • The encoder control module may be extended in order to take into account the new modes based on the parametric model-based block partition. These modes (called Geometric Modes) are inserted within the existing ones in the MPEG-4 AVC standard. In the particular case of inter modes for motion compensation, 16×16 and 8×8 parametric model-based partitioned blocks are introduced. Each of these modes is inserted, respectively, within the Macroblock-size modes and within the sub-Macroblock-size modes. By structural similarity, these modes are logically inserted before, between, or after 16×8 and/or 8×16 for the Geometric 16×16 Mode, and before, between, or after 8×4 and/or 4×8 for the Geometric 8×8 Mode. In an example implementation, in order to allow a low-cost usage of 16×8 and 8×16, as well as, 8×4 and 4×8 modes for low bit-rate, 16×16 and 8×8 Geometric Modes are inserted right after their MPEG-4 AVC directional homologues. According to their global usage statistics, we can also insert them right before the MPEG-4 AVC directional modes (and sub-modes), as shown in TABLE 1 and TABLE 2.
  • TABLE 1
    Macroblock Modes: Sub-Macroblock Modes:
    16 × 16 block 8 × 8 block
    16 × 8 block 8 × 4 block
     8 × 16 block 4 × 8 block
    16 × 16 Geometric block 8 × 8 Geometric block
     8 × 8 Sub-macroblock 4 × 4 block
    . . .
  • TABLE 2
    Macroblock Modes: Sub-Macroblock Modes:
    16 × 16 block 8 × 8 block
    16 × 16 Geometric block 8 × 8 Geometric block
    16 × 8 block 8 × 4 block
     8 × 16 block 4 × 8 block
     8 × 8 Sub-macroblock 4 × 4 block
    . . .
  • Motion Estimation:
  • The motion estimation module may be adapted to handle, when needed, geometry adapted block partitions. As an example, in Geometric Mode, motion is described in the same way as for classic tree based partition modes 16×8, 8×16, 8×4 or 4×8. Indeed, these modes may function like some particular instances of the present parametric model-based partition mode. As such, they are excluded from the possible configurations of the parametric model in use. Every partition can be modeled with one or multiple references, depending on the needs, and whether a P or B block is being coded.
  • P-mode example: In a full P-mode parametric model-based partitioned block, both partitions are modeled by a matching patch selected from a reference frame. Each patch must have a shape tailored to fit the selected geometric partition. In the same way as in P macroblocks and P sub-macroblocks, a motion vector is transmitted per partition. In one example of this, motion vectors as well as ƒ(x,y) model parameters are selected such that the information included in the block is best described in terms of some distortion measure (D) and some coding cost measure (R). For this purpose, all parameters are jointly optimized for each block such that D and R are jointly minimized:
  • {MV1, MV0, θ, ρ} = arg min over MV1 ∈ Ω_MV1, MV0 ∈ Ω_MV0, θ ∈ Ω_θ, ρ ∈ Ω_ρ of [ D(MV1, MV0, θ, ρ) + λ·R(MV1, MV0, θ, ρ) ],
  • where λ is a multiplying factor, MV1 and MV0 stand for the two motion vectors of the partitions, θ and ρ represent the partition parameters for the particular case of the first order polynomial, and each Ωx represents the set of valid values for each kind of information.
  • An example of the adaptation of a distortion measure for use with one or more embodiments of the present principles is the use of the generated masks for each partition (see mask example in FIG. 7). Then, any classic block-based distortion measure can be modified to take partitions into account, such that:
  • D(MV1, MV0, θ, ρ) = Σ_{x̄ ∈ block} D(I(x̄), Ĩt(x̄ − MV1)) · MASK_P1(x,y) + Σ_{x̄ ∈ block} D(I(x̄), Ĩt(x̄ − MV0)) · MASK_P0(x,y)
  • In the expression above, MASK_P1(x,y) and MASK_P0(x,y) respectively represent each of the ƒ(x,y) partitions. Fast implementations of this are possible by rounding to zero those mask values that are very small (for example, smaller than a given threshold such as 0.5), which reduces the number of addition operations. An example of such a simplification is to generate a simplified mask where all values equal to or smaller than 0.5 are rounded to zero and all values greater than 0.5 are rounded to one. Then, in an embodiment, only those positions where the mask is 1 are summed to compute the distortion. In such a case, only addition operations are necessary and all positions with zero value in each mask can be ignored.
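  • A sketch of this mask-weighted distortion, using the sum of absolute differences as the per-pixel measure D and the soft mask from the earlier sketch; the fast variant binarizes the mask at 0.5 so that every pixel contributes to exactly one partition and only additions remain (all names are illustrative):

        import numpy as np

        def masked_sad(block, pred0, pred1, mask_p0):
            # block, pred0, pred1: float arrays of identical shape (current block and
            # the two motion-compensated patches). Full form: each pixel contributes
            # to the distortion of both predictors, weighted by MASK_P0 and
            # MASK_P1 = 1 - MASK_P0 respectively.
            mask_p1 = 1.0 - mask_p0
            return (np.abs(block - pred0) * mask_p0).sum() + \
                   (np.abs(block - pred1) * mask_p1).sum()

        def masked_sad_fast(block, pred0, pred1, mask_p0):
            # Simplified form: mask values rounded to {0, 1}; positions whose mask is
            # zero for a partition are skipped, so only addition operations are needed.
            hard0 = mask_p0 > 0.5
            return np.abs(block[hard0] - pred0[hard0]).sum() + \
                   np.abs(block[~hard0] - pred1[~hard0]).sum()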
  • In an embodiment, in addition to performing a motion search at every partition, partitions themselves should be determined together with the motion information. Hence, a search is performed on ƒ(x,y) parameters as well. Turning to FIG. 16, an exemplary method for geometric modes estimation with model-based partition parameters and prediction search (e.g., motion vectors search for motion estimation) is indicated generally by the reference numeral 1600.
  • The method 1600 includes a start block 1605 that passes control to a loop limit block 1610. The loop limit block 1610 performs a loop over the total number of possible edges (wherein the number of edges is geometric precision dependent), initializes a variable i, and passes control to a function block 1615. The function block 1615 generates a partition with a parameter set i, and passes control to a function block 1620. The function block 1620 searches for the best predictors given partition set i, and passes control to a decision block 1625. The decision block 1625 determines whether the best partition and the best prediction have been determined. If so, then control is passed to a function block 1630. Otherwise, control is passed to a loop limit block 1635.
  • The function block 1630 stores the best geometric parameters and the predictor choice, and passes control to the loop limit block 1635.
  • The loop limit block 1635 ends the loop for the total number of possible edges, and passes control to an end block 1640.
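  • Schematically, the loop of method 1600 can be sketched as follows; the candidate dictionary, the mask generation, the per-candidate predictor search (e.g., a motion search over both partitions) and the rate estimate are all supplied by the caller, and the candidate minimizing D + λ·R is retained (names are hypothetical):

        def search_geometric_mode(block, candidates, make_mask, search_predictors,
                                  rate, lam):
            # For every (theta, rho) candidate: build the partition mask, search the
            # best predictors for both partitions, evaluate the joint cost D + lam*R,
            # and keep the best geometric parameters and predictor choice.
            best = None
            for theta, rho in candidates:
                mask_p0 = make_mask(block.shape[0], theta, rho)
                pred0, pred1, dist = search_predictors(block, mask_p0)
                cost = dist + lam * rate(theta, rho, pred0, pred1)
                if best is None or cost < best[0]:
                    best = (cost, theta, rho, pred0, pred1)
            return best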
  • In case the use of several possible types of models for block partition is desired, motion estimation may involve testing the different models in order to find the best model adapted to the data. Selection of the best model at the decoder side may be handled by sending the necessary side information.
  • Entropy Coding:
  • Entropy coding may be extended in order to code geometric parameters according to their statistics, as well as according to prediction models from neighboring encoded/decoded blocks, which may themselves include geometric partition information. Motion vector predictors for blocks partitioned with parametric models are adapted to the geometry of their respective partitioned block as well as to that of the neighboring, already encoded blocks. Each geometric partition motion vector is predicted from an adaptively selected set of motion vectors from spatial and/or temporal neighboring blocks. An embodiment of this is the use, depending on the geometry of the current block partition, of 1 or 3 spatial neighbors. When the number of motion vectors is 3, these are median filtered. Then, predicted motion vectors are coded according to the MPEG-4 AVC Standard, using either variable length coding (VLC) or arithmetic coding (AC).
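  • A minimal sketch of such a geometry-dependent motion vector predictor, assuming the partition shape has already selected either one or three spatial neighbor motion vectors (the selection rule itself is left to the caller):

        import numpy as np

        def predict_partition_mv(neighbor_mvs):
            # neighbor_mvs: list of (mvx, mvy) pairs from spatial/temporal neighbors.
            # With a single neighbor its motion vector is used directly; with three
            # neighbors a component-wise median is taken, as in classic MV prediction.
            mvs = np.asarray(neighbor_mvs, dtype=float)
            if len(mvs) == 1:
                return mvs[0]
            return np.median(mvs, axis=0)

        # e.g. predict_partition_mv([(2, -1), (3, 0), (2, 1)]) -> array([2., 0.])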
  • Two exemplary coding approaches for model-based partition parameters will now be described.
  • In a first exemplary coding approach for model-based partition parameters, such parameters are coded without prediction when no neighboring model-based (or geometric) block exists. Then, for the first order polynomial case, in one embodiment of variable length coding, angles can be coded with uniform codes and the radius can use a Golomb code.
  • In a second exemplary coding approach for model-based partition parameters, such parameters are coded with prediction when at least one neighboring model-based (or geometric) block exists. An embodiment of parameter prediction is performed by projecting the parametric models from previous neighboring blocks into the current block. Indeed, for the first degree polynomial case, an example is to predict parameters by continuing the line of a previous block into the current block. When two neighboring blocks are available, the predicted line is the one connecting the crossing points of the neighboring lines with the macroblock boundaries.
  • Turning to FIG. 13, an example of partition parameters prediction for the right block from parameters of the left block is indicated generally by the reference numeral 1300.
  • Turning to FIG. 14, an example of partition parameters prediction for the lower block from parameters of the upper block is indicated generally by the reference numeral 1400.
  • Turning to FIG. 15, an example of partition parameters prediction for the right block from parameters of the upper and left blocks is indicated generally by the reference numeral 1500.
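  • For the first degree polynomial case of FIG. 13, the continuation of the left block's line into the current block can be sketched as follows, assuming each block uses axes centred on itself with x growing towards the current (right) block; the helper name is hypothetical:

        import math

        def predict_from_left(theta_deg, rho, block_size=16):
            # Re-express the left block's line x*cos(theta) + y*sin(theta) = rho in the
            # axes of the current block, whose centre lies block_size to the right:
            # the angle is kept and the distance becomes rho - block_size*cos(theta).
            theta = math.radians(theta_deg)
            rho_pred = rho - block_size * math.cos(theta)
            if rho_pred < 0:
                # Keep rho non-negative by flipping the line normal (theta + 180 degrees).
                rho_pred, theta_deg = -rho_pred, (theta_deg + 180.0) % 360.0
            return theta_deg, rho_pred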
  • Predicted parameters are then coded differentially using Golomb codes. In the particular case of the angle, its periodicity property may be exploited in order to obtain the best possible statistics for subsequent VLC or AC coding. In one example of VLC, one can use Golomb codes.
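  • As one concrete (non-normative) realization of this differential coding step, a signed parameter difference can be mapped to an unsigned index and written as a zero-order Exp-Golomb codeword; for the angle, the difference is first wrapped onto the shortest arc to exploit its periodicity:

        def exp_golomb(n):
            # Zero-order Exp-Golomb codeword (as a bit string) of an unsigned integer n.
            bits = bin(n + 1)[2:]
            return "0" * (len(bits) - 1) + bits

        def signed_to_unsigned(v):
            # Interleave signed differences: 0, 1, -1, 2, -2, ... -> 0, 1, 2, 3, 4, ...
            return 2 * v - 1 if v > 0 else -2 * v

        def code_angle_difference(theta_idx, theta_pred_idx, num_angles):
            # Wrap the quantized-angle difference onto the shortest arc, then code it.
            d = (theta_idx - theta_pred_idx) % num_angles
            if d > num_angles // 2:
                d -= num_angles
            return exp_golomb(signed_to_unsigned(d))

        # e.g. code_angle_difference(2, 30, 32) wraps the difference to +4 -> '0001000'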
  • Relating to the coding procedure structure of a geometric block mode, FIGS. 17, 18, and 19 depict a particular embodiment of coding flowcharts for general parametric model based blocks. Indeed, in order to code parametric model-based blocks, in addition to motion data, at some point of the block coding procedure, partition parameters are to be encoded.
  • Turning to FIG. 17, an exemplary method for coding a geometrically partitioned prediction block is indicated generally by the reference numeral 1700.
  • The method 1700 includes a start block 1705 that passes control to a decision block 1710. The decision block 1710 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 1715. Otherwise, control is passed to an end block 1730.
  • The function block 1715 codes the geometric mode type, and passes control to a function block 1720. The function block 1720 codes the geometric partition parameters, and passes control to a function block 1725. The function block 1725 codes the partitions prediction, and passes control to the end block 1730.
  • Turning to FIG. 18A, an exemplary method for coding a geometrically partitioned inter prediction block is indicated generally by the reference numeral 1800.
  • The method 1800 includes a start block 1802 that passes control to a decision block 1804. The decision block 1804 determines whether or not the current mode type is a geometric inter mode type. If so, then control is passed to a function block 1806. Otherwise, control is passed to an end block 1812.
  • The function block 1806 codes the geometric inter mode type, and passes control to a function block 1808. The function block 1808 codes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 1810. The function block 1810 codes the partitions inter prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 1812.
  • Turning to FIG. 18B, an exemplary method for coding a geometrically partitioned intra prediction block is indicated generally by the reference numeral 1850.
  • The method 1850 includes a start block 1852 that passes control to a decision block 1854. The decision block 1854 determines whether or not the current mode type is a geometric intra mode type. If so, then control is passed to a function block 1856. Otherwise, control is passed to an end block 1862.
  • The function block 1856 codes the geometric intra mode type, and passes control to a function block 1858. The function block 1858 codes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 1860. The function block 1860 codes the partitions intra prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 1862.
  • Turning to FIG. 19, an exemplary method for coding with multiple types of models is indicated generally by the reference numeral 1900.
  • The method 1900 includes a start block 1905 that passes control to a decision block 1910. The decision block 1910 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 1915. Otherwise, control is passed to an end block 1950.
  • The function block 1915 codes the geometric mode type, and passes control to a preparation block 1920. The preparation block 1920 selects parametric model A or B for the current partition. If parametric model A is selected, then control is passed to a function block 1935. Otherwise, if parametric model B is selected, then control is passed to a function block 1925.
  • The function block 1935 designates the code to correspond to parametric model A, and passes control to a function block 1940. The function block 1940 codes the geometric partition parameters for parametric model A, and passes control to a function block 1945.
  • The function block 1925 designates the code to correspond to parametric model B, and passes control to a function block 1930. The function block 1930 codes the geometric partition parameters for parametric model B, and passes control to the function block 1945.
  • The function block 1945 codes the partitions prediction, and passes control to the end block 1950.
  • Encoder/Decoder Shared Blocks:
  • Motion Compensation:
  • The motion compensation module may be extended in order to compensate the non-square/non-rectangular partitions in parametric model-based partitioned blocks. Block reconstruction for the motion compensation procedure directly follows from the motion estimation procedure described herein above. Indeed, compensation corresponds to using, as a predictor, the best set of partitions together with the two partition-shaped pixmaps associated with the motion vectors. As defined above, “partial surface” pixels are computed as a combination, according to a given rule, of the pixmaps associated with the motion vectors.
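  • The reconstruction of a geometric block can thus be sketched, under the same mask conventions as above, as the mask-weighted combination of the two partition-shaped pixmaps (illustrative names):

        def compensate_geometric_block(patch0, patch1, mask_p0):
            # patch0, patch1: motion-compensated reference patches for Partition 0 and
            # Partition 1 (float arrays of the block size); mask_p0 holds 1, 0 or a
            # fractional weight for the "partial surface" pixels.
            return mask_p0 * patch0 + (1.0 - mask_p0) * patch1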
  • Intra Prediction:
  • Intra prediction is upgraded in order to predict intra data according to the parametric model based partition of the block. Intra prediction with parametric model-based partition is defined in the same way as motion compensation and motion estimation with parametric model-based partitions, with the basic difference that intra prediction is used, instead, in order to fill each one of the generated partitions.
  • In-Loop De-Blocking Filter:
  • In-loop de-blocking filtering reduces blocking artifacts introduced by the block structure of the prediction, as well as, by the residual coding Discrete Cosine Transform (DCT). In-loop de-blocking filtering adapts filter strength depending on the encoded video data, as well as, depending on local intensity differences between pixels across block boundaries. An embodiment of the present principles introduces a new form of video data representation. Blocks including a parametric model-based partition do not necessarily have constant motion vector values, or constant reference frame values on every 4×4 block. Indeed, with the parametric model-based partition, in such arbitrary partitioned blocks, the area, and block boundaries affected by a given motion vector are defined by the shape enforced by the parametric model. Hence, a 4×4 block may appear to be half into one partition, and the other half into another partition, with all the implications this has, concerning the motion vector used and the reference frame used at a given location. The in-loop deblocking filter module is extended, thus, by adapting the process of the filter strength decision. This process should now be able to decide the filter strength taking into account the particular shape of internal block partitions. Depending on the part of the block boundary to filter, it needs to get the appropriate motion vector and reference frame according to the partition shape, and not according to the 4×4 block, as done by other MPEG-4 AVC modes. Turning to FIG. 12, a parametric model based partitioned macroblock is indicated generally by the reference numeral 1200. The parametric model based partitioned macroblock includes some examples of de-blocking areas, with an indication of how information is selected for the deblocking filtering strength decision. Filtering strength is computed once for each 4×4 block side that is subject to de-blocking filtering.
  • The partition considered for filtering strength computation is selected by choosing the partition that overlaps the most with the block side to be filtered. However, a second alternative method, in order to simplify computation in corner blocks, is to consider the whole transform block to have the motion and reference frame information from the partition that includes the largest part of both block edges subject to filtering.
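  • A schematic version of the first rule, under the assumption of a binary (or binarized) partition mask: the motion and reference information used for a filtered 4×4 block side is taken from the partition holding the majority of the pixels along that side (illustrative helper):

        def partition_for_edge(mask_p0, edge_pixels):
            # mask_p0: Partition 0 mask over the macroblock; edge_pixels: (row, col)
            # positions along the 4x4 block side subject to de-blocking filtering.
            # The partition covering most of the side supplies MV and reference data.
            votes_p0 = sum(1 for r, c in edge_pixels if mask_p0[r][c] > 0.5)
            return 0 if votes_p0 * 2 >= len(edge_pixels) else 1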
  • A third alternative method for combining deblocking in-loop filtering with the use of parametric model-based blocks partitioning is to always allow some degree of filtering through block boundaries whenever and wherever the block boundary is affected by a model-based block partitioned mode (e.g., Geometric Mode). The Geometric Mode can be any of the blocks affecting/neighboring the boundary. At the same time, deblocking filtering may or may not be applied to those transform blocks, in a geometric mode, that are not located on the boundary of a macroblock.
  • A fourth alternative for combining deblocking in-loop filtering considers either of the first two methods but adds the following to the set of conditions that trigger the use of some degree of filtering in a transform block: if the block boundary is affected by the transform block that includes the junction between the model-based partition curve and the macroblock boundary, then some degree of deblocking is used.
  • Decoder Specific Blocks:
  • Decoder Control Module:
  • The decoder control module may be extended in order to take into account the new modes based on the parametric model-based block partition. These modes (Geometric Modes) are inserted within the existing ones in the MPEG-4 AVC Standard in the same way as performed at the encoder end. The decoder control module may be modified in order to perfectly match the structure and decoding procedures sequence of the encoder in order to recover exactly the information encoded at the encoder side.
  • Entropy Decoding:
  • Entropy decoding may be extended for model-based block partitioning usage. Entropy decoding needs to be extended so that it matches the entropy coding procedure described above. FIGS. 20, 21, and 22 describe possible embodiments for decoding the information related to parametric model-based coding modes, once the codeword indicating which block mode is used has already been decoded and is available for decoder control.
  • Turning to FIG. 20, an exemplary method for decoding a geometrically partitioned prediction block is indicated generally by the reference numeral 2000.
  • The method 2000 includes a start block 2005 that passes control to a function block 2010. The function block 2010 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2015. Otherwise, control is passed to an end block 2025.
  • The function block 2015 decodes the geometric partition parameters, and passes control to a function block 2020. The function block 2020 decodes the partitions prediction, and passes control to the end block 2025.
  • Turning to FIG. 21A, an exemplary method for decoding a geometrically partitioned inter prediction block is indicated generally by the reference numeral 2100.
  • The method 2100 includes a start block 2112 that passes control to a function block 2114. The function block 2114 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2116. Otherwise, control is passed to an end block 2120.
  • The function block 2116 decodes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 2118. The function block 2118 decodes the partitions inter prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 2120.
  • Turning to FIG. 21B, an exemplary method for decoding a geometrically partitioned intra prediction block is indicated generally by the reference numeral 2150.
  • The method 2150 includes a start block 2162 that passes control to a function block 2164. The function block 2164 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2166. Otherwise, control is passed to an end block 2170.
  • The function block 2166 decodes the geometric partition parameters (for example, using neighboring geometric data if available for prediction, and adapting coding tables accordingly), and passes control to a function block 2168. The function block 2168 decodes the partitions intra prediction (for example, using neighboring decoded data if available for prediction, and adapting coding tables accordingly), and passes control to the end block 2170.
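  • A minimal sketch of the parameter reconstruction implied by FIGS. 21A and 21B, where the decoded symbols are interpreted either as residuals relative to a neighbor-based prediction or as absolute values when no prediction is available (names and types below are illustrative, not from the source):

```python
from typing import Optional, Tuple


def reconstruct_geometric_params(
    decoded_symbols: Tuple[int, int],
    predicted: Optional[Tuple[int, int]] = None,
) -> Tuple[int, int]:
    """Rebuild (angle_index, distance_index) from decoded symbols.

    When a prediction extrapolated from neighboring geometric blocks is
    available, the decoded symbols are treated as residuals; otherwise they
    are treated as absolute indices, mirroring the two coding paths the
    text describes (with different contexts or coding tables per path).
    """
    if predicted is not None:
        return (predicted[0] + decoded_symbols[0],
                predicted[1] + decoded_symbols[1])
    return decoded_symbols
```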
  • Turning to FIG. 22, an exemplary method for decoding with multiple types of models is indicated generally by the reference numeral 2200.
  • The method 2200 includes a start block 2205 that passes control to a decision block 2210. The decision block 2210 determines whether or not the current mode type is a geometric mode type. If so, then control is passed to a function block 2215. Otherwise, control is passed to an end block 2240.
  • The function block 2215 decodes the parametric model selection, and passes control to a preparation block 2220. The preparation block 2220 selects parametric model A or B for the current partition. If parametric model A is selected, then control is passed to a function block 2225. Otherwise, if parametric model B is selected, then control is passed to a function block 2230.
  • The function block 2225 decodes the geometric partition parameters for parametric model A, and passes control to a function block 2235.
  • The function block 2230 decodes the geometric partition parameters for parametric model B, and passes control to the function block 2235.
  • The function block 2235 decodes the partitions prediction, and passes control to an end block 2240.
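  • A compact sketch of the control flow of FIG. 22 follows; the syntax-element names and the read_symbol callable are placeholders standing in for the entropy decoder, not actual standard syntax:

```python
def decode_geometric_block(read_symbol):
    """Illustrative control flow for FIG. 22: decode the model selection,
    then the parameters of the selected model, then the partition prediction.

    read_symbol stands in for the entropy decoder; it returns the next
    decoded value when called with a (hypothetical) syntax-element name.
    """
    model = read_symbol("parametric_model_id")      # selects model A or B
    if model == "A":
        params = read_symbol("model_A_parameters")
    else:
        params = read_symbol("model_B_parameters")
    prediction = read_symbol("partition_prediction")
    return model, params, prediction


# Example with a canned symbol stream standing in for the bitstream.
_symbols = iter(["A", (26, 3), "inter prediction data"])
print(decode_geometric_block(lambda name: next(_symbols)))
```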
  • Turning to FIG. 23, an exemplary method for slice header syntax coding is indicated generally by the reference numeral 2300.
  • The method 2300 includes a start block that passes control to a function block 2310. The function block 2310 codes slice related information I, and passes control to a function block 2315. The function block 2315 codes the slice quality (QP) coding information, and passes control to a function block 2320. The function block 2320 codes the geometric parameters precision information, and passes control to a function block 2325. The function block 2325 codes the slice related information II, and passes control to an end block 2330. The phrases “slice related information I” and “slice related information II” denote slice header related information, such that the geometric precision parameters are inserted within the existing syntax of the slice header.
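  • As a rough sketch of the ordering described for FIG. 23 (the element names below are placeholders, not actual bitstream syntax), the geometric parameter precision is written in the slice header after the slice QP information and before the remaining slice data:

```python
def write_slice_header(write, slice_info_1, slice_qp_info, geo_precision,
                       slice_info_2):
    """Illustrative ordering only: write is any callable that serializes a
    named syntax element, e.g. write("geometric_parameters_precision", 2)."""
    write("slice_related_information_I", slice_info_1)
    write("slice_qp_info", slice_qp_info)
    write("geometric_parameters_precision", geo_precision)
    write("slice_related_information_II", slice_info_2)
```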
  • Turning to FIG. 24, an exemplary method for deriving geometric parameters precision is indicated generally by the reference numeral 2400.
  • The method 2400 includes a start block 2405 that passes control to a function block 2410. The function block 2410 gets the QP parameter for the present (i.e., current) macroblock, and passes control to a function block 2415. The function block 2415 computes the geometric parameter precision, and passes control to an end block 2420.
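  • The exact mapping from the QP parameter to the geometric parameter precision is not reproduced here; the following is a hypothetical, monotone mapping illustrating the idea that a higher QP (lower quality) warrants coarser precision, which also reduces encoder search complexity:

```python
def geometric_parameter_precision(qp: int,
                                  finest: int = 3,
                                  coarsest: int = 0) -> int:
    """Hypothetical QP-to-precision mapping (not the patent's formula).

    Maps QP in [0, 51] onto precision levels from `finest` (low QP) down
    to `coarsest` (high QP)."""
    level = finest - (qp * (finest - coarsest + 1)) // 52
    return max(coarsest, min(finest, level))


# Example: QP 0 -> 3, QP 26 -> 1, QP 51 -> 0.
print([geometric_parameter_precision(qp) for qp in (0, 26, 51)])
```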
  • Turning to FIG. 25, an exemplary method for reconstructing geometric blocks is indicated generally by the reference numeral 2500.
  • The method 2500 includes a start block 2505 that passes control to a function block 2510. The function block 2510 determines the geometric partition from the parameters, and passes control to a function block 2515. The function block 2515 recomposes the partitions prediction, and passes control to a function block 2520. The function block 2520 applies an anti-aliasing procedure, and passes control to a function block 2525. The function block 2525 adds the reconstructed residual, and passes control to an end block 2530.
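  • A minimal sketch of the reconstruction path of FIG. 25, assuming a first-order (line) partition model and a simple linear blend as the anti-aliasing procedure; function and parameter names are illustrative, not the patent's implementation:

```python
import numpy as np


def reconstruct_geometric_block(pred0, pred1, angle, dist, residual,
                                blend=1.0):
    """Rebuild a block from two partition predictions plus residual.

    pred0, pred1: per-partition prediction arrays (same square shape).
    angle, dist:  line parameters (radians, pixels) of the partition boundary.
    residual:     decoded residual array.
    blend:        half-width (pixels) of the anti-aliasing transition band.
    """
    size = pred0.shape[0]
    y, x = np.mgrid[0:size, 0:size]
    # Signed distance of each pixel to the partition line (block-centered).
    f = (x - size / 2) * np.cos(angle) + (y - size / 2) * np.sin(angle) - dist
    # Soft mask: 0 on one side, 1 on the other, linear ramp near the boundary,
    # i.e. a weighted linear average of the two predictions around the line.
    w = np.clip(0.5 + f / (2.0 * blend), 0.0, 1.0)
    prediction = w * pred1 + (1.0 - w) * pred0
    return np.clip(prediction + residual, 0, 255)
```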
  • Turning to FIG. 26, an exemplary method for searching for the best mode for a current block is indicated generally by the reference numeral 2600.
  • The method 2600 includes a start block 2605 that passes control to a function block 2610, a function block 2615, a function block 2620, a function block 2625, and a function block 2630. The function block 2610 tests the 16×16 block mode, and passes control to a function block 2635. The function block 2615 tests the 16×8 block mode, and passes control to a function block 2635. The function block 2620 tests the 8×16 block mode, and passes control to a function block 2635. The function block 2625 tests the 16×16 geometric block mode, and passes control to a function block 2635. The function block 2630 tests the 8×8 block modes, and passes control to a function block 2635.
  • The function block 2635 selects the best mode for the current block, and passes control to an end block 2640.
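  • The mode decision of FIG. 26 amounts to an exhaustive rate-distortion comparison over the candidate modes. A generic sketch, with rd_cost standing in for whatever distortion and rate measurement the encoder uses (names are illustrative):

```python
def select_best_mode(block, modes, rd_cost, lagrange_lambda):
    """Exhaustive mode decision over the candidate list
    (e.g., 16x16, 16x8, 8x16, 16x16 geometric, and 8x8 sub-modes)."""
    best_mode, best_cost = None, float("inf")
    for mode in modes:
        distortion, rate = rd_cost(block, mode)
        cost = distortion + lagrange_lambda * rate   # J = D + lambda * R
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```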
  • Turning to FIG. 27, an exemplary method for slice header syntax decoding is indicated generally by the reference numeral 2700.
  • The method 2700 includes a start block 2705 that passes control to a function block 2710. The function block 2710 decodes the slice related information I, and passes control to a function block 2715. The function block 2715 decodes the slice quality (QP) coding information, and passes control to a function block 2720. The function block 2720 decodes the geometric parameters precision information, and passes control to a function block 2725. The function block 2725 decodes the slice related information II, and passes control to an end block 2730.
  • A description will now be given of some of the many attendant advantages/features of the present invention, some of which have been mentioned above. For example, one advantage/feature is an apparatus that includes an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model. The at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein at least one of the at least one parametric model and the at least one curve are derived from a geometric signal model.
  • Yet another advantage/feature is the apparatus having the encoder as described above, wherein at least one of the at least one parametric model and the at least one curve describe at least one of, one or more image contours, and, one or more motion boundaries.
  • Still another advantage/feature is the apparatus having the encoder as described above, wherein at least one polynomial is used as at least one of the at least one parametric model and the at least one curve.
  • Moreover, another advantage/feature is the apparatus having the encoder as described above, wherein a first order polynomial model is used as at least one of the at least one parametric model and the at least one curve.
  • Further, another advantage/feature is the apparatus having the encoder wherein a first order polynomial model is used as described above, wherein the first order polynomial model includes an angle parameter and a distance parameter.
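  • Assuming the usual angle-distance form of a first-order model, in which the partition boundary is the line x·cos(angle) + y·sin(angle) − distance = 0 in block-centered coordinates, a partition mask can be derived as in the following sketch (an assumption for illustration, not the patent's exact definition):

```python
import numpy as np


def partition_labels(block_size, angle, dist):
    """Label each pixel of a square block with partition 0 or 1 using a
    first-order (line) model with an angle and a distance parameter,
    coordinates taken relative to the block center."""
    y, x = np.mgrid[0:block_size, 0:block_size]
    xc, yc = x - block_size / 2.0, y - block_size / 2.0
    f = xc * np.cos(angle) + yc * np.sin(angle) - dist
    return (f >= 0).astype(np.uint8)


# Example: split a 16x16 block along a 45-degree line offset 2 pixels
# from the block center.
print(partition_labels(16, np.pi / 4, 2.0))
```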
  • Also, another advantage/feature is the apparatus having the encoder as described above, wherein the at least one parametric model for a given image portion is adaptively selected from a set of models when more than one parametric model is available, and the selection is explicitly or implicitly coded.
  • Additionally, another advantage/feature is the apparatus having the encoder as described above, wherein the encoder performs explicit or implicit coding of a precision of parameters of at least one of the at least one parametric model and the at least one curve using at least one high level syntax element.
  • Moreover, another advantage/feature is the apparatus having the encoder that uses the at least one high level syntax element as described above, wherein the at least one high level syntax element is placed at least one of a slice header level, a Supplemental Enhancement Information (SEI) level, a picture parameter set level, a sequence parameter set level and a network abstraction layer unit header level.
  • Further, another advantage/feature is the apparatus having the encoder as described above, wherein a precision of parameters of at least one of the at least one parametric model and the at least one curve is adapted in order to control at least one of compression efficiency and encoder complexity.
  • Also, another advantage/feature is the apparatus having the encoder as described above, wherein the precision of the parameters of at least one of the at least one parametric model and the at least one curve is adapted depending on a compression quality parameter.
  • Additionally, another advantage/feature is the apparatus having the encoder as described above, wherein predictor data, associated with at least one partition of at least one of the pictures, is predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
  • Moreover, another advantage/feature is the apparatus having the encoder as described above, wherein partition model parameters for at least one of the at least one parametric model and the at least one curve are predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
  • Further, another advantage/feature is the apparatus having the encoder as described above, wherein the encoder computes prediction values for pixels that, according to at least one of the at least one parametric model and the at least one curve, lay partly in more than one partition, using at least one of an anti-aliasing procedure, a combination of a part of prediction values for corresponding positions of the pixels, a totality of the prediction values for the corresponding positions of the pixels, a neighborhood, predictors of different partitions, from among the more than one partition, where the pixel is deemed to partly lay.
  • Also, another advantage/feature is the apparatus having the encoder as described above, wherein the encoder is an extended version of an existing hybrid predictive encoder of an existing video coding standard or video coding recommendation.
  • Additionally, another advantage/feature is the apparatus having the encoder that is the extended version of the existing hybrid predictive encoder of the existing video coding standard or video coding recommendation as described above, wherein the encoder applies parametric model based partitions to at least one of macroblocks and sub-macroblocks of the pictures as coding modes for at least one of the macroblocks and the sub-macroblocks, respectively.
  • Moreover, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein parametric model-based coding modes are inserted within existing macroblock and sub-macroblock coding modes of an existing video coding standard or video coding recommendation.
  • Further, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder encodes model parameters of at least one of the at least one parametric model and the at least one curve to generate the parametric model-based partitions along with partitions prediction data.
  • Also, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder selects model parameters of at least one of the at least one parametric model, the at least one curve, and partition predictions in order to jointly minimize at least one of a distortion measure and a coding cost measure.
  • Additionally, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein pixels of at least one of the pictures that overlap at least two parametric model-based partitions are a weighted linear average from predictions of the at least two parametric model-based partitions.
  • Moreover, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein partition predictions are of at least one of the type inter and intra.
  • Further, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder selectively uses parameter predictions for at least one of the at least one parametric model and the at least one curve for partition model parameters coding.
  • Also, another advantage/feature is the apparatus having the encoder that selectively uses the parameter predictions as described above, wherein a prediction for a current block of a particular one of the pictures is based on curve extrapolation from neighboring blocks into the current block.
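  • For a first-order (line) model, such an extrapolation can be reduced to re-expressing the neighbor's line in the current block's coordinate system: the angle is preserved, and the distance shifts by the projection of the block offset onto the line normal. A sketch under that assumption (names are illustrative):

```python
import math


def extrapolate_line(angle, dist, dx, dy):
    """Re-express a neighbor's partition line in the current block's
    coordinates, where (dx, dy) is the offset of the current block's
    reference point relative to the neighbor's reference point."""
    return angle, dist - (dx * math.cos(angle) + dy * math.sin(angle))
```

  • For example, for a 16×16 block immediately to the right of its left neighbor, with both lines referenced to the respective block centers, the offset is (dx, dy) = (16, 0).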
  • Additionally, another advantage/feature is the apparatus having the encoder that selectively uses the parameter predictions as described above, wherein the encoder uses different contexts or coding tables to encode the image data depending on whether or not parameters of at least one of the at least one parametric model and the at least one curve are predicted.
  • Moreover, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder is an extended version of an encoder for the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation.
  • Further, another advantage/feature is the apparatus having the encoder that applies the parametric model based partitions as described above, wherein the encoder applies at least one of deblocking filtering and reference frame filtering adapted to handle transform-size blocks affected by at least one parametric model-based partition due to non-tree-based partitioning of the at least one of the macroblocks and the sub-macroblocks when parametric model-based partition modes are used.
  • These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
  • Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
  • It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.
  • Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.

Claims (55)

1. An apparatus, comprising:
an encoder for encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model, wherein the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
2. The apparatus of claim 1, wherein at least one of the at least one parametric model and the at least one curve are derived from a geometric signal model.
3. The apparatus of claim 1, wherein at least one of the at least one parametric model and the at least one curve describe at least one of, one or more image contours, and, one or more motion boundaries.
4. The apparatus of claim 1, wherein at least one polynomial is used as at least one of the at least one parametric model and the at least one curve.
5. The apparatus of claim 1, wherein a first order polynomial model is used as at least one of the at least one parametric model and the at least one curve.
6. The apparatus of claim 5, wherein the first order polynomial model includes an angle parameter and a distance parameter.
7. The apparatus of claim 1, wherein the at least one parametric model for a given image portion is adaptively selected from a set of models when more than one parametric model is available, and the selection is explicitly or implicitly coded.
8. The apparatus of claim 1, wherein said encoder performs explicit or implicit coding of a precision of parameters of at least one of the at least one parametric model and the at least one curve using at least one high level syntax element.
9. The apparatus of claim 8, wherein the at least one high level syntax element is placed at least one of a slice header level, a Supplemental Enhancement Information (SEI) level, a picture parameter set level, a sequence parameter set level and a network abstraction layer unit header level.
10. The apparatus of claim 1, wherein a precision of parameters of at least one of the at least one parametric model and the at least one curve is adapted in order to control at least one of compression efficiency and encoder complexity.
11. The apparatus of claim 10, wherein the precision of the parameters of at least one of the at least one parametric model and the at least one curve is adapted depending on a compression quality parameter.
12. The apparatus of claim 1, wherein predictor data, associated with at least one partition of at least one of the pictures, is predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
13. The apparatus of claim 1, wherein partition model parameters for at least one of the at least one parametric model and the at least one curve are predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
14. The apparatus of claim 1, wherein said encoder computes prediction values for pixels that, according to at least one of the at least one parametric model and the at least one curve, lay partly in more than one partition, using at least one of an anti-aliasing procedure, a combination of a part of prediction values for corresponding positions of the pixels, a totality of the prediction values for the corresponding positions of the pixels, a neighborhood, predictors of different partitions, from among the more than one partition, where the pixel is deemed to partly lay.
15. The apparatus of claim 1, wherein said encoder is an extended version of an existing hybrid predictive encoder of an existing video coding standard or video coding recommendation.
16. The apparatus of claim 15, wherein said encoder applies parametric model based partitions to at least one of macroblocks and sub-macroblocks of the pictures as coding modes for at least one of the macroblocks and the sub-macroblocks, respectively.
17. The apparatus of claim 16, wherein parametric model-based coding modes are inserted within existing macroblock and sub-macroblock coding modes of an existing video coding standard or video coding recommendation.
18. The apparatus of claim 16, wherein said encoder encodes model parameters of at least one of the at least one parametric model and the at least one curve to generate the parametric model-based partitions along with partitions prediction data.
19. The apparatus of claim 16, wherein said encoder selects model parameters of at least one of the at least one parametric model, the at least one curve, and partition predictions in order to jointly minimize at least one of a distortion measure and a coding cost measure.
20. The apparatus of claim 16, wherein pixels of at least one of the pictures that overlap at least two parametric model-based partitions are a weighted linear average from predictions of the at least two parametric model-based partitions.
21. The apparatus of claim 16, wherein partition predictions are of at least one of the type inter and intra.
22. The apparatus of claim 16, wherein said encoder selectively uses parameter predictions for at least one of the at least one parametric model and the at least one curve for partition model parameters coding.
23. The apparatus of claim 22, wherein a prediction for a current block of a particular one of the pictures is based on curve extrapolation from neighboring blocks into the current block.
24. The apparatus of claim 22, wherein said encoder uses different contexts or coding tables to encode the image data depending on whether or not parameters of at least one of the at least one parametric model and the at least one curve are predicted.
25. The apparatus of claim 16, wherein said encoder is an extended version of an encoder for the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation.
26. The apparatus of claim 16, wherein said encoder applies at least one of deblocking filtering and reference frame filtering adapted to handle transform-size blocks affected by at least one parametric model-based partition due to non-tree-based partitioning of the at least one of the macroblocks and the sub-macroblocks when parametric model-based partition modes are used, and wherein the deblocking filtering and the reference frame filtering are dependent upon at least one of whichever one of the at least one parametric model-based partition is used and a selected shape of the at least one parametric model-based partition.
27. The apparatus of claim 15, wherein said encoder adapts at least one of a residual transform and inverse residual transform pair and a quantization procedure de-quantization procedure pair depending on a selected parametric model-based partition.
28. A method, comprising:
encoding image data corresponding to pictures by adaptively partitioning at least portions of the pictures responsive to at least one parametric model, wherein the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
29. The method of claim 28, wherein at least one of the at least one parametric model and the at least one curve are derived from a geometric signal model.
30. The method of claim 28, wherein at least one of the at least one parametric model and the at least one curve describe at least one of, one or more image contours, and, one or more motion boundaries.
31. The method of claim 28, wherein at least one polynomial is used as at least one of the at least one parametric model and the at least one curve.
32. The method of claim 28, wherein a first order polynomial model is used as at least one of the at least one parametric model and the at least one curve.
33. The method of claim 32, wherein the first order polynomial model includes an angle parameter and a distance parameter.
34. The method of claim 28, wherein the at least one parametric model for a given image portion is adaptively selected from a set of models when more than one parametric model is available, and the selection is explicitly or implicitly coded.
35. The method of claim 28, wherein said encoding step performs explicit or implicit coding of a precision of parameters of at least one of the at least one parametric model and the at least one curve using at least one high level syntax element.
36. The method of claim 35, wherein the at least one high level syntax element is placed at least one of a slice header level, a Supplemental Enhancement Information (SEI) level, a picture parameter set level, a sequence parameter set level and a network abstraction layer unit header level.
37. The method of claim 28, wherein a precision of parameters of at least one of the at least one parametric model and the at least one curve is adapted in order to control at least one of compression efficiency and encoder complexity.
38. The method of claim 37, wherein the precision of the parameters of at least one of the at least one parametric model and the at least one curve is adapted depending on a compression quality parameter.
39. The method of claim 28, wherein predictor data, associated with at least one partition of at least one of the pictures, is predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
40. The method of claim 28, wherein partition model parameters for at least one of the at least one parametric model and the at least one curve are predicted from at least one of spatial neighboring blocks and temporal neighboring blocks.
41. The method of claim 28, wherein said encoding step computes prediction values for pixels that, according to at least one of the at least one parametric model and the at least one curve, lay partly in more than one partition, using at least one of an anti-aliasing procedure, a combination of a part of prediction values for corresponding positions of the pixels, a totality of the prediction values for the corresponding positions of the pixels, a neighborhood, predictors of different partitions, from among the more than one partition, where the pixel is deemed to partly lay.
42. The method of claim 28, wherein the encoding step is performed in an encoder that is an extended version of an existing hybrid predictive encoder of an existing video coding standard or video coding recommendation.
43. The method of claim 42, wherein said encoding step applies parametric model based partitions to at least one of macroblocks and sub-macroblocks of the pictures as coding modes for at least one of the macroblocks and the sub-macroblocks, respectively.
44. The method of claim 43, wherein parametric model-based coding modes are inserted within existing macroblock and sub-macroblock coding modes of an existing video coding standard or video coding recommendation.
45. The method of claim 43, wherein said encoding step encodes model parameters of at least one of the at least one parametric model and the at least one curve to generate the parametric model-based partitions along with partitions prediction data.
46. The method of claim 43, wherein said encoding step selects model parameters of at least one of the at least one parametric model, the at least one curve, and partition predictions in order to jointly minimize at least one of a distortion measure and a coding cost measure.
47. The method of claim 43, wherein pixels of at least one of the pictures that overlap at least two parametric model-based partitions are a weighted linear average from predictions of the at least two parametric model-based partitions.
48. The method of claim 43, wherein partitions predictions are of at least one of the type inter and intra.
49. The method of claim 43, wherein said encoding step selectively uses parameter predictions for at least one of the at least one parametric model and the at least one curve for partition model parameters coding.
50. The method of claim 49, wherein a prediction for a current block of a particular one of the pictures is based on curve extrapolation from neighboring blocks into the current block.
51. The method of claim 49, wherein said encoding step uses different contexts or coding tables to encode the image data depending on whether or not parameters of at least one of the at least one parametric model and the at least one curve are predicted.
52. The method of claim 43, wherein said encoding step is performed in an extended version of an encoder for the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video Coding (AVC) standard/International Telecommunication Union, Telecommunication Sector (ITU-T) H.264 recommendation.
53. The method of claim 43, wherein said encoding step applies at least one of deblocking filtering and reference frame filtering adapted to handle transform-size blocks affected by at least one parametric model-based partition due to non-tree-based partitioning of the at least one of the macroblocks and the sub-macroblocks when parametric model-based partition modes are used, and wherein the deblocking filtering and the reference frame filtering are dependent upon at least one of whichever one of the at least one parametric model-based partition is used and a selected shape of the at least one parametric model-based partition.
54. The method of claim 42, wherein said encoding adapts at least one of a residual transform and inverse residual transform pair and a quantization procedure de-quantization procedure pair depending on a selected parametric model-based partition.
55. A video signal structure for video encoding, comprising:
image data corresponding to pictures encoded by adaptively partitioning at least portions of the pictures responsive to at least one parametric model, wherein the at least one parametric model involves at least one of implicit and explicit formulation of at least one curve.
US12/309,540 2006-08-02 2007-07-31 Adaptive Geometric Partitioning For Video Encoding Abandoned US20090196342A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/309,540 US20090196342A1 (en) 2006-08-02 2007-07-31 Adaptive Geometric Partitioning For Video Encoding

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US83499306P 2006-08-02 2006-08-02
US12/309,540 US20090196342A1 (en) 2006-08-02 2007-07-31 Adaptive Geometric Partitioning For Video Encoding
PCT/US2007/017118 WO2008016609A2 (en) 2006-08-02 2007-07-31 Adaptive geometric partitioning for video encoding

Publications (1)

Publication Number Publication Date
US20090196342A1 true US20090196342A1 (en) 2009-08-06

Family

ID=38997679

Family Applications (6)

Application Number Title Priority Date Filing Date
US12/309,496 Abandoned US20120177106A1 (en) 2006-08-02 2007-07-31 Methods and apparatus for adaptive geometric partitioning for video decoding
US12/309,540 Abandoned US20090196342A1 (en) 2006-08-02 2007-07-31 Adaptive Geometric Partitioning For Video Encoding
US15/482,191 Abandoned US20170280156A1 (en) 2006-08-02 2017-04-07 Methods and apparatus for adaptive geometric partitioning for video decoding
US17/083,007 Active US11252435B2 (en) 2006-08-02 2020-10-28 Method and apparatus for parametric, model-based, geometric frame partitioning for video coding
US17/568,311 Active US11895327B2 (en) 2006-08-02 2022-01-04 Method and apparatus for parametric, model- based, geometric frame partitioning for video coding
US18/391,517 Pending US20240129524A1 (en) 2006-08-02 2023-12-20 Method and apparatus for parametric, model-based, geometric frame partitioning for video coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/309,496 Abandoned US20120177106A1 (en) 2006-08-02 2007-07-31 Methods and apparatus for adaptive geometric partitioning for video decoding

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/482,191 Abandoned US20170280156A1 (en) 2006-08-02 2017-04-07 Methods and apparatus for adaptive geometric partitioning for video decoding
US17/083,007 Active US11252435B2 (en) 2006-08-02 2020-10-28 Method and apparatus for parametric, model-based, geometric frame partitioning for video coding
US17/568,311 Active US11895327B2 (en) 2006-08-02 2022-01-04 Method and apparatus for parametric, model- based, geometric frame partitioning for video coding
US18/391,517 Pending US20240129524A1 (en) 2006-08-02 2023-12-20 Method and apparatus for parametric, model-based, geometric frame partitioning for video coding

Country Status (7)

Country Link
US (6) US20120177106A1 (en)
EP (2) EP2050279B1 (en)
JP (5) JP2009545919A (en)
KR (2) KR101380580B1 (en)
CN (2) CN101502120B (en)
BR (2) BRPI0715507A2 (en)
WO (2) WO2008016609A2 (en)

Cited By (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240246A1 (en) * 2007-03-28 2008-10-02 Samsung Electronics Co., Ltd. Video encoding and decoding method and apparatus
US20080253457A1 (en) * 2007-04-10 2008-10-16 Moore Darnell J Method and system for rate distortion optimization
US20080304569A1 (en) * 2007-06-08 2008-12-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image using object boundary based partition
US20090268810A1 (en) * 2006-09-29 2009-10-29 Congxia Dai Geometric intra prediction
US20100195715A1 (en) * 2007-10-15 2010-08-05 Huawei Technologies Co., Ltd. Method and apparatus for adaptive frame prediction
US20100278267A1 (en) * 2008-01-07 2010-11-04 Thomson Licensing Methods and apparatus for video encoding and decoding using parametric filtering
US20110103475A1 (en) * 2008-07-02 2011-05-05 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US20110200110A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Smoothing overlapped regions resulting from geometric motion partitioning
US20110249734A1 (en) * 2010-04-09 2011-10-13 Segall Christopher A Methods and Systems for Intra Prediction
US20110249743A1 (en) * 2010-04-09 2011-10-13 Jie Zhao Super-block for high performance video coding
US20110274158A1 (en) * 2010-05-10 2011-11-10 Mediatek Inc. Method and Apparatus of Adaptive Loop Filtering
US20110310976A1 (en) * 2010-06-17 2011-12-22 Qualcomm Incorporated Joint Coding of Partition Information in Video Coding
WO2011139476A3 (en) * 2010-05-06 2012-03-08 Intel Corporation Boundary detection in media streams
US20120106647A1 (en) * 2009-07-03 2012-05-03 France Telecom Prediction of a movement vector of a current image partition having a different geometric shape or size from that of at least one adjacent reference image partition and encoding and decoding using one such prediction
US20120106627A1 (en) * 2009-06-26 2012-05-03 Thomson Licensing Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
US20120236943A1 (en) * 2007-07-31 2012-09-20 Samsung Electronics Co., Ltd. Video encoding and decoding method and apparatus using weighted prediction
US20130034157A1 (en) * 2010-04-13 2013-02-07 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Inheritance in sample array multitree subdivision
US20130094580A1 (en) * 2011-10-18 2013-04-18 Qualcomm Incorporated Detecting availabilities of neighboring video units for video coding
US20130148726A1 (en) * 2007-01-18 2013-06-13 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding based on intra prediction
US20130188884A1 (en) * 2010-09-30 2013-07-25 Electronics And Telecommunications Research Institute Method for encoding and decoding images and apparatus for encoding and decoding using same
US20130202030A1 (en) * 2010-07-29 2013-08-08 Sk Telecom Co., Ltd. Method and device for image encoding/decoding using block split prediction
US20130215959A1 (en) * 2011-01-03 2013-08-22 Media Tek Inc. Method of Filter-Unit Based In-Loop Filtering
US8619857B2 (en) 2010-04-09 2013-12-31 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
GB2504069A (en) * 2012-07-12 2014-01-22 Canon Kk Intra-prediction using a parametric displacement transformation
US8644375B2 (en) 2010-04-09 2014-02-04 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
CN103634612A (en) * 2012-08-22 2014-03-12 成都爱斯顿测控技术有限公司 Industrial-grade audio and video processing platform
US20140233647A1 (en) * 2011-09-22 2014-08-21 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US8861617B2 (en) 2010-10-05 2014-10-14 Mediatek Inc Method and apparatus of region-based adaptive loop filtering
US20140348231A1 (en) * 2009-09-04 2014-11-27 STMicoelectronics International N.V. System and method for object based parametric video coding
US8917763B2 (en) 2011-03-07 2014-12-23 Panasonic Corporation Motion compensation apparatus, video coding apparatus, video decoding apparatus, motion compensation method, program, and integrated circuit
US8964833B2 (en) 2011-07-19 2015-02-24 Qualcomm Incorporated Deblocking of non-square blocks for video coding
US8989256B2 (en) 2011-05-25 2015-03-24 Google Inc. Method and apparatus for using segmentation-based coding of prediction information
US20150189310A1 (en) * 2010-05-26 2015-07-02 Newracom Inc. Method of predicting motion vectors in video codec in which multiple references are allowed, and motion vector encoding/decoding apparatus using the same
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US20150229949A1 (en) * 2009-05-29 2015-08-13 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method
US9185429B1 (en) 2012-04-30 2015-11-10 Google Inc. Video encoding and decoding using un-equal error protection
US9225985B2 (en) 2011-01-14 2015-12-29 Siemens Aktiengesellschaft Methods and devices for forming a prediction value
US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
US9247251B1 (en) 2013-07-26 2016-01-26 Google Inc. Right-edge extension for quad-tree intra-prediction
US20160050440A1 (en) * 2014-08-15 2016-02-18 Ying Liu Low-complexity depth map encoder with quad-tree partitioned compressed sensing
US9332276B1 (en) 2012-08-09 2016-05-03 Google Inc. Variable-sized super block based direct prediction mode
US9332273B2 (en) 2011-11-08 2016-05-03 Samsung Electronics Co., Ltd. Method and apparatus for motion vector determination in video encoding or decoding
US9338476B2 (en) 2011-05-12 2016-05-10 Qualcomm Incorporated Filtering blockiness artifacts for video coding
US9350988B1 (en) 2012-11-20 2016-05-24 Google Inc. Prediction mode-based block ordering in video coding
KR101624659B1 (en) 2015-01-05 2016-05-27 삼성전자주식회사 Method and apparatus for decoding video
KR101624660B1 (en) 2015-04-14 2016-05-27 삼성전자주식회사 Method and apparatus for decoding video
US9374591B2 (en) 2009-08-17 2016-06-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
US9426487B2 (en) 2010-04-09 2016-08-23 Huawei Technologies Co., Ltd. Video coding and decoding methods and apparatuses
US9531990B1 (en) 2012-01-21 2016-12-27 Google Inc. Compound prediction using multiple sources or prediction modes
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
TWI573439B (en) * 2016-05-03 2017-03-01 上海兆芯集成電路有限公司 Methods for rdo (rate-distortion optimization) based on curve fittings and apparatuses using the same
US9591335B2 (en) 2010-04-13 2017-03-07 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US9609343B1 (en) * 2013-12-20 2017-03-28 Google Inc. Video coding using compound prediction
US9621369B2 (en) 2011-11-29 2017-04-11 Samsung Electronics Co., Ltd. Method and system for providing user interface for device control
US9628790B1 (en) 2013-01-03 2017-04-18 Google Inc. Adaptive composite intra prediction for image and video compression
US20170134750A1 (en) * 2014-06-19 2017-05-11 Sharp Kabushiki Kaisha Image decoding device, image coding device, and predicted image generation device
US9681128B1 (en) 2013-01-31 2017-06-13 Google Inc. Adaptive pre-transform scanning patterns for video and image compression
US9813700B1 (en) 2012-03-09 2017-11-07 Google Inc. Adaptively encoding a media stream with compound prediction
US9826229B2 (en) 2012-09-29 2017-11-21 Google Technology Holdings LLC Scan pattern determination from base layer pixel information for scalable extension
US9883190B2 (en) 2012-06-29 2018-01-30 Google Inc. Video encoding using variance for selecting an encoding mode
US10129567B2 (en) * 2011-04-21 2018-11-13 Intellectual Discovery Co., Ltd. Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering
US10178396B2 (en) 2009-09-04 2019-01-08 Stmicroelectronics International N.V. Object tracking
US20190089962A1 (en) 2010-04-13 2019-03-21 Ge Video Compression, Llc Inter-plane prediction
US10248966B2 (en) 2010-04-13 2019-04-02 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10382772B1 (en) * 2018-07-02 2019-08-13 Tencent America LLC Method and apparatus for video coding
WO2019246535A1 (en) * 2018-06-22 2019-12-26 Op Solutions, Llc Block level geometric partitioning
WO2020072494A1 (en) 2018-10-01 2020-04-09 Op Solutions, Llc Methods and systems of exponential partitioning
CN111147855A (en) * 2018-11-02 2020-05-12 北京字节跳动网络技术有限公司 Coordination between geometric partitioning prediction modes and other tools
US10708625B2 (en) * 2018-06-12 2020-07-07 Alibaba Group Holding Limited Adaptive deblocking filter
US10742972B1 (en) * 2019-03-08 2020-08-11 Tencent America LLC Merge list construction in triangular prediction
US10742973B2 (en) * 2015-05-12 2020-08-11 Samsung Electronics Co., Ltd. Image decoding method for performing intra prediction and device thereof, and image encoding method for performing intra prediction and device thereof
US10771808B2 (en) 2017-02-06 2020-09-08 Huawei Technologies Co., Ltd. Video encoder and decoder for predictive partitioning
US10812803B2 (en) * 2010-12-13 2020-10-20 Electronics And Telecommunications Research Institute Intra prediction method and apparatus
CN111886861A (en) * 2018-02-22 2020-11-03 Lg电子株式会社 Image decoding method and apparatus according to block division structure in image coding system
US10965943B2 (en) * 2016-12-28 2021-03-30 Sony Corporation Image processing apparatus and image processing method
WO2021101791A1 (en) * 2019-11-21 2021-05-27 Tencent America LLC Geometric partitioning mode in video coding
US11039137B2 (en) 2017-06-30 2021-06-15 Huawei Technologies Co., Ltd. Encoder, decoder, computer program and computer program product for processing a frame of a video sequence
US11044491B2 (en) * 2018-01-30 2021-06-22 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11070800B2 (en) * 2016-10-26 2021-07-20 Intellectual Discovery Co., Ltd. Video coding method and apparatus using any types of block partitioning
US11089296B2 (en) 2016-09-30 2021-08-10 Interdigital Madison Patent Holdings, Sas Method and apparatus for omnidirectional video coding and decoding with adaptive intra prediction
US20210297670A1 (en) * 2018-12-21 2021-09-23 Beijing Bytedance Network Technology Co., Ltd. Intra prediction using polynomial model
US11159793B2 (en) 2017-10-16 2021-10-26 Digitalinsights Inc. Method, device, and recording medium storing bit stream, for encoding/decoding image
CN113647105A (en) * 2019-01-28 2021-11-12 Op方案有限责任公司 Inter prediction for exponential partitions
US11190777B2 (en) * 2019-06-30 2021-11-30 Tencent America LLC Method and apparatus for video coding
US20220021883A1 (en) * 2019-06-21 2022-01-20 Huawei Technologies Co.,Ltd. Chroma sample weight derivation for geometric partition mode
CN114128295A (en) * 2019-07-14 2022-03-01 北京字节跳动网络技术有限公司 Construction of candidate list of geometric partitioning mode in video coding and decoding
US11317090B2 (en) * 2019-08-12 2022-04-26 Tencent America LLC Method and apparatus for video coding
US11375243B2 (en) * 2019-07-17 2022-06-28 Tencent America LLC Method and apparatus for video coding
CN115118995A (en) * 2017-08-22 2022-09-27 松下电器(美国)知识产权公司 Image encoder, image decoder, and non-transitory computer readable medium
US11695922B2 (en) 2019-01-28 2023-07-04 Op Solutions, Llc Inter prediction in geometric partitioning with an adaptive number of regions
US20230237612A1 (en) * 2022-01-26 2023-07-27 Intuitive Research And Technology Corporation Determining volume of a selectable region using extended reality

Families Citing this family (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101502120B (en) 2006-08-02 2012-08-29 汤姆逊许可公司 Adaptive geometric partitioning method and device for video decoding
US7756348B2 (en) * 2006-10-30 2010-07-13 Hewlett-Packard Development Company, L.P. Method for decomposing a video sequence frame
KR101658669B1 (en) * 2007-04-12 2016-09-21 톰슨 라이센싱 Methods and apparatus for fast geometric mode decision in a video encoder
CN101822056B (en) * 2007-10-12 2013-01-02 汤姆逊许可公司 Methods and apparatus for video encoding and decoding geometrically partitioned bi-predictive mode partitions
KR101496324B1 (en) * 2007-10-17 2015-02-26 삼성전자주식회사 Method and apparatus for video encoding, and method and apparatus for video decoding
US9967590B2 (en) 2008-04-10 2018-05-08 Qualcomm Incorporated Rate-distortion defined interpolation for video coding based on fixed filter or adaptive filter
US8831086B2 (en) 2008-04-10 2014-09-09 Qualcomm Incorporated Prediction techniques for interpolation in video coding
US8842731B2 (en) * 2008-04-15 2014-09-23 Orange Coding and decoding of an image or of a sequence of images sliced into partitions of pixels of linear form
US8787693B2 (en) 2008-04-15 2014-07-22 Orange Prediction of images by prior determination of a family of reference pixels, coding and decoding using such a prediction
US8325796B2 (en) 2008-09-11 2012-12-04 Google Inc. System and method for video coding using adaptive segmentation
KR101597253B1 (en) * 2008-10-27 2016-02-24 에스케이 텔레콤주식회사 / Video encoding/decoding apparatus Adaptive Deblocking filter and deblocing filtering method and Recording Medium therefor
JP2012089905A (en) * 2009-01-13 2012-05-10 Hitachi Ltd Image encoder and image encoding method, and image decoder and image decoding method
CN102349298B (en) * 2009-03-12 2016-08-03 汤姆森特许公司 The method and apparatus selected for the filter parameter based on region of de-artifact filtering
PT3567852T (en) * 2009-03-23 2023-01-11 Ntt Docomo Inc Image predictive decoding device and image predictive decoding method
US9357221B2 (en) * 2009-07-23 2016-05-31 Thomson Licensing Methods and apparatus for adaptive transform selection for video encoding and decoding
KR101456498B1 (en) * 2009-08-14 2014-10-31 삼성전자주식회사 Method and apparatus for video encoding considering scanning order of coding units with hierarchical structure, and method and apparatus for video decoding considering scanning order of coding units with hierarchical structure
JP2011049740A (en) * 2009-08-26 2011-03-10 Sony Corp Image processing apparatus and method
KR101629475B1 (en) * 2009-09-23 2016-06-22 삼성전자주식회사 Device and method for coding of depth image using geometry based block partitioning intra prediction
CN102714741B (en) 2009-10-14 2016-01-20 汤姆森特许公司 The method and apparatus of depth map process
KR101484280B1 (en) * 2009-12-08 2015-01-20 삼성전자주식회사 Method and apparatus for video encoding by motion prediction using arbitrary partition, and method and apparatus for video decoding by motion compensation using arbitrary partition
USRE47243E1 (en) 2009-12-09 2019-02-12 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
KR101700358B1 (en) 2009-12-09 2017-01-26 삼성전자주식회사 Method and apparatus for encoding video, and method and apparatus for decoding video
KR101675118B1 (en) 2010-01-14 2016-11-10 삼성전자 주식회사 Method and apparatus for video encoding considering order of skip and split, and method and apparatus for video decoding considering order of skip and split
US9020043B2 (en) 2010-05-10 2015-04-28 Google Inc. Pathway indexing in flexible partitioning
TWI600318B (en) * 2010-05-18 2017-09-21 Sony Corp Image processing apparatus and image processing method
JP2012023597A (en) * 2010-07-15 2012-02-02 Sony Corp Image processing device and image processing method
KR101903643B1 (en) 2010-07-20 2018-10-02 가부시키가이샤 엔.티.티.도코모 Image prediction decoding device and image prediction decoding method
EP2421266A1 (en) * 2010-08-19 2012-02-22 Thomson Licensing Method for reconstructing a current block of an image and corresponding encoding method, corresponding devices as well as storage medium carrying an images encoded in a bit stream
JP2012080369A (en) * 2010-10-01 2012-04-19 Sony Corp Image processing apparatus and image processing method
KR101712156B1 (en) * 2010-12-06 2017-03-06 에스케이 텔레콤주식회사 Method and Apparatus for Image Encoding/Decoding by Inter Prediction Using Arbitrary Shape Block
CN102611884B (en) 2011-01-19 2014-07-09 华为技术有限公司 Image encoding and decoding method and encoding and decoding device
US8718389B2 (en) * 2011-04-13 2014-05-06 Huawei Technologies Co., Ltd. Image encoding and decoding methods and related devices
KR20130050149A (en) * 2011-11-07 2013-05-15 오수미 Method for generating prediction block in inter prediction mode
EP2777286B1 (en) 2011-11-11 2017-01-04 GE Video Compression, LLC Effective wedgelet partition coding
KR101663394B1 (en) 2011-11-11 2016-10-06 지이 비디오 컴프레션, 엘엘씨 Adaptive partition coding
KR20230098693A (en) * 2011-11-11 2023-07-04 지이 비디오 컴프레션, 엘엘씨 Effective prediction using partition coding
US20130136180A1 (en) * 2011-11-29 2013-05-30 Futurewei Technologies, Inc. Unified Partitioning Structures and Signaling Methods for High Efficiency Video Coding
US20130287109A1 (en) * 2012-04-29 2013-10-31 Qualcomm Incorporated Inter-layer prediction through texture segmentation for video coding
CN102833551B (en) * 2012-09-25 2014-10-29 中南大学 Slice level coding/decoding end combined time minimization method
WO2014050971A1 (en) 2012-09-28 2014-04-03 日本電信電話株式会社 Intra-prediction coding method, intra-prediction decoding method, intra-prediction coding device, intra-prediction decoding device, programs therefor and recording mediums on which programs are recorded
KR101369174B1 (en) * 2013-03-20 2014-03-10 에스케이텔레콤 주식회사 High Definition Video Encoding/Decoding Method and Apparatus
CN103313053B (en) * 2013-05-14 2016-05-25 浙江万里学院 A kind of shape coding method towards visual object
CN105284110B (en) * 2013-07-31 2019-04-23 太阳专利托管公司 Image encoding method and picture coding device
US9392272B1 (en) 2014-06-02 2016-07-12 Google Inc. Video coding using adaptive source variance based partitioning
CN104036510A (en) * 2014-06-20 2014-09-10 常州艾格勒信息技术有限公司 Novel image segmentation system and method
US9578324B1 (en) 2014-06-27 2017-02-21 Google Inc. Video coding using statistical-based spatially differentiated partitioning
US20170332092A1 (en) * 2014-10-31 2017-11-16 Samsung Electronics Co., Ltd. Method and device for encoding or decoding image
JP6510902B2 (en) * 2015-06-15 2019-05-08 日本放送協会 Encoding device, decoding device and program
CN116916005A (en) 2016-04-29 2023-10-20 世宗大学校产学协力团 Video signal encoding/decoding method and apparatus
KR102365937B1 (en) * 2016-04-29 2022-02-22 세종대학교산학협력단 Method and apparatus for encoding/decoding a video signal
CN117221583A (en) * 2016-06-22 2023-12-12 Lx 半导体科技有限公司 Image encoding/decoding apparatus and apparatus for transmitting image data
CN109565592B (en) 2016-06-24 2020-11-17 华为技术有限公司 Video coding device and method using partition-based video coding block partitioning
CN109565595B (en) 2016-06-24 2021-06-22 华为技术有限公司 Video coding device and method using partition-based video coding block partitioning
US20190238888A1 (en) 2017-07-17 2019-08-01 Ki Baek Kim Image data encoding/decoding method and apparatus
KR20190052129A (en) 2016-10-04 2019-05-15 김기백 Image data encoding / decoding method and apparatus
CN110870308A (en) * 2017-06-30 2020-03-06 夏普株式会社 System and method for converting pictures into video blocks for video encoding by geometrically adaptive block partitioning
CA3151032A1 (en) 2017-06-30 2019-01-03 Huawei Technologies Co., Ltd. Motion vector determination for video frame block inter-prediction
EP3454556A1 (en) 2017-09-08 2019-03-13 Thomson Licensing Method and apparatus for video encoding and decoding using pattern-based block filtering
WO2019069782A1 (en) * 2017-10-06 2019-04-11 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method and decoding method
US10284844B1 (en) 2018-07-02 2019-05-07 Tencent America LLC Method and apparatus for video coding
MX2021002557A (en) * 2018-09-07 2021-04-29 Panasonic Ip Corp America System and method for video coding.
WO2020094059A1 (en) 2018-11-06 2020-05-14 Beijing Bytedance Network Technology Co., Ltd. Complexity reduction in parameter derivation for intra prediction
CN112219400B (en) 2018-11-06 2024-03-26 北京字节跳动网络技术有限公司 Position dependent storage of motion information
WO2020103934A1 (en) 2018-11-22 2020-05-28 Beijing Bytedance Network Technology Co., Ltd. Construction method for inter prediction with geometry partition
CN113170122B (en) * 2018-12-01 2023-06-27 北京字节跳动网络技术有限公司 Parameter derivation for intra prediction
WO2020114404A1 (en) * 2018-12-03 2020-06-11 Beijing Bytedance Network Technology Co., Ltd. Pruning method in different prediction mode
CN112584170B (en) 2018-12-28 2022-04-26 Hangzhou Hikvision Digital Technology Co., Ltd. Encoding and decoding method and device therefor
WO2020248105A1 (en) * 2019-06-10 2020-12-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Predicted value determination method, coder and computer storage medium
AU2020294669B2 (en) * 2019-06-21 2024-03-28 Huawei Technologies Co., Ltd. An encoder, a decoder and corresponding methods for sub-block partitioning mode
WO2021015581A1 (en) * 2019-07-23 2021-01-28 Electronics and Telecommunications Research Institute Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning
MX2022003940A (en) 2019-10-03 2022-04-25 Huawei Tech Co Ltd Coding process for geometric partition mode.
US11317094B2 (en) * 2019-12-24 2022-04-26 Tencent America LLC Method and apparatus for video coding using geometric partitioning mode
MX2022007973A (en) * 2019-12-30 2022-07-05 Fg innovation co ltd Device and method for coding video data.
CN113473141A (en) * 2020-03-31 2021-10-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Inter prediction method, encoder, decoder, and computer-readable storage medium
WO2022047099A1 (en) * 2020-08-28 2022-03-03 Op Solutions, Llc Methods and systems of adaptive geometric partitioning
WO2022047117A1 (en) * 2020-08-28 2022-03-03 Op Solutions, Llc Methods and systems of adaptive geometric partitioning
WO2023158765A1 (en) * 2022-02-16 2023-08-24 Beijing Dajia Internet Information Technology Co., Ltd. Methods and devices for geometric partitioning mode split modes reordering with pre-defined modes order
WO2023224279A1 (en) * 2022-05-16 2023-11-23 Hyundai Motor Company Method and apparatus for video coding using geometric motion prediction

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2500439B2 (en) * 1993-05-14 1996-05-29 NEC Corporation Predictive coding method for moving images
JPH08205172A (en) * 1995-01-26 1996-08-09 Mitsubishi Electric Corp Area division type motion predicting circuit, area division type motion predicting circuit incorporated image encoding device, and area division type motion predictive image decoding device
JPH0965338A (en) * 1995-08-28 1997-03-07 Graphics Commun Lab:Kk Image coder and image decoder
JP3392628B2 (en) 1996-03-29 2003-03-31 Fujitsu Limited Outline extraction method and system
ES2170744T3 (en) * 1996-05-28 2002-08-16 Matsushita Electric Ind Co Ltd Prediction and decoding device
EP2096872B1 (en) * 2001-09-14 2014-11-12 NTT DoCoMo, Inc. Coding method, decoding method, coding apparatus, decoding apparatus, image processing system, coding program, and decoding program
WO2003092297A1 (en) * 2002-04-23 2003-11-06 Nokia Corporation Method and device for indicating quantizer parameters in a video coding system
US20040091047A1 (en) 2002-11-11 2004-05-13 Sony Corporation Method and apparatus for nonlinear multiple motion model and moving boundary extraction
KR20050105271A (en) * 2003-03-03 2005-11-03 Koninklijke Philips Electronics N.V. Video encoding
KR100513014B1 (en) * 2003-05-22 2005-09-05 LG Electronics Inc. Video communication system and video coding method
JP2005123732A (en) * 2003-10-14 2005-05-12 Matsushita Electric Ind Co Ltd Apparatus and method for deblocking filter processing
JP4142563B2 (en) * 2003-12-12 2008-09-03 NTT DoCoMo, Inc. Moving picture coding apparatus, moving picture coding method, and moving picture coding program
JP4313710B2 (en) * 2004-03-25 2009-08-12 Panasonic Corporation Image encoding method and image decoding method
CN100473161C (en) * 2005-09-09 2009-03-25 Hisense Group Co., Ltd. Fast parallel 4x4 discrete cosine transform device based on AVS and method thereof
CN101502120B (en) 2006-08-02 2012-08-29 Thomson Licensing Adaptive geometric partitioning method and device for video decoding
JP6327003B2 (en) 2014-06-20 2018-05-23 Mitsubishi Chemical Corporation Method for producing iminodiacetic acid type chelating resin

Cited By (256)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090268810A1 (en) * 2006-09-29 2009-10-29 Congxia Dai Geometric intra prediction
US20130148726A1 (en) * 2007-01-18 2013-06-13 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding based on intra prediction
US20080240246A1 (en) * 2007-03-28 2008-10-02 Samsung Electronics Co., Ltd. Video encoding and decoding method and apparatus
US8160150B2 (en) * 2007-04-10 2012-04-17 Texas Instruments Incorporated Method and system for rate distortion optimization
US20080253457A1 (en) * 2007-04-10 2008-10-16 Moore Darnell J Method and system for rate distortion optimization
US20080304569A1 (en) * 2007-06-08 2008-12-11 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding image using object boundary based partition
US20120236943A1 (en) * 2007-07-31 2012-09-20 Samsung Electronics Co., Ltd. Video encoding and decoding method and apparatus using weighted prediction
US20100195715A1 (en) * 2007-10-15 2010-08-05 Huawei Technologies Co., Ltd. Method and apparatus for adaptive frame prediction
US20100278267A1 (en) * 2008-01-07 2010-11-04 Thomson Licensing Methods and apparatus for video encoding and decoding using parametric filtering
US8625672B2 (en) * 2008-01-07 2014-01-07 Thomson Licensing Methods and apparatus for video encoding and decoding using parametric filtering
US20140105296A1 (en) * 2008-07-02 2014-04-17 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US9402079B2 (en) 2008-07-02 2016-07-26 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US9118913B2 (en) 2008-07-02 2015-08-25 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US20140105287A1 (en) * 2008-07-02 2014-04-17 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US8837590B2 (en) * 2008-07-02 2014-09-16 Samsung Electronics Co., Ltd. Image decoding device which obtains predicted value of coding unit using weighted average
US20110103475A1 (en) * 2008-07-02 2011-05-05 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US8902979B2 (en) * 2008-07-02 2014-12-02 Samsung Electronics Co., Ltd. Image decoding device which obtains predicted value of coding unit using weighted average
US8879626B2 (en) * 2008-07-02 2014-11-04 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device therefor
US9924190B2 (en) * 2009-05-29 2018-03-20 Mitsubishi Electric Corporation Optimized image decoding device and method for a predictive encoded bit stream
US20150304677A1 (en) * 2009-05-29 2015-10-22 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method
US20150229949A1 (en) * 2009-05-29 2015-08-13 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method
US9930355B2 (en) * 2009-05-29 2018-03-27 Mitsubishi Electric Corporation Optimized image decoding device and method for a predictive encoded bit stream
US9930356B2 (en) * 2009-05-29 2018-03-27 Mitsubishi Electric Corporation Optimized image decoding device and method for a predictive encoded bit stream
US20120106627A1 (en) * 2009-06-26 2012-05-03 Thomson Licensing Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
US9326003B2 (en) * 2009-06-26 2016-04-26 Thomson Licensing Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
US10051283B2 (en) * 2009-07-03 2018-08-14 France Telecom Prediction of a movement vector of a current image partition having a different geometric shape or size from that of at least one adjacent reference image partition and encoding and decoding using one such prediction
US20120106647A1 (en) * 2009-07-03 2012-05-03 France Telecom Prediction of a movement vector of a current image partition having a different geometric shape or size from that of at least one adjacent reference image partition and encoding and decoding using one such prediction
US9374591B2 (en) 2009-08-17 2016-06-21 Samsung Electronics Co., Ltd. Method and apparatus for encoding video, and method and apparatus for decoding video
US10178396B2 (en) 2009-09-04 2019-01-08 Stmicroelectronics International N.V. Object tracking
US9813731B2 (en) * 2009-09-04 2017-11-07 STMicroelectronics International N.V. System and method for object based parametric video coding
US20140348231A1 (en) * 2009-09-04 2014-11-27 STMicroelectronics International N.V. System and method for object based parametric video coding
US20110200097A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Adaptive transform size selection for geometric motion partitioning
US20110200110A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Smoothing overlapped regions resulting from geometric motion partitioning
US20110200109A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Fixed point implementation for geometric motion partitioning
US9020030B2 (en) 2010-02-18 2015-04-28 Qualcomm Incorporated Smoothing overlapped regions resulting from geometric motion partitioning
US10250908B2 (en) * 2010-02-18 2019-04-02 Qualcomm Incorporated Adaptive transform size selection for geometric motion partitioning
US20110200111A1 (en) * 2010-02-18 2011-08-18 Qualcomm Incorporated Encoding motion vectors for geometric motion partitioning
US20170201770A1 (en) * 2010-02-18 2017-07-13 Qualcomm Incorporated Adaptive transform size selection for geometric motion partitioning
US9654776B2 (en) * 2010-02-18 2017-05-16 Qualcomm Incorporated Adaptive transform size selection for geometric motion partitioning
US8879632B2 (en) 2010-02-18 2014-11-04 Qualcomm Incorporated Fixed point implementation for geometric motion partitioning
US20110249734A1 (en) * 2010-04-09 2011-10-13 Segall Christopher A Methods and Systems for Intra Prediction
US8619857B2 (en) 2010-04-09 2013-12-31 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
US10123041B2 (en) 2010-04-09 2018-11-06 Huawei Technologies Co., Ltd. Video coding and decoding methods and apparatuses
US8644375B2 (en) 2010-04-09 2014-02-04 Sharp Laboratories Of America, Inc. Methods and systems for intra prediction
US9426487B2 (en) 2010-04-09 2016-08-23 Huawei Technologies Co., Ltd. Video coding and decoding methods and apparatuses
US9955184B2 (en) 2010-04-09 2018-04-24 Huawei Technologies Co., Ltd. Video coding and decoding methods and apparatuses
US20110249743A1 (en) * 2010-04-09 2011-10-13 Jie Zhao Super-block for high performance video coding
US9591335B2 (en) 2010-04-13 2017-03-07 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10432979B2 (en) 2010-04-13 2019-10-01 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10805645B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10803483B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11900415B2 (en) 2010-04-13 2024-02-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10803485B2 (en) 2010-04-13 2020-10-13 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10848767B2 (en) 2010-04-13 2020-11-24 Ge Video Compression, Llc Inter-plane prediction
US11856240B1 (en) 2010-04-13 2023-12-26 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10771822B2 (en) 2010-04-13 2020-09-08 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10764608B2 (en) 2010-04-13 2020-09-01 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US20230412850A1 (en) * 2010-04-13 2023-12-21 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11810019B2 (en) 2010-04-13 2023-11-07 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10748183B2 (en) 2010-04-13 2020-08-18 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11785264B2 (en) * 2010-04-13 2023-10-10 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US10855991B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US11778241B2 (en) 2010-04-13 2023-10-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10855995B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US10719850B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11765362B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane prediction
US11765363B2 (en) 2010-04-13 2023-09-19 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US10721495B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10721496B2 (en) 2010-04-13 2020-07-21 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11734714B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10708629B2 (en) * 2010-04-13 2020-07-07 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11910029B2 (en) 2010-04-13 2024-02-20 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11736738B2 (en) 2010-04-13 2023-08-22 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using subdivision
US20160309197A1 (en) * 2010-04-13 2016-10-20 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
CN106060561A (en) * 2010-04-13 2016-10-26 Ge Video Compression, Llc Decoder, array reconstruction method, encoder, encoding method, and data stream
US11611761B2 (en) 2010-04-13 2023-03-21 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US10856013B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11553212B2 (en) * 2010-04-13 2023-01-10 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10708628B2 (en) 2010-04-13 2020-07-07 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10694218B2 (en) * 2010-04-13 2020-06-23 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11546642B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10687085B2 (en) * 2010-04-13 2020-06-16 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US9596488B2 (en) 2010-04-13 2017-03-14 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US11546641B2 (en) 2010-04-13 2023-01-03 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10687086B2 (en) 2010-04-13 2020-06-16 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10681390B2 (en) 2010-04-13 2020-06-09 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US20220217419A1 (en) * 2010-04-13 2022-07-07 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20170134761A1 (en) 2010-04-13 2017-05-11 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10672028B2 (en) 2010-04-13 2020-06-02 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US11983737B2 (en) 2010-04-13 2024-05-14 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10855990B2 (en) 2010-04-13 2020-12-01 Ge Video Compression, Llc Inter-plane prediction
US11102518B2 (en) 2010-04-13 2021-08-24 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10621614B2 (en) 2010-04-13 2020-04-14 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10863208B2 (en) 2010-04-13 2020-12-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11087355B2 (en) 2010-04-13 2021-08-10 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US9807427B2 (en) 2010-04-13 2017-10-31 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20210211743A1 (en) 2010-04-13 2021-07-08 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10873749B2 (en) 2010-04-13 2020-12-22 Ge Video Compression, Llc Inter-plane reuse of coding parameters
US11051047B2 (en) 2010-04-13 2021-06-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10460344B2 (en) 2010-04-13 2019-10-29 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10448060B2 (en) * 2010-04-13 2019-10-15 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US11037194B2 (en) 2010-04-13 2021-06-15 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10440400B2 (en) 2010-04-13 2019-10-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US11910030B2 (en) * 2010-04-13 2024-02-20 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10432978B2 (en) 2010-04-13 2019-10-01 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10432980B2 (en) 2010-04-13 2019-10-01 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10880581B2 (en) 2010-04-13 2020-12-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20190197579A1 (en) 2010-04-13 2019-06-27 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US10003828B2 (en) 2010-04-13 2018-06-19 Ge Video Compression, Llc Inheritance in sample array multitree division
US10038920B2 (en) * 2010-04-13 2018-07-31 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US20180220164A1 (en) * 2010-04-13 2018-08-02 Ge Video Compression, Llc Multitree subdivision and inheritance of coding parameters in a coding block
US20130034157A1 (en) * 2010-04-13 2013-02-07 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Inheritance in sample array multitree subdivision
US10051291B2 (en) * 2010-04-13 2018-08-14 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20190174148A1 (en) * 2010-04-13 2019-06-06 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20180324466A1 (en) 2010-04-13 2018-11-08 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20190164188A1 (en) 2010-04-13 2019-05-30 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
US20190158887A1 (en) * 2010-04-13 2019-05-23 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US10893301B2 (en) 2010-04-13 2021-01-12 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10250913B2 (en) 2010-04-13 2019-04-02 Ge Video Compression, Llc Coding of a spatial sampling of a two-dimensional information signal using sub-division
US10880580B2 (en) 2010-04-13 2020-12-29 Ge Video Compression, Llc Inheritance in sample array multitree subdivision
US20190089962A1 (en) 2010-04-13 2019-03-21 Ge Video Compression, Llc Inter-plane prediction
US10248966B2 (en) 2010-04-13 2019-04-02 Ge Video Compression, Llc Region merging and coding parameter reuse via merging
WO2011139476A3 (en) * 2010-05-06 2012-03-08 Intel Corporation Boundary detection in media streams
US8521006B2 (en) 2010-05-06 2013-08-27 Intel Corporation Boundary detection in media streams
US20170163982A1 (en) * 2010-05-10 2017-06-08 Hfi Innovation Inc. Method and Apparatus of Adaptive Loop Filtering
US20110274158A1 (en) * 2010-05-10 2011-11-10 Mediatek Inc. Method and Apparatus of Adaptive Loop Filtering
US9615093B2 (en) 2010-05-10 2017-04-04 Hfi Innovation Inc. Method and apparatus of adaptive loop filtering
US20130259117A1 (en) * 2010-05-10 2013-10-03 Mediatek Inc. Method and Apparatus of Adaptive Loop Filtering
US9998737B2 (en) * 2010-05-10 2018-06-12 Hfi Innovation Inc. Method and apparatus of adaptive loop filtering
US9094658B2 (en) * 2010-05-10 2015-07-28 Mediatek Inc. Method and apparatus of adaptive loop filtering
US9154778B2 (en) * 2010-05-10 2015-10-06 Mediatek Inc. Method and apparatus of adaptive loop filtering
US9781441B2 (en) * 2010-05-26 2017-10-03 Intellectual Value, Inc. Method for encoding and decoding coding unit
US20150189310A1 (en) * 2010-05-26 2015-07-02 Newracom Inc. Method of predicting motion vectors in video codec in which multiple references are allowed, and motion vector encoding/decoding apparatus using the same
US10142649B2 (en) 2010-05-26 2018-11-27 Hangzhou Hikvision Digital Technology Co., Ltd. Method for encoding and decoding coding unit
US20110310976A1 (en) * 2010-06-17 2011-12-22 Qualcomm Incorporated Joint Coding of Partition Information in Video Coding
US9973750B2 (en) * 2010-07-29 2018-05-15 Sk Telecom Co., Ltd. Method and device for image encoding/decoding using block split prediction
US20130202030A1 (en) * 2010-07-29 2013-08-08 Sk Telecom Co., Ltd. Method and device for image encoding/decoding using block split prediction
US20130188884A1 (en) * 2010-09-30 2013-07-25 Electronics And Telecommunications Research Institute Method for encoding and decoding images and apparatus for encoding and decoding using same
US20160044327A1 (en) * 2010-09-30 2016-02-11 Electronics And Telecommunications Research Institute Method for encoding and decoding images and apparatus for encoding and decoding using same
US9510010B2 (en) * 2010-09-30 2016-11-29 Electronics And Telecommunications Research Institute Method for decoding images based upon partition information determinations and apparatus for decoding using same
US9202289B2 (en) * 2010-09-30 2015-12-01 Electronics And Telecommunications Research Institute Method for coding and decoding target block partition information using information about neighboring blocks
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
US8861617B2 (en) 2010-10-05 2014-10-14 Mediatek Inc Method and apparatus of region-based adaptive loop filtering
US10812803B2 (en) * 2010-12-13 2020-10-20 Electronics And Telecommunications Research Institute Intra prediction method and apparatus
US10567751B2 (en) 2011-01-03 2020-02-18 Hfi Innovation Inc. Method of filter-unit based in-loop filtering
US9877019B2 (en) * 2011-01-03 2018-01-23 Hfi Innovation Inc. Method of filter-unit based in-loop filtering
US20130215959A1 (en) * 2011-01-03 2013-08-22 Media Tek Inc. Method of Filter-Unit Based In-Loop Filtering
US9225985B2 (en) 2011-01-14 2015-12-29 Siemens Aktiengesellschaft Methods and devices for forming a prediction value
US8917763B2 (en) 2011-03-07 2014-12-23 Panasonic Corporation Motion compensation apparatus, video coding apparatus, video decoding apparatus, motion compensation method, program, and integrated circuit
US10785503B2 (en) 2011-04-21 2020-09-22 Intellectual Discovery Co., Ltd. Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering
US11381844B2 (en) 2011-04-21 2022-07-05 Dolby Laboratories Licensing Corporation Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering
US10129567B2 (en) * 2011-04-21 2018-11-13 Intellectual Discovery Co., Ltd. Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering
US9338476B2 (en) 2011-05-12 2016-05-10 Qualcomm Incorporated Filtering blockiness artifacts for video coding
US8989256B2 (en) 2011-05-25 2015-03-24 Google Inc. Method and apparatus for using segmentation-based coding of prediction information
US8964833B2 (en) 2011-07-19 2015-02-24 Qualcomm Incorporated Deblocking of non-square blocks for video coding
US11412252B2 (en) 2011-09-22 2022-08-09 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US11743494B2 (en) 2011-09-22 2023-08-29 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US10791337B2 (en) 2011-09-22 2020-09-29 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US20140233647A1 (en) * 2011-09-22 2014-08-21 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US10321154B2 (en) 2011-09-22 2019-06-11 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US9571834B2 (en) * 2011-09-22 2017-02-14 Lg Electronics Inc. Method and apparatus for signaling image information, and decoding method and apparatus using same
US9838692B2 (en) * 2011-10-18 2017-12-05 Qualcomm Incorporated Detecting availabilities of neighboring video units for video coding
US20130094580A1 (en) * 2011-10-18 2013-04-18 Qualcomm Incorporated Detecting availabilities of neighboring video units for video coding
US9332273B2 (en) 2011-11-08 2016-05-03 Samsung Electronics Co., Ltd. Method and apparatus for motion vector determination in video encoding or decoding
US9451282B2 (en) 2011-11-08 2016-09-20 Samsung Electronics Co., Ltd. Method and apparatus for motion vector determination in video encoding or decoding
TWI556648B (en) * 2011-11-08 2016-11-01 Samsung Electronics Co., Ltd. Method for decoding image
US11314379B2 (en) 2011-11-29 2022-04-26 Samsung Electronics Co., Ltd Method and system for providing user interface for device control
US9621369B2 (en) 2011-11-29 2017-04-11 Samsung Electronics Co., Ltd. Method and system for providing user interface for device control
US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
US9531990B1 (en) 2012-01-21 2016-12-27 Google Inc. Compound prediction using multiple sources or prediction modes
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US9813700B1 (en) 2012-03-09 2017-11-07 Google Inc. Adaptively encoding a media stream with compound prediction
US9185429B1 (en) 2012-04-30 2015-11-10 Google Inc. Video encoding and decoding using un-equal error protection
US9883190B2 (en) 2012-06-29 2018-01-30 Google Inc. Video encoding using variance for selecting an encoding mode
GB2504069B (en) * 2012-07-12 2015-09-16 Canon Kk Method and device for predicting an image portion for encoding or decoding of an image
US9779516B2 (en) 2012-07-12 2017-10-03 Canon Kabushiki Kaisha Method and device for predicting an image portion for encoding or decoding of an image
GB2504069A (en) * 2012-07-12 2014-01-22 Canon Kk Intra-prediction using a parametric displacement transformation
US9332276B1 (en) 2012-08-09 2016-05-03 Google Inc. Variable-sized super block based direct prediction mode
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
CN103634612A (en) * 2012-08-22 2014-03-12 成都爱斯顿测控技术有限公司 Industrial-grade audio and video processing platform
US9826229B2 (en) 2012-09-29 2017-11-21 Google Technology Holdings LLC Scan pattern determination from base layer pixel information for scalable extension
US9350988B1 (en) 2012-11-20 2016-05-24 Google Inc. Prediction mode-based block ordering in video coding
US11785226B1 (en) 2013-01-03 2023-10-10 Google Inc. Adaptive composite intra prediction for image and video compression
US9628790B1 (en) 2013-01-03 2017-04-18 Google Inc. Adaptive composite intra prediction for image and video compression
US9681128B1 (en) 2013-01-31 2017-06-13 Google Inc. Adaptive pre-transform scanning patterns for video and image compression
US9247251B1 (en) 2013-07-26 2016-01-26 Google Inc. Right-edge extension for quad-tree intra-prediction
US9609343B1 (en) * 2013-12-20 2017-03-28 Google Inc. Video coding using compound prediction
US10165283B1 (en) * 2013-12-20 2018-12-25 Google Llc Video coding using compound prediction
US10200717B2 (en) * 2014-06-19 2019-02-05 Sharp Kabushiki Kaisha Image decoding device, image coding device, and predicted image generation device
US20170134750A1 (en) * 2014-06-19 2017-05-11 Sharp Kabushiki Kaisha Image decoding device, image coding device, and predicted image generation device
US20160050440A1 (en) * 2014-08-15 2016-02-18 Ying Liu Low-complexity depth map encoder with quad-tree partitioned compressed sensing
KR101624659B1 (en) 2015-01-05 2016-05-27 Samsung Electronics Co., Ltd. Method and apparatus for decoding video
KR101624660B1 (en) 2015-04-14 2016-05-27 Samsung Electronics Co., Ltd. Method and apparatus for decoding video
US10742973B2 (en) * 2015-05-12 2020-08-11 Samsung Electronics Co., Ltd. Image decoding method for performing intra prediction and device thereof, and image encoding method for performing intra prediction and device thereof
TWI573439B (en) * 2016-05-03 2017-03-01 Shanghai Zhaoxin Integrated Circuit Co., Ltd. Methods for RDO (rate-distortion optimization) based on curve fittings and apparatuses using the same
US11089296B2 (en) 2016-09-30 2021-08-10 Interdigital Madison Patent Holdings, Sas Method and apparatus for omnidirectional video coding and decoding with adaptive intra prediction
US11563941B2 (en) 2016-10-26 2023-01-24 Dolby Laboratories Licensing Corporation Video coding method and apparatus using any types of block partitioning
US11070800B2 (en) * 2016-10-26 2021-07-20 Intellectual Discovery Co., Ltd. Video coding method and apparatus using any types of block partitioning
US11870990B2 (en) 2016-10-26 2024-01-09 Dolby Laboratories Licensing Corporation Video coding method and apparatus using any types of block partitioning
US10965943B2 (en) * 2016-12-28 2021-03-30 Sony Corporation Image processing apparatus and image processing method
US10771808B2 (en) 2017-02-06 2020-09-08 Huawei Technologies Co., Ltd. Video encoder and decoder for predictive partitioning
US11570437B2 (en) 2017-06-30 2023-01-31 Huawei Technologies Co., Ltd. Encoder, decoder, computer program and computer program product for processing a frame of a video sequence
US11039137B2 (en) 2017-06-30 2021-06-15 Huawei Technologies Co., Ltd. Encoder, decoder, computer program and computer program product for processing a frame of a video sequence
CN115118993A (en) * 2017-08-22 2022-09-27 Panasonic Intellectual Property Corporation of America Image encoding method, image decoding method, and non-transitory computer readable medium
CN115118992A (en) * 2017-08-22 2022-09-27 Panasonic Intellectual Property Corporation of America Image encoder, image decoder, and non-transitory computer readable medium
US11876991B2 (en) 2017-08-22 2024-01-16 Panasonic Intellectual Property Corporation Of America Image decoder and image decoding method capable of blending operation between partitions
CN115150613A (en) * 2017-08-22 2022-10-04 Panasonic Intellectual Property Corporation of America Image encoder, image decoder, and non-transitory computer readable medium
CN115118994A (en) * 2017-08-22 2022-09-27 Panasonic Intellectual Property Corporation of America Image encoder, image decoder, and non-transitory computer readable medium
CN115118995A (en) * 2017-08-22 2022-09-27 Panasonic Intellectual Property Corporation of America Image encoder, image decoder, and non-transitory computer readable medium
US11159793B2 (en) 2017-10-16 2021-10-26 Digitalinsights Inc. Method, device, and recording medium storing bit stream, for encoding/decoding image
US11265543B2 (en) 2017-10-16 2022-03-01 Digitalinsights Inc. Method, device, and recording medium storing bit stream, for encoding/decoding image
US11831870B2 (en) 2017-10-16 2023-11-28 Digitalinsights Inc. Method, device, and recording medium storing bit stream, for encoding/decoding image
US11044491B2 (en) * 2018-01-30 2021-06-22 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11889103B2 (en) 2018-01-30 2024-01-30 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11889104B2 (en) 2018-01-30 2024-01-30 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11889105B2 (en) 2018-01-30 2024-01-30 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11895323B2 (en) 2018-01-30 2024-02-06 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11558635B2 (en) 2018-01-30 2023-01-17 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11895322B2 (en) 2018-01-30 2024-02-06 Panasonic Intellectual Property Corporation Of America Encoder, decoder, encoding method, and decoding method
US11627319B2 (en) * 2018-02-22 2023-04-11 Lg Electronics Inc. Image decoding method and apparatus according to block division structure in image coding system
US11233996B2 (en) * 2018-02-22 2022-01-25 Lg Electronics Inc. Image decoding method and apparatus according to block division structure in image coding system
US20220109837A1 (en) * 2018-02-22 2022-04-07 Lg Electronics Inc. Image decoding method and apparatus according to block division structure in image coding system
CN111886861A (en) * 2018-02-22 2020-11-03 LG Electronics Inc. Image decoding method and apparatus according to block division structure in image coding system
US10708625B2 (en) * 2018-06-12 2020-07-07 Alibaba Group Holding Limited Adaptive deblocking filter
WO2019246535A1 (en) * 2018-06-22 2019-12-26 Op Solutions, Llc Block level geometric partitioning
US11695967B2 (en) 2018-06-22 2023-07-04 Op Solutions, Llc Block level geometric partitioning
US11350119B2 (en) 2018-07-02 2022-05-31 Tencent America LLC Method and apparatus for video coding
US11706436B2 (en) 2018-07-02 2023-07-18 Tencent America LLC Method and apparatus for video coding
US10911766B2 (en) 2018-07-02 2021-02-02 Tencent America LLC Method and apparatus for video coding
US10382772B1 (en) * 2018-07-02 2019-08-13 Tencent America LLC Method and apparatus for video coding
JP2022508522A (en) * 2018-10-01 2022-01-19 OP Solutions, LLC Exponential partitioning method and system
WO2020072494A1 (en) 2018-10-01 2020-04-09 Op Solutions, Llc Methods and systems of exponential partitioning
JP7479062B2 2018-10-01 2024-05-08 OP Solutions, LLC Method and system for exponential partitioning
EP3861732A4 (en) * 2018-10-01 2022-07-06 OP Solutions, LLC Methods and systems of exponential partitioning
CN113039793A (en) * 2018-10-01 2021-06-25 OP Solutions, LLC Exponential partitioning method and system
CN111147855A (en) * 2018-11-02 2020-05-12 Beijing Bytedance Network Technology Co., Ltd. Coordination between geometric partitioning prediction modes and other tools
US11595657B2 (en) 2018-12-21 2023-02-28 Beijing Bytedance Network Technology Co., Ltd. Inter prediction using polynomial model
US11711516B2 (en) * 2018-12-21 2023-07-25 Beijing Bytedance Network Technology Co., Ltd Intra prediction using polynomial model
US20210297670A1 (en) * 2018-12-21 2021-09-23 Beijing Bytedance Network Technology Co., Ltd. Intra prediction using polynomial model
JP2022523309A (en) * 2019-01-28 2022-04-22 OP Solutions, LLC Inter prediction in exponential partitioning
US11695922B2 (en) 2019-01-28 2023-07-04 Op Solutions, Llc Inter prediction in geometric partitioning with an adaptive number of regions
CN113647105A (en) * 2019-01-28 2021-11-12 OP Solutions, LLC Inter prediction for exponential partitions
EP3918791A4 (en) * 2019-01-28 2022-03-16 OP Solutions, LLC Inter prediction in exponential partitioning
US10742972B1 (en) * 2019-03-08 2020-08-11 Tencent America LLC Merge list construction in triangular prediction
US20220021883A1 (en) * 2019-06-21 2022-01-20 Huawei Technologies Co.,Ltd. Chroma sample weight derivation for geometric partition mode
US11190777B2 (en) * 2019-06-30 2021-11-30 Tencent America LLC Method and apparatus for video coding
US11812037B2 (en) * 2019-06-30 2023-11-07 Tencent America LLC Method and apparatus for video coding
US20210400283A1 (en) * 2019-06-30 2021-12-23 Tencent America LLC Method and apparatus for video coding
CN114128295A (en) * 2019-07-14 2022-03-01 Beijing Bytedance Network Technology Co., Ltd. Construction of candidate list of geometric partitioning mode in video coding and decoding
US11375243B2 (en) * 2019-07-17 2022-06-28 Tencent America LLC Method and apparatus for video coding
US11317090B2 (en) * 2019-08-12 2022-04-26 Tencent America LLC Method and apparatus for video coding
US11863744B2 (en) * 2019-08-12 2024-01-02 Tencent America LLC Context modeling for split flag
US20220210412A1 (en) * 2019-08-12 2022-06-30 Tencent America LLC Method and apparatus for video coding
CN113796083A (en) * 2019-11-21 2021-12-14 Tencent America LLC Geometric partitioning modes in video coding and decoding
WO2021101791A1 (en) * 2019-11-21 2021-05-27 Tencent America LLC Geometric partitioning mode in video coding
US20230237612A1 (en) * 2022-01-26 2023-07-27 Intuitive Research And Technology Corporation Determining volume of a selectable region using extended reality

Also Published As

Publication number Publication date
JP2015144487A (en) 2015-08-06
KR20090046815A (en) 2009-05-11
CN101502120A (en) 2009-08-05
US20210044826A1 (en) 2021-02-11
KR101526914B1 (en) 2015-06-08
KR20090046814A (en) 2009-05-11
EP2050279A2 (en) 2009-04-22
US20220132162A1 (en) 2022-04-28
EP2047687B1 (en) 2018-05-16
WO2008016609A3 (en) 2008-10-09
KR101380580B1 (en) 2014-04-02
JP2014060763A (en) 2014-04-03
CN101502119B (en) 2012-05-23
JP2014060764A (en) 2014-04-03
JP6109712B2 (en) 2017-04-05
WO2008016605A3 (en) 2008-10-23
US20240129524A1 (en) 2024-04-18
WO2008016609A2 (en) 2008-02-07
US20120177106A1 (en) 2012-07-12
US20170280156A1 (en) 2017-09-28
BRPI0715507A2 (en) 2013-06-18
JP2009545919A (en) 2009-12-24
US11895327B2 (en) 2024-02-06
CN101502119A (en) 2009-08-05
JP2009545920A (en) 2009-12-24
BRPI0714859A2 (en) 2013-05-21
EP2050279B1 (en) 2018-08-29
CN101502120B (en) 2012-08-29
EP2047687A2 (en) 2009-04-15
WO2008016605A2 (en) 2008-02-07
US11252435B2 (en) 2022-02-15

Similar Documents

Publication Publication Date Title
US11252435B2 (en) Method and apparatus for parametric, model-based, geometric frame partitioning for video coding
US20230051065A1 (en) Methods and apparatus for transform selection in video encoding and decoding
KR101740039B1 (en) Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
US20180091817A1 (en) Methods and apparatus for transform selection in video encoding and decoding
KR101524394B1 (en) Encoding method and device, decoding method and device, and computer-readable recording medium
US20100208827A1 (en) Methods and apparatus for video encoding and decoding geometrically partitioned super macroblocks

Legal Events

Date Code Title Description
AS Assignment

Owner name: THOMSON LICENSING, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ESCODA, OSCAR DIVORRA;YIN, PENG;REEL/FRAME:022180/0934

Effective date: 20061028

AS Assignment

Owner name: THOMSON LICENSING DTV, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:041370/0433

Effective date: 20170113

AS Assignment

Owner name: THOMSON LICENSING DTV, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:041378/0630

Effective date: 20170113

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION