WO2020159982A1 - Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions - Google Patents

Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions

Info

Publication number
WO2020159982A1
WO2020159982A1 (PCT/US2020/015401)
Authority
WO
WIPO (PCT)
Prior art keywords
region
decoder
current block
bitstream
line segment
Prior art date
Application number
PCT/US2020/015401
Other languages
French (fr)
Inventor
Borivoje Furht
Hari Kalva
Velibor Adzic
Original Assignee
Op Solutions, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Op Solutions, Llc filed Critical Op Solutions, Llc
Priority to JP2021543476A priority Critical patent/JP7482536B2/en
Priority to MX2021009030A priority patent/MX2021009030A/en
Priority to BR112021014671-7A priority patent/BR112021014671A2/en
Priority to KR1020217027274A priority patent/KR20210118166A/en
Priority to EP20749417.0A priority patent/EP3918784A4/en
Priority to SG11202107974YA priority patent/SG11202107974YA/en
Priority to CN202080022269.3A priority patent/CN113597757A/en
Publication of WO2020159982A1 publication Critical patent/WO2020159982A1/en
Priority to US17/386,126 priority patent/US12075046B2/en
Priority to JP2024069016A priority patent/JP2024095835A/en


Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
        • H04N19/10 using adaptive coding
            • H04N19/102 characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
                • H04N19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
                    • H04N19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
                • H04N19/124 Quantisation
            • H04N19/134 characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
                    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
            • H04N19/169 characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                • H04N19/17 the unit being an image region, e.g. an object
                    • H04N19/176 the region being a block, e.g. a macroblock
                • H04N19/184 the unit being bits, e.g. of the compressed video stream
        • H04N19/50 using predictive coding
            • H04N19/503 involving temporal prediction
                • H04N19/51 Motion estimation or motion compensation
                    • H04N19/513 Processing of motion vectors
            • H04N19/593 involving spatial prediction techniques
        • H04N19/60 using transform coding
            • H04N19/61 in combination with predictive coding
            • H04N19/625 using discrete cosine transform [DCT]
        • H04N19/70 characterised by syntax aspects related to video coding, e.g. related to compression standards
        • H04N19/85 using pre-processing or post-processing specially adapted for video compression
            • H04N19/86 involving reduction of coding artifacts, e.g. of blockiness
        • H04N19/90 using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
            • H04N19/96 Tree coding, e.g. quad-tree coding

Definitions

  • the present invention generally relates to the field of video compression.
  • the present invention is directed to a shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions.
  • a video codec can include an electronic circuit or software that compresses or decompresses digital video. It can convert uncompressed video to a compressed format or vice versa.
  • a device that compresses video (and/or performs some function thereof) can typically be called an encoder, and a device that decompresses video (and/or performs some function thereof) can be called a decoder.
  • a format of the compressed data can conform to a standard video compression specification.
  • the compression can be lossy in that the compressed video lacks some information present in the original video. A consequence of this can include that decompressed video can have lower quality than the original uncompressed video because there is insufficient information to accurately reconstruct the original video.
  • a decoder includes circuitry configured to receive a bitstream, determine a first region, a second region, and a third region of a current block according to a geometric partitioning mode, and decode the current block using an inverse discrete cosine transformation for each of the first region, the second region, and the third region.
  • a decoder includes circuitry configured to receive a bitstream, determine a first region, a second region, and a third region of a current block and according to a geometric partitioning mode, determine, from a signal contained in the bitstream, a coding transformation type to decode each of the first region, the second region, and/or the third region, the coding transformation type characterizing at least an inverse block discrete cosine transformation and an inverse shape adaptive discrete cosine transformation, and decode the current block, the decoding of the current block including using the determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.
  • a method includes receiving, by a decoder, a bitstream, determining a first region, a second region, and a third region of a current block and according to a geometric partitioning mode, determining, from a signal contained in the bitstream, a coding transformation type to decode the first region, the second region, and/or the third region, the coding
  • transformation type characterizing at least an inverse block discrete cosine transformation or an inverse shape adaptive discrete cosine transformation, and decoding the current block, the decoding of the current block including using the determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.
  • FIG. 1 is an illustration showing an example of a residual block (e.g., current block) with geometric partitioning where there are three segments with different prediction errors;
  • FIG. 2 is a system block diagram illustrating an example video encoder capable of shape adaptive discrete cosine transformation (SA-DCT) for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding;
  • FIG. 3 is a process flow diagram illustrating an example process of encoding a video with SA-DCT for geometric partitioning with an adaptive number of regions;
  • FIG. 4 is a system block diagram illustrating an example decoder capable of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions;
  • FIG. 5 is a process flow diagram illustrating an example process of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions
  • FIG. 6 is a block diagram of a computing system that can be used to implement any one or more of the methodologies disclosed herein and any one or more portions thereof.
  • Embodiments presented in this disclosure pertain to encoding and decoding blocks in geometric partitioning, where not all blocks are necessarily rectangular. Embodiments may include and/or be configured to perform encoding and/or decoding using discrete cosine transformations (DCT) and/or inverse DCTs. In some embodiments presented herein, a choice of DCT is made as a function of information content in geometrically partitioned blocks. In some existing video encoding and decoding approaches, all blocks are rectangular, and the residual is encoded using a regular Block DCT (B-DCT) for the entire rectangular block.
  • an encoder may use Shape Adaptive DCT (SA-DCT) alternatively or additionally to B-DCT.
  • an encoder may select between B-DCT and SA-DCT for each region of a block such as a geometrically partitioned block, based on a level of prediction error of that region; selection may be signaled in the bitstream for use in decoding.
  • a bitrate of transmission in a bitstream may be reduced because a residual may be represented more efficiently, and computational resources required to perform the processing may be reduced as a result.
  • the current subject matter may be applicable to relatively larger blocks, such as blocks with a size of 128 x 128 or 64 x 64, for example.
  • geometric partitioning may involve partitioning a current block into an adaptive number of regions, such as three or more regions for a given current block; a DCT transform type (e.g., B-DCT or SA-DCT) may be signaled for each region.
  • a B-DCT may be a DCT performed using an N×N invertible matrix on an N×N block of numerical values such as without limitation chroma and/or luma values of a corresponding N×N array of pixels.
  • a “DCT-I” transformation may compute each element of a transformed matrix as:
  • a “DCT-II” transformation may compute transformed matrix values as:
  • the generalized discrete cosine transform matrix may include a generalized discrete cosine transform II matrix taking the form of:
  • in some implementations, an integer approximation of such a transform matrix may be utilized for efficient hardware and software implementations.
  • Inverse B-DCT may be computed by a second matrix multiplication using the same N×N transform matrix; a resulting output may be normalized to recover original values. For instance, an output of an inverse DCT-I may be multiplied by 2/(N-1) for normalization.
  • An SA-DCT may be performed on a non-rectangular array of pixels.
  • an SA-DCT may be computed by performing a one-dimensional version of a DCT such as a DCT-I, DCT-II, or the like against vectors representing vertical columns of pixel values in a shape of interest, followed by resulting values being grouped into horizontal vectors and subjected to a one-dimensional DCT a second time; the second DCT may result in a completed transformation of pixel values.
  • Variations of SA-DCT may further scale and/or normalize by coefficients to correct for mean weighting defects and/or non-orthonormal defects introduced by the above transformation, quantization of outputs of the above transformation and/or inversion of transformation outputs and/or quantized transformation outputs. Further corrections may be performed, without limitation, by preceding the above SA-DCT process by subtracting an individual mean value of a subject image region from each pixel value or a scaled version thereof, potentially in combination with one or other of the scaling processes applied before and/or after transformation, quantization, and/or inverse transformation.
  • Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various alternative or additional variations on an SA-DCT process that may be applied consistently with the above description.
  • Motion compensation may include an approach to predict a video frame or a portion thereof given previous and/or future frames by accounting for motion of a camera and/or of objects in a video containing and/or represented by current, previous, and/or future frames.
  • Motion compensation may be employed in encoding and decoding of video data for video compression, for example in encoding and decoding using standards such as MPEG-2 or H.264 (also referred to as advanced video coding (AVC)).
  • Motion compensation may describe a picture in terms of a transformation of a reference picture to a current picture. A reference picture may be previous in time or from the future relative to the current picture. When images can be accurately synthesized from previously transmitted and/or stored images, compression efficiency can be improved.
  • Block partitioning may refer to a method in video coding to find regions of similar motion. Some form of block partitioning can be found in video codec standards including MPEG-2, H.264 (also referred to as AVC or MPEG-4 Part 10), and H.265 (also referred to as High Efficiency Video Coding (HEVC)).
  • non-overlapping blocks of a video frame may be partitioned into rectangular sub-blocks to find block partitions that contain pixels with similar motion. This approach may work well when all pixels of a block partition have similar motion. Motion of pixels in a block may be determined relative to previously coded frames.
  • Shape-adaptive DCT and/or B-DCT may be effectively used in geometric partitioning with an adaptive number of regions.
  • FIG. 1 is an illustration showing a non-limiting example of a residual block (e.g., current block) 100 sized 64x64 or 128x128 with geometric partitioning where there are three segments, S0, S1, and S2, with different prediction errors; although three segments are illustrated in FIG. 1 for exemplary purposes, a greater or lesser number of segments may alternatively or additionally be employed.
  • Current block may be geometrically partitioned according to two line segments (P1P2 and P3P4), which may divide the current block into the three regions S0, S1, and S2.
  • S0 may have a relatively high prediction error while S1 and S2 can have a relatively lower prediction error.
  • For a region with a relatively high prediction error (e.g., S0), the encoder may select and use B-DCT for residual coding.
  • For a region with a relatively lower prediction error (e.g., S1 or S2), the encoder may select and use SA-DCT.
  • the selection of residual encoding transformation can be based on a prediction error (e.g., size of the residual). Because the SA-DCT algorithm is relatively simpler in terms of complexity and does not require as many computations as the B-DCT, utilizing SA-DCT for lower-prediction-error residual coding may improve complexity and processing performance for video encoding and decoding.
  • SA-DCT may be signaled as an additional transform choice to full block DCT for segments with low prediction errors.
  • What is considered low or high error may be a parameter that can be set at the encoder and may vary based on application.
  • a choice of transformation type may be signaled in the bitstream.
  • bitstream may be parsed, and for a given current block, a residual may be decoded using a transform type signaled in the bitstream.
  • a number of coefficients associated with the transform may alternatively or additionally be signaled in the bitstream.
  • geometric partitioning with an adaptive number of regions may include techniques for video encoding and decoding in which a rectangular block is further divided into two or more regions that may be non-rectangular.
  • FIG. 1 illustrates a non-limiting example of geometric partitioning at the pixel level with an adaptive number of regions.
  • An example rectangular block 100 (which can have a width of M pixels and a height of N pixels, denoted as MxN pixels) may be divided along line segments P1P2 and P3P4 into three regions (S0, S1, and S2).
  • For region S0, a motion vector may describe the motion of all pixels in that region; the motion vector can be used to compress region S0.
  • Similarly, an associated motion vector may describe the motion of pixels in region S1.
  • Likewise, an associated motion vector may describe the motion of pixels in region S2.
  • Such a geometric partition may be signaled to the receiver (e.g., decoder) by encoding positions P1, P2, P3, P4 and/or representations of these positions (such as, without limitation, coordinates such as polar coordinates, Cartesian coordinates, or the like, indices into predefined templates, or other characterizations of the partitions) in a video bitstream.
  • a line segment P1P2 (or more specifically points P1 and P2) may be determined.
  • the possible combinations of points P1 and P2 depend on M and N, which are the block width and height.
  • For an MxN block, there are (M-1)x(N-1)x3 possible partitions; a pixel-to-region mapping for such line segments is sketched in an example following this list.
  • partitioning occurs iteratively in that a first partition can be determined (e.g., determine line P1P2 and associated regions) forming two regions, and then one of those regions is further partitioned.
  • the partitioning described with reference to FIG. 1 can be performed to partition a block into two regions.
  • One of those regions can be further partitioned (e.g., to form new region S1 and region S2).
  • the process can continue to perform block-level geometric partitioning until a stopping criterion is reached.
  • FIG. 2 is a system block diagram illustrating an example video encoder 200 capable of SA-DCT and/or B-DCT for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding.
  • the example video encoder 200 receives an input video 205, which can be initially segmented or divided according to a processing scheme, such as a tree-structured macro block partitioning scheme (e.g., quad-tree plus binary tree).
  • a tree-structured macro block partitioning scheme can include partitioning a picture frame into large block elements called coding tree units (CTU).
  • each CTU can be further partitioned one or more times into a number of sub-blocks called coding units (CU).
  • the final result of this partitioning can include a group of sub-blocks that can be called predictive units (PU). Transform units (TU) can also be utilized.
  • Such a partitioning scheme can include performing geometric partitioning with an adaptive number of regions according to some aspects of the current subject matter.
  • the example video encoder 200 includes an intra prediction processor 215, a motion estimation / compensation processor 220 (also referred to as an inter prediction processor) capable of supporting geometric partitioning with an adaptive number of regions, a transform /quantization processor 225, an inverse quantization / inverse transform processor 230, an in-loop filter 235, a decoded picture buffer 240, and an entropy coding processor 245.
  • the motion estimation / compensation processor 220 can perform geometric partitioning. Bitstream parameters that signal geometric partitioning modes can be input to the entropy coding processor 245 for inclusion in the output bitstream 250.
  • If a block is to be processed via intra prediction, the intra prediction processor 215 can perform the processing to output the predictor. If the block is to be processed via motion estimation / compensation, the motion estimation / compensation processor 220 can perform the processing, including use of geometric partitioning, to output the predictor.
  • a residual can be formed by subtracting the predictor from the input video.
  • the residual can be received by the transform / quantization processor 225, which can determine whether the prediction error (e.g., residual size) is considered “high” or “low” error (for example, by comparing a size or error metric of the residual to a threshold). Based on the determination, the transform / quantization processor 225 can select a transform type, which can include B-DCT and SA-DCT. In some implementations, the transform / quantization processor 225 selects a transform type of B-DCT where the residual is considered to have a high error and selects a transform type of SA-DCT where the residual is considered to have a low error.
  • the transform /quantization processor 225 can perform transformation processing (e.g., SA-DCT or B-DCT) to produce coefficients, which can be quantized.
  • the quantized coefficients and any associated signaling information can be provided to the entropy coding processor 245 for entropy encoding and inclusion in the output bitstream 250.
  • the entropy encoding processor 245 can support encoding of signaling information related to SA-DCT for geometric partitioning with adaptive number of regions.
  • the quantized coefficients can be provided to the inverse quantization / inverse transformation processor 230, which can reproduce pixels, which can be combined with the predictor and processed by the in loop filter 235, the output of which is stored in the decoded picture buffer 240 for use by the motion estimation / compensation processor 220 that is capable of supporting geometric partitioning with an adaptive number of regions.
  • a process flow diagram illustrating an example process 300 of encoding a video with SA-DCT for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding is illustrated.
  • a video frame may undergo initial block segmentation, for example, using a tree-structured macro block partitioning scheme that may include partitioning a picture frame into CTUs and CUs.
  • a block may be selected for geometric partitioning. Selection may include identifying according to a metric rule that a block is to be processed according to a geometric partitioning mode.
  • a selected block may be partitioned into three or more non-rectangular regions according to geometric partitioning mode.
  • a transform type (also referred to as a transformation type) for each geometrically partitioned region may be determined. This may include determining whether a prediction error (e.g., residual size) is considered “high” or “low” error (for example, by comparing a size or error metric of the residual to a threshold). Based on the determination, a transform type may be selected, for instance using a quadtree plus binary decision tree process as described below, which transform type may include without limitation B-DCT or SA-DCT. In some implementations, a transform type of B-DCT is selected where residual is considered to have a high error and a transform type of SA-DCT is selected where residual is considered to have a low error. Based on the selected transform type, transformation processing (e.g., SA-DCT or B-DCT) may be performed to produce coefficients, which may be quantized.
  • a determined transform type may be signaled in the bitstream.
  • the transformed and quantized residual can be included in the bitstream.
  • the number of transform coefficients can be signaled in the bitstream.
  • FIG. 4 is a system block diagram illustrating a non-limiting example of a decoder 400 capable of decoding a bitstream 470 using DCT, including without limitation SA-DCT and/or B-DCT, for geometric partitioning with an adaptive number of regions, which may improve complexity and processing performance for video encoding and decoding.
  • Decoder 400 includes an entropy decoder processor 410, an inverse quantization and inverse transformation processor 420, a deblocking filter 430, a frame buffer 440, motion compensation processor 450 and intra prediction processor 460.
  • bitstream 470 includes parameters that signal a geometric partitioning mode and transformation type.
  • bitstream 470 includes parameters that signal the number of transform coefficients.
  • the motion compensation processor 450 can reconstruct pixel information using geometric partitioning as described herein.
  • bitstream 470 may be received by the decoder 400 and input to entropy decoder processor 410, which may entropy decode the bitstream into quantized coefficients.
  • Quantized coefficients may be provided to inverse quantization and inverse transformation processor 420, which may determine a coding transformation type (e.g., B-DCT or SA-DCT) and perform inverse quantization and inverse transformation according to the determined coding transformation type to create a residual signal.
  • inverse quantization and inverse transformation processor 420 may determine a number of transform coefficients and perform inverse transformation according to the determined number of transform coefficients.
  • residual signal may be added to an output of motion compensation processor 450 and/or intra prediction processor 460 to reconstruct the current block.
  • Output of a motion compensation processor 450 and intra prediction processor 460 may include a block prediction based on a previously decoded block.
  • a sum of the prediction and residual may be processed by deblocking filter 430 and stored in a frame buffer 440.
  • motion compensation processor 450 may construct a prediction based on the geometric partition approach described herein.
  • FIG. 5 is a process flow diagram illustrating an example process 500 of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions, which can improve complexity and processing performance for video encoding and decoding.
  • a bitstream is received, which may include a current block (e.g., CTU, CU, PU).
  • Receiving may include extracting and/or parsing current block and associated signaling information from bitstream.
  • Decoder may extract or determine one or more parameters that characterize the geometric partitioning.
  • These parameters may include, for example, indices of a start and end of a line segment (e.g., P1, P2, P3, P4); extraction or determining may include identifying and retrieving the parameters from the bitstream (e.g., parsing the bitstream).
  • a first region, a second region, and a third region of the current block may be determined and according to a geometric partitioning mode.
  • Determining may include determining whether geometric partitioning mode is enabled (e.g., true) for the current block. If geometric partitioning mode is not enabled (e.g., false), decoder may process current block using an alternative partitioning mode. If geometric partitioning mode is enabled (e.g., true), three or more regions may be determined and/or processed.
  • a coding transformation type may be determined.
  • a coding transformation type may be signaled in bitstream.
  • bitstream may be parsed to determine a coding transformation type, which may specify B-DCT or SA-DCT.
  • Determined coding transformation type may be for decoding a first region, a second region, and/or a third region.
  • a current block may be decoded.
  • Decoding of current block may include using a determined transform type for inverse transformation for each of a first region, a second region, and/or a third region; a per-region dispatch along these lines is sketched in an example following this list.
  • Decoding may include determining an associated motion information for each region and according to geometric partitioning mode.
  • the geometric partitioning can be signaled in the bitstream based on rate-distortion decisions in the encoder.
  • the coding can be based on a combination of regular pre-defined partitions (e.g., templates), temporal and spatial prediction of the partitioning.
  • Each geometrically partitioned region can utilize motion compensated prediction or intra-prediction.
  • the boundary of the predicted regions can be smoothed before the residual is added.
  • a quadtree plus binary decision tree may be implemented.
  • partition parameters of QTBT are dynamically derived to adapt to the local characteristics without transmitting any overhead.
  • a joint-classifier decision tree structure may eliminate unnecessary iterations and control the risk of false prediction.
  • geometric partitioning with an adaptive number of regions may be available as an additional partitioning option at every leaf node of the QTBT; a simple leaf-node representation is sketched in an example following this list.
  • a decoder may include a partition processor that generates geometric partition for a current block and provides all partition-related information for dependent processes. Partition processor may directly influence motion compensation as it may be performed segment-wise in case a block is geometrically partitioned. Further, partition processor may provide shape information to intra-prediction processor and transform coding processor.
  • additional syntax elements may be signaled at different hierarchy levels of the bitstream.
  • an enable flag may be coded in a Sequence Parameter Set (SPS).
  • a CTU flag may be coded at the coding tree unit (CTU) level to indicate whether any coding units (CU) use geometric partitioning with an adaptive number of regions.
  • a CU flag may be coded to indicate whether a current coding unit utilizes geometric partitioning with an adaptive number of regions. Parameters which specify a line segment on the block may be coded; a sketch of this flag hierarchy appears in an example following this list.
  • a flag may be decoded, which may specify whether a current region is inter- or intra-predicted.
  • a minimum region size may be specified.
  • some implementations of the current subject matter can provide for partitioning of blocks that reduces complexity while increasing compression efficiency.
  • blocking artifacts at object boundaries can be reduced.
  • any one or more of the aspects and embodiments described herein may be conveniently implemented using digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof, as realized and/or implemented in one or more machines (e.g., one or more computing devices that are utilized as a user computing device for an electronic document, one or more server devices, such as a document server, etc.) programmed according to the teachings of the present specification, as will be apparent to those of ordinary skill in the computer art.
  • aspects or features may include implementation in one or more computer programs and/or software that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • Appropriate software coding may readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those of ordinary skill in the software art.
  • aspects and implementations discussed above employing software and/or software modules may also include appropriate hardware for assisting in the implementation of the machine executable instructions of the software and/or software module.
  • Such software may be a computer program product that employs a machine-readable storage medium.
  • a machine-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a machine-readable storage medium include, but are not limited to, a magnetic disk, an optical disc (e.g., CD, CD-R, DVD, DVD-R, etc.), a magneto-optical disk, a read-only memory “ROM” device, a random access memory “RAM” device, a magnetic card, an optical card, a solid-state memory device, an EPROM, an EEPROM, and any combinations thereof.
  • a machine-readable medium is intended to include a single medium as well as a collection of physically separate media, such as, for example, a collection of compact discs or one or more hard disk drives in combination with a computer memory.
  • a machine-readable storage medium does not include transitory forms of signal transmission.
  • Such software may also include information (e.g., data) carried as a data signal on a data carrier, such as a carrier wave.
  • machine-executable information may be included as a data-carrying signal embodied in a data carrier in which the signal encodes a sequence of instructions, or portion thereof, for execution by a machine (e.g., a computing device) and any related information (e.g., data structures and data) that causes the machine to perform any one of the methodologies and/or embodiments described herein.
  • Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g., a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof.
  • a computing device may include and/or be included in a kiosk.
  • FIG. 6 shows a diagrammatic representation of one embodiment of a computing device in the exemplary form of a computer system 600 within which a set of instructions for causing a control system to perform any one or more of the aspects and/or methodologies of the present disclosure may be executed. It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure.
  • Computer system 600 includes a processor 604 and a memory 608 that communicate with each other, and with other components, via a bus 612.
  • Bus 612 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
  • Memory 608 may include various components (e.g., machine-readable media) including, but not limited to, a random-access memory component, a read only component, and any combinations thereof.
  • a basic input/output system 616 (BIOS), including basic routines that help to transfer information between elements within computer system 600, such as during start-up, may be stored in memory 608.
  • Memory 608 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 620 embodying any one or more of the aspects and/or methodologies of the present disclosure.
  • memory 608 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
  • Computer system 600 may also include a storage device 624.
  • Examples of a storage device include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof.
  • Storage device 624 may be connected to bus 612 by an appropriate interface (not shown).
  • Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof.
  • storage device 624 (or one or more components thereof) may be removably interfaced with computer system 600 (e.g., via an external port connector (not shown)).
  • storage device 624 and an associated machine-readable medium 628 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system 600.
  • software 620 may reside, completely or partially, within machine-readable medium 628. In another example, software 620 may reside, completely or partially, within processor 604.
  • Computer system 600 may also include an input device 632.
  • a user of computer system 600 may enter commands and/or other information into computer system 600 via input device 632.
  • Examples of an input device 632 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof.
  • Input device 632 may be interfaced to bus 612 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 612, and any combinations thereof.
  • Input device 632 may include a touch screen interface that may be a part of or separate from display 636, discussed further below.
  • Input device 632 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
  • a user may also input commands and/or other information to computer system 600 via storage device 624 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 640.
  • a network interface device such as network interface device 640, may be utilized for connecting computer system 600 to one or more of a variety of networks, such as network 644, and one or more remote devices 648 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g, a mobile network interface card, a LAN card), a modem, and any combination thereof.
  • Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof.
  • a network such as network 644, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used.
  • Information (e.g., data, software 620, etc.) may be communicated to and/or from computer system 600 via network interface device 640.
  • Computer system 600 may further include a video display adapter 652 for communicating a displayable image to a display device, such as display device 636.
  • Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof.
  • Display adapter 652 and display device 636 may be utilized in combination with processor 604 to provide graphical representations of aspects of the present disclosure.
  • computer system 600 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof.
  • peripheral output devices may be connected to bus 612 via a peripheral interface 656. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
  • phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features.
  • the term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features.
  • the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.”
  • a similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together.
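As referenced above in the discussion of line segments P1P2 and P3P4, the following is a minimal Python sketch of a pixel-to-region mapping for geometric partitioning with three regions. It is an illustration under assumptions (the half-plane sign convention and the (x, y) endpoint representation are not specified above), not the disclosure's normative derivation.

```python
import numpy as np

def region_map(width, height, p1, p2, p3, p4):
    """Label each pixel of a width x height block as region 0, 1, or 2.

    P1P2 splits the block into two half-planes; P3P4 further splits the
    remainder, mirroring the S0/S1/S2 example of FIG. 1. Points are (x, y)
    tuples on the block boundary; the sign convention is an assumption.
    """
    ys, xs = np.mgrid[0:height, 0:width]

    def side(p, q):
        # Sign of the cross product: which side of line pq a pixel lies on.
        return (q[0] - p[0]) * (ys - p[1]) - (q[1] - p[1]) * (xs - p[0])

    s0_side = side(p1, p2) >= 0     # first split (line P1P2)
    s1_side = side(p3, p4) >= 0     # second split (line P3P4)
    return np.where(s0_side, 0, np.where(s1_side, 1, 2))

# Hypothetical 64x64 block with two illustrative line segments.
labels = region_map(64, 64, (0, 20), (63, 5), (10, 63), (63, 40))
print([(labels == r).sum() for r in range(3)])   # pixel count per region S0, S1, S2
```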
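The decoder-side behavior described above (parse a per-region coding transformation type, then apply the matching inverse transform to each region) might be organized as in the following Python sketch. The field names and the injected inverse-transform callables are placeholders, not bitstream syntax defined by this disclosure.

```python
def decode_block_regions(parsed_block, inverse_b_dct, inverse_sa_dct):
    """Apply the signaled inverse transform to each region of a current block.

    parsed_block: dict with a "regions" list; each region carries its signaled
    "transform_type" ("B-DCT" or "SA-DCT") and dequantized "coefficients".
    inverse_b_dct / inverse_sa_dct: callables implementing the two inverse
    transforms. Returns per-region residuals to be added to the predictor.
    """
    residuals = []
    for region in parsed_block["regions"]:
        if region["transform_type"] == "B-DCT":
            residuals.append(inverse_b_dct(region["coefficients"]))
        elif region["transform_type"] == "SA-DCT":
            residuals.append(inverse_sa_dct(region["coefficients"]))
        else:
            raise ValueError("unknown coding transformation type")
    return residuals
```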
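Where geometric partitioning is offered as an extra option at QTBT leaf nodes, as described above, a partition-tree node might carry both its split type and any geometric-partition parameters. The following data-structure sketch is illustrative only; field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class PartitionNode:
    """One node of a QTBT-style partition tree.

    split is "quad", "binary_h", "binary_v", or None for a leaf. A leaf may
    additionally carry geometric-partition line-segment endpoints (P1..P4),
    reflecting geometric partitioning as an extra leaf-level option.
    """
    x: int
    y: int
    width: int
    height: int
    split: Optional[str] = None
    children: List["PartitionNode"] = field(default_factory=list)
    geometric_points: Optional[List[Tuple[int, int]]] = None

# A hypothetical 128x128 leaf that uses geometric partitioning with 3 regions.
leaf = PartitionNode(x=0, y=0, width=128, height=128,
                     geometric_points=[(0, 20), (127, 5), (10, 127), (127, 40)])
```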
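The hierarchy of enable flags described above (an SPS-level enable flag, a CTU-level flag, a CU-level flag, then line-segment parameters and per-region prediction flags) is sketched below. The flag names, ordering, and the `read_bit`/`read_uvlc` reader interface are assumptions for illustration only.

```python
def parse_geometric_partition_syntax(read_bit, read_uvlc):
    """Illustrative parse of the signaling hierarchy for geometric
    partitioning with an adaptive number of regions.

    `read_bit` and `read_uvlc` stand in for a real entropy/bitstream reader.
    Returns None when geometric partitioning is disabled or unused.
    """
    if not read_bit():            # SPS-level enable flag
        return None
    if not read_bit():            # CTU-level flag: any CU in this CTU uses it?
        return None
    if not read_bit():            # CU-level flag: this CU uses it?
        return None
    # Line-segment endpoints P1..P4 that specify the partition on the block.
    points = [(read_uvlc(), read_uvlc()) for _ in range(4)]
    # Per-region flag: inter- or intra-predicted (three regions assumed here).
    region_is_intra = [bool(read_bit()) for _ in range(3)]
    return {"points": points, "region_is_intra": region_is_intra}
```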

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A decoder includes circuitry configured to receive a bitstream; determine a first region, a second region, and a third region of a current block and according to a geometric partitioning mode; and decode the current block. Related apparatus, systems, techniques and articles are also described. The decoder may determine, from a signal contained in the bitstream, a coding transformation type to decode the first region, the second region, and/or the third region, the coding transformation type characterizing at least an inverse block discrete cosine transformation and an inverse shape adaptive discrete cosine transformation, and the decoding of the current block may include using the determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.

Description

SHAPE ADAPTIVE DISCRETE COSINE TRANSFORM FOR GEOMETRIC PARTITIONING WITH AN ADAPTIVE NUMBER OF REGIONS
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority of U.S. Provisional Patent Application Serial No. 62/797,799, filed on January 28, 2019, and titled “SHAPE ADAPTIVE DISCRETE COSINE TRANSFORM FOR GEOMETRIC PARTITIONING WITH AN ADAPTIVE NUMBER OF REGIONS,” which is incorporated by reference herein in its entirety.
FIELD OF THE INVENTION
The present invention generally relates to the field of video compression. In particular, the present invention is directed to a shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions.
BACKGROUND
A video codec can include an electronic circuit or software that compresses or decompresses digital video. It can convert uncompressed video to a compressed format or vice versa. In the context of video compression, a device that compresses video (and/or performs some function thereof) can typically be called an encoder, and a device that decompresses video (and/or performs some function thereof) can be called a decoder.
A format of the compressed data can conform to a standard video compression specification. The compression can be lossy in that the compressed video lacks some information present in the original video. A consequence of this can include that decompressed video can have lower quality than the original uncompressed video because there is insufficient information to accurately reconstruct the original video.
There can be complex relationships between the video quality, the amount of data used to represent the video (e.g., determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, end-to-end delay (e.g., latency), and the like.
SUMMARY OF THE DISCLOSURE
In an aspect, a decoder includes circuitry configured to receive a bitstream, determine a first region, a second region, and a third region of a current block according to a geometric partitioning mode, and decode the current block using an inverse discrete cosine transformation for each of the first region, the second region, and the third region.
In another aspect, a decoder includes circuitry configured to receive a bitstream, determine a first region, a second region, and a third region of a current block and according to a geometric partitioning mode, determine, from a signal contained in the bitstream, a coding transformation type to decode each of the first region, the second region, and/or the third region, the coding transformation type characterizing at least an inverse block discrete cosine transformation and an inverse shape adaptive discrete cosine transformation, and decode the current block, the decoding of the current block including using the determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.
In another aspect, a method includes receiving, by a decoder, a bitstream, determining a first region, a second region, and a third region of a current block and according to a geometric partitioning mode, determining, from a signal contained in the bitstream, a coding transformation type to decode the first region, the second region, and/or the third region, the coding
transformation type characterizing at least an inverse block discrete cosine transformation or an inverse shape adaptive discrete cosine transformation, and decoding the current block, the decoding of the current block including using the determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.
The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
For the purpose of illustrating the invention, the drawings show aspects of one or more embodiments of the invention. However, it should be understood that the present invention is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein: FIG. 1 is an illustration showing an example of a residual block (e.g., current block) with geometric partitioning where there are three segments with different prediction errors; FIG. 2 is a system block diagram illustrating an example video encoder capable of shape adaptive discrete cosine transformation (SA-DCT) for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding;
FIG. 3 is a process flow diagram illustrating an example process of encoding a video with SA-DCT for geometric partitioning with an adaptive number of regions;
FIG. 4 is a system block diagram illustrating an example decoder capable of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions;
FIG. 5 is a process flow diagram illustrating an example process of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions; and
FIG. 6 is a block diagram of a computing system that can be used to implement any one or more of the methodologies disclosed herein and any one or more portions thereof.
The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or that render other details difficult to perceive may have been omitted. Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
Embodiments presented in this disclosure pertain to encoding and decoding blocks in geometric partitioning, where not all blocks are necessarily rectangular. Embodiments may include and/or be configured to perform encoding and/or decoding using discrete cosine transformations (DCT) and/or inverse DCTs. In some embodiments presented herein, a choice of DCT is made as a function of information content in geometrically partitioned blocks. In some existing video encoding and decoding approaches, all blocks are rectangular, and the residual is encoded using a regular Block DCT (B-DCT) for the entire rectangular block.
However, in geometric partitioning where a block can be partitioned into multiple non- rectangular regions, use of regular B-DCT can inefficiently represent the underlying pixel information for some blocks and can require unnecessary computing resources to perform. In some implementations of the current subject matter, when using a geometric partitioning mode, an encoder may use Shape Adaptive DCT (SA-DCT) alternatively or additionally to B-DCT. In some embodiments, an encoder may select between B-DCT and SA-DCT for each region of a block such as a geometrically partitioned block, based on a level of prediction error of that region; selection may be signaled in the bitstream for use in decoding. By encoding and/or decoding a non-rectangular region using either B-DCT or SA-DCT and signaling such selection, a bitrate of transmission in a bitstream may be reduced because a residual may be represented more efficiently, and computational resources required to perform the processing may be reduced as a result. The current subject matter may be applicable to relatively larger blocks, such as blocks with a size of 128 x 128 or 64 x 64, for example. In some implementations, geometric partitioning may involve partitioning a current block into an adaptive number of regions, such as three or more regions for a given current block; a DCT transform type (e.g., B-DCT or SA-DCT) may be signaled for each region.
In an embodiment, a B-DCT may be a DCT performed using an NxN invertible matrix on an NxN block of numerical values such as without limitation chroma and/or luma values of a corresponding NxN array of pixels. For instance, and as a non-limiting example, where an NxN matrix A is to be transformed, a “DCT-I” transformation may compute each element of a transformed matrix as:
$$X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\left[\frac{\pi}{N-1}\, n\, k\right]$$
For k = 0, ..., N − 1. As a further non-limiting example, a “DCT-II” transformation may compute transformed matrix values as:
$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right) k\right]$$
For k = 0, ..., N − 1. As an illustrative example, where blocks are 4 x 4 blocks of pixels, the generalized discrete cosine transform matrix may include a generalized discrete cosine transform II matrix taking the form of:
$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ \cos\frac{\pi}{8} & \cos\frac{3\pi}{8} & -\cos\frac{3\pi}{8} & -\cos\frac{\pi}{8} \\ \cos\frac{\pi}{4} & -\cos\frac{\pi}{4} & -\cos\frac{\pi}{4} & \cos\frac{\pi}{4} \\ \cos\frac{3\pi}{8} & -\cos\frac{\pi}{8} & \cos\frac{\pi}{8} & -\cos\frac{3\pi}{8} \end{bmatrix}$$
In some implementations, an integer approximation of a transform matrix may be utilized for efficient hardware and software implementations. For example, where blocks are 4x4 blocks of pixels, a generalized discrete cosine transform matrix may include a generalized discrete cosine transform II matrix taking the form of:
$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ 2 & 1 & -1 & -2 \\ 1 & -1 & -1 & 1 \\ 1 & -2 & 2 & -1 \end{bmatrix}$$
Inverse B-DCT may be computed by a second matrix multiplication using the same NxN transform matrix; a resulting output may be normalized to recover original values. For instance, an inverse DCT-I is the DCT-I transformation itself, multiplied by 2/(N − 1) for normalization.
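As a rough, non-authoritative sketch of the matrix form described above, the following Python example applies a 4x4 integer approximation of a DCT-II by matrix multiplication and then inverts it with a normalization step; the matrix, scaling, and function names are illustrative assumptions rather than the exact transform of any particular standard.

```python
import numpy as np

# Illustrative 4x4 integer approximation of a DCT-II basis (an assumption for
# this sketch, not necessarily the matrix mandated by any codec).
T = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def forward_bdct(block):
    # Separable 2-D transform: columns and rows are both transformed by T.
    return T @ block @ T.T

def inverse_bdct(coeffs):
    # Rows of T are orthogonal, so T @ T.T is diagonal; dividing by the outer
    # product of the row energies undoes the gain before the inverse multiply.
    energy = np.sum(T * T, axis=1)                 # (4, 10, 4, 10)
    return T.T @ (coeffs / np.outer(energy, energy)) @ T

block = np.arange(16, dtype=np.int64).reshape(4, 4)
print(np.allclose(inverse_bdct(forward_bdct(block)), block))   # True
```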
An SA-DCT may be performed on a non-rectangular array of pixels. In an embodiment, an SA-DCT may be computed by performing a one-dimensional version of a DCT such as a DCT-I, DCT-II, or the like against vectors representing vertical columns of pixel values in a shape of interest, followed by resulting values being grouped into horizontal vectors and subjected to a one-dimensional DCT a second time; the second DCT may result in a completed transformation of pixel values. Variations of SA-DCT may further scale and/or normalize by coefficients to correct for mean weighting defects and/or non-orthonormal defects introduced by the above transformation, quantization of outputs of the above transformation, and/or inversion of transformation outputs and/or quantized transformation outputs. Further corrections may be performed, without limitation, by preceding the above SA-DCT process by subtracting an individual mean value of a subject image region from each pixel value or a scaled version thereof, potentially in combination with one or another of the scaling processes applied before and/or after transformation, quantization, and/or inverse transformation. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various alternative or additional variations on an SA-DCT process that may be applied consistently with the above description.
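The column-then-row procedure described above can be sketched roughly as follows, assuming a boolean mask marks the pixels that belong to the non-rectangular region; the sketch omits the mean-subtraction and coefficient-scaling corrections just mentioned and is not a complete SA-DCT implementation.

```python
import numpy as np
from scipy.fftpack import dct

def sa_dct_sketch(block, mask):
    """Simplified shape-adaptive DCT: shift each column's region pixels to the
    top and transform them, then shift each resulting row to the left and
    transform again. Bookkeeping needed to invert the shifts is omitted."""
    h, w = block.shape
    col_stage = np.zeros((h, w))
    col_len = np.zeros(w, dtype=int)

    # 1) Vertical pass: 1-D DCT of length equal to each column's pixel count.
    for x in range(w):
        vals = block[mask[:, x], x].astype(float)
        col_len[x] = vals.size
        if vals.size:
            col_stage[:vals.size, x] = dct(vals, type=2, norm='ortho')

    # 2) Horizontal pass: gather the values of each row from the columns that
    #    reach it, shift them left, and apply a 1-D DCT a second time.
    out = np.zeros((h, w))
    for y in range(h):
        vals = col_stage[y, col_len > y]
        if vals.size:
            out[y, :vals.size] = dct(vals, type=2, norm='ortho')
    return out
```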
Motion compensation may include an approach to predict a video frame or a portion thereof given previous and/or future frames by accounting for motion of a camera and/or of objects in a video containing and/or represented by current, previous, and/or future frames. Motion compensation may be employed in encoding and decoding of video data for video compression, for example in encoding and decoding using the Motion Picture Experts Group (MPEG)-2 standard or the H.264 (also referred to as advanced video coding (AVC)) standard. Motion compensation may describe a picture in terms of a transformation of a reference picture to a current picture. The reference picture may be previous in time or from the future when compared to the current picture. When images can be accurately synthesized from previously transmitted and/or stored images, compression efficiency can be improved.
Block partitioning, as used in this disclosure, may refer to a method in video coding to find regions of similar motion. Some form of block partitioning can be found in video codec standards including MPEG-2, H.264 (also referred to as AVC or MPEG-4 Part 10), and H.265 (also referred to as High Efficiency Video Coding (HEVC)). In example block partitioning approaches, non-overlapping blocks of a video frame may be partitioned into rectangular sub-blocks to find block partitions that contain pixels with similar motion. This approach may work well when all pixels of a block partition have similar motion. Motion of pixels in a block may be determined relative to previously coded frames.
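As one concrete (and purely illustrative) way of determining the motion of a block relative to a previously coded frame, the following sketch performs a full-search block match over a small window using a sum-of-absolute-differences cost; the block size, search range, and function name are assumptions made for this example.

```python
import numpy as np

def full_search_mv(cur, ref, bx, by, bsize=16, search=8):
    """Return the (dx, dy) displacement in `ref` that best matches the block of
    `cur` anchored at (bx, by), minimizing the sum of absolute differences."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int64)
    best_mv, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cand = ref[y:y + bsize, x:x + bsize].astype(np.int64)
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```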
Shape-adaptive DCT and/or B-DCT may be effectively used in geometric partitioning with an adaptive number of regions. FIG. 1 is an illustration showing a non-limiting example of a residual block (e.g., current block) 100 sized 64x64 or 128x128 with geometric partitioning where there are three segments, S0, S1, and S2, with different prediction errors; although three segments are illustrated in FIG. 1 for exemplary purposes, a greater or lesser number of segments may alternatively or additionally be employed. The current block may be geometrically partitioned according to two line segments (P1P2 and P3P4), which may divide the current block into the three regions S0, S1, and S2. In this example, S0 may have a relatively high prediction error while S1 and S2 can have a relatively lower prediction error. For segment S0 (also referred to as a region), the encoder may select and use B-DCT for residual coding. For segments S1 and S2 with low prediction error, the encoder may select and use SA-DCT. The selection of residual encoding transformation can be based on a prediction error (e.g., size of the residual). Because the SA-DCT algorithm is relatively simple in terms of complexity and does not require as many computations as the B-DCT, utilizing SA-DCT for lower prediction error residual coding may improve complexity and processing performance for video encoding and decoding.
Accordingly, and still referring to FIG. 1, SA-DCT may be signaled as an additional transform choice, alongside full-block DCT, for segments with low prediction errors. What is considered low or high error may be a parameter that can be set at the encoder and may vary based on application. A choice of transformation type may be signaled in the bitstream. At a decoder, the bitstream may be parsed, and for a given current block, a residual may be decoded using a transform type signaled in the bitstream. In some implementations, a number of coefficients associated with the transform may alternatively or additionally be signaled in the bitstream.
In more detail, and continuing to refer to FIG. 1, geometric partitioning with an adaptive number of regions may include techniques for video encoding and decoding in which a rectangular block is further divided into two or more regions that may be non-rectangular. For example, FIG. 1 illustrates a non-limiting example of geometric partitioning at the pixel level with an adaptive number of regions. An example rectangular block 100 (which can have a width of M pixels and a height of N pixels, denoted as MxN pixels) may be divided along line segments P1P2 and P3P4 into three regions (S0, S1, and S2). When pixels in S0 have similar motion, a motion vector may describe the motion of all pixels in that region; the motion vector can be used to compress region S0. Similarly, when pixels in region S1 have similar motion, an associated motion vector may describe the motion of pixels in region S1. Similarly, when pixels in region S2 have similar motion, an associated motion vector may describe the motion of pixels in region S2. Such a geometric partition may be signaled to the receiver (e.g., decoder) by encoding positions P1, P2, P3, P4 and/or representations of these positions (such as, without limitation, coordinates such as polar coordinates, Cartesian coordinates, or the like, indices into predefined templates, or other characterizations of the partitions) in a video bitstream.
Still referring to FIG. 1, when encoding video data utilizing geometric partitioning at the pixel level, a line segment P1P2 (or more specifically points P1 and P2) may be determined. In order to determine the line segment P1P2 (or more specifically points P1 and P2) that best divides the block when utilizing geometric partitioning at the pixel level, the possible combinations of points P1 and P2 depend on M and N, which are the block width and height. For a block of size MxN, there are (M-1) x (N-1) x 3 possible partitions. Identifying the right partition thus can become a computationally expensive task of evaluating motion estimation for all possible partitions, which can increase the amount of time and/or processing power required to encode a video as compared to encoding using rectangular partitioning (e.g., without geometric partitioning at the pixel level). What constitutes the best or right partition can be determined according to a metric and may change from implementation to implementation. In some implementations, and still referring to FIG. 1, partitioning occurs iteratively in that a first partition can be determined (e.g., determine line P1P2 and associated regions) forming two regions, and then one of those regions is further partitioned. For example, the partitioning described with reference to FIG. 1 can be performed to partition a block into two regions. One of those regions can be further partitioned (e.g., to form new region S1 and region S2). The process can continue to perform block level geometric partitioning until a stopping criterion is reached.
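One way such a partition could be realized, assuming each line segment is extended across the block and each pixel is assigned by the side of each line it falls on, is sketched below; the sign convention and the mapping of sides to S0, S1, and S2 are assumptions made only for illustration, not conventions taken from the disclosure.

```python
import numpy as np

def line_side(p, q, xs, ys):
    # Sign of the 2-D cross product: which side of line p->q each pixel lies on.
    return np.sign((q[0] - p[0]) * (ys - p[1]) - (q[1] - p[1]) * (xs - p[0]))

def region_labels(M, N, p1, p2, p3, p4):
    """Label each pixel of an MxN block (width M, height N) as 0, 1, or 2.

    Assumes line P1P2 separates region S0 from the rest and line P3P4 splits
    the remainder into S1 and S2 -- an illustrative convention only."""
    ys, xs = np.mgrid[0:N, 0:M]
    labels = np.zeros((N, M), dtype=np.uint8)
    past_first = line_side(p1, p2, xs, ys) > 0
    second = line_side(p3, p4, xs, ys)
    labels[past_first & (second > 0)] = 1
    labels[past_first & (second <= 0)] = 2
    return labels

# Example: a 64x64 block split by two segments given as (x, y) endpoints.
print(np.unique(region_labels(64, 64, (0, 20), (63, 5), (10, 63), (63, 30))))
```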
FIG. 2 is a system block diagram illustrating an example video encoder 200 capable of SA-DCT and/or B-DCT for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding. The example video encoder 200 receives an input video 205, which can be initially segmented or divided according to a processing scheme, such as a tree-structured macro block partitioning scheme (e.g., quad-tree plus binary tree). An example of a tree-structured macro block partitioning scheme can include partitioning a picture frame into large block elements called coding tree units (CTU). In some implementations, each CTU can be further partitioned one or more times into a number of sub-blocks called coding units (CU). The final result of this partitioning can include a group of sub-blocks that can be called predictive units (PU). Transform units (TU) can also be utilized. Such a partitioning scheme can include performing geometric partitioning with an adaptive number of regions according to some aspects of the current subject matter.
With continued reference to FIG. 2, the example video encoder 200 includes an intra prediction processor 215, a motion estimation / compensation processor 220 (also referred to as an inter prediction processor) capable of supporting geometric partitioning with an adaptive number of regions, a transform /quantization processor 225, an inverse quantization / inverse transform processor 230, an in-loop filter 235, a decoded picture buffer 240, and an entropy coding processor 245. In some implementations, the motion estimation / compensation processor 220 can perform geometric partitioning. Bitstream parameters that signal geometric partitioning modes can be input to the entropy coding processor 245 for inclusion in the output bitstream 250.
In operation, and continuing to refer to FIG. 2, for each block of a frame of the input video 205, whether to process the block via intra picture prediction or using motion estimation / compensation can be determined. The block can be provided to the intra prediction processor 215 or the motion estimation / compensation processor 220. If the block is to be processed via intra prediction, the intra prediction processor 215 can perform the processing to output the predictor. If the block is to be processed via motion estimation / compensation, the motion estimation / compensation processor 220 can perform the processing including use of geometric partitioning to output the predictor.
Still referring to FIG. 2, a residual can be formed by subtracting the predictor from the input video. The residual can be received by the transform / quantization processor 225, which can determine whether the prediction error (e.g., residual size) is considered “high” or “low” error (for example, by comparing a size or error metric of the residual to a threshold). Based on the determination, the transform / quantization processor 225 can select a transform type, which can include B-DCT and SA-DCT. In some implementations, the transform / quantization processor 225 selects a transform type of B-DCT where the residual is considered to have a high error and selects a transform type of SA-DCT where the residual is considered to have a low error. Based on the selected transform type, the transform / quantization processor 225 can perform transformation processing (e.g., SA-DCT or B-DCT) to produce coefficients, which can be quantized. The quantized coefficients and any associated signaling information (which can include the selected transform type and/or the number of coefficients used) can be provided to the entropy coding processor 245 for entropy encoding and inclusion in the output bitstream 250. The entropy coding processor 245 can support encoding of signaling information related to SA-DCT for geometric partitioning with an adaptive number of regions. In addition, the quantized coefficients can be provided to the inverse quantization / inverse transformation processor 230, which can reproduce pixels, which can be combined with the predictor and processed by the in-loop filter 235, the output of which is stored in the decoded picture buffer 240 for use by the motion estimation / compensation processor 220 that is capable of supporting geometric partitioning with an adaptive number of regions.
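A minimal sketch of that selection rule is given below; the error measure, threshold, and names are assumptions, and an actual encoder might instead make the choice through a rate-distortion search.

```python
import numpy as np

def select_transform(residual, mask, threshold=1000.0):
    """Pick a transform type for one geometrically partitioned region.

    residual: 2-D residual block; mask: boolean membership of the region.
    Uses the sum of squared residual samples in the region as the
    (illustrative) prediction-error measure."""
    error = float(np.sum(residual[mask].astype(np.float64) ** 2))
    # High prediction error -> B-DCT; low prediction error -> SA-DCT.
    return "B-DCT" if error > threshold else "SA-DCT"
```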
Referring now to FIG. 3, a process flow diagram illustrating an example process 300 of encoding a video with SA-DCT for geometric partitioning with an adaptive number of regions that can improve complexity and processing performance for video encoding and decoding is illustrated. At step 310, a video frame may undergo initial block segmentation, for example, using a tree-structured macro block partitioning scheme that may include partitioning a picture frame into CTUs and CUs. At 320, a block may be selected for geometric partitioning. Selection may include identifying according to a metric rule that a block is to be processed according to a geometric partitioning mode. At step 330, a selected block may be partitioned into three or more non-rectangular regions according to geometric partitioning mode.
At step 340, and still referring to FIG. 3, a transform type (also referred to as a transformation type) for each geometrically partitioned region may be determined. This may include determining whether a prediction error (e.g., residual size) is considered “high” or “low” error (for example, by comparing a size or error metric of the residual to a threshold). Based on the determination, a transform type may be selected, for instance using a quadtree plus binary decision tree process as described below, which transform type may include without limitation B-DCT or SA-DCT. In some implementations, a transform type of B-DCT is selected where the residual is considered to have a high error, and a transform type of SA-DCT is selected where the residual is considered to have a low error. Based on the selected transform type, transformation processing (e.g., SA-DCT or B-DCT) may be performed to produce coefficients, which may be quantized.
At step 350, and continuing to refer to FIG. 3, a determined transform type may be signaled in the bitstream. The transformed and quantized residual can be included in the bitstream. In some implementations, the number of transform coefficients can be signaled in the bitstream.
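As a hypothetical illustration of the signaling in steps 340 and 350, the per-region information could be serialized as a transform-type flag followed by an optional coefficient count, as below; every field width, order, and name here is an assumption, not a normative syntax.

```python
def write_region_signaling(bits, transform_type, num_coeffs=None):
    """Append hypothetical per-region signaling to a list of bits (0/1 ints)."""
    bits.append(0 if transform_type == "B-DCT" else 1)      # transform-type flag
    if num_coeffs is None:
        bits.append(0)                                       # no coefficient count
    else:
        bits.append(1)                                       # coefficient count follows
        bits.extend((num_coeffs >> i) & 1 for i in range(7, -1, -1))  # 8-bit count
    return bits

bits = []
write_region_signaling(bits, "SA-DCT", num_coeffs=12)
write_region_signaling(bits, "B-DCT")
print(bits)
```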
FIG. 4 is a system block diagram illustrating a non-limiting example of a decoder 400 capable of decoding a bitstream 470 using DCT, including without limitation SA-DCT and/or B-DCT, for geometric partitioning with an adaptive number of regions, which may improve complexity and processing performance for video encoding and decoding. Decoder 400 includes an entropy decoder processor 410, an inverse quantization and inverse transformation processor 420, a deblocking filter 430, a frame buffer 440, a motion compensation processor 450, and an intra prediction processor 460. In some implementations, bitstream 470 includes parameters that signal a geometric partitioning mode and transformation type. In some implementations, bitstream 470 includes parameters that signal the number of transform coefficients. The motion compensation processor 450 can reconstruct pixel information using geometric partitioning as described herein.
In operation, and still referring to FIG. 4, bitstream 470 may be received by the decoder 400 and input to entropy decoder processor 410, which may entropy decode the bitstream into quantized coefficients. Quantized coefficients may be provided to inverse quantization and inverse transformation processor 420, which may determine a coding transformation type (e.g., B-DCT or SA-DCT) and perform inverse quantization and inverse transformation according to the determined coding transformation type to create a residual signal. In some implementations, inverse quantization and inverse transformation processor 420 may determine a number of transform coefficients and perform inverse transformation according to the determined number of transform coefficients.
Still referring to FIG. 4, residual signal may be added to an output of motion
compensation processor 450 or intra prediction processor 460 according to a processing mode. Output of a motion compensation processor 450 and intra prediction processor 460 may include a block prediction based on a previously decoded block. A sum of the prediction and residual may be processed by deblocking filter 430 and stored in a frame buffer 440. For a given block, (e.g., CU or PU), when a bitstream 470 signals that a partitioning mode is block level geometric partitioning, motion compensation processor 450 may construct a prediction based on the geometric partition approach described herein.
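Putting the pieces of that decoding path together, a block could be reconstructed region by region roughly as sketched below; the data layout and the inverse-transform callables passed in are assumptions standing in for the corresponding decoder stages, not a real codec API.

```python
import numpy as np

def reconstruct_block(regions, block_shape, inv_bdct, inv_sa_dct):
    """Combine per-region predictions and residuals into one block.

    regions: iterable of dicts with keys 'mask' (boolean array), 'predictor'
    (array), 'coeffs' (de-quantized coefficients), and 'ttype' (the transform
    type parsed from the bitstream for that region). inv_bdct / inv_sa_dct are
    inverse-transform callables; all of these names are illustrative only."""
    out = np.zeros(block_shape, dtype=np.float64)
    for r in regions:
        if r['ttype'] == "B-DCT":
            residual = inv_bdct(r['coeffs'])
        else:
            residual = inv_sa_dct(r['coeffs'], r['mask'])
        out[r['mask']] = r['predictor'][r['mask']] + residual[r['mask']]
    return out
```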
FIG. 5 is a process flow diagram illustrating an example process 500 of decoding a bitstream using SA-DCT for geometric partitioning with an adaptive number of regions, which can improve complexity and processing performance for video encoding and decoding. At step 510, a bitstream is received, which may include a current block (e.g., CTU, CU, PU). Receiving may include extracting and/or parsing the current block and associated signaling information from the bitstream. The decoder may extract or determine one or more parameters that characterize the geometric partitioning. These parameters may include, for example, indices of a start and end of a line segment (e.g., P1, P2, P3, P4); extraction or determining may include identifying and retrieving the parameters from the bitstream (e.g., parsing the bitstream).
At step 520, and still referring to FIG. 5, a first region, a second region, and a third region of the current block may be determined according to a geometric partitioning mode.
Determining may include determining whether geometric partitioning mode is enabled (e.g., true) for the current block. If geometric partitioning mode is not enabled (e.g., false), decoder may process current block using an alternative partitioning mode. If geometric partitioning mode is enabled (e.g., true), three or more regions may be determined and/or processed.
At optional step 530, and continuing to refer to FIG. 5, a coding transformation type may be determined. A coding transformation type may be signaled in bitstream. For example, bitstream may be parsed to determine a coding transformation type, which may specify B-DCT or SA-DCT. Determined coding transformation type may be for decoding a first region, a second region, and/or a third region.
At 540, and still referring to FIG. 5, a current block may be decoded. Decoding of current block may include using a determined transform type for inverse transformation for each of a first region, a second region, and/or a third region. Decoding may include determining an associated motion information for each region and according to geometric partitioning mode.
Although a few variations have been described in detail above, other modifications or additions are possible. For example, the geometric partitioning can be signaled in the bitstream based on rate-distortion decisions in the encoder. The coding can be based on a combination of regular pre-defined partitions (e.g., templates), temporal and spatial prediction of the
partitioning, and additional offsets. Each geometrically partitioned region can utilize motion compensated prediction or intra-prediction. The boundary of the predicted regions can be smoothed before the residual is added.
In some implementations, a quadtree plus binary decision tree (QTBT) may be implemented. In QTBT, at the Coding Tree Unit level, partition parameters of QTBT are dynamically derived to adapt to the local characteristics without transmitting any overhead. Subsequently, at the Coding Unit level, a joint-classifier decision tree structure may eliminate unnecessary iterations and control the risk of false prediction. In some implementations, geometric partitioning with an adaptive number of regions may be available as an additional partitioning option available at every leaf node of the QTBT.
In some implementations, a decoder may include a partition processor that generates geometric partition for a current block and provides all partition-related information for dependent processes. Partition processor may directly influence motion compensation as it may be performed segment-wise in case a block is geometrically partitioned. Further, partition processor may provide shape information to intra-prediction processor and transform coding processor.
In some implementations, additional syntax elements may be signaled at different hierarchy levels of the bitstream. For enabling geometric partitioning with an adaptive number of regions for an entire sequence, an enable flag may be coded in a Sequence Parameter Set (SPS). Further, a CTU flag may be coded at the coding tree unit (CTU) level to indicate whether any coding units (CU) use geometric partitioning with an adaptive number of regions. A CU flag may be coded to indicate whether a current coding unit utilizes geometric partitioning with an adaptive number of regions. Parameters which specify a line segment on the block may be coded.
For each region, a flag may be decoded, which may specify whether a current region is inter- or intra-predicted.
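Gathering the syntax elements listed above into one place, a hypothetical parsing order might look like the following; every flag name, field width, and the fixed count of three regions are assumptions made purely to illustrate the hierarchy, not syntax taken from any standard.

```python
def parse_geo_partition_syntax(read_bit, read_uint):
    """Hypothetical reader for the flag hierarchy described above.

    read_bit() returns a single flag and read_uint(n) an n-bit unsigned value;
    both are assumed bitstream-reader callbacks, not a real codec API."""
    syntax = {"sps_geo_enable": read_bit()}            # Sequence Parameter Set level
    if not syntax["sps_geo_enable"]:
        return syntax
    syntax["ctu_geo_flag"] = read_bit()                # any CU in this CTU uses it?
    if not syntax["ctu_geo_flag"]:
        return syntax
    syntax["cu_geo_flag"] = read_bit()                 # does this CU use it?
    if syntax["cu_geo_flag"]:
        # Line-segment endpoints (e.g., P1..P4) as coordinates inside the block.
        syntax["segment_points"] = [(read_uint(7), read_uint(7)) for _ in range(4)]
        # One inter/intra flag per resulting region (three regions assumed here).
        syntax["region_is_inter"] = [read_bit() for _ in range(3)]
    return syntax
```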
In some implementations, a minimum region size may be specified.
The subject matter described herein provides many technical advantages. For example, some implementations of the current subject matter can provide for partitioning of blocks that reduces complexity while increasing compression efficiency. In some implementations, blocking artifacts at object boundaries can be reduced.
It is to be noted that any one or more of the aspects and embodiments described herein may be conveniently implemented using digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof, as realized and/or implemented in one or more machines (e.g., one or more computing devices that are utilized as a user computing device for an electronic document, one or more server devices, such as a document server, etc.) programmed according to the teachings of the present specification, as will be apparent to those of ordinary skill in the computer art. These various aspects or features may include implementation in one or more computer programs and/or software that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. Appropriate software coding may readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those of ordinary skill in the software art. Aspects and implementations discussed above employing software and/or software modules may also include appropriate hardware for assisting in the implementation of the machine executable instructions of the software and/or software module.
Such software may be a computer program product that employs a machine-readable storage medium. A machine-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a machine-readable storage medium include, but are not limited to, a magnetic disk, an optical disc (e.g., CD, CD-R, DVD, DVD-R, etc.), a magneto-optical disk, a read-only memory “ROM” device, a random access memory “RAM” device, a magnetic card, an optical card, a solid-state memory device, an EPROM, an EEPROM,
Programmable Logic Devices (PLDs), and/or any combinations thereof. A machine-readable medium, as used herein, is intended to include a single medium as well as a collection of physically separate media, such as, for example, a collection of compact discs or one or more hard disk drives in combination with a computer memory. As used herein, a machine-readable storage medium does not include transitory forms of signal transmission.
Such software may also include information (e.g., data) carried as a data signal on a data carrier, such as a carrier wave. For example, machine-executable information may be included as a data-carrying signal embodied in a data carrier in which the signal encodes a sequence of instruction, or portion thereof, for execution by a machine (e.g, a computing device) and any related information (e.g, data structures and data) that causes the machine to perform any one of the methodologies and/or embodiments described herein.
Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g, a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof. In one example, a computing device may include and/or be included in a kiosk.
FIG. 6 shows a diagrammatic representation of one embodiment of a computing device in the exemplary form of a computer system 600 within which a set of instructions for causing a control system to perform any one or more of the aspects and/or methodologies of the present disclosure may be executed. It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure. Computer system 600 includes a processor 604 and a memory 608 that communicate with each other, and with other components, via a bus 612. Bus 612 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
Memory 608 may include various components (e.g., machine-readable media) including, but not limited to, a random-access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system 616 (BIOS), including basic routines that help to transfer information between elements within computer system 600, such as during start-up, may be stored in memory 608. Memory 608 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 620 embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory 608 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
Computer system 600 may also include a storage device 624. Examples of a storage device (e.g, storage device 624) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device 624 may be connected to bus 612 by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device 624 (or one or more components thereof) may be removably interfaced with computer system 600 (e.g, via an external port connector (not shown)). Particularly, storage device 624 and an associated machine-readable medium 628 may provide nonvolatile and/or volatile storage of machine- readable instructions, data structures, program modules, and/or other data for computer system 600. In one example, software 620 may reside, completely or partially, within machine-readable medium 628. In another example, software 620 may reside, completely or partially, within processor 604.
Computer system 600 may also include an input device 632. In one example, a user of computer system 600 may enter commands and/or other information into computer system 600 via input device 632. Examples of an input device 632 include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. Input device 632 may be interfaced to bus 612 via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus 612, and any combinations thereof. Input device 632 may include a touch screen interface that may be a part of or separate from display 636, discussed further below. Input device 632 may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above.
A user may also input commands and/or other information to computer system 600 via storage device 624 (e.g., a removable disk drive, a flash drive, etc.) and/or network interface device 640. A network interface device, such as network interface device 640, may be utilized for connecting computer system 600 to one or more of a variety of networks, such as network 644, and one or more remote devices 648 connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g, a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g, the Internet, an enterprise network), a local area network (e.g, a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g, a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network, such as network 644, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g, data, software 620, etc.) may be communicated to and/or from computer system 600 via network interface device 640.
Computer system 600 may further include a video display adapter 652 for
communicating a displayable image to a display device, such as display device 636. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter 652 and display device 636 may be utilized in combination with processor 604 to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system 600 may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus 612 via a peripheral interface 656. Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments, what has been described herein is merely illustrative of the application of the principles of the present invention. Additionally, although particular methods herein may be illustrated and/or described as being performed in a specific order, the ordering is highly variable within ordinary skill to achieve embodiments as disclosed herein. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
In the descriptions above and in the claims, phrases such as“at least one of’ or“one or more of’ may occur followed by a conjunctive list of elements or features. The term“and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases“at least one of A and B;”“one or more of A and B;” and“A and/or B” are each intended to mean“A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases“at least one of A, B, and C;”“one or more of A,
B, and C;” and“A, B, and/or C” are each intended to mean“A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term“based on,” above and in the claims is intended to mean,“based at least in part on,” such that an unrecited feature or element is also permissible.
The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and sub-combinations of the disclosed features and/or combinations and sub-combinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A decoder, the decoder comprising circuitry configured to:
receive a bitstream;
determine a first region, a second region, and a third region of a current block according to a geometric partitioning mode; and
decode the current block using an inverse discrete cosine transformation for each of the first region, the second region, and the third region.
2. The decoder of claim 1, wherein the current block has a size of 128 x 128 or 64 x 64.
3. The decoder of claim 1, wherein a number of coefficients for inverse transformation of the first region, the second region, and/or the third region is signaled in the bitstream.
4. The decoder of claim 1, further configured to:
determine whether the geometric partitioning mode is enabled;
determine a first line segment for the current block; and
determine a second line segment for the current block;
wherein:
the decoding of the current block includes reconstructing pixel data using the first line segment and the second line segment; and
the first line segment and the second line segment partition the current block into the first region, the second region, and the third region.
5. The decoder of claim 4, wherein the first line segment characterizes the first region and the second line segment characterizes the second region and the third region.
6. The decoder of claim 4, wherein reconstructing pixel data includes computing a predictor for the first region using an associated motion vector contained in the bitstream.
7. The decoder of claim 1, further comprising:
an entropy decoder processor configured to receive the bitstream and decode the
bitstream into quantized coefficients;
an inverse quantization and inverse transformation processor configured to process the quantized coefficients including performing an inverse discrete cosine transformation according to the determined coding transformation type;
a deblocking filter;
a frame buffer; and an intra prediction processor.
8. The decoder of claim 1, wherein the bitstream includes a parameter indicating whether geometric partitioning mode is enabled for the current block.
9. The decoder of claim 1, wherein the current block forms part of a quadtree plus binary decision tree.
10. The decoder of claim 1, wherein the current block is a non-leaf node of the quadtree plus binary decision tree.
11. The decoder of claim 1, wherein the current block is a coding tree unit or a coding unit.
12. The decoder of claim 1, wherein the first region is a coding unit or a prediction unit.
13. A decoder, the decoder comprising circuitry configured to:
receive a bitstream;
determine a first region, a second region, and a third region of a current block
according to a geometric partitioning mode;
determine, from a signal contained in the bitstream, a coding transformation type to
decode each of the first region, the second region, and/or the third region, the coding transformation type characterizing at least an inverse block discrete cosine transformation and an inverse shape adaptive discrete cosine transformation; and decode the current block, the decoding of the current block including using the
determined transformation type for inverse transformation for each of the first region, the second region and/or the third region.
14. The decoder of claim 13, wherein the current block has a size of 128 x 128 or 64 x 64.
15. The decoder of claim 13, wherein a number of coefficients for inverse transformation of the first region, the second region, and/or the third region is signaled in the bitstream.
16. The decoder of claim 13, further configured to:
determine whether the geometric partitioning mode is enabled;
determine a first line segment for the current block; and
determine a second line segment for the current block;
wherein:
the decoding of the current block includes reconstructing pixel data using the first line segment and the second line segment; and the first line segment and the second line segment partition the current block into the first region, the second region, and the third region.
17. The decoder of claim 16, wherein the first line segment characterizes the first region and the second line segment characterizes the second region and the third region.
18. The decoder of claim 16, wherein reconstructing pixel data includes computing a
predictor for the first region using an associated motion vector contained in the bitstream.
19. The decoder of claim 13, further comprising:
an entropy decoder processor configured to receive the bitstream and decode the
bitstream into quantized coefficients;
an inverse quantization and inverse transformation processor configured to process the quantized coefficients including performing an inverse discrete cosine transformation according to the determined coding transformation type;
a deblocking filter;
a frame buffer; and
an intra prediction processor.
20. The decoder of claim 13, wherein the bitstream includes a parameter indicating whether geometric partitioning mode is enabled for the current block.
21. The decoder of claim 13, wherein the current block forms part of a quadtree plus binary decision tree.
22. The decoder of claim 13, wherein the current block is a non-leaf node of the quadtree plus binary decision tree.
23. The decoder of claim 13, wherein the current block is a coding tree unit or a coding unit.
24. The decoder of claim 13, wherein the first region is a coding unit or a prediction unit.
25. A method comprising:
receiving, by a decoder, a bitstream;
determining a first region, a second region, and a third region of a current block
according to a geometric partitioning mode;
determining, from a signal contained in the bitstream, a coding transformation type to decode the first region, the second region, and/or the third region, the coding transformation type characterizing at least an inverse block discrete cosine transformation or an inverse shape adaptive discrete cosine transformation; and decoding the current block, the decoding of the current block including using the
determined transformation type for inverse transformation for each of the first region, the second region, and/or the third region.
26. The method of claim 25, wherein the current block has a size of 128 x 128 or 64 x 64.
27. The method of claim 25, wherein a number of coefficients for inverse transformation of the first region, the second region, and/or the third region is signaled in the bitstream.
28. The method of claim 25, further comprising:
determining, by the decoder, whether the geometric partitioning mode is enabled;
determining, by the decoder, a first line segment for the current block; and
determining, by the decoder, a second line segment for the current block;
wherein:
the decoding of the current block includes reconstructing pixel data using the first line segment and the second line segment; and
the first line segment and the second line segment partition the current block into the first region, the second region, and the third region.
29. The method of claim 28, wherein the first line segment characterizes the first region and the second line segment characterizes the second region and the third region.
30. The method of claim 28, wherein reconstructing pixel data includes computing a
predictor for the first region using an associated motion vector contained in the bitstream.
31. The method of claim 25, wherein the decoder comprises:
an entropy decoder processor configured to receive the bitstream and decode the
bitstream into quantized coefficients;
an inverse quantization and inverse transformation processor configured to process the quantized coefficients including performing an inverse discrete cosine transformation according to the determined coding transformation type;
a deblocking filter;
a frame buffer; and
an intra prediction processor.
32. The method of claim 25, wherein the bitstream includes a parameter indicating whether block level geometric partitioning mode is enabled for the block.
33. The method of claim 25, wherein the current block forms part of a quadtree plus binary decision tree.
34. The method of claim 25, wherein the current block is a non-leaf node of the quadtree plus binary decision tree.
35. The method of claim 25, wherein the current block is a coding tree unit or a coding unit.
36. The method of claim 25, wherein the first region is a coding unit or a prediction unit.
PCT/US2020/015401 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions WO2020159982A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
JP2021543476A JP7482536B2 (en) 2019-01-28 2020-01-28 Shape-adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions.
MX2021009030A MX2021009030A (en) 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions.
BR112021014671-7A BR112021014671A2 (en) 2019-01-28 2020-01-28 DISCRETE TRANSFORM FROM ADAPTIVE FORMAT COSINE TO GEOMETRIC PARTITIONING WITH AN ADAPTIVE NUMBER OF REGIONS
KR1020217027274A KR20210118166A (en) 2019-01-28 2020-01-28 Shape Adaptive Discrete Cosine Transform for Geometric Partitioning with Adaptive Number of Regions
EP20749417.0A EP3918784A4 (en) 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
SG11202107974YA SG11202107974YA (en) 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
CN202080022269.3A CN113597757A (en) 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform with region number adaptive geometric partitioning
US17/386,126 US12075046B2 (en) 2019-01-28 2021-07-27 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
JP2024069016A JP2024095835A (en) 2019-01-28 2024-04-22 Shape adaptive discrete cosine transform for geometric partitioning with adaptive number of regions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962797799P 2019-01-28 2019-01-28
US62/797,799 2019-01-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/386,126 Continuation US12075046B2 (en) 2019-01-28 2021-07-27 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions

Publications (1)

Publication Number Publication Date
WO2020159982A1 true WO2020159982A1 (en) 2020-08-06

Family

ID=71840394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/015401 WO2020159982A1 (en) 2019-01-28 2020-01-28 Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions

Country Status (9)

Country Link
US (1) US12075046B2 (en)
EP (1) EP3918784A4 (en)
JP (2) JP7482536B2 (en)
KR (1) KR20210118166A (en)
CN (1) CN113597757A (en)
BR (1) BR112021014671A2 (en)
MX (1) MX2021009030A (en)
SG (1) SG11202107974YA (en)
WO (1) WO2020159982A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022218256A1 (en) * 2021-04-12 2022-10-20 Alibaba (China) Co., Ltd. Method, apparatus, and non-transitory computer-readable storage medium for motion vector refinement for geometric partition mode

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10733766B2 (en) * 2016-10-19 2020-08-04 Google, Llc Methods and apparatus to encode and/or decode normals of geometric representations of surfaces
US11323748B2 (en) * 2018-12-19 2022-05-03 Qualcomm Incorporated Tree-based transform unit (TU) partition for video coding
WO2023195762A1 (en) * 2022-04-05 2023-10-12 한국전자통신연구원 Method, apparatus, and recording medium for image encoding/decoding
WO2024119404A1 (en) * 2022-12-07 2024-06-13 Intel Corporation Visual quality enhancement in cloud gaming by 3d information-based segmentation and per-region rate distortion optimization

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140341290A1 (en) * 2011-11-11 2014-11-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Effective wedgelet partition coding using spatial prediction
US20150271517A1 (en) * 2014-03-21 2015-09-24 Qualcomm Incorporated Search region determination for intra block copy in video coding

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19719383A1 (en) * 1997-05-07 1998-11-19 Siemens Ag Method and device for coding and decoding a digitized image
WO2003001787A2 (en) 2001-06-21 2003-01-03 Walker Digital, Llc Methods and systems for documenting a player's experience in a casino environment
US6788301B2 (en) 2001-10-18 2004-09-07 Hewlett-Packard Development Company, L.P. Active pixel determination for line generation in regionalized rasterizer displays
JP4313710B2 (en) * 2004-03-25 2009-08-12 パナソニック株式会社 Image encoding method and image decoding method
US7742636B2 (en) 2006-01-26 2010-06-22 Nethra Imaging Inc. Method and apparatus for scaling down a bayer domain image
CN101502119B (en) 2006-08-02 2012-05-23 汤姆逊许可公司 Adaptive geometric partitioning for video decoding
US8681855B2 (en) 2007-10-12 2014-03-25 Thomson Licensing Method and apparatus for video encoding and decoding geometrically partitioned bi-predictive mode partitions
KR101740039B1 (en) 2009-06-26 2017-05-25 톰슨 라이센싱 Methods and apparatus for video encoding and decoding using adaptive geometric partitioning
US8879632B2 (en) * 2010-02-18 2014-11-04 Qualcomm Incorporated Fixed point implementation for geometric motion partitioning
JP2013524730A (en) 2010-04-12 2013-06-17 クゥアルコム・インコーポレイテッド Fixed point implementation for geometric motion segmentation
EP2421266A1 (en) 2010-08-19 2012-02-22 Thomson Licensing Method for reconstructing a current block of an image and corresponding encoding method, corresponding devices as well as storage medium carrying an images encoded in a bit stream
US20120147961A1 (en) 2010-12-09 2012-06-14 Qualcomm Incorporated Use of motion vectors in evaluating geometric partitioning modes
US9747255B2 (en) * 2011-05-13 2017-08-29 Texas Instruments Incorporated Inverse transformation using pruning for video coding
US20130107962A1 (en) 2011-10-26 2013-05-02 Intellectual Discovery Co., Ltd. Scalable video coding method and apparatus using inter prediction mode
US20140247876A1 (en) 2011-10-31 2014-09-04 Mitsubishi Electric Corporation Video encoding device, video decoding device, video encoding method, and video decoding method
JP5986639B2 (en) * 2011-11-03 2016-09-06 トムソン ライセンシングThomson Licensing Video coding and decoding based on image refinement
EP2942961A1 (en) 2011-11-23 2015-11-11 HUMAX Holdings Co., Ltd. Methods for encoding/decoding of video using common merging candidate set of asymmetric partitions
US20130287109A1 (en) * 2012-04-29 2013-10-31 Qualcomm Incorporated Inter-layer prediction through texture segmentation for video coding
WO2013189257A1 (en) 2012-06-20 2013-12-27 Mediatek Inc. Method and apparatus of bi-directional prediction for scalable video coding
CN108712652A (en) * 2012-06-29 2018-10-26 韩国电子通信研究院 Method for video coding and computer-readable medium
KR101677406B1 (en) * 2012-11-13 2016-11-29 인텔 코포레이션 Video codec architecture for next generation video
US9986236B1 (en) 2013-11-19 2018-05-29 Google Llc Method and apparatus for encoding a block using a partitioned block and weighted prediction values
US10042887B2 (en) 2014-12-05 2018-08-07 International Business Machines Corporation Query optimization with zone map selectivity modeling
WO2016090568A1 (en) 2014-12-10 2016-06-16 Mediatek Singapore Pte. Ltd. Binary tree block partitioning structure
KR20180085714A (en) 2015-12-17 2018-07-27 삼성전자주식회사 Video decoding method and video decoding apparatus using merge candidate list
CN109416718B (en) * 2015-12-24 2023-05-12 英特尔公司 Trusted deployment of application containers in cloud data centers
WO2018034373A1 (en) 2016-08-19 2018-02-22 엘지전자(주) Image processing method and apparatus therefor
US10116957B2 (en) 2016-09-15 2018-10-30 Google Inc. Dual filter type for motion compensated prediction in video coding
WO2018141416A1 (en) * 2017-02-06 2018-08-09 Huawei Technologies Co., Ltd. Video encoder and decoder for predictive partitioning
WO2019102888A1 (en) * 2017-11-24 2019-05-31 ソニー株式会社 Image processing device and method
JP7036123B2 (en) * 2017-12-05 2022-03-15 株式会社ソシオネクスト Coding method, decoding method, coding device, decoding device, coding program and decoding program
EP3811611A4 (en) * 2018-06-22 2022-06-15 OP Solutions, LLC Block level geometric partitioning
CN117499638A (en) 2018-06-27 2024-02-02 数字洞察力有限公司 Method of encoding/decoding image and method of transmitting bitstream
MX2021003854A (en) 2018-10-01 2021-05-27 Op Solutions Llc Methods and systems of exponential partitioning.
WO2020116402A1 (en) 2018-12-04 2020-06-11 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method, and decoding method
SG11202108103WA (en) * 2019-01-28 2021-08-30 Op Solutions Llc Inter prediction in geometric partitioning with an adaptive number of regions
MX2021009028A (en) 2019-01-28 2021-10-13 Op Solutions Llc Inter prediction in exponential partitioning.

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140341290A1 (en) * 2011-11-11 2014-11-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Effective wedgelet partition coding using spatial prediction
US20150271517A1 (en) * 2014-03-21 2015-09-24 Qualcomm Incorporated Search region determination for intra block copy in video coding

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3918784A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022218256A1 (en) * 2021-04-12 2022-10-20 Alibaba (China) Co., Ltd. Method, apparatus, and non-transitory computer-readable storage medium for motion vector refinement for geometric partition mode
US11876973B2 (en) 2021-04-12 2024-01-16 Alibaba (China) Co., Ltd. Method, apparatus, and non-transitory computer-readable storage medium for motion vector refinement for geometric partition mode

Also Published As

Publication number Publication date
US12075046B2 (en) 2024-08-27
US20210360246A1 (en) 2021-11-18
JP2022524916A (en) 2022-05-11
CN113597757A (en) 2021-11-02
JP7482536B2 (en) 2024-05-14
MX2021009030A (en) 2021-10-13
SG11202107974YA (en) 2021-08-30
EP3918784A4 (en) 2022-04-13
JP2024095835A (en) 2024-07-10
BR112021014671A2 (en) 2021-09-28
EP3918784A1 (en) 2021-12-08
KR20210118166A (en) 2021-09-29

Similar Documents

Publication Publication Date Title
US12075046B2 (en) Shape adaptive discrete cosine transform for geometric partitioning with an adaptive number of regions
US20210360271A1 (en) Inter prediction in exponential partitioning
US11259014B2 (en) Inter prediction in geometric partitioning with an adaptive number of regions
EP3959883A1 (en) Global motion constrained motion vector in inter prediction
WO2020219940A1 (en) Global motion for merge mode candidates in inter prediction
US20230239464A1 (en) Video processing method with partial picture replacement
EP3959887A1 (en) Candidates in frames with global motion
EP3959889A1 (en) Adaptive motion vector prediction candidates in frames with global motion
WO2020219948A1 (en) Selective motion vector prediction candidates in frames with global motion
WO2020219961A1 (en) Global motion models for motion vector inter prediction
US11265566B2 (en) Signaling of global motion relative to available reference frames
US11825075B2 (en) Online and offline selection of extended long term reference picture retention
WO2020159993A1 (en) Explicit signaling of extended long term reference picture retention

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20749417

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021543476

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112021014671

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 20217027274

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020749417

Country of ref document: EP

Effective date: 20210830

ENP Entry into the national phase

Ref document number: 112021014671

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20210726