WO2021136349A1 - Device and method for coding video data - Google Patents


Info

Publication number
WO2021136349A1
WO2021136349A1 (PCT/CN2020/141275)
Authority
WO
WIPO (PCT)
Prior art keywords
block
value
ratio
equal
width
Prior art date
Application number
PCT/CN2020/141275
Other languages
French (fr)
Inventor
Yuchiao YANG
Chihyu TENG
Original Assignee
FG Innovation Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FG Innovation Company Limited filed Critical FG Innovation Company Limited
Priority to KR1020227025262A priority Critical patent/KR20220113533A/en
Priority to JP2022528269A priority patent/JP7499854B2/en
Priority to EP20911086.5A priority patent/EP4055826A4/en
Priority to CN202080088003.9A priority patent/CN114902663A/en
Priority to MX2022007973A priority patent/MX2022007973A/en
Publication of WO2021136349A1 publication Critical patent/WO2021136349A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/176: characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N 19/184: characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N 19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N 19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present disclosure is generally related to video coding, and specifically, to techniques for determining, based on a ratio value between a width and a height of a block unit in an image frame, whether the block unit is predicted by a geometric partitioning mode.
  • Geometric partitioning mode is a coding tool in a video coding method.
  • an encoder may select two of a plurality of merge candidates for predicting a block unit in an image frame and provide two merge indices into a bitstream for a decoder to recognize the selected merge candidates.
  • the encoder and the decoder may split the block unit in the image frame along a partition line to generate a plurality of sub-blocks when the geometric partitioning mode is applied on the block unit. Some of the sub-blocks may be predicted based on only one of the two merge candidates, and other of the sub-blocks may be predicted based on both of the two merge candidates. Therefore, the encoder needs to select a prediction result of the block unit from a plurality of candidate results including a geometric result when the encoder tries a plurality of prediction modes including the geometric partitioning mode for predicting the block unit.
  • it may be too complex for the encoder to predict the block unit in the geometric partitioning mode to generate the geometric result, because doing so requires selecting the partition line from a plurality of candidate lines and selecting the two merge candidates from the plurality of merge candidates. Therefore, in order to decrease the complexity of the encoding process, it may be important for the encoder to restrict the usage of the geometric partitioning mode for predicting the block unit.
  • the decoder also needs to know how to determine that the geometric partitioning mode is disabled.
  • the present disclosure is directed to a device and method for determining whether the geometric partitioning mode is applied on a block unit based on a comparison between a width and a height of the block unit.
  • a method for decoding a bitstream and an electronic device for performing the method comprises receiving an image frame of the bitstream; determining a block unit having a block width and a block height from the received image frame; determining whether a geometric partitioning mode is disabled for the block unit based on a comparison between the block width and the block height, wherein the block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; generating a predicted block by predicting the block unit based on a prediction mode different from the geometric partitioning mode when the geometric partitioning mode is disabled for the block unit; and reconstructing the block unit based on the predicted block.
  • a method for decoding a bitstream and an electronic device for performing the method comprises receiving an image frame of the bitstream; determining a block unit having a block width and a block height from the received image frame; determining a comparison between the block width and the block height; generating a predicted block by predicting the block unit based on a first prediction mode different from a geometric partitioning mode when the comparison indicates that a ratio value between the block width and the block height is greater than or equal to a threshold value, wherein the block unit is divided to generate a plurality of sub-blocks each predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; and reconstructing the block unit based on the predicted block.
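  • for illustration only, the width-to-height check summarized above can be sketched as a small decision function; the names is_gpm_disabled and threshold are hypothetical and do not come from the disclosure or any standard text.

```python
# Hypothetical sketch of the decoder-side decision described above; names are
# illustrative, not taken from the disclosure or any standard text.
def is_gpm_disabled(block_width: int, block_height: int, threshold: int) -> bool:
    """Return True when the geometric partitioning mode is disabled because the
    ratio between the longer and the shorter block side reaches the threshold."""
    longer = max(block_width, block_height)
    shorter = min(block_width, block_height)
    ratio = longer / shorter          # ratio value R between width and height
    return ratio >= threshold         # R >= T  ->  disable the geometric mode

# Example: a 64x8 block is disabled when T = 8, while a 32x16 block is not.
assert is_gpm_disabled(64, 8, 8) is True
assert is_gpm_disabled(32, 16, 8) is False
```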
  • FIG. 1 illustrates a block diagram of a system configured to encode and decode video data according to an example implementation of the present disclosure.
  • FIG. 2 illustrates a block diagram of the decoder module of the second electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure.
  • FIG. 3 illustrates a flowchart of a method for encoding and decoding video data by an electronic device according to an example implementation of the present disclosure.
  • FIG. 4A-FIG. 4C are schematic illustrations of a block unit having different block sizes, according to example implementations of the present disclosure.
  • FIG. 5A and FIG. 5B are schematic illustrations of a block unit having different sub-blocks separated based on a partition line, according to example implementations of the present disclosure.
  • FIG. 6 illustrates a block diagram of the encoder module of the first electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure.
  • the phrases “in one implementation” or “in some implementations” used in the present disclosure may refer to one or more of the same or different implementations.
  • the term “coupled” is defined as connected, whether directly or indirectly, through intervening components and is not necessarily limited to physical connections.
  • the term “comprising” means “including, but not necessarily limited to” and specifically indicates open-ended inclusion or membership in the so-described combination, group, series and the equivalent.
  • any disclosed coding function (s) or algorithm (s) described in the present disclosure may be implemented by hardware, software or a combination of software and hardware.
  • Disclosed functions may correspond to modules that are software, hardware, firmware, or any combination thereof.
  • a software implementation may include a program having computer executable instructions stored on computer readable medium such as memory or other type of storage devices.
  • one or more microprocessors or general-purpose computers with communication processing capability may be programmed with executable instructions and perform the disclosed function (s) or algorithm (s) .
  • the microprocessors or general-purpose computers may be formed of application-specific integrated circuits (ASICs), programmable logic arrays, and/or one or more digital signal processors (DSPs).
  • the computer readable medium includes but is not limited to random-access memory (RAM) , read-only memory (ROM) , erasable programmable read-only memory (EPROM) , electrically erasable programmable read-only memory (EEPROM) , flash memory, compact disc read-only memory (CD ROM) , magnetic cassettes, magnetic tape, magnetic disk storage, or any other equivalent medium capable of storing computer-readable instructions.
  • FIG. 1 illustrates a block diagram of a system configured to encode and decode video data according to an example implementation of the present disclosure.
  • the system 100 includes a first electronic device 110, a second electronic device 120, and a communication medium 130.
  • the first electronic device 110 may be a source device including any device configured to encode video data and transmit encoded video data to the communication medium 130.
  • the second electronic device 120 may be a destination device including any device configured to receive encoded video data via the communication medium 130 and to decode encoded video data.
  • the first electronic device 110 may communicate via wire or wirelessly with the second electronic device 120 via the communication medium 130.
  • the first electronic device 110 may include a source module 112, an encoder module 114, and a first interface 116.
  • the second electronic device 120 may include a display module 122, a decoder module 124, and a second interface 126.
  • the first electronic device 110 may be a video encoder and the second electronic device 120 may be a video decoder.
  • the first electronic device 110 and/or the second electronic device 120 may be a mobile phone, a tablet, a desktop, a notebook, or other electronic devices.
  • FIG. 1 illustrates one example of the first electronic device 110 and the second electronic device 120.
  • the first electronic device 110 and the second electronic device 120 may include more or fewer components than illustrated or have a different configuration of the various illustrated components.
  • the source module 112 may include a video capture device to capture new video, a video archive to store previously captured video, and/or a video feed interface to receive video from a video content provider.
  • the source module 112 may generate computer graphics-based data as the source video or generate a combination of live video, archived video, and computer-generated video as the source video.
  • the video capture device may be a charge-coupled device (CCD) image sensor, a complementary metal-oxide-semiconductor (CMOS) image sensor, or a camera.
  • the encoder module 114 and the decoder module 124 may each be implemented as any of a variety of suitable encoder/decoder circuitry such as one or more microprocessors, a central processing unit (CPU) , a graphic processing unit (GPU) , a system on chip (SoC) , digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof.
  • a device may store the program having instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the disclosed methods.
  • each of the encoder module 114 and the decoder module 124 may be included in one or more encoders or decoders, any of which may be integrated as part of a combined encoder/decoder (CODEC) in a device.
  • the first interface 116 and the second interface 126 may utilize customized protocols or follow existing standards or de facto standards including, but not limited to, Ethernet, IEEE 802.11 or IEEE 802.15 series, Wireless USB or telecommunication standards including, but not limited to, Global System for Mobile Communications (GSM) , Code-Division Multiple Access 2000 (CDMA2000) , Time Division Synchronous Code Division Multiple Access (TD-SCDMA) , Worldwide Interoperability for Microwave Access (WiMAX) , Third Generation Partnership Project Long-Term Evolution (3GPP-LTE) or Time-Division LTE (TD-LTE) .
  • the first interface 116 and the second interface 126 may each include any device configured to transmit and/or store a compliant video bitstream via the communication medium 130 and to receive the compliant video bitstream via the communication medium 130.
  • the first interface 116 and the second interface 126 may include a computer system interface that enables a compliant video bitstream to be stored on a storage device or to be received from the storage device.
  • the first interface 116 and the second interface 126 may include a chipset supporting Peripheral Component Interconnect (PCI) and Peripheral Component Interconnect Express (PCIe) bus protocols, proprietary bus protocols, Universal Serial Bus (USB) protocols, I2C, or any other logical and physical structure that may be used to interconnect peer devices.
  • the display module 122 may include a display using liquid crystal display (LCD) technology, plasma display technology, organic light emitting diode (OLED) display technology, or light emitting polymer display (LPD) technology, although other display technologies may be used in other implementations.
  • the display module 122 may include a high-definition display or an ultra-high-definition display.
  • FIG. 2 illustrates a block diagram of the decoder module 124 of the second electronic device 120 illustrated in FIG. 1 according to an example implementation of the present disclosure.
  • the decoder module 124 includes an entropy decoder (e.g., entropy decoding unit 2241) , a prediction processor (e.g., prediction process unit 2242) , an inverse quantization/inverse transform processor (e.g., inverse quantization/inverse transform unit 2243) , a summer (e.g., summer 2244) , a filter (e.g., filtering unit 2245) , and a decoded picture buffer (e.g., decoded picture buffer 2246) .
  • the prediction process unit 2242 further includes an intra prediction processor (e.g., intra prediction unit 22421) and an inter prediction processor (e.g., inter prediction unit 22422) .
  • the decoder module 124 receives a bitstream and decodes the bitstream to output decoded video.
  • the entropy decoding unit 2241 may receive the bitstream including a plurality of syntax elements from the second interface 126 in FIG. 1 and perform a parsing operation on the bitstream to extract syntax elements from the bitstream. As part of the parsing operation, the entropy decoding unit 2241 may entropy decode the bitstream to generate quantized transform coefficients, quantization parameters, transform data, motion vectors, intra modes, partition information, and other syntax information.
  • the entropy decoding unit 2241 may perform context adaptive variable length coding (CAVLC) , context adaptive binary arithmetic coding (CABAC) , syntax-based context-adaptive binary arithmetic coding (SBAC) , probability interval partitioning entropy (PIPE) coding or another entropy coding technique to generate the quantized transform coefficients.
  • the entropy decoding unit 2241 may provide the quantized transform coefficients, the quantization parameters, and the transform data to the inverse quantization/inverse transform unit 2243 and provide the motion vectors, the intra modes, the partition information, and other syntax information to the prediction process unit 2242.
  • the prediction process unit 2242 may receive syntax elements such as motion vectors, intra modes, partition information, and other syntax information from the entropy decoding unit 2241.
  • the prediction process unit 2242 may receive the syntax elements including the partition information and divide image frames according to the partition information.
  • each of the image frames may be divided into at least one image block according to the partition information.
  • the at least one image block may include a luminance block for reconstructing a plurality of luminance samples and at least one chrominance block for reconstructing a plurality of chrominance samples.
  • the luminance block and the at least one chrominance block may be further divided to generate macroblocks, coding tree units (CTUs) , coding blocks (CBs) , sub-divisions thereof, and/or another equivalent coding unit.
  • the prediction process unit 2242 may receive predicted data including the intra mode or the motion vector for a current image block of a specific one of the image frames.
  • the current image block may be the luminance block or one of the chrominance blocks in the specific image frame.
  • the intra prediction unit 22421 may perform intra-predictive coding of a current block unit relative to one or more neighboring blocks in the same frame as the current block unit based on syntax elements related to the intra mode in order to generate a predicted block.
  • the intra mode may specify the location of reference samples selected from the neighboring blocks within the current frame.
  • the intra prediction unit 22421 may reconstruct a plurality of chroma components of the current block unit based on a plurality of luma components of the current block unit when the chroma components are reconstructed by the prediction process unit 2242.
  • the intra prediction unit 22421 may reconstruct a plurality of chroma components of the current block unit based on the plurality of luma components of the current block unit when the luma components of the current block are reconstructed by the prediction process unit 2242.
  • the inter prediction unit 22422 may perform inter-predictive coding of the current block unit relative to one or more blocks in one or more reference image blocks based on syntax elements related to the motion vector in order to generate the predicted block.
  • the motion vector may indicate a displacement of the current block unit within the current image block relative to a reference block unit within the reference image block.
  • the reference block unit is a block determined to closely match the current block unit.
  • the inter prediction unit 22422 may receive the reference image block stored in the decoded picture buffer 2246 and reconstruct the current block unit based on the received reference image blocks.
  • the inverse quantization/inverse transform unit 2243 may apply inverse quantization and inverse transformation to reconstruct the residual block in the pixel domain.
  • the inverse quantization/inverse transform unit 2243 may apply inverse quantization to the residual quantized transform coefficient to generate a residual transform coefficient and then apply inverse transformation to the residual transform coefficient to generate the residual block in the pixel domain.
  • the inverse transformation may be inversely applied by the transformation process such as discrete cosine transform (DCT), discrete sine transform (DST), adaptive multiple transform (AMT), mode-dependent non-separable secondary transform (MDNSST), Hypercube-Givens transform (HyGT), signal dependent transform, Karhunen-Loève transform (KLT), wavelet transform, integer transform, sub-band transform or a conceptually similar transform.
  • the inverse transformation may convert the residual information from a transform domain, such as a frequency domain, back to the pixel domain.
  • the degree of inverse quantization may be modified by adjusting a quantization parameter.
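  • a minimal sketch of this relationship is shown below for illustration; it assumes the common convention that the quantization step size roughly doubles every six QP steps, and it omits rounding offsets and scaling matrices.

```python
# Simplified illustration of how the quantization parameter (QP) controls the
# degree of (inverse) quantization: the quantization step size roughly doubles
# every six QP steps in common video codecs, so dequantization scales each
# quantized coefficient by that step.  Rounding and scaling matrices omitted.
def dequantize(level: int, qp: int) -> float:
    step = 2.0 ** ((qp - 4) / 6.0)   # approximate quantization step size
    return level * step

print(dequantize(5, 22), dequantize(5, 28))  # the second is roughly twice the first
```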
  • the summer 2244 adds the residual block to the predicted block from the prediction process unit 2242 to produce a reconstructed block.
  • the summer 2244 adds the reconstructed residual block to the predicted block provided from the prediction process unit 2242 to produce a reconstructed block.
  • the filtering unit 2245 may include a deblocking filter, a sample adaptive offset (SAO) filter, a bilateral filter, and/or an adaptive loop filter (ALF) to remove blocking artifacts from the reconstructed block. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter, the SAO filter, the bilateral filter and the ALF. Such filters are not explicitly illustrated for brevity but may filter the output of the summer 2244.
  • the filtering unit 2245 may output the decoded video to the display module 122 or other video receiving unit after the filtering unit 2245 performs the filtering process for the reconstructed blocks of the specific image frame.
  • the decoded picture buffer 2246 may be a reference picture memory that stores the reference block for use by the prediction process unit 2242 in decoding the bitstream (in inter coding modes) .
  • the decoded picture buffer 2246 may be formed by any of a variety of memory devices such as dynamic random-access memory (DRAM) , including synchronous DRAM (SDRAM) , magneto-resistive RAM (MRAM) , resistive RAM (RRAM) , or other types of memory devices.
  • the decoded picture buffer 2246 may be on-chip with other components of the decoder module 124 or off-chip relative to those components.
  • FIG. 3 illustrates a flowchart of a method 300 for encoding and decoding video data by an electronic device according to an example implementation of the present disclosure.
  • the method 300 is an example only as there are a variety of ways to perform the method.
  • the method 300 for decoding a bitstream and reconstructing a block unit may be performed using the configurations illustrated in FIG. 1 and FIG. 2 and various elements of these figures are referenced with regard to the method 300.
  • Each block illustrated in FIG. 3 may represent one or more processes, methods, or subroutines performed.
  • The order of blocks in FIG. 3 is illustrative only and may change. Additional blocks may be added or fewer blocks may be utilized without departing from the present disclosure.
  • the decoder module 124 receives an image frame of video data.
  • the video data may be a bitstream.
  • the second electronic device 120 may receive the bitstream from an encoder, such as the first electronic device 110, or other video providers via the second interface 126.
  • the second interface 126 may provide the bitstream to the decoder module 124.
  • the entropy decoding unit 2241 may decode the bitstream to determine a plurality of prediction indications and a plurality of partitioning indications for the image frame and the decoder module 124 may further reconstruct the image frame based on the prediction indications and the partitioning indications.
  • the prediction indications and the partitioning indications may include a plurality of flags and a plurality of indices.
  • the decoder module 124 determines a block unit having a block width and a block height from the received video data.
  • the decoder module 124 may divide the image frame to determine the block unit according to the partition indications included in the bitstream. For example, the decoder module 124 may divide the image frames to generate a plurality of coding tree units and may further divide one of the coding tree units to determine the block unit according to the partition indications (e.g., based on a video coding standard) .
  • FIG. 4A-FIG. 4C are schematic illustrations of a block unit having different block sizes according to example implementations of the present disclosure.
  • a block size of the block unit may include a block height Hb and a block width Wb.
  • the block size of the block unit 410 includes the block height H410 and the block width W410
  • the block size of the block unit 430 includes the block height H430 and the block width W430
  • the block size of the block unit 450 includes the block height H450 and the block width W450.
  • the block height H410 may be equal to the block width W410.
  • the block width W430 may be equal to 8 × H430.
  • the block height H450 may be equal to 4 × W450.
  • the decoder module 124 determines a comparison between the block width and the block height. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may determine the comparison between the block width and the block height based on an inequality.
  • a ratio value R between the block width Wb and the block height Hb may be greater than or equal to a threshold value T (i.e., R ≥ T).
  • the ratio value R between the width Wb and the block height Hb may be less than the threshold value T (i.e., R < T).
  • the ratio value R between the width Wb and the block height Hb may be greater than the threshold value T (i.e., R > T) .
  • the ratio value R between the width Wb and the block height Hb may be less than or equal to the threshold value T (i.e., R ≤ T).
  • the decoder module 124 may determine a ratio parameter RP computed based on the block width Wb and the block height Hb.
  • a first ratio parameter RP1 may be computed based on division.
  • the first ratio parameter RP1 may be computed by dividing the block width Wb by the block height Hb when the block width Wb is greater than the block height Hb. Conversely, the first ratio parameter RP1 may be computed by dividing the block height Hb by the block width Wb when the block width Wb is less than or equal to the block height Hb.
  • in addition, a second ratio parameter RP2 may be computed based on a base-2 logarithm of the block width Wb and the block height Hb, such that the first ratio parameter RP1 may be equal to two to the power of the second ratio parameter RP2.
  • the decoder module 124 may compare the ratio parameter RP with a ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is greater than the threshold value T when the ratio parameter RP is greater than the ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is equal to the threshold value T when the ratio parameter RP is equal to the ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the ratio parameter RP is less than the ratio threshold RT.
  • the decoder module 124 may compare the ratio parameter RP with a first ratio threshold RT1 when the first ratio parameter RP1 is set as the ratio parameter RP. In one implementation, the decoder module 124 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the first ratio parameter RP1 is less than the first ratio threshold RT1.
  • the decoder module 124 may determine that the comparison indicates that the ratio value R is less than or equal to the threshold value T when the first ratio parameter RP1 is less than or equal to the first ratio threshold RT1.
  • the first ratio threshold RT1 may be a positive integer.
  • the first ratio threshold RT1 may be equal to two to the power of n, and the number n is a positive integer.
  • the first ratio threshold RT1 may be preset to 1, 2, 4 or 8.
  • the decoder module 124 may compare the ratio parameter RP with a second ratio threshold RT2 when the second ratio parameter RP2 is set as the ratio parameter RP. The decoder module 124 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the second ratio parameter RP2 is less than the second ratio threshold RT2. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than or equal to the threshold value T when the second ratio parameter RP2 is less than or equal to the second ratio threshold RT2.
  • the second ratio threshold RT2 may be a positive integer.
  • the second ratio threshold RT2 may be preset to 1, 2, 3 or 4.
  • the second ratio parameter RP2 of the block unit 410 equal to 0 and the second ratio parameter RP2 of the block unit 450 equal to 2 are less than the second ratio threshold RT2 when the second ratio threshold RT2 is equal to 3.
  • the second ratio parameter RP2 of the block unit 430 equal to 3 is not less than the second ratio threshold RT2 when the second ratio threshold RT2 is equal to 3.
  • the first ratio threshold RT1 may be equal to two to the power of the second ratio threshold RT2 since the first ratio parameter RP1 is equal to two to the power of the second ratio parameter RP2. Therefore, the result of the comparison may be unchanged when the decoder module 124 changes the comparison from a first inequality RP1 ≥ RT1 to a second inequality RP2 ≥ RT2.
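  • a minimal sketch of the two ratio parameters and the equivalence of the two inequalities is shown below, assuming RP1 is the quotient of the longer block side by the shorter one and RP2 is its base-2 logarithm; the function and variable names are illustrative only.

```python
import math

# Sketch of the two ratio parameters discussed above, assuming RP1 is the
# quotient of the longer block side by the shorter one and RP2 is its base-2
# logarithm (so that RP1 == 2 ** RP2 for power-of-two block sizes).
def ratio_parameters(wb: int, hb: int) -> tuple:
    rp1 = wb / hb if wb > hb else hb / wb   # first ratio parameter RP1
    rp2 = math.log2(rp1)                    # second ratio parameter RP2
    return rp1, rp2

# The comparison result is unchanged when switching from RP1 >= RT1 to
# RP2 >= RT2 as long as RT1 = 2 ** RT2, e.g. RT1 = 8 and RT2 = 3.
rt2 = 3
rt1 = 2 ** rt2
for wb, hb in [(16, 16), (64, 8), (32, 8)]:   # block sizes in the style of FIG. 4A-4C
    rp1, rp2 = ratio_parameters(wb, hb)
    assert (rp1 >= rt1) == (rp2 >= rt2)
```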
  • the decoder module 124 may compare the block width Wb with a first magnified value N1 generated from the block height Hb and compare the block height Hb with a second magnified value N2 generated from the block width Wb when the decoder module 124 determines the comparison between the block width Wb and the block height Hb.
  • the first magnified value N1 may be computed by multiplying the block height Hb by the threshold value T, and the second magnified value N2 may be computed by multiplying the block width Wb by the threshold value T.
  • the decoder module 124 may determine the comparison indicates that the ratio value R is less than the threshold value T when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2.
  • the decoder module 124 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block width Wb is greater than or equal to the first magnified value N1.
  • the decoder module 124 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block height Hb is greater than or equal to the second magnified value N2.
  • the threshold value T may be equal to the first ratio threshold RT1 when the decoder module 124 determines the comparison based on the first magnified value N1 and the second magnified value N2.
  • the first ratio threshold RT1 may be preset to 1, 2, 4 or 8. Therefore, the threshold value T may be preset to 1, 2, 4 or 8 when the decoder module 124 determines the comparison based on the first magnified value N1 and the second magnified value N2.
  • a first division value Hb/Wb may be less than 1 when the block width Wb is greater than the block height Hb. Therefore, the first division value Hb/Wb is also less than the threshold value T and the fourth inequality Hb < RT1 × Wb is true when the block width Wb is greater than the block height Hb.
  • a second division value Wb/Hb may be less than 1 when the block width Wb is less than the block height Hb. Therefore, the second division value Wb/Hb is also less than the threshold value T and the third inequality Wb < RT1 × Hb is true when the block width Wb is less than the block height Hb and the threshold value T is preset to 1, 2, 4 or 8.
  • the first division value and the second division value may be equal to 1 when the block width Wb is equal to the block height Hb. Therefore, the first division value and the second division value are less than the threshold value T and the third inequality and the fourth inequality are true when the block width Wb is equal to the block height Hb and the threshold value T is preset to 2, 4 or 8.
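  • the division-free form of the same check can be sketched as follows, assuming N1 = T × Hb and N2 = T × Wb as described above; the function name is illustrative only.

```python
# Division-free form of the comparison, assuming N1 = T * Hb and N2 = T * Wb
# as described above; the function name is illustrative only.
def ratio_reaches_threshold(wb: int, hb: int, t: int) -> bool:
    n1 = t * hb                      # first magnified value N1
    n2 = t * wb                      # second magnified value N2
    return wb >= n1 or hb >= n2      # R >= T  <=>  Wb >= T*Hb or Hb >= T*Wb

# The integer form agrees with the division-based check for these sizes.
for wb, hb in [(64, 8), (8, 64), (32, 16), (16, 16)]:
    assert ratio_reaches_threshold(wb, hb, 8) == (max(wb, hb) / min(wb, hb) >= 8)
```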
  • the decoder module 124 determines whether a geometric partitioning mode is disabled for the block unit based on the comparison.
  • the method 300 may proceed to block 350 when the decoder module 124 determines that the geometric partitioning mode is disabled for the block unit.
  • the method may proceed to block 360 when the decoder module 124 determines the geometric partitioning mode is enabled for the block unit.
  • the decoder module 124 may determine that the geometric partitioning mode is disabled for the block unit when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is greater than or equal to the threshold value T.
  • the decoder module 124 may determine that the geometric partitioning mode is enabled for the block unit when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is less than the threshold value T.
  • the decoder module 124 may determine the comparison between the block width and the block height to restrict the usage of the geometric partitioning mode.
  • the prediction results of different neighboring partition lines may be similar to each other.
  • the block unit may be further divided along a horizontal direction or a vertical direction to generate two block units to be predicted respectively when the block unit is too narrow.
  • the geometric partitioning mode may be disabled when the block unit is too narrow.
  • the geometric partitioning mode may be disabled when the block width is greater than or equal to T times the block height, or the block height is greater than or equal to T times the block width.
  • the threshold value T may be equal to 2, 4 or 8.
  • the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1.
  • the geometric partitioning mode may be enabled for the block unit when the decoder module 124 determines that the first ratio parameter RP1 is less than the first ratio threshold RT1.
  • the first ratio threshold RT1 may be equal to eight. Therefore, the geometric partitioning mode may be disabled for the block unit when the first ratio parameter RP1 of the block unit is greater than or equal to 8.
  • the block width Wb and the block height Hb are each greater than or equal to 8. Therefore, the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8 or 8 × 64 when the first ratio threshold RT1 is equal to eight.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8, 64 × 16, 32 × 8, 8 × 64, 16 × 64 or 8 × 32 when the first ratio threshold RT1 is equal to four.
  • the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2.
  • the geometric partitioning mode may be enabled for the block unit when the decoder module 124 determines that the second ratio parameter RP2 is less than the second ratio threshold RT2.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8 or 8 × 64 when the second ratio threshold RT2 is equal to three.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8, 64 × 16, 32 × 8, 8 × 64, 16 × 64 or 8 × 32 when the second ratio threshold RT2 is equal to two.
  • the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the block width Wb is greater than or equal to the first magnified value N1. In another implementation, the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the block height Hb is greater than or equal to the second magnified value N2. The geometric partitioning mode may be enabled for the block unit when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8 or 8 × 64 when the threshold value T is equal to eight.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8 or 8 × 64 when the block width Wb is greater than or equal to the first magnified value 8 × Hb or the block height Hb is greater than or equal to the second magnified value 8 × Wb.
  • the geometric partitioning mode may not be applied on the block unit having the block size 64 × 8, 64 × 16, 32 × 8, 8 × 64, 16 × 64 or 8 × 32 when the block width Wb is greater than or equal to the first magnified value 4 × Hb or the block height Hb is greater than or equal to the second magnified value 4 × Wb.
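  • the excluded block sizes listed above can be reproduced with a short enumeration, assuming block sides range over the powers of two from 8 to 64; this is a sketch for illustration, not part of the disclosure.

```python
# Enumerating which block sizes are excluded from the geometric partitioning
# mode for the two thresholds mentioned above; block sides are assumed to
# range over the powers of two from 8 to 64.
sides = [8, 16, 32, 64]

def disabled_sizes(rt1: int) -> list:
    return [(w, h) for w in sides for h in sides
            if max(w, h) / min(w, h) >= rt1]

print(disabled_sizes(8))  # [(8, 64), (64, 8)]
print(disabled_sizes(4))  # additionally 64x16, 32x8, 16x64, 8x32
```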
  • the decoder module 124 predicts the block unit by using a first prediction mode different from the geometric partitioning mode to generate a predicted block.
  • the decoder module 124 may select the first prediction mode and predict the block unit based on the first prediction mode to generate the predicted block when the geometric partitioning mode is disabled for the block unit.
  • the decoder module 124 may predict the block unit based on the first prediction mode to generate the predicted block when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is greater than or equal to the threshold value T.
  • the first prediction mode may be selected from a plurality of option modes. Each of the option modes is different from the geometric partitioning mode.
  • the first prediction mode may be selected from the option modes by a first merge index. Therefore, the first prediction mode may be the one of the option modes selected based on the first merge index.
  • the first merge index may indicate the first prediction mode included in a merge candidate list.
  • the option modes may be a plurality of merge candidate modes in the merge candidate list determined from a plurality of spatial motion prediction modes of a plurality of neighboring blocks neighboring the block unit, a plurality of temporal motion prediction modes of a plurality of collocated blocks, history-based motion prediction modes stored in a first-in-first-out (FIFO) table, a plurality of pair-wise average motion prediction modes, and a zero motion mode.
  • the first prediction mode may indicate first motion information having a first reference frame and a first reference vector V0.
  • the first reference frame may be selected from a plurality of reference lists.
  • the first motion information may include a first list flag for determining one of the reference lists to select the first reference frame.
  • a plurality of block components in the block unit may be predicted based on the first reference vector V0 and the first reference frame.
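  • a simplified sketch of this uni-directional prediction is shown below; it assumes integer-pel motion and omits sub-pel interpolation, reference list handling, and boundary padding.

```python
import numpy as np

# Minimal sketch of uni-directional prediction with the first prediction mode:
# each block component is taken from the first reference frame displaced by the
# first reference vector V0.  Integer-pel motion only; interpolation omitted.
def predict_block(ref_frame: np.ndarray, x: int, y: int,
                  wb: int, hb: int, v0: tuple) -> np.ndarray:
    dx, dy = v0
    return ref_frame[y + dy:y + dy + hb, x + dx:x + dx + wb].copy()

# Example: predict an 8x8 block at (16, 16) with V0 = (2, -1).
ref = np.zeros((64, 64), dtype=np.uint8)
pred = predict_block(ref, 16, 16, 8, 8, (2, -1))
assert pred.shape == (8, 8)
```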
  • the decoder module 124 predicts the block unit by using a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode to generate the predicted block.
  • the decoder module 124 may select the second prediction mode and predict the block unit based on the second prediction mode to generate the predicted block when the geometric partitioning mode is enabled for the block unit.
  • the decoder module 124 may predict the block unit based on the second prediction mode to generate the predicted block when the comparison indicates the ratio value R is less than the threshold value T.
  • the second prediction mode may be selected from the candidate modes.
  • the candidate modes may include the geometric partitioning mode.
  • the candidate modes may include the option modes and the geometric partitioning mode.
  • the decoder module 124 may determine a partition index, a second merge index and a third merge index of the block unit from the bitstream when the geometric partitioning mode is applied on the block unit for predicting the block unit.
  • FIG. 5A and FIG. 5B are schematic illustrations of a block unit having different sub-blocks separated based on a partition line, according to example implementations of the present disclosure.
  • the decoder module 124 may divide the block unit 510 to generate a first sub-block 512, a second sub-block 514 and a third sub-block 516 based on a partition line 511 determined from the partition index.
  • the partition index may indicate the partition line 511 of the block unit 510. In at least one implementation, the partition index may indicate a partition angle index and a partition distance index. The partition angle index may indicate a split angle of the partition line, and the partition distance index may indicate a split distance between the partition line and a center point of the block unit.
  • the decoder module 124 may determine the partition line based on the split angle and the split distance indicated by the partition index.
  • the bitstream may directly include the partition angle index and the partition distance index.
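  • the sketch below shows one way a split angle and a split distance could describe such a partition line; the actual mapping from the partition angle index and the partition distance index is defined by lookup tables in the codec specification, so the uniform angle step and unit distance step used here are assumptions for illustration.

```python
import math

# Illustrative only: the real mapping from the partition angle index and the
# partition distance index to a partition line is defined by lookup tables in
# the codec specification.  Here a uniform angle step and a unit distance step
# are assumed, with the line expressed relative to the block center.
def partition_line(angle_idx: int, dist_idx: int, num_angles: int = 32):
    phi = 2.0 * math.pi * angle_idx / num_angles   # assumed split angle
    rho = float(dist_idx)                          # assumed split distance
    # All points (x, y) with x*cos(phi) + y*sin(phi) == rho lie on the line.
    return math.cos(phi), math.sin(phi), rho
```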
  • the decoder module 124 may predict the plurality of sub-blocks 512, 514 and 516 based on second motion information and third motion information determined from the merge candidate modes by the second merge index and the third merge index.
  • the second motion information indicated by the second merge index may be one of the merge candidate modes included in the merge candidate list
  • the third motion information indicated by the third merge index may also be one of the merge candidate modes included in the merge candidate list.
  • the second motion information may indicate a second reference frame and a second reference vector V1
  • the third motion information may indicate a third reference frame and a third reference vector V2.
  • the second reference frame and the third reference frame may be selected from the plurality of reference lists. In one implementation, the second reference frame and the third reference frame may be selected from the same one of the reference lists. In other implementations, the second reference frame and the third reference frame may be selected from different reference lists. In at least one implementation, the reference lists may be set as L0 and L1 when the number of the reference lists is equal to 2.
  • the second motion information may include a second list flag for selecting the second reference frame and the third motion information may include a third list flag for selecting the third reference frame.
  • the second reference frame and the third reference frame may be selected from the same reference list when the second list flag is equal to the third list flag.
  • the second reference frame and the third reference frame may be selected from different reference lists when the second list flag is different from the third list flag.
  • a plurality of first sub-block components in the first sub-block 512 may be predicted based on the second reference vector V1 and the second reference frame.
  • a plurality of second sub-block components in the second sub-block 514 may be predicted based on the third reference vector V2 and the third reference frame.
  • a plurality of third sub-block components in the third sub-block 516 may be predicted based on the second reference vector V1, the third reference vector V2, the second reference frame and the third reference frame.
  • the third sub-block components in the third sub-block 516 may be predicted by deriving a plurality of first reference samples determined based on the second reference vector V1 and the second reference frame and a plurality of second reference samples determined based on the third reference vector V2 and the third reference frame, and then merging the first reference samples and the second reference samples based on a plurality of blending weights.
  • the blending weights may be derived based on a plurality of component distances between the partition line 511 and the third sub-block components in the third sub-block 516.
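  • a simplified sketch of such distance-based blending is shown below; the linear ramp, the blending band width, and the clipping are illustrative assumptions rather than the blending defined in the disclosure.

```python
import numpy as np

# Sketch of distance-based blending for the third sub-block, assuming the
# blending weight ramps linearly from 0 to 1 over a fixed band around the
# partition line; the band width and the clipping are illustrative choices.
def blend_samples(p1: np.ndarray, p2: np.ndarray,
                  distance: np.ndarray, band: float = 2.0) -> np.ndarray:
    """p1/p2: first and second reference samples; distance: signed component
    distance to the partition line (positive on the side predicted from p1)."""
    w = np.clip(0.5 + distance / (2.0 * band), 0.0, 1.0)   # blending weights
    return w * p1 + (1.0 - w) * p2
```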
  • the decoder module 124 reconstructs the block unit based on the predicted block.
  • the decoder module 124 may determine a plurality of residual components from the bitstream for the block unit, and add the residual components into the predicted block to reconstruct the block unit.
  • the decoder module 124 may reconstruct all of the other block units in the image frame for reconstructing the image frame and the video.
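  • a minimal sketch of this reconstruction step is shown below, assuming 10-bit samples and a simple clip to the valid sample range.

```python
import numpy as np

# Minimal sketch of the reconstruction step: the residual components parsed
# from the bitstream are added to the predicted block and the result is
# clipped to the valid sample range (10-bit samples assumed here).
def reconstruct(predicted: np.ndarray, residual: np.ndarray,
                bit_depth: int = 10) -> np.ndarray:
    recon = predicted.astype(np.int32) + residual.astype(np.int32)
    return np.clip(recon, 0, (1 << bit_depth) - 1)
```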
  • FIG. 6 illustrates a block diagram of the encoder module 114 of the first electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure.
  • the encoder module 114 may include a prediction processor (e.g., prediction process unit 6141) , at least a first summer (e.g., first summer 6142) and a second summer (e.g., second summer 6145) , a transform/quantization processor (e.g., transform/quantization unit 6143) , an inverse quantization/inverse transform processor (e.g., inverse quantization/inverse transform unit 6144) , a filter (e.g., filtering unit 6146) , a decoded picture buffer (e.g., decoded picture buffer 6147) , and an entropy encoder (e.g., entropy encoding unit 6148) .
  • the prediction process unit 6141 of the encoder module 114 may further include a partition processor (e.g., partition unit 61411) , an intra prediction processor (e.g., intra prediction unit 61412) , and an inter prediction processor (e.g., inter prediction unit 61413) .
  • the encoder module 114 may receive the source video and encode the source video to output a bitstream.
  • the encoder module 114 may receive source video including a plurality of image frames and then divide the image frames according to a coding structure. Each of the image frames may be divided into at least one image block.
  • the at least one image block may include a luminance block having a plurality of luminance samples and at least one chrominance block having a plurality of chrominance samples.
  • the luminance block and the at least one chrominance block may be further divided to generate macroblocks, coding tree units (CTUs) , coding blocks (CBs) , sub-divisions thereof, and/or another equivalent coding unit.
  • the encoder module 114 may perform additional sub-divisions of the source video. It should be noted that the disclosed implementations are generally applicable to video coding regardless of how the source video is partitioned prior to and/or during the encoding.
  • the prediction process unit 6141 may receive a current image block of a specific one of the image frames.
  • the current image block may be the luminance block or one of the chrominance blocks in the specific image frame.
  • the partition unit 61411 may divide the current image block into multiple block units.
  • the intra prediction unit 61412 may perform intra-predictive coding of a current block unit relative to one or more neighboring blocks in the same frame as the current block unit in order to provide spatial prediction.
  • the inter prediction unit 61413 may perform inter-predictive coding of the current block unit relative to one or more blocks in one or more reference image blocks to provide temporal prediction.
  • the prediction process unit 6141 may select one of the coding results generated by the intra prediction unit 61412 and the inter prediction unit 61413 based on a mode selection method, such as a cost function.
  • the mode selection method may be a rate-distortion optimization (RDO) process.
  • the prediction process unit 6141 may determine the selected coding result and provide a predicted block corresponding to the selected coding result to the first summer 6142 for generating a residual block and to the second summer 6145 for reconstructing the encoded block unit.
  • the prediction process unit 6141 may further provide syntax elements such as motion vectors, intra mode indicators, partition information, and other syntax information to the entropy encoding unit 6148.
  • the intra prediction unit 61412 may intra predict the current block unit.
  • the intra prediction unit 61412 may determine an intra prediction mode directed toward a reconstructed sample neighboring the current block unit in order to encode the current block unit.
  • the intra prediction unit 61412 may encode the current block unit using various intra prediction modes.
  • the intra prediction unit 61412 or the prediction process unit 6141 may select an appropriate intra prediction mode from the selected modes.
  • the intra prediction unit 61412 may encode the current block unit using a cross component prediction mode to predict one of the two chroma components of the current block unit based on the luma components of the current block unit.
  • the intra prediction unit 61412 may predict a first one of the two chroma components of the current block unit based on the second of the two chroma components of the current block unit.
  • the inter prediction unit 61413 may inter predict the current block unit as an alternative to the intra prediction performed by the intra prediction unit 61412.
  • the inter prediction unit 61413 may perform motion estimation to estimate motion of the current block unit for generating a motion vector.
  • the motion vector may indicate a displacement of the current block unit within the current image block relative to a reference block unit within a reference image block.
  • the inter prediction unit 61413 may receive at least one reference image block stored in the decoded picture buffer 6147 and estimate the motion based on the received reference image blocks to generate the motion vector.
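  • for illustration, a brute-force sum-of-absolute-differences (SAD) search over a small window is sketched below; practical encoders use much faster search strategies, so this is not the method of the disclosure.

```python
import numpy as np

# Illustrative full-search motion estimation over a small window using the sum
# of absolute differences (SAD); real encoders use far faster search patterns,
# so this is a sketch rather than the method of the disclosure.
def estimate_motion(cur: np.ndarray, ref: np.ndarray, x: int, y: int,
                    wb: int, hb: int, search: int = 4) -> tuple:
    block = cur[y:y + hb, x:x + wb].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if y + dy < 0 or x + dx < 0:
                continue                      # candidate outside the frame
            cand = ref[y + dy:y + dy + hb, x + dx:x + dx + wb].astype(np.int32)
            if cand.shape != block.shape:
                continue                      # candidate outside the frame
            sad = int(np.abs(block - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv                            # motion vector of the best match
```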
  • the first summer 6142 may generate the residual block by subtracting the prediction block determined by the prediction process unit 6141 from the original current block unit.
  • the first summer 6142 may represent the component or components that perform this subtraction.
  • the transform/quantization unit 6143 may apply a transform to the residual block in order to generate a residual transform coefficient and then quantize the residual transform coefficients to further reduce bit rate.
  • the transform may be one of a DCT, DST, AMT, MDNSST, HyGT, signal dependent transform, KLT, wavelet transform, integer transform, sub-band transform or a conceptually similar transform.
  • the transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain.
  • the degree of quantization may be modified by adjusting a quantization parameter.
  • the transform/quantization unit 6143 may perform a scan of the matrix including the quantized transform coefficients.
  • the entropy encoding unit 6148 may perform the scan.
  • the entropy encoding unit 6148 may receive a plurality of syntax elements from the prediction process unit 6141 and the transform/quantization unit 6143 including a quantization parameter, transform data, motion vectors, intra modes, partition information, and other syntax information.
  • the entropy encoding unit 6148 may encode the syntax elements into the bitstream.
  • the entropy encoding unit 6148 may entropy encode the quantized transform coefficients by performing CAVLC, CABAC, SBAC, PIPE coding or another entropy coding technique to generate an encoded bitstream.
  • the encoded bitstream may be transmitted to another device (i.e., the second electronic device 120 in FIG. 1) or archived for later transmission or retrieval.
  • the inverse quantization/inverse transform unit 6144 may apply inverse quantization and inverse transformation to reconstruct the residual block in the pixel domain for later use as a reference block.
  • the second summer 6145 may add the reconstructed residual block to the prediction block provided from the prediction process unit 6141 in order to produce a reconstructed block for storage in the decoded picture buffer 6147.
  • the filtering unit 6146 may include a deblocking filter, a SAO filter, a bilateral filter, and/or an ALF to remove blocking artifacts from the reconstructed block. Additional filters (in loop or post loop) may be used in addition to the deblocking filter, the SAO filter, the bilateral filter and the ALF. Such filters are not illustrated for brevity and may filter the output of the second summer 6145.
  • the decoded picture buffer 6147 may be a reference picture memory that stores the reference block for use by the encoder module 114 to encode video, such as in intra or inter coding modes.
  • the decoded picture buffer 6147 may include a variety of memory devices such as DRAM (including SDRAM), MRAM, RRAM, or other types of memory devices.
  • the decoded picture buffer 6147 may be on-chip with other components of the encoder module 114 or off-chip relative to those components.
  • the encoder module 114 may perform the method 300 for encoding a video and reconstructing a block unit as illustrated in FIG. 3.
  • the method 300 may be performed using the configurations illustrated in FIG. 1 and FIG. 6 and various elements of these figures are referenced with regard to the method 300.
  • Each block illustrated in FIG. 3 may represent one or more processes, methods, or subroutines performed.
  • The order of blocks in FIG. 3 is illustrative only and may change. Additional blocks may be added or fewer blocks may be utilized without departing from the present disclosure.
  • an encoder module 114 receives an image frame of video data.
  • the video data may be video.
  • the first electronic device 110 may receive the video via the source module 112.
  • the encoder module 114 may determine the image frame from the video.
  • an encoder module 114 determines a block unit having a block width and a block height from the received video data.
  • the encoder module 114 may divide the image frame to determine the block unit.
  • the prediction process unit 6141 of the first electronic device 110 may determine the block unit from the video via the partition unit 61411 and the encoder module 114 may provide a plurality of partition indications into a bitstream based on a partition result of the partition unit 61411.
  • the encoder module 114 may divide the image frames to generate a plurality of coding tree units and may further divide one of the coding tree units to determine the block unit (e.g., based on a video coding standard) .
  • the block height H410 is equal to the block width W410.
  • the block width W430 is equal to 8 × H430.
  • the block height H450 is equal to 4 × W450.
  • the encoder module 114 determines a comparison between the block width and the block height.
  • the encoder module 114 may determine the comparison between the block width and the block height based on an inequality. For example, a ratio value R between the block width Wb and the block height Hb is greater than or equal to a threshold value T (i.e., R ≥ T).
  • the ratio value R between the block width Wb and the block height Hb may be less than the threshold value T (i.e., R < T).
  • the encoder module 114 may determine a ratio parameter RP computed based on the block width Wb and the block height Hb.
  • a first ratio parameter RP1 may be computed based on the block width Wb and the block height Hb by division.
  • the encoder module 114 may compare the ratio parameter RP with a ratio threshold RT. In the implementation, the encoder module 114 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the ratio parameter RP is greater than or equal to the ratio threshold RT. The encoder module 114 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the ratio parameter RP is less than the ratio threshold RT.
  • the encoder module 114 may compare the ratio parameter RP with a first ratio threshold RT1 when the first ratio parameter RP1 is set as the ratio parameter RP. In at least one implementation, the encoder module 114 may compare the ratio parameter RP with a second ratio threshold RT2 when the second ratio parameter RP2 is set as the ratio parameter RP.
  • the encoder module 114 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1 or the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2.
  • the encoder module 114 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the first ratio parameter RP1 is less than the first ratio threshold RT1 or the second ratio parameter RP2 is less than the second ratio threshold RT2.
  • the first ratio threshold RT1 and the second ratio threshold RT2 may be a positive integer. In at least one implementation, the first ratio threshold RT1 may be equal to two to the power of n, and the number n is a positive integer. The first ratio threshold RT1 may be preset to 1, 2, 4 or 8, and the second ratio threshold RT2 may be preset to 1, 2, 3 or 4.
  • the first ratio threshold RT1 may be equal to two to the power of the second ratio threshold RT2 since the first ratio parameter RP1 is equal to two to the power of the second ratio parameter RP2. Therefore, the result of the comparison may be unchanged when the encoder module 114 changes the comparison from a first inequality RP1 < RT1 to a second inequality RP2 < RT2.
  • the encoder module 114 may compare the block width Wb with a first magnified value N1 computed by multiplying the block height Hb by the threshold value T and compare the block height Hb with a second magnified value N2 computed by multiplying the block width Wb by the threshold value T when the encoder module 114 determines the comparison between the block width Wb and the block height Hb.
  • the encoder module 114 may determine the comparison indicates that the ratio value R is less than the threshold value T when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2. In another implementation, the encoder module 114 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block width Wb is greater than or equal to the first magnified value N1 or the block height Hb is greater than or equal to the second magnified value N2.
  • the threshold value T may be equal to the first ratio threshold RT1 when the encoder module 114 determines the comparison based on the first magnified value N1 and the second magnified value N2.
  • the encoder module 114 determines whether a geometric partitioning mode is disabled for the block unit based on the comparison (a simplified sketch of this decision is shown after this list).
  • the method 300 may proceed to block 350 when the encoder module 114 determines that the geometric partitioning mode is disabled for the block unit.
  • the method may proceed to block 360 when the encoder module 114 determines the geometric partitioning mode is enabled for the block unit.
  • the encoder module 114 may determine that the geometric partitioning mode is disabled for the block unit when the comparison indicates that the ratio value R is greater than or equal to the threshold value T.
  • the encoder module 114 may determine that the geometric partitioning mode is enabled for the block unit when the comparison indicates that the ratio value R is less than the threshold value T.
  • the encoder module 114 predicts the block unit by using a first prediction mode different from the geometric partitioning mode to generate a predicted block.
  • the encoder module 114 may generate a plurality of coding results generated by the intra prediction unit 61412 and the inter prediction unit 61413. Then, the encoder module 114 may select one of the coding results based on a mode selection method, such as a cost function.
  • the mode selection method may be a rate-distortion optimization (RDO) process.
  • the encoder module 114 may select the first prediction mode from a plurality of option modes when the geometric partitioning mode is disabled for the block unit. In the implementation, the encoder module 114 may predict the block unit based on the first prediction mode to generate the predicted block when the comparison indicates that the ratio value R is greater than or equal to the threshold value T. Each of the option modes is different from the geometric partitioning mode. In at least one implementation, the option modes may include a plurality of merge candidate modes in a merge candidate list.
  • the encoder module 114 predicts the block unit by using a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode to generate the predicted block.
  • the encoder module 114 may generate the coding results generated by the intra prediction unit 61412 and the inter prediction unit 61413. Then, the encoder module 114 may select one of the coding results based on the mode selection method.
  • the encoder module 114 may select the second prediction mode from a plurality of candidate modes when the geometric partitioning mode is enabled for the block unit. In the implementation, the encoder module 114 may predict the block unit based on the second prediction mode to generate the predicted block when the comparison indicates that the ratio value R is less than the threshold value T. In at least one implementation, the candidate modes may include the option modes and the geometric partitioning mode.
  • the encoder module 114 reconstructs the block unit based on the predicted block.
  • the block unit may include a plurality of block elements.
  • each of the block elements may be a pixel element.
  • the encoder module 114 may determine a plurality of residual components by subtracting the predicted block from the block elements and provide the bitstream including a plurality of coefficients corresponding to the residual components. The encoder module 114 may reconstruct the residual components based on the coefficients and add the reconstructed residual components to the predicted block to generate a reconstructed block unit.
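The following is a minimal sketch, in Python, of the encoder-side decision summarized in the list above: the comparison between the block width and the block height determines whether the geometric partitioning mode stays in the set of candidate modes evaluated by the mode selection method. The function name, the mode labels, and the use of a plain list are illustrative assumptions and do not reflect the actual interface of the encoder module 114.

```python
# Minimal sketch (illustrative only) of the decision in blocks 330-360:
# keep the geometric partitioning mode out of the candidate set when the
# ratio between the block width and the block height reaches the threshold.
def candidate_modes(block_width: int, block_height: int,
                    threshold_t: int, option_modes: list) -> list:
    ratio_value = max(block_width, block_height) / min(block_width, block_height)
    if ratio_value >= threshold_t:
        # Geometric partitioning mode disabled: only the option modes remain.
        return list(option_modes)
    # Geometric partitioning mode enabled: it joins the other candidate modes.
    return list(option_modes) + ["geometric_partitioning_mode"]

# Example: a 64x8 block with threshold value T = 8 excludes the geometric mode.
assert "geometric_partitioning_mode" not in candidate_modes(64, 8, 8, ["merge_0", "merge_1"])
# A 32x16 block with the same threshold keeps it as a candidate.
assert "geometric_partitioning_mode" in candidate_modes(32, 16, 8, ["merge_0", "merge_1"])
```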

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method of decoding a bitstream by an electronic device is provided. An image frame of the bitstream is received, and a block unit having a block width and a block height is determined from the received image frame. The electronic device determines whether a geometric partitioning mode is disabled for the block unit based on a comparison between the block width and the block height. The block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit. A predicted block is generated by predicting the block unit based on a prediction mode different from the geometric partitioning mode when the geometric partitioning mode is disabled for the block unit. The block unit is reconstructed based on the predicted block.

Description

DEVICE AND METHOD FOR CODING VIDEO DATA
CROSS-REFERENCE TO RELATED APPLICATION (S)
The present disclosure claims the benefit of and priority to U.S. Provisional Patent Application Serial No. 62/955228, filed on December 30, 2019, entitled "Adjustment of Partition Candidates for Geometric Partition Prediction" (hereinafter referred to as the "'228 provisional"). The disclosure of the '228 provisional is hereby incorporated fully by reference into the present disclosure.
FIELD
The present disclosure is generally related to video coding, and specifically, to techniques for determining, based on a ratio value between a width and a height of a block unit in an image frame, whether the block unit is predicted by a geometric partitioning mode.
BACKGROUND
Geometric partitioning mode is a coding tool in a video coding method. In the geometric partitioning mode, an encoder may select two of a plurality of merge candidates for predicting a block unit in an image frame and provide two merge indices into a bitstream for a decoder to recognize the selected merge candidates.
The encoder and the decoder may split the block unit in the image frame along a partition line to generate a plurality of sub-blocks when the geometric partitioning mode is applied on the block unit. Some of the sub-blocks may be predicted based on only one of the two merge candidates, and others of the sub-blocks may be predicted based on both of the two merge candidates. Therefore, the encoder needs to select a prediction result of the block unit from a plurality of candidate results, including a geometric result, when the encoder tries a plurality of prediction modes including the geometric partitioning mode for predicting the block unit.
However, predicting the block unit in the geometric partitioning mode to generate the geometric result is complex for the encoder, since the encoder must select the partition line from a plurality of candidate lines and select the two merge candidates from the plurality of merge candidates. Therefore, in order to decrease the complexity of the encoding process, it may be important for the encoder to restrict the usage of the geometric partitioning mode for predicting the block unit. The decoder also needs to know how to determine that the geometric partitioning mode is disabled.
SUMMARY
The present disclosure is directed to a device and method for determining whether the geometric partitioning mode is applied on a block unit based on a comparison between a width and a height of the block unit.
In a first aspect of the present disclosure, a method for decoding a bitstream and an electronic device for performing the method are provided. The method comprises receiving an image frame of the bitstream; determining a block unit having a block width and a block height from the received image frame; determining whether a geometric partitioning mode is disabled for the block unit based on a comparison between the block width and the block height, wherein the block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; generating the predicted block by predicting the block unit based on a prediction mode different from the geometric partitioning mode when the geometric partitioning mode is disabled for the block unit; and reconstructing the block unit based on the predicted block.
In a second aspect of the present disclosure, a method for decoding a bitstream and an electronic device for performing the method are provided. The method comprises receiving an image frame of the bitstream; determining a block unit having a block width and a block height from the received image frame; determining a comparison between the block width and the block height; generating a predicted block by predicting the block unit based on a first prediction mode different from a geometric partitioning mode when the comparison indicates that a ratio value between the block width and the block height is greater than or equal to a threshold value, wherein the block unit is divided to generate a plurality of sub-blocks each predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; and reconstructing the block unit based on the predicted block.
BRIEF DESCRIPTION OF THE DRAWINGS
Aspects of the present disclosure are best understood from the following detailed disclosure and the corresponding figures. Various features are not drawn to scale and dimensions of various features may be arbitrarily increased or reduced for clarity of discussion.
FIG. 1 illustrates a block diagram of a system configured to encode and decode video data according to an example implementation of the present disclosure.
FIG. 2 illustrates a block diagram of the decoder module of the second electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure.
FIG. 3 illustrates a flowchart of a method for encoding and decoding video data by an electronic device according to an example implementation of the present disclosure.
FIG. 4A-FIG. 4C are schematic illustrations of a block unit having different block sizes, according to example implementations of the present disclosure.
FIG. 5A and FIG. 5B are schematic illustrations of a block unit having different sub-blocks separated based on a partition line, according to example implementations of the present disclosure.
FIG. 6 illustrates a block diagram of the encoder module of the first electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure.
DESCRIPTION
The following disclosure contains specific information pertaining to example implementations in the present disclosure. The figures and the corresponding detailed disclosure are directed to example implementations. However, the present disclosure is not limited to these example implementations. Other variations and implementations of the present disclosure will occur to those skilled in the art.
Unless noted otherwise, like or corresponding elements among the figures may be indicated by like or corresponding reference designators. The figures and illustrations in the present disclosure are generally not to scale and are not intended to correspond to actual relative dimensions.
For the purpose of consistency and ease of understanding, like features are identified (although, in some examples, not illustrated) by reference designators in the exemplary figures. However, the features in different implementations may differ in other respects and shall not be narrowly confined to what is illustrated in the figures.
In the present disclosure, the phrases "in one implementation" or "in some implementations" may refer to one or more of the same or different implementations. The term "coupled" is defined as connected, whether directly or indirectly, through intervening components, and is not necessarily limited to physical connections. The term "comprising" means "including, but not necessarily limited to" and specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the equivalent.
For purposes of explanation and non-limitation, specific details such as functional entities, techniques, protocols, and standards are set forth for providing an understanding of the disclosed technology. In other examples, detailed disclosure of well-known methods, technologies, systems and architectures are omitted so as not to obscure the present disclosure with unnecessary details.
Persons skilled in the art will immediately recognize that any disclosed coding function (s) or algorithm (s) described in the present disclosure may be implemented by hardware,  software or a combination of software and hardware. Disclosed functions may correspond to modules that are software, hardware, firmware, or any combination thereof.
A software implementation may include a program having computer executable instructions stored on computer readable medium such as memory or other type of storage devices. For example, one or more microprocessors or general-purpose computers with communication processing capability may be programmed with executable instructions and perform the disclosed function (s) or algorithm (s) .
The microprocessors or general-purpose computers may be formed of application-specific integrated circuits (ASICs), programmable logic arrays, and/or one or more digital signal processors (DSPs). Although some of the disclosed implementations are oriented to software installed and executing on computer hardware, alternative implementations implemented as firmware, as hardware, or as a combination of hardware and software are well within the scope of the present disclosure. The computer readable medium includes, but is not limited to, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), magnetic cassettes, magnetic tape, magnetic disk storage, or any other equivalent medium capable of storing computer-readable instructions.
FIG. 1 illustrates a block diagram of a system configured to encode and decode video data according to an example implementation of the present disclosure. The system 100 includes a first electronic device 110, a second electronic device 120, and a communication medium 130.
The first electronic device 110 may be a source device including any device configured to encode video data and transmit encoded video data to the communication medium 130. The second electronic device 120 may be a destination device including any device configured to receive encoded video data via the communication medium 130 and to decode encoded video data.
In at least one implementation, the first electronic device 110 may communicate via wire or wirelessly with the second electronic device 120 via the communication medium 130. The first electronic device 110 may include a source module 112, an encoder module 114, and a first interface 116. The second electronic device 120 may include a display module 122, a decoder module 124, and a second interface 126. The first electronic device 110 may be a video encoder and the second electronic device 120 may be a video decoder.
In at least one implementation, the first electronic device 110 and/or the second electronic device 120 may be a mobile phone, a tablet, a desktop, a notebook, or other electronic devices. FIG. 1 illustrates one example of the first electronic device 110 and the second electronic device 120. The first electronic device 110 and the second electronic device 120 may include more or fewer components than illustrated or have a different configuration of the various illustrated components.
In at least one implementation, the source module 112 may include a video capture device to capture new video, a video archive to store previously captured video, and/or a video feed interface to receive video from a video content provider. The source module 112 may generate computer graphics-based data as the source video or generate a combination of live video, archived video, and computer-generated video as the source video. The video capture device may be a charge-coupled device (CCD) image sensor, a complementary metal-oxide-semiconductor (CMOS) image sensor, or a camera.
In at least one implementation, the encoder module 114 and the decoder module 124 may each be implemented as any of a variety of suitable encoder/decoder circuitry such as one or more microprocessors, a central processing unit (CPU) , a graphic processing unit (GPU) , a system on chip (SoC) , digital signal processors (DSPs) , application specific integrated circuits (ASICs) , field programmable gate arrays (FPGAs) , discrete logic, software, hardware, firmware or any combinations thereof. When implemented partially in software, a device may store the program having instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the disclosed methods. In at least one implementation, each of the encoder module 114 and the decoder module 124 may be included in one or more encoders or decoders, any of which may be integrated as part of a combined encoder/decoder (CODEC) in a device.
In at least one implementation, the first interface 116 and the second interface 126 may utilize customized protocols or follow existing standards or de facto standards including, but not limited to, Ethernet, IEEE 802.11 or IEEE 802.15 series, Wireless USB or telecommunication standards including, but not limited to, Global System for Mobile Communications (GSM), Code-Division Multiple Access 2000 (CDMA2000), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Worldwide Interoperability for Microwave Access (WiMAX), Third Generation Partnership Project Long-Term Evolution (3GPP-LTE) or Time-Division LTE (TD-LTE). In at least one implementation, the first interface 116 and the second interface 126 may each include any device configured to transmit and/or store a compliant video bitstream via the communication medium 130 and to receive the compliant video bitstream via the communication medium 130.
In at least one implementation, the first interface 116 and the second interface 126 may include a computer system interface that enables a compliant video bitstream to be stored on a storage device or to be received from the storage device. For example, the first interface 116  and the second interface 126 may include a chipset supporting Peripheral Component Interconnect (PCI) and Peripheral Component Interconnect Express (PCIe) bus protocols, proprietary bus protocols, Universal Serial Bus (USB) protocols, I2C, or any other logical and physical structure that may be used to interconnect peer devices.
In at least one implementation, the display module 122 may include a display using liquid crystal display (LCD) technology, plasma display technology, organic light emitting diode (OLED) display technology, or light emitting polymer display (LPD) technology with other display technologies used in other implementations. The display module 122 may include a high-definition display or an ultra-high-definition display.
FIG. 2 illustrates a block diagram of the decoder module 124 of the second electronic device 120 illustrated in FIG. 1 according to an example implementation of the present disclosure. The decoder module 124 includes an entropy decoder (e.g., entropy decoding unit 2241) , a prediction processor (e.g., prediction process unit 2242) , an inverse quantization/inverse transform processor (e.g., inverse quantization/inverse transform unit 2243) , a summer (e.g., summer 2244) , a filter (e.g., filtering unit 2245) , and a decoded picture buffer (e.g., decoded picture buffer 2246) . The prediction process unit 2242 further includes an intra prediction processor (e.g., intra prediction unit 22421) and an inter prediction processor (e.g., inter prediction unit 22422) . The decoder module 124 receives a bitstream and decodes the bitstream to output decoded video.
The entropy decoding unit 2241 may receive the bitstream including a plurality of syntax elements from the second interface 126 in FIG. 1 and perform a parsing operation on the bitstream to extract syntax elements from the bitstream. As part of the parsing operation, the entropy decoding unit 2241 may entropy decode the bitstream to generate quantized transform coefficients, quantization parameters, transform data, motion vectors, intra modes, partition information, and other syntax information.
In at least one implementation, the entropy decoding unit 2241 may perform context adaptive variable length coding (CAVLC) , context adaptive binary arithmetic coding (CABAC) , syntax-based context-adaptive binary arithmetic coding (SBAC) , probability interval partitioning entropy (PIPE) coding or another entropy coding technique to generate the quantized transform coefficients. In at least one implementation, the entropy decoding unit 2241 may provide the quantized transform coefficients, the quantization parameters, and the transform data to the inverse quantization/inverse transform unit 2243 and provide the motion vectors, the intra modes, the partition information, and other syntax information to the prediction process unit 2242.
In at least one implementation, the prediction process unit 2242 may receive syntax elements such as motion vectors, intra modes, partition information, and other syntax information from the entropy decoding unit 2241. The prediction process unit 2242 may receive the syntax  elements including the partition information and divide image frames according to the partition information.
In at least one implementation, each of the image frames may be divided into at least one image block according to the partition information. The at least one image block may include a luminance block for reconstructing a plurality of luminance samples and at least one chrominance block for reconstructing a plurality of chrominance samples. The luminance block and the at least one chrominance block may be further divided to generate macroblocks, coding tree units (CTUs) , coding blocks (CBs) , sub-divisions thereof, and/or another equivalent coding unit.
In at least one implementation, during the decoding process, the prediction process unit 2242 may receive predicted data including the intra mode or the motion vector for a current image block of a specific one of the image frames. The current image block may be the luminance block or one of the chrominance blocks in the specific image frame.
In at least one implementation, the intra prediction unit 22421 may perform intra-predictive coding of a current block unit relative to one or more neighboring blocks in the same frame as the current block unit based on syntax elements related to the intra mode in order to generate a predicted block. The intra mode may specify the location of reference samples selected from the neighboring blocks within the current frame. In at least one implementation, the intra prediction unit 22421 may reconstruct a plurality of chroma components of the current block unit based on a plurality of luma components of the current block unit when the chroma components are reconstructed by the prediction process unit 2242.
In at least one implementation, the intra prediction unit 22421 may reconstruct a plurality of chroma components of the current block unit based on the plurality of luma components of the current block unit when the luma components of the current block are reconstructed by the prediction process unit 2242.
In at least one implementation, the inter prediction unit 22422 may perform inter-predictive coding of the current block unit relative to one or more blocks in one or more reference image blocks based on syntax elements related to the motion vector in order to generate the predicted block.
In at least one implementation, the motion vector may indicate a displacement of the current block unit within the current image block relative to a reference block unit within the reference image block. The reference block unit is a block determined to closely match the current block unit.
In at least one implementation, the inter prediction unit 22422 may receive the reference image block stored in the decoded picture buffer 2246 and reconstruct the current block unit based on the received reference image blocks.
In at least one implementation, the inverse quantization/inverse transform unit 2243 may apply inverse quantization and inverse transformation to reconstruct the residual block in the pixel domain. The inverse quantization/inverse transform unit 2243 may apply inverse quantization to the residual quantized transform coefficient to generate a residual transform coefficient and then apply inverse transformation to the residual transform coefficient to generate the residual block in the pixel domain.
In at least one implementation, the inverse transformation may be inversely applied by the transformation process such as discrete cosine transform (DCT) , discrete sine transform (DST) , adaptive multiple transform (AMT) , mode-dependent non-separable secondary transform (MDNSST) , hypercube-givens transform (HyGT) , signal dependent transform, Karhunen-Loéve transform (KLT) , wavelet transform, integer transform, sub-band transform or a conceptually similar transform.
In at least one implementation, the inverse transformation may convert the residual information from a transform domain, such as a frequency domain, back to the pixel domain. In at least one implementation, the degree of inverse quantization may be modified by adjusting a quantization parameter. The summer 2244 adds the residual block to the predicted block from the prediction process unit 2242 to produce a reconstructed block.
In at least one implementation, the summer 2244 adds the reconstructed residual block to the predicted block provided from the prediction process unit 2242 to produce a reconstructed block.
In at least one implementation, the filtering unit 2245 may include a deblocking filter, a sample adaptive offset (SAO) filter, a bilateral filter, and/or an adaptive loop filter (ALF) to remove blocking artifacts from the reconstructed block. Additional filters (in loop or post loop) may also be used in addition to the deblocking filter, the SAO filter, the bilateral filter and the ALF. Such filters are not explicitly illustrated for brevity but may filter the output of the summer 2244.
The filtering unit 2245 may output the decoded video to the display module 122 or other video receiving unit after the filtering unit 2245 performs the filtering process for the reconstructed blocks of the specific image frame.
In at least one implementation, the decoded picture buffer 2246 may be a reference picture memory that stores the reference block for use by the prediction process unit 2242 in decoding the bitstream (in inter coding modes) . The decoded picture buffer 2246 may be formed  by any of a variety of memory devices such as dynamic random-access memory (DRAM) , including synchronous DRAM (SDRAM) , magneto-resistive RAM (MRAM) , resistive RAM (RRAM) , or other types of memory devices.
In at least one implementation, the decoded picture buffer 2246 may be on-chip with other components of the decoder module 124 or off-chip relative to those components.
FIG. 3 illustrates a flowchart of a method 300 for encoding and decoding video data by an electronic device according to an example implementation of the present disclosure. The method 300 is an example only as there are a variety of ways to perform the method.
The method 300 for decoding a bitstream and reconstructing a block unit may be performed using the configurations illustrated in FIG. 1 and FIG. 2 and various elements of these figures are referenced with regard to the method 300. Each block illustrated in FIG. 3 may represent one or more processes, methods, or subroutines performed.
The order of blocks in FIG. 3 is illustrative only and may change. Additional blocks may be added or less blocks may be utilized without departing from the present disclosure.
At block 310, the decoder module 124 receives an image frame of video data. The video data may be a bitstream.
With reference to FIG. 1 and FIG. 2, the second electronic device 120 may receive the bitstream from an encoder, such as the first electronic device 110, or other video providers via the second interface 126. The second interface 126 may provide the bitstream to the decoder module 124.
In at least one implementation, the entropy decoding unit 2241 may decode the bitstream to determine a plurality of prediction indications and a plurality of partitioning indications for the image frame and the decoder module 124 may further reconstruct the image frame based on the prediction indications and the partitioning indications. The prediction indications and the partitioning indications may include a plurality of flags and a plurality of indices.
At block 320, the decoder module 124 determines a block unit having a block width and a block height from the received video data. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may divide the image frame to determine the block unit according to the partition indications included in the bitstream. For example, the decoder module 124 may divide the image frames to generate a plurality of coding tree units and may further divide one of the coding tree units to determine the block unit according to the partition indications (e.g., based on a video coding standard) .
FIG. 4A-FIG. 4C are schematic illustrations of a block unit having different block sizes according to example implementations of the present disclosure. In at least one  implementation, a block size of the block unit may include a block height Hb and a block width Wb. For example, the block size of the block unit 410 includes the block height H410 and the block width W410, the block size of the block unit 430 includes the block height H430 and the block width W430, and the block size of the block unit 450 includes the block height H450 and the block width W450. In the implementation, the block height H410 may be equal to the block width W410, the block width W430 may be equal to 8 × H430 and the block height H450 may be equal to 4 × W450.
At block 330, the decoder module 124 determines a comparison between the block width and the block height. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may determine the comparison between the block width and the block height based on an inequality.
For example, a ratio value R between the block width Wb and the block height Hb may be greater than or equal to a threshold value T (i.e., R ≥ T). In one implementation, the ratio value R between the block width Wb and the block height Hb may be less than the threshold value T (i.e., R < T). In another implementation, the ratio value R between the block width Wb and the block height Hb may be greater than the threshold value T (i.e., R > T). In yet another implementation, the ratio value R between the block width Wb and the block height Hb may be less than or equal to the threshold value T (i.e., R ≤ T).
In at least one implementation, the decoder module 124 may determine a ratio parameter RP computed based on the block width Wb and the block height Hb. In at least one implementation, a first ratio parameter RP1 may be computed based on division. In one implementation, the first ratio parameter RP1 may be computed by dividing the block width Wb by the block height Hb (i.e., RP1 = Wb / Hb) when the block width Wb is greater than or equal to the block height Hb, and by dividing the block height Hb by the block width Wb (i.e., RP1 = Hb / Wb) when the block width Wb is less than the block height Hb. In another implementation, the first ratio parameter RP1 may be computed by dividing the block width Wb by the block height Hb when the block width Wb is greater than the block height Hb, and by dividing the block height Hb by the block width Wb when the block width Wb is less than or equal to the block height Hb.
In at least one implementation, a second ratio parameter RP2 may be computed based on the block width Wb and the block height Hb by a logarithmic equation. In the implementation, one of the first ratio parameter RP1 and the second ratio parameter RP2, computed by different formulas, may be set as the ratio parameter RP of the block unit. In at least one implementation, the second ratio parameter RP2 may be equal to the absolute value of the difference generated by subtracting a base-2 logarithm of the block height from a base-2 logarithm of the block width (i.e., RP2 = abs(log2 Wb − log2 Hb)). In other words, the second ratio parameter RP2 may be equal to the absolute value of a base-2 logarithm of a quotient value, where the quotient value is generated by dividing the block width Wb by the block height Hb or by dividing the block height Hb by the block width Wb (i.e., RP2 = abs(log2 Wb − log2 Hb) = abs(log2(Wb/Hb)) = abs(log2(Hb/Wb))). The first ratio parameter RP1 may be equal to two to the power of the second ratio parameter RP2.
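As a concrete illustration of the two formulas above, the following Python sketch computes the first ratio parameter RP1 by division and the second ratio parameter RP2 by the base-2 logarithm, and checks that RP1 equals two to the power of RP2. The example block dimensions are assumptions chosen as power-of-two sizes.

```python
from math import log2

def first_ratio_parameter(wb: int, hb: int) -> int:
    # RP1: the larger dimension divided by the smaller one.
    return wb // hb if wb >= hb else hb // wb

def second_ratio_parameter(wb: int, hb: int) -> int:
    # RP2 = abs(log2(Wb) - log2(Hb)) = abs(log2(Wb/Hb)).
    return abs(int(log2(wb)) - int(log2(hb)))

# For power-of-two block sizes, RP1 is two to the power of RP2.
for wb, hb in ((16, 16), (64, 8), (8, 32)):
    assert first_ratio_parameter(wb, hb) == 2 ** second_ratio_parameter(wb, hb)
```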
In at least one implementation, the decoder module 124 may compare the ratio parameter RP with a ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is greater than the threshold value T when the ratio parameter RP is greater than the ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is equal to the threshold value T when the ratio parameter RP is equal to the ratio threshold RT. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the ratio parameter RP is less than the ratio threshold RT.
In at least one implementation, the decoder module 124 may compare the ratio parameter RP with a first ratio threshold RT1 when the first ratio parameter RP1 is set as the ratio parameter RP. In one implementation, the decoder module 124 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the first ratio parameter RP1 is less than the first ratio threshold RT1. In another implementation, the decoder module 124 may determine that the comparison indicates that the ratio value R is less than or equal to the threshold value T when the first ratio parameter RP1 is less than or equal to the first ratio threshold RT1. In at least one implementation, the first ratio threshold RT1 may be a positive integer.
In at least one implementation, the first ratio threshold RT1 may be equal to two to the power of n, where n is a positive integer. The first ratio threshold RT1 may be preset to 1, 2, 4 or 8. For example, the first ratio parameter RP1 of the block unit 410 (equal to 1) and the first ratio parameter RP1 = H450 / W450 of the block unit 450 (equal to 4) are less than the first ratio threshold RT1 when the first ratio threshold RT1 is equal to 8. The first ratio parameter RP1 = W430 / H430 of the block unit 430 (equal to 8) is not less than the first ratio threshold RT1 when the first ratio threshold RT1 is equal to 8.
In at least one implementation, the decoder module 124 may compare the ratio parameter RP with a second ratio threshold RT2 when the second ratio parameter RP2 is set as the ratio parameter RP. The decoder module 124 may determine that the comparison indicates that  the ratio value R is greater than or equal to the threshold value T when the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the second ratio parameter RP2 is less than the second ratio threshold RT2. The decoder module 124 may determine that the comparison indicates that the ratio value R is less than or equal to the threshold value T when the second ratio parameter RP2 is less than or equal to the second ratio threshold RT2.
In at least one implementation, the second ratio threshold RT2 may be a positive integer. The second ratio threshold RT2 may be preset to 1, 2, 3 or 4. For example, the second ratio parameter RP2 of the block unit 410 (equal to 0) and the second ratio parameter RP2 of the block unit 450 (equal to 2) are less than the second ratio threshold RT2 when the second ratio threshold RT2 is equal to 3. The second ratio parameter RP2 of the block unit 430 (equal to 3) is not less than the second ratio threshold RT2 when the second ratio threshold RT2 is equal to 3.
In at least one implementation, the first ratio threshold RT1 may be equal to two to the power of the second ratio threshold RT2 since the first ratio parameter RP1 is equal to two to the power of the second ratio parameter RP2. Therefore, the result of the comparison may be unchanged when the decoder module 124 changes the comparison from a first inequality RP1 < RT1 to a second inequality RP2 < RT2.
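The equivalence stated above can be checked directly. The sketch below, over assumed power-of-two block sizes from 8 to 64, verifies that the inequality RP1 < RT1 and the inequality RP2 < RT2 always give the same result when RT1 is equal to two to the power of RT2.

```python
from math import log2

# Verify that RP1 < RT1 and RP2 < RT2 agree whenever RT1 = 2 ** RT2.
for rt2 in (1, 2, 3, 4):                      # preset second ratio thresholds
    rt1 = 2 ** rt2                            # corresponding first ratio threshold
    for wb in (8, 16, 32, 64):
        for hb in (8, 16, 32, 64):
            rp1 = max(wb, hb) // min(wb, hb)
            rp2 = abs(int(log2(wb)) - int(log2(hb)))
            assert (rp1 < rt1) == (rp2 < rt2)
print("The two forms of the comparison are interchangeable.")
```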
In at least one implementation, the decoder module 124 may compare the block width Wb with a first magnified value N1 generated from the block height Hb and compare the block height Hb with a second magnified value N2 generated from the block width Wb when the decoder module 124 determines the comparison between the block width Wb and the block height Hb. The first magnified value N1 may be computed by multiplying the block height Hb by the threshold value T, and the second magnified value N2 may be computed by multiplying the block width Wb by the threshold value T (i.e., N1 = T × Hb and N2 = T × Wb).
In at least one implementation, the decoder module 124 may determine the comparison indicates that the ratio value R is less than the threshold value T when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2. The decoder module 124 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block width Wb is greater than or equal to the first magnified value N1. The decoder module 124 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block height Hb is greater than or equal to the second magnified value N2.
In at least one implementation, the threshold value T may be equal to the first ratio threshold RT1 when the decoder module 124 determines the comparison based on the first  magnified value N1 and the second magnified value N2. The decoder module 124 may determine the comparison based on a third inequality Wb < N1 = RT1× Hb and a fourth inequality Hb < N2 = RT1 × Wb.
In at least one implementation, the first ratio threshold RT1 may be preset to 1, 2, 4 or 8. Therefore, the threshold value T may be preset to 1, 2, 4 or 8 when the decoder module 124 determines the comparison based on the first magnified value N1 and the second magnified value N2. In at least one implementation, a first division value Hb/Wb may be less than 1 when the block width Wb is greater than the block height Hb. Therefore, the first division value Hb/Wb is also less than the threshold value T and the fourth inequality Hb < RT1 × Wb is true when the block width Wb is greater than the block height Hb.
In other words, the comparison may only be influenced by the third inequality Wb < RT1× Hb when the block width Wb is greater than the block height Hb. Therefore, the comparison determined based on the first inequality RP1 = Wb /Hb < RT1 is identical to the comparison determined based on the third inequality Wb < N1 = RT1× Hb and the fourth inequality Hb < N2 = RT1 × Wb when the block width Wb is greater than the block height Hb.
In another implementation, a second division value Wb/Hb may be less than 1 when the block width Wb is less than the block height Hb. Therefore, the second division value Wb/Hb is also less than the threshold value T and the third inequality Wb < RT1 × Hb is true when the block width Wb is less than the block height Hb and the threshold value T is preset to 1, 2, 4 or 8.
In other words, the comparison may only be influenced by the fourth inequality Hb < RT1× Wb when the block width Wb is less than the block height Hb. Therefore, the comparison determined based on the first inequality RP1 = Hb /Wb < RT1 is identical to the comparison determined based on the third inequality Wb < N1 = RT1× Hb and the fourth inequality Hb < N2 = RT1 × Wb when the block width Wb is less than the block height Hb.
In another implementation, the first division value and the second division value may be equal to 1 when the block width Wb is equal to the block height Hb. Therefore, the first division value and the second division value are less than the threshold value T, and the third inequality and the fourth inequality are true, when the block width Wb is equal to the block height Hb and the threshold value T is preset to 2, 4 or 8. Thus, the comparison determined based on the first inequality RP1 = Hb / Wb < RT1 is identical to the comparison determined based on the third inequality Wb < N1 = RT1 × Hb and the fourth inequality Hb < N2 = RT1 × Wb when the block width Wb is equal to the block height Hb and the threshold value T is preset to 2, 4 or 8.
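Similarly, the magnified-value form of the comparison can be checked against the division-based form. The sketch below, over assumed power-of-two block sizes, confirms that the pair of inequalities Wb < N1 = T × Hb and Hb < N2 = T × Wb yields the same enabled/disabled decision as RP1 < RT1 when the threshold value T equals the first ratio threshold RT1.

```python
# Verify that the magnified-value check and the division-based check agree.
for t in (2, 4, 8):                           # preset threshold values T = RT1
    for wb in (8, 16, 32, 64):
        for hb in (8, 16, 32, 64):
            division_form = max(wb, hb) // min(wb, hb) < t     # RP1 < RT1
            n1, n2 = t * hb, t * wb                            # magnified values
            magnified_form = (wb < n1) and (hb < n2)
            assert division_form == magnified_form
```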
At block 340, the decoder module 124 determines whether a geometric partitioning mode is disabled for the block unit based on the comparison. The method 300 may proceed to block 350 when the decoder module 124 determines that the geometric partitioning mode is  disabled for the block unit. The method may proceed to block 360 when the decoder module 124 determines the geometric partitioning mode is enabled for the block unit.
With further reference to FIG. 1 and FIG. 2, the decoder module 124 may determine that the geometric partitioning mode is disabled for the block unit when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is greater than or equal to the threshold value T. The decoder module 124 may determine that the geometric partitioning mode is enabled for the block unit when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is less than the threshold value T.
In at least one implementation, the decoder module 124 may determine the comparison between the block width and the block height to restrict the usage of the geometric partitioning mode. When the block unit is too narrow, the prediction results of different neighboring partition lines may be similar to each other. The block unit may be further divided along a horizontal direction or a vertical direction to generate two block units to be predicted respectively when the block unit is too narrow.
Therefore, the geometric partitioning mode may be disabled when the block unit is too narrow. For example, the geometric partitioning mode may be disabled when the block width is greater than or equal to T times the block height, or the block height is greater than or equal to T times the block width. In at least one implementation, the threshold value T may be equal to 2, 4 or 8.
In at least one implementation, the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1. The geometric partitioning mode may be enabled for the block unit when the decoder module 124 determines that the first ratio parameter RP1 is less than the first ratio threshold RT1.
For example, the first ratio threshold RT1 may be equal to eight. Therefore, the geometric partitioning mode may be disabled for the block unit when the first ratio parameter RP1 of the block unit is greater than or equal to 8.
In a general geometric partitioning mode, the block width Wb and the block height Hb are greater than or equal to 8. Therefore, the geometric partitioning mode may not be applied on the block unit having the block size 64×8 or 8×64 when the first ratio threshold RT1 is equal to eight. The geometric partitioning mode may not be applied on the block unit having the block size 64×8, 64×16, 32×8, 8×64, 16×64 or 8×32 when the first ratio threshold RT1 is equal to four.
In at least one implementation, the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2. The geometric partitioning mode may  be enabled for the block unit when the decoder module 124 determines that the second ratio parameter RP2 is less than the second ratio threshold RT2.
For example, the geometric partitioning mode may not be applied on the block unit having the block size 64×8 or 8×64 when the second ratio threshold RT2 is equal to three. The geometric partitioning mode may not be applied on the block unit having the block size 64×8, 64×16, 32×8, 8×64, 16×64 or 8×32 when the second ratio threshold RT2 is equal to two.
In at least one implementation, the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the block width Wb is greater than or equal to the first magnified value N1. In another implementation, the geometric partitioning mode may be disabled for the block unit when the decoder module 124 determines that the block height Hb is greater than or equal to the second magnified value N2. The geometric partitioning mode may be enabled for the block unit when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2.
For example, the geometric partitioning mode may not be applied on the block unit having the block size 64×8 or 8×64 when the threshold value T is equal to eight. In other words, the geometric partitioning mode may not be applied on the block unit having the block size 64×8 or 8×64 when the block width Wb is greater than or equal to the first magnified value 8×Hb or block height Hb is greater than or equal to the second magnified value 8×Wb. The geometric partitioning mode may not be applied on the block unit having the block size 64×8, 64×16, 32×8, 8×64, 16×64 or 8×32 when the block width Wb is greater than or equal to the first magnified value 4×Hb or block height Hb is greater than or equal to the second magnified value 4×Wb.
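The disabled block sizes listed above follow directly from the comparison. The sketch below enumerates block sizes whose width and height are at least 8 (as noted for the general geometric partitioning mode) and prints the sizes for which the mode is disabled when the first ratio threshold RT1 is 8 or 4; the upper bound of 64 is an assumption taken from the examples above.

```python
# List the block sizes for which the geometric partitioning mode is disabled.
sizes = [(w, h) for w in (8, 16, 32, 64) for h in (8, 16, 32, 64)]

for rt1 in (8, 4):
    disabled = [f"{w}x{h}" for (w, h) in sizes
                if max(w, h) // min(w, h) >= rt1]
    print(f"RT1 = {rt1}: disabled for {', '.join(disabled)}")
# RT1 = 8: disabled for 8x64, 64x8
# RT1 = 4: disabled for 8x32, 8x64, 16x64, 32x8, 64x8, 64x16
```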
At block 350, the decoder module 124 predicts the block unit by using a first prediction mode different from the geometric partitioning mode to generate a predicted block. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may select the first prediction mode and predict the block unit based on the first prediction mode to generate the predicted block when the geometric partitioning mode is disabled for the block unit. The decoder module 124 may predict the block unit based on the first prediction mode to generate the predicted block when the comparison indicates that the ratio value R between the block width Wb and the block height Hb is greater than or equal to the threshold value T.
In at least one implementation, the first prediction mode may be selected from a plurality of option modes. Each of the option modes is different from the geometric partitioning mode.
In at least one implementation, the option modes may be determined for selecting the first prediction mode by a first merge index. Therefore, the first prediction mode may be the one of the option modes selected based on the first merge index.
In the implementation, the first merge index may indicate the first prediction mode included in a merge candidate list. In at least one implementation, the option modes may be a plurality of merge candidate modes in the merge candidate list determined from a plurality of spatial motion prediction modes of a plurality of neighboring blocks neighboring the block unit, a plurality of temporal motion prediction modes of a plurality of collocated blocks, history-based motion prediction modes stored in a first-in-first-out (FIFO) table, a plurality of pair-wise average motion prediction modes, and a zero motion mode.
In at least one implementation, the first prediction mode may indicate first motion information having a first reference frame and a first reference vector V0. The first reference frame may be selected from a plurality of reference lists. In at least one implementation, the first motion information may include a first list flag for determining one of the reference lists to select the first reference frame. In at least one implementation, a plurality of block components in the block unit may be predicted based on the first reference vector V0 and the first reference frame.
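To make the structure of the option modes concrete, the sketch below assembles a merge candidate list in the order described above (spatial candidates, temporal candidates, history-based candidates from a FIFO table, a pairwise average candidate, and zero motion). The candidate representation as simple motion-vector tuples, the pairwise averaging of only the first two entries, and the list size of six are simplifying assumptions; pruning and the exact derivation rules are omitted.

```python
from collections import deque

def build_merge_candidate_list(spatial, temporal, history_fifo: deque, max_size: int = 6):
    # Candidates are illustrated as (mvx, mvy) tuples; real candidates also
    # carry reference frames and list flags.
    candidates = list(spatial) + list(temporal) + list(history_fifo)
    if len(candidates) >= 2:
        (ax, ay), (bx, by) = candidates[0], candidates[1]
        candidates.append(((ax + bx) // 2, (ay + by) // 2))    # pairwise average
    while len(candidates) < max_size:
        candidates.append((0, 0))                              # zero motion mode
    return candidates[:max_size]

merge_list = build_merge_candidate_list(
    spatial=[(4, -2), (1, 3)], temporal=[(0, 5)], history_fifo=deque([(2, 2)]))
first_prediction_mode = merge_list[0]    # selected by a first merge index of 0
```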
At block 360, the decoder module 124 predicts the block unit by using a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode to generate the predicted block. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may select the second prediction mode and predict the block unit based on the second prediction mode to generate the predicted block when the geometric partitioning mode is enabled for the block unit. The decoder module 124 may predict the block unit based on the second prediction mode to generate the predicted block when the comparison indicates the ratio value R is less than the threshold value T.
In at least one implementation, the second prediction mode may be selected from the candidate modes. In the implementation, the candidate modes may include the geometric partitioning mode. In one implementation, the candidate modes may include the option modes and the geometric partitioning mode.
In at least one implementation, the decoder module 124 may determine a partition index, a second merge index and a third merge index of the block unit from the bitstream when the geometric partitioning mode is applied on the block unit for predicting the block unit. FIG. 5A and FIG. 5B are schematic illustrations of a block unit having different sub-blocks separated based on a partition line, according to example implementations of the present disclosure. The decoder module 124 may divide the block unit 510 to generate a first sub-block 512, a second sub-block 514 and a third sub-block 516 based on a partition line 511 determined from the partition index.
In at least one implementation, the partition index may indicate the partition line 511 of the block unit 510. In at least one implementation, the partition index may indicate a  partition angle index and a partition distance index. The partition angle index may indicate a split angle of the partition line, and the partition distance index may indicate a split distance between the partition line and a center point of the block unit.
Therefore, the decoder module 124 may determine the partition line based on the split angle and the split distance indicated by the partition index. The bitstream may directly include the partition angle index and the partition distance index.
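One possible sketch of this derivation (in Python; the uniformly spaced angle table and the distance scaling are placeholders chosen for illustration, not the tables of any particular standard):

    import math

    NUM_ANGLES = 32  # placeholder: 32 uniformly spaced split angles

    def partition_line(angle_idx, distance_idx, block_width, block_height):
        """Return (a, b, c) describing the partition line a*x + b*y = c in
        coordinates centred on the block, where the angle index selects the
        line orientation and the distance index its offset from the centre."""
        angle = 2.0 * math.pi * angle_idx / NUM_ANGLES
        a, b = math.cos(angle), math.sin(angle)
        c = 0.25 * distance_idx * min(block_width, block_height)
        return a, b, c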
In at least one implementation, the decoder module 124 may predict the plurality of sub-blocks 512, 514 and 516 based on second motion information and third motion information determined from the merge candidate modes by the second merge index and the third merge index. In at least one implementation, the second motion information indicated by the second merge index may be one of the merge candidate modes included in the merge candidate list, and the third motion information indicated by the third merge index may also be one of the merge candidate modes included in the merge candidate list. In at least one implementation, the second motion information may indicate a second reference frame and a second reference vector V1, and the third motion information may indicate a third reference frame and a third reference vector V2.
The second reference frame and the third reference frame may be selected from the plurality of reference lists. In one implementation, the second reference frame and the third reference frame may be selected from the same one of the reference lists. In other implementations, the second reference frame and the third reference frame may be selected from different reference lists. In at least one implementation, the reference lists may be set as L0 and L1 when the number of reference lists is equal to two.
In at least one implementation, the second motion information may include a second list flag for selecting the second reference frame, and the third motion information may include a third list flag for selecting the third reference frame. In at least one implementation, the second reference frame and the third reference frame may be selected from the same reference list when the second list flag is equal to the third list flag. The second reference frame and the third reference frame may be selected from different reference lists when the second list flag is different from the third list flag.
In at least one implementation, a plurality of first sub-block components in the first sub-block 512 may be predicted based on the second reference vector V1 and the second reference frame. A plurality of second sub-block components in the second sub-block 514 may be predicted based on the third reference vector V2 and the third reference frame. In addition, a plurality of third sub-block components in the third sub-block 516 may be predicted based on the second reference vector V1, the third reference vector V2, the second reference frame and the third reference frame. The third sub-block components in the third sub-block 516 may be predicted by deriving a plurality of first reference samples determined based on the second reference vector V1 and the second reference frame and a plurality of second reference samples determined based on the third reference vector V2 and the third reference frame, and then merging the first reference samples and the second reference samples based on a plurality of blending weights. In at least one implementation, the blending weights may be derived based on a plurality of component distances between the partition line 511 and the third sub-block components in the third sub-block 516.
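A sketch of the distance-based blending, using a line in the (a, b, c) form of the earlier partition-line sketch (in Python with NumPy; the ramp width and the application of the weights to the whole block are simplifications: away from the partition line the weights saturate to 0 or 1, which corresponds to the first and second sub-blocks):

    import numpy as np

    def blend_predictions(pred1, pred2, line, ramp=2.0):
        """Blend two reference-sample arrays with weights derived from the
        signed distance of each sample to the partition line a*x + b*y = c."""
        a, b, c = line
        height, width = pred1.shape
        ys, xs = np.mgrid[0:height, 0:width]
        # Signed distance of each sample position to the line (block-centred).
        dist = a * (xs - (width - 1) / 2.0) + b * (ys - (height - 1) / 2.0) - c
        # Map the distance to a blending weight in [0, 1] around the line.
        w1 = np.clip(0.5 + dist / (2.0 * ramp), 0.0, 1.0)
        return w1 * pred1 + (1.0 - w1) * pred2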
At block 370, the decoder module 124 reconstructs the block unit based on the predicted block. With further reference to FIG. 1 and FIG. 2, the decoder module 124 may determine a plurality of residual components from the bitstream for the block unit, and add the residual components into the predicted block to reconstruct the block unit. The decoder module 222 may reconstruct all of the other block units in the image frame for reconstructing the image frame and the video.
FIG. 6 illustrates a block diagram of the encoder module 114 of the first electronic device illustrated in FIG. 1 according to an example implementation of the present disclosure. The encoder module 114 may include a prediction processor (e.g., prediction process unit 6141) , at least a first summer (e.g., first summer 6142) and a second summer (e.g., second summer 6145) , a transform/quantization processor (e.g., transform/quantization unit 6143) , an inverse quantization/inverse transform processor (e.g., inverse quantization/inverse transform unit 6144) , a filter (e.g., filtering unit 6146) , a decoded picture buffer (e.g., decoded picture buffer 6147) , and an entropy encoder (e.g., entropy encoding unit 6148) . The prediction process unit 6141 of the encoder module 114 may further include a partition processor (e.g., partition unit 61411) , an intra prediction processor (e.g., intra prediction unit 61412) , and an inter prediction processor (e.g., inter prediction unit 61413) .
The encoder module 114 may receive the source video and encode the source video to output a bitstream. The encoder module 114 may receive source video including a plurality of image frames and then divide the image frames according to a coding structure. Each of the image frames may be divided into at least one image block.
The at least one image block may include a luminance block having a plurality of luminance samples and at least one chrominance block having a plurality of chrominance samples. The luminance block and the at least one chrominance block may be further divided to generate macroblocks, coding tree units (CTUs) , coding blocks (CBs) , sub-divisions thereof, and/or another equivalent coding unit.
In at least one implementation, the encoder module 114 may perform additional sub-divisions of the source video. It should be noted that the disclosed implementations are  generally applicable to video coding regardless of how the source video is partitioned prior to and/or during the encoding.
In at least one implementation, during the encoding process, the prediction process unit 6141 may receive a current image block of a specific one of the image frames. The current image block may be the luminance block or one of the chrominance blocks in the specific image frame.
The partition unit 61411 may divide the current image block into multiple block units. The intra prediction unit 61412 may perform intra-predictive coding of a current block unit relative to one or more neighboring blocks in the same frame as the current block unit in order to provide spatial prediction. The inter prediction unit 61413 may perform inter-predictive coding of the current block unit relative to one or more blocks in one or more reference image blocks to provide temporal prediction.
In at least one implementation, the prediction process unit 6141 may select one of the coding results generated by the intra prediction unit 61412 and the inter prediction unit 61413 based on a mode selection method, such as a cost function. The mode selection method may be a rate-distortion optimization (RDO) process.
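A sketch of such a cost-based selection (in Python; the candidate dictionary fields and the Lagrange multiplier handling are illustrative assumptions, not details from the disclosure):

    def select_coding_result(candidates, lagrange_multiplier):
        """Pick the coding result with the lowest rate-distortion cost
        J = D + lambda * R."""
        return min(candidates,
                   key=lambda c: c["distortion"] + lagrange_multiplier * c["rate"])

    # Example: the inter result wins despite spending a few more bits.
    results = [
        {"mode": "intra", "distortion": 1500.0, "rate": 120.0},
        {"mode": "inter", "distortion": 900.0, "rate": 150.0},
    ]
    print(select_coding_result(results, lagrange_multiplier=4.0)["mode"])  # inter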
The prediction process unit 6141 may determine the selected coding result and provide a predicted block corresponding to the selected coding result to the first summer 6142 for generating a residual block and to the second summer 6145 for reconstructing the encoded block unit. The prediction process unit 6141 may further provide syntax elements such as motion vectors, intra mode indicators, partition information, and other syntax information to the entropy encoding unit 6148.
In at least one implementation, the intra prediction unit 61412 may intra predict the current block unit. The intra prediction unit 61412 may determine an intra prediction mode directed toward a reconstructed sample neighboring the current block unit in order to encode the current block unit.
In at least one implementation, the intra prediction unit 61412 may encode the current block unit using various intra prediction modes. The intra prediction unit 61412 or the prediction process unit 6141 may select an appropriate intra prediction mode from the tested modes. The intra prediction unit 61412 may encode the current block unit using a cross component prediction mode to predict one of the two chroma components of the current block unit based on the luma components of the current block unit. The intra prediction unit 61412 may predict a first one of the two chroma components of the current block unit based on the second of the two chroma components of the current block unit.
In at least one implementation, the inter prediction unit 61413 may inter predict the current block unit as an alternative to the intra prediction performed by the intra prediction unit 61412. The inter prediction unit 61413 may perform motion estimation to estimate motion of the current block unit for generating a motion vector.
The motion vector may indicate a displacement of the current block unit within the current image block relative to a reference block unit within a reference image block. The inter prediction unit 61413 may receive at least one reference image block stored in the decoded picture buffer 6147 and estimate the motion based on the received reference image blocks to generate the motion vector.
In at least one implementation, the first summer 6142 may generate the residual block by subtracting the prediction block determined by the prediction process unit 6141 from the original current block unit. The first summer 6142 may represent the component or components that perform this subtraction.
In at least one implementation, the transform/quantization unit 6143 may apply a transform to the residual block in order to generate residual transform coefficients and then quantize the residual transform coefficients to further reduce the bit rate. The transform may be one of a DCT, DST, AMT, MDNSST, HyGT, signal dependent transform, KLT, wavelet transform, integer transform, sub-band transform, or a conceptually similar transform.
In at least one implementation, the transform may convert the residual information from a pixel value domain to a transform domain, such as a frequency domain. The degree of quantization may be modified by adjusting a quantization parameter.
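A simplified sketch of the quantization stage (in Python with NumPy; the mapping from the quantization parameter to a step size that roughly doubles every six QP values is a common convention used here as an assumption, not a quote from the disclosure):

    import numpy as np

    def quantize(coeffs, qp):
        """Uniform quantization of transform coefficients; a larger QP gives a
        larger step and therefore coarser (lower bit-rate) coefficients."""
        step = 2.0 ** ((qp - 4) / 6.0)
        return np.round(coeffs / step).astype(np.int32)

    def dequantize(levels, qp):
        """Rescale the quantized levels back to the transform domain."""
        step = 2.0 ** ((qp - 4) / 6.0)
        return levels.astype(np.float64) * step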
In at least one implementation, the transform/quantization unit 6143 may perform a scan of the matrix including the quantized transform coefficients. Alternatively, the entropy encoding unit 6148 may perform the scan.
In at least one implementation, the entropy encoding unit 6148 may receive a plurality of syntax elements from the prediction process unit 6141 and the transform/quantization unit 6143 including a quantization parameter, transform data, motion vectors, intra modes, partition information, and other syntax information. The entropy encoding unit 6148 may encode the syntax elements into the bitstream.
In at least one implementation, the entropy encoding unit 6148 may entropy encode the quantized transform coefficients by performing CAVLC, CABAC, SBAC, PIPE coding or another entropy coding technique to generate an encoded bitstream. The encoded bitstream may be transmitted to another device (i.e., the second electronic device 120 in FIG. 1) or archived for later transmission or retrieval.
In at least one implementation, the inverse quantization/inverse transform unit 6144  may apply inverse quantization and inverse transformation to reconstruct the residual block in the pixel domain for later use as a reference block. The second summer 6145 may add the reconstructed residual block to the prediction block provided from the prediction process unit 6141 in order to produce a reconstructed block for storage in the decoded picture buffer 6147.
In at least one implementation, the filtering unit 6146 may include a deblocking filter, a SAO filter, a bilateral filter, and/or an ALF to remove blocking artifacts from the reconstructed block. Additional filters (in loop or post loop) may be used in addition to the deblocking filter, the SAO filter, the bilateral filter and the ALF. Such filters are not illustrated for brevity and may filter the output of the second summer 6145.
In at least one implementation, the decoded picture buffer 6147 may be a reference picture memory that stores the reference block for use by the encoder module 114 to encode video, such as in intra or inter coding modes. The decoded picture buffer 6147 may include a variety of memory devices, such as DRAM (including SDRAM), MRAM, RRAM, or other types of memory devices. The decoded picture buffer 6147 may be on-chip with other components of the encoder module 114 or off-chip relative to those components.
In at least one implementation, the encoder module 114 may perform the method 300 for encoding a video and reconstructing a block unit as illustrated in FIG. 3. The method 300 may be performed using the configurations illustrated in FIG. 1 and FIG. 6, and various elements of these figures are referenced with regard to the method 300. Each block illustrated in FIG. 3 may represent one or more processes, methods, or subroutines performed.
The order of blocks in FIG. 3 is illustrative only and may change. Additional blocks may be added or fewer blocks may be utilized without departing from the present disclosure.
At block 310, the encoder module 114 receives an image frame of video data. The video data may be the source video. With reference to FIGS. 1 and 6, the first electronic device 110 may receive the video via the source module 112. The encoder module 114 may determine the image frame from the video.
At block 320, the encoder module 114 determines a block unit having a block width and a block height from the received video data. With reference to FIG. 1 and FIG. 6, the encoder module 114 may divide the image frame to determine the block unit. In at least one implementation, the prediction process unit 6141 of the first electronic device 110 may determine the block unit from the video via the partition unit 61411, and the encoder module 114 may provide a plurality of partition indications into a bitstream based on a partition result of the partition unit 61411. In the implementation, the encoder module 114 may divide the image frames to generate a plurality of coding tree units and may further divide one of the coding tree units to determine the block unit (e.g., based on a video coding standard). With reference to FIG. 4A-FIG. 4C, the block height H410 is equal to the block width W410, the block width W430 is equal to 8 × H430, and the block height H450 is equal to 4 × W450.
At block 330, the encoder module 114 determines a comparison between the block width and the block height. With reference to FIG. 1 and FIG. 6, the encoder module 114 may determine the comparison between the block width and the block height based on an inequality. For example, the comparison may indicate that a ratio value R between the block width Wb and the block height Hb is greater than or equal to a threshold value T (i.e., R ≥ T), or that the ratio value R is less than the threshold value T (i.e., R < T).
In at least one implementation, the encoder module 114 may determine a ratio parameter RP computed based on the block width Wb and the block height Hb. In at least one implementation, a first ratio parameter RP1 may be computed based on the block width Wb and the block height Hb by division. In one implementation, the first ratio parameter RP1 may be computed by dividing the block width Wb by the block height Hb (i.e., RP1 = Wb / Hb) when the block width Wb is greater than or equal to the block height Hb. In the implementation, the first ratio parameter RP1 may be computed by dividing the block height Hb by the block width Wb (i.e., RP1 = Hb / Wb) when the block width Wb is less than the block height Hb.
In at least one implementation, the second ratio parameter RP2 may be computed based on the block width Wb and the block height Hb by a logarithmic equation. In the implementation, one of the first ratio parameter RP1 and the second ratio parameter RP2, computed by different formulas, may be set as the ratio parameter RP of the block unit. In at least one implementation, the second ratio parameter RP2 may be equal to an absolute value of a difference value generated by subtracting a base-2 logarithm of the block height from a base-2 logarithm of the block width (i.e., RP2 = abs(log2(Wb) - log2(Hb))). In the implementation, the first ratio parameter RP1 may be equal to two to the power of the second ratio parameter RP2.
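A sketch of the two ratio-parameter computations (in Python; power-of-two block dimensions are assumed, for which the relationship RP1 = 2^RP2 holds exactly):

    import math

    def ratio_parameters(wb, hb):
        """Compute the division-based RP1 and the logarithm-based RP2 for a
        wb x hb block."""
        rp1 = wb // hb if wb >= hb else hb // wb
        rp2 = abs(int(math.log2(wb)) - int(math.log2(hb)))
        assert rp1 == 2 ** rp2  # RP1 is two to the power of RP2
        return rp1, rp2

    print(ratio_parameters(64, 8))   # (8, 3)
    print(ratio_parameters(8, 32))   # (4, 2)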
In at least one implementation, the encoder module 114 may compare the ratio parameter RP with a ratio threshold RT. In the implementation, the encoder module 114 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the ratio parameter RP is greater than or equal to the ratio threshold RT. The encoder module 114 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the ratio parameter RP is less than the ratio threshold RT.
In at least one implementation, the encoder module 114 may compare the ratio parameter RP with a first ratio threshold RT1 when the first ratio parameter RP1 is set as the ratio parameter RP. In at least one implementation, the encoder module 114 may compare the ratio parameter RP with a second ratio threshold RT2 when the second ratio parameter RP2 is set as the ratio parameter RP.
In one implementation, the encoder module 114 may determine that the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the first ratio parameter RP1 is greater than or equal to the first ratio threshold RT1 or the second ratio parameter RP2 is greater than or equal to the second ratio threshold RT2. The encoder module 114 may determine that the comparison indicates that the ratio value R is less than the threshold value T when the first ratio parameter RP1 is less than the first ratio threshold RT1 or the second ratio parameter RP2 is less than the second ratio threshold RT2.
In at least one implementation, the first ratio threshold RT1 and the second ratio threshold RT2 may each be a positive integer. In at least one implementation, the first ratio threshold RT1 may be equal to two to the power of n, where n is a positive integer. The first ratio threshold RT1 may be preset to 1, 2, 4 or 8, and the second ratio threshold RT2 may be preset to 1, 2, 3 or 4.
In at least one implementation, the first ratio threshold RT1 may be equal to two to the power of the second ratio threshold RT2 since the first ratio parameter RP1 is equal to two to the power of the second ratio parameter RP2. Therefore, the result of the comparison may be unchanged when the encoder module 114 changes the comparison from a first inequality RP1 < RT1 to a second inequality RP2 < RT2.
In at least one implementation, the encoder module 114 may compare the block width Wb with a first magnified value N1 computed by multiplying the block height Hb by the threshold value T and compare the block height Hb with a second magnified value N2 computed by multiplying the block width Wb by the threshold value T when the encoder module 114 determines the comparison between the block width Wb and the block height Hb.
In at least one implementation, the encoder module 114 may determine the comparison indicates that the ratio value R is less than the threshold value T when the block width Wb is less than the first magnified value N1 and the block height Hb is less than the second magnified value N2. In another implementation, the encoder module 114 may determine the comparison indicates that the ratio value R is greater than or equal to the threshold value T when the block width Wb is greater than or equal to the first magnified value N1 or the block height Hb is greater than or equal to the second magnified value N2.
In at least one implementation, the threshold value T may be equal to the first ratio threshold RT1 when the encoder module 114 determines the comparison based on the first magnified value N1 and the second magnified value N2. The encoder module 114 may determine the comparison based on a third inequality Wb < N1 = RT1 × Hb and a fourth inequality Hb < N2 = RT1 × Wb.
In at least one implementation, the comparison determined based on the first inequality RP1 = Wb / Hb < RT1 may be identical to the comparison determined based on the third inequality Wb < N1 = RT1 × Hb and the fourth inequality Hb < N2 = RT1 × Wb when the block width Wb is greater than or equal to the block height Hb. The comparison determined based on the first inequality RP1 = Hb / Wb < RT1 may also be identical to the comparison determined based on the third inequality Wb < N1 = RT1 × Hb and the fourth inequality Hb < N2 = RT1 × Wb when the block width Wb is less than the block height Hb.
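The sketch below verifies this equivalence over power-of-two block sizes (in Python; the exhaustive check and the chosen size range are illustrative and not part of the disclosure):

    import math

    def disabled_by_ratio(wb, hb, rt1):
        rp1 = wb / hb if wb >= hb else hb / wb
        return rp1 >= rt1

    def disabled_by_log(wb, hb, rt2):
        rp2 = abs(math.log2(wb) - math.log2(hb))
        return rp2 >= rt2

    def disabled_by_magnified(wb, hb, t):
        return wb >= t * hb or hb >= t * wb

    # The three formulations agree whenever RT1 = 2 ** RT2 = T.
    for wb in (8, 16, 32, 64, 128):
        for hb in (8, 16, 32, 64, 128):
            for rt2 in (1, 2, 3, 4):
                rt1 = 2 ** rt2
                assert (disabled_by_ratio(wb, hb, rt1)
                        == disabled_by_log(wb, hb, rt2)
                        == disabled_by_magnified(wb, hb, rt1))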
At block 340, the encoder module 114 determines whether a geometric partitioning mode is disabled for the block unit based on the comparison. The method 300 may proceed to block 350 when the encoder module 114 determines that the geometric partitioning mode is disabled for the block unit. The method may proceed to block 360 when the encoder module 114 determines the geometric partitioning mode is enabled for the block unit.
With further reference to FIG. 1 and FIG. 6, the encoder module 114 may determine that the geometric partitioning mode is disabled for the block unit when the comparison indicates that the ratio value R is greater than or equal to the threshold value T. The encoder module 114 may determine that the geometric partitioning mode is enabled for the block unit when the comparison indicates that the ratio value R is less than the threshold value T.
At block 350, the encoder module 114 predicts the block unit by using a first prediction mode different from the geometric partitioning mode to generate a predicted block. With further reference to FIG. 1 and FIG. 6, the encoder module 114 may generate a plurality of coding results generated by the intra prediction unit 61412 and the inter prediction unit 61413. Then, the encoder module 114 may select one of the coding results based on a mode selection method, such as a cost function. The mode selection method may be a rate-distortion optimization (RDO) process.
In at least one implementation, the encoder module 114 may select the first prediction mode from a plurality of option modes when the geometric partitioning mode is disabled for the block unit. In the implementation, the encoder module 114 may predict the block unit based on the first prediction mode to generate the predicted block when the comparison indicates that the ratio value R is greater than or equal to the threshold value T. Each of the option modes is different from the geometric partitioning mode. In at least one implementation, the option modes may include a plurality of merge candidate modes in a merge candidate list.
At block 360, the encoder module 114 predicts the block unit by using a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode to generate the predicted block. With further reference to FIG. 1 and FIG. 6, the encoder module 114 may generate the coding results generated by the intra prediction unit 61412 and the  inter prediction unit 61413. Then, the encoder module 114 may select one of the coding results based on the mode selection method.
In at least one implementation, the encoder module 114 may select the second prediction mode from a plurality of candidate modes when the geometric partitioning mode is enabled for the block unit. In the implementation, the encoder module 114 may predict the block unit based on the second prediction mode to generate the predicted block when the comparison indicates that the ratio value R is less than the threshold value T. In at least one implementation, the candidate modes may include the option modes and the geometric partitioning mode.
At block 370, the encoder module 114 reconstructs the block unit based on the predicted block. In at least one implementation, the block unit may include a plurality of block elements. In the implementation, each of the block elements may be a pixel element. With further reference to FIG. 1 and FIG. 6, the encoder module 114 may determine a plurality of residual components by subtracting the predicted block from the block elements and provide the bitstream including a plurality of coefficients corresponding to the residual components. The encoder module 114 may return the residual components based on the coefficients, and add the returned residual components into the predicted block to generate a reconstructed block unit.
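A sketch of this round trip at the encoder (in Python with NumPy; the transform stage is omitted for brevity and the QP-to-step mapping is the same illustrative assumption used earlier):

    import numpy as np

    def reconstruct_block(block, predicted, qp):
        """Subtract the predicted block, quantize the residual into coefficients,
        return the residual from the coefficients, and add it back."""
        residual = block.astype(np.int32) - predicted.astype(np.int32)
        step = 2.0 ** ((qp - 4) / 6.0)
        coefficients = np.round(residual / step)   # written to the bitstream
        returned_residual = coefficients * step    # returned at reconstruction
        reconstructed = predicted.astype(np.float64) + returned_residual
        return np.clip(reconstructed, 0, 255).astype(np.uint8)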
The disclosed implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present disclosure is not limited to the specific disclosed implementations but that many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.

Claims (20)

  1. A method of decoding a bitstream by an electronic device, the method comprising:
    receiving an image frame of the bitstream;
    determining a block unit having a block width and a block height from the received image frame;
    determining whether a geometric partitioning mode is disabled for the block unit based on a comparison between the block width and the block height, wherein the block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit;
    generating a predicted block by predicting the block unit based on a prediction mode different from the geometric partitioning mode when the geometric partitioning mode is disabled for the block unit; and
    reconstructing the block unit based on the predicted block.
  2. The method according to claim 1, further comprising:
    determining that the geometric partitioning mode is disabled for the block unit when the comparison indicates that a ratio value between the block width and the block height is greater than or equal to a threshold value; and
    determining that the geometric partitioning mode is not disabled when the comparison indicates that the ratio value is less than the threshold value.
  3. The method according to claim 2, further comprising:
    determining a ratio parameter generated based on the block width and the block height;
    comparing the ratio parameter with a ratio threshold;
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the ratio parameter is greater than or equal to the ratio threshold; and
    determining that the comparison indicates that the ratio value is less than the threshold value when the ratio parameter is less than the ratio threshold.
  4. The method according to claim 3, wherein
    the ratio parameter is determined by dividing the block width by the block height when the block width is greater than the block height; and
    the ratio parameter is determined by dividing the block height by the block width when the block height is greater than or equal to the block width.
  5. The method according to claim 3, wherein the ratio parameter is equal to an absolute value of a difference value generated by subtracting a base-2 logarithm of the block height from a base-2 logarithm of the block width.
  6. The method according to claim 2, further comprising:
    comparing the block width with a first magnified value generated by multiplying the block height by the threshold value;
    comparing the block height with a second magnified value generated by multiplying the block width by the threshold value;
    determining that the comparison indicates that the ratio value is less than the threshold value when the block width is less than the first magnified value and the block height is less than the second magnified value;
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block width is greater than or equal to the first magnified value; and
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block height is greater than or equal to the second magnified value.
  7. The method according to claim 1, further comprising:
    determining a partition index, a first merge index and a second merge index of the block unit when the block unit is predicted by the geometric partitioning mode;
    dividing the block unit to generate a first sub-block and a second sub-block based on the partition index; and
    predicting the first sub-block by a first merge mode selected from a plurality of merge candidates based on the first merge index and predicting the second sub-block by a second merge mode selected from the plurality of merge candidates based on the second merge index to generate the predicted block.
  8. A method of decoding a bitstream by an electronic device, the method comprising:
    receiving an image frame of the bitstream;
    determining a block unit having a block width and a block height from the received image frame;
    determining a comparison between the block width and the block height;
    generating a predicted block by predicting the block unit based on a first prediction mode different from a geometric partitioning mode when the comparison indicates that a ratio value between the block width and the block height is greater than or equal to a threshold value, wherein the block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; and
    reconstructing the block unit based on the predicted block.
  9. The method according to claim 8, further comprising:
    determining a ratio parameter generated based on the block width and the block height;
    comparing the ratio parameter with a ratio threshold; and
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the ratio parameter is greater than or equal to the ratio threshold.
  10. The method according to claim 9, wherein:
    the ratio parameter is determined by dividing the block width by the block height when the block width is greater than the block height; and
    the ratio parameter is determined by dividing the block height by the block width when the block height is greater than or equal to the block width.
  11. The method according to claim 9, wherein the ratio parameter is equal to an absolute value of a difference value generated by subtracting a base-2 logarithm of the block height from a base-2 logarithm of the block width.
  12. The method according to claim 9, further comprising:
    determining that the comparison indicates that the ratio value is less than the threshold value when the ratio parameter is less than the ratio threshold; and
    generating the predicted block by predicting the block unit based on a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode when the comparison indicates that the ratio value is less than the threshold value.
  13. The method according to claim 8, further comprising:
    comparing the block width with a first magnified value generated by multiplying the block height by the threshold value;
    comparing the block height with a second magnified value generated by multiplying the  block width by the threshold value;
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block width is greater than or equal to the first magnified value; and
    determining that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block height is greater than or equal to the second magnified value.
  14. The method according to claim 13, further comprising:
    determining that the comparison indicates that the ratio value is less than the threshold value when the block width is less than the first magnified value and the block height is less than the second magnified value; and
    predicting the block unit to generate the predicted block by using a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode when the comparison represents that the ratio value is less than the threshold value.
  15. An electronic device for decoding a bitstream, the electronic device comprising:
    at least one processor; and
    a storage device coupled to the at least one processor and storing a program which, when executed by the at least one processor, causes the at least one processor to:
    receive an image frame of the bitstream;
    determine a block unit having a block width and a block height from the received image frame;
    determine a comparison between the block width and the block height;
    generate a predicted block by predicting the block unit based on a first prediction mode different from a geometric partitioning mode when the comparison indicates that a ratio value between the block width and the block height is greater than or equal to a threshold value, wherein the block unit is divided to generate a plurality of sub-blocks predicted by different merge candidates of the block unit when the geometric partitioning mode is applied on the block unit; and
    reconstruct the block unit based on the predicted block.
  16. The electronic device according to claim 15, wherein the program, when executed by the at least one processor, further causes the at least one processor to:
    determine a ratio parameter generated based on the block width and the block height;
    compare the ratio parameter with a ratio threshold; and
    determine that the comparison indicates that the ratio value is greater than or equal to the  threshold value when the ratio parameter is greater than or equal to the ratio threshold.
  17. The electronic device according to claim 16, wherein:
    the ratio parameter is determined by dividing the block width by the block height when the block width is greater than the block height; and
    the ratio parameter is determined by dividing the block height by the block width when the block height is greater than or equal to the block width.
  18. The electronic device according to claim 16, wherein the ratio parameter is equal to an absolute value of a difference value generated by subtracting a base-2 logarithm of the block height from a base-2 logarithm of the block width.
  19. The electronic device according to claim 16, wherein the program, when executed by the at least one processor, further causes the at least one processor to:
    determine that the comparison indicates that the ratio value is less than the threshold value when the ratio parameter is less than the ratio threshold; and
    generate the predicted block by predicting the block unit based on a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode when the comparison represents that the ratio value is less than the threshold value.
  20. The electronic device according to claim 15, wherein the program, when executed by the at least one processor, further causes the at least one processor to:
    compare the block width with a first magnified value generated by multiplying the block height by the threshold value;
    compare the block height with a second magnified value generated by multiplying the block width by the threshold value;
    determine that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block width is greater than or equal to the first magnified value;
    determine that the comparison indicates that the ratio value is greater than or equal to the threshold value when the block height is greater than or equal to the second magnified value;
    determine that the comparison indicates that the ratio value is less than the threshold value when the block width is less than the first magnified value and the block height is less than the second magnified value; and
    generate the predicted block by predicting the block unit based on a second prediction mode selected from a plurality of candidate modes including the geometric partitioning mode  when the comparison indicates that the ratio value is less than the threshold value.