WO2021127723A1 - Method, apparatus and system for encoding and decoding a block of video samples - Google Patents

Info

Publication number
WO2021127723A1
Authority
WO
WIPO (PCT)
Prior art keywords
flag
decoding
coefficient
block
transform
Prior art date
Application number
PCT/AU2020/051233
Other languages
French (fr)
Inventor
Jonathan GAN
Original Assignee
Canon Kabushiki Kaisha
Canon Australia Pty Ltd
Priority date
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha and Canon Australia Pty Ltd
Publication of WO2021127723A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90: using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91: Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N19/10: using adaptive coding
    • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/13: Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146: Data rate or code amount at the encoder output
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/18: the unit being a set of transform coefficients
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/60: using transform coding
    • H04N19/61: using transform coding in combination with predictive coding
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates generally to digital video signal processing and, in particular, to a method, apparatus and system for encoding and decoding a block of video samples.
  • the present invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for encoding and decoding a block of video samples.
  • JVET Joint Video Experts Team
  • ITU-T Telecommunication Standardisation Sector
  • VCEG Video Coding Experts Group
  • MPEG Motion Picture Experts Group
  • VVC versatile video coding
  • Video data includes a sequence of frames of image data, each of which includes one or more colour channels. Generally, one primary colour channel and two secondary colour channels are needed.
  • the primary colour channel is generally referred to as the ‘luma’ channel and the secondary colour channel(s) are generally referred to as the ‘chroma’ channels.
  • RGB red-green-blue
  • the video data representation seen by an encoder or a decoder often uses a colour space such as YCbCr.
  • YCbCr concentrates luminance, mapped to ‘luma’ according to a transfer function, in a Y (primary) channel and chroma in Cb and Cr (secondary) channels.
  • the Cb and Cr channels may be sampled spatially at a lower rate (subsampled) compared to the luma channel, for example half horizontally and half vertically - known as a ‘4:2:0 chroma format’.
  • the 4:2:0 chroma format is commonly used in ‘consumer’ applications, such as internet video streaming, broadcast television, and storage on Blu-Ray™ disks. Subsampling the Cb and Cr channels at half-rate horizontally and not subsampling vertically is known as a ‘4:2:2 chroma format’.
  • the 4:2:2 chroma format is typically used in professional applications, including capture of footage for cinematic production and the like. The higher sampling rate of the 4:2:2 chroma format makes the resulting video more resilient to editing operations such as colour grading.
  • 4:2:2 chroma format material is often converted to the 4:2:0 chroma format and then encoded prior to distribution to consumers.
  • video is also characterised by resolution and frame rate.
  • Example resolutions are ultra-high definition (UHD) with a resolution of 3840x2160 or ‘8K’ with a resolution of 7680x4320 and example frame rates are 60 or 120Hz.
  • Luma sample rates may range from approximately 500 mega samples per second to several giga samples per second.
  • the sample rate of each chroma channel is one quarter the luma sample rate and for the 4:2:2 chroma format, the sample rate of each chroma channel is one half the luma sample rate.
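  • As a worked illustration of these sample-rate relationships (not part of the described arrangements), the following Python sketch computes the luma and per-channel chroma sample rates for a given resolution, frame rate, and chroma format; the function name and example values are illustrative only.

```python
# Illustrative only: sample-rate arithmetic for common chroma formats.
def sample_rates(width, height, fps, chroma_format):
    """Return (luma_rate, per_chroma_channel_rate) in samples per second."""
    luma = width * height * fps
    # 4:2:0 subsamples chroma by 2 horizontally and 2 vertically (1/4 rate);
    # 4:2:2 subsamples by 2 horizontally only (1/2 rate); 4:4:4 is full rate.
    factor = {"4:2:0": 0.25, "4:2:2": 0.5, "4:4:4": 1.0}[chroma_format]
    return luma, luma * factor

# UHD (3840x2160) at 60 Hz: roughly 0.5 giga luma samples per second.
print(sample_rates(3840, 2160, 60, "4:2:0"))
# 8K (7680x4320) at 120 Hz: roughly 4 giga luma samples per second.
print(sample_rates(7680, 4320, 120, "4:2:2"))
```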
  • the VVC standard is a ‘block based’ codec, in which frames are firstly divided into an array of square regions known as ‘coding tree units’ (CTUs).
  • CTUs generally occupy a relatively large area, such as 128x128 luma samples. However, CTUs at the right and bottom edge of each frame may be smaller in area.
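  • A minimal sketch (the function name and the 128x128 default are illustrative assumptions) of how a frame is covered by a grid of CTUs, with CTUs at the right and bottom edges clipped to the frame boundary, is shown below.

```python
def ctu_grid(frame_w, frame_h, ctu_size=128):
    """Sizes of the CTUs covering a frame; edge CTUs may be smaller."""
    grid = []
    for y in range(0, frame_h, ctu_size):
        row = []
        for x in range(0, frame_w, ctu_size):
            # Clip the CTU to the frame boundary at the right and bottom edges.
            row.append((min(ctu_size, frame_w - x), min(ctu_size, frame_h - y)))
        grid.append(row)
    return grid

# 1920x1080 with 128x128 CTUs: a 15x9 grid; CTUs in the bottom row are 128x56.
grid = ctu_grid(1920, 1080)
print(len(grid), len(grid[0]), grid[-1][-1])
```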
  • Associated with each CTU is a ‘coding tree’ for the luma channel and an additional coding tree for the chroma channels.
  • a coding tree defines a decomposition of the area of the CTU into a set of blocks, also referred to as ‘coding blocks’ (CBs).
  • The coding blocks are grouped into ‘coding units’ (CUs), i.e., each CU having a coding block for each colour channel.
  • the CBs are processed for encoding or decoding in a particular order.
  • a CTU with a luma coding tree for a 128x128 luma sample area has a corresponding chroma coding tree for a 64x64 chroma sample area, collocated with the 128x128 luma sample area.
  • the collections of collocated blocks for a given area are generally referred to as ‘units’, for example the above-mentioned CUs, as well as ‘prediction units’ (PUs), and ‘transform units’ (TUs).
  • PUs prediction units
  • TUs transform units
  • ‘Block’ may be used as a general term for areas or regions of a frame for which operations are applied to all colour channels.
  • For each CB, a prediction of the contents (sample values) of the corresponding area of frame data is generated (a ‘prediction unit’ or PU). If the PU is generated from sample values in a previously signalled frame, the prediction is called inter prediction. If the PU is generated from previous samples in the same frame, the prediction is called intra prediction. Further, a representation of the difference (or ‘residual’ in the spatial domain) between the prediction and the contents of the area as seen at input to the encoder is formed. The difference in each colour channel may be transformed and coded as a block of residual coefficients, forming one or more TUs for a given CU.
  • the residual coefficients may be transformed by a transform such as a Discrete Cosine Transform (DCT), a Discrete Sine Transform (DST), or other transform, to produce a final block of transform coefficients that substantially decorrelates the residual samples.
  • the transform coefficients are then traversed in an order such as a backward diagonal scan, and each coefficient is encoded by an entropy encoder.
  • Entropy coding a transform coefficient consists of expressing the coefficient in terms of syntax elements, each of which is binarised.
  • the binarised syntax elements may then be further encoded by a context adaptive binary arithmetic coder (CABAC), or passed on to the bitstream (“bypass coding”).
  • CABAC context adaptive binary arithmetic coder
  • One aspect of the present invention provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • the method further comprises determining that a CABAC coding budget for the transform block has been exhausted, and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining that a CABAC coding budget for the transform block has been exhausted; and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: upon selecting the residual coefficient from the transform block, determining whether the CABAC coding budget is exhausted; if the CABAC coding budget is not exhausted, decoding the residual coefficient in full by: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0; and if the CABAC coding budget is exhausted, decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine significance of the residual coefficient by decoding or inferring a significance flag; determine a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
  • Another aspect of the present disclosure provides a method of encoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: encoding a significance flag indicating whether the residual coefficient has a magnitude greater than zero to the bitstream; encoding a portion of a magnitude of the residual coefficient by further encoding a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag to the bitstream; and encoding any remaining portion of the magnitude of the residual coefficient to the bitstream using Rice-EG coding with a Rice parameter of 0.
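  • A minimal sketch of the decoding flow described by these aspects is given below. The cabac and bypass reader objects and their methods are hypothetical placeholders, the budget handling is simplified, and the composition of the magnitude from the decoded flags is an assumption rather than the normative VVC formula; the sketch only illustrates splitting a transform-skipped coefficient between context-coded flags and a bypass-coded remainder with a Rice parameter of 0, with a fallback to coding the magnitude entirely in bypass mode once the CABAC bin budget is exhausted.

```python
def decode_ts_coefficient(cabac, bypass, budget):
    """Decode one transform-skipped residual coefficient (illustrative only).

    cabac  -- placeholder object offering decode_flag(name) for context-coded bins
    bypass -- placeholder object offering decode_bit() and decode_eg0(), where
              decode_eg0() reads a value coded with Rice parameter 0
              (i.e. 0th-order Exp-Golomb)
    budget -- remaining context-coded bins allowed for the current block
    Returns the signed coefficient value and the updated budget.
    """
    if budget < 4:
        # CABAC bin budget exhausted: the whole magnitude is bypass coded.
        magnitude = bypass.decode_eg0()
        sign = bypass.decode_bit() if magnitude else 0
        return (-magnitude if sign else magnitude), budget

    sig = cabac.decode_flag("sig")            # magnitude greater than zero?
    budget -= 1
    if not sig:
        return 0, budget

    sign = cabac.decode_flag("sign")
    gt1 = cabac.decode_flag("gt1")            # magnitude greater than one?
    par = cabac.decode_flag("par") if gt1 else 0
    budget -= 3 if gt1 else 2

    # Any remaining portion of the magnitude is bypass coded, Rice parameter 0.
    remainder = bypass.decode_eg0() if gt1 else 0

    # Simplified magnitude composition (assumption, not the normative formula).
    magnitude = sig + gt1 + par + 2 * remainder
    return (-magnitude if sign else magnitude), budget
```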
  • Fig. 1 is a schematic block diagram showing a video encoding and decoding system
  • Figs. 2A and 2B form a schematic block diagram of a general purpose computer system upon which one or both of the video encoding and decoding system of Fig. 1 may be practiced;
  • Fig. 3 is a schematic block diagram showing functional modules of a video encoder
  • Fig. 4 is a schematic block diagram showing functional modules of a video decoder
  • Fig. 5 is a schematic block diagram showing the available divisions of a block into one or more blocks in the tree structure of versatile video coding
  • Fig. 6 is a schematic illustration of a dataflow to achieve permitted divisions of a block into one or more blocks in a tree structure of versatile video coding
  • Figs. 7 A and 7B show an example division of a coding tree unit (CTU) into a number of coding units (CUs);
  • Fig. 8A shows a two-level backward diagonal scan
  • Fig. 8B shows a two-level forward diagonal scan
  • Fig. 9 shows a method for encoding a transform block of quantised coefficients
  • Fig. 10 shows a method for decoding a transform block of quantised coefficients
  • Fig. 11 shows a method for encoding a sub-block of quantised transform coefficients as performed by the method of Fig. 9;
  • Fig. 12 shows a method for decoding a sub-block of quantised transform coefficients as performed by the method of Fig. 10;
  • Fig. 13 shows a method for encoding a sub-block of quantised transform skip coefficients as performed by the method of Fig. 9;
  • Fig. 14 shows a method for decoding a sub-block of quantised transform skip coefficients as performed by the method of Fig. 10;
  • Fig. 15 shows an alternative method for encoding a sub-block of quantised transform skip coefficients
  • Fig. 16 shows an alternative method for decoding a sub-block of quantised transform skip coefficients.
  • Fig. 1 is a schematic block diagram showing functional modules of a video encoding and decoding system 100.
  • the system 100 may utilise constraints on the secondary transform kernel, such that the non-separable secondary transform may be performed with reduced complexity, while achieving similar coding performance to an unconstrained secondary transform kernel.
  • the system 100 includes a source device 110 and a destination device 130.
  • a communication channel 120 is used to communicate encoded video information from the source device 110 to the destination device 130.
  • the source device 110 and destination device 130 may either or both comprise respective mobile telephone handsets or “smartphones”, in which case the communication channel 120 is a wireless channel.
  • the source device 110 and destination device 130 may comprise video conferencing equipment, in which case the communication channel 120 is typically a wired channel, such as an internet connection.
  • the source device 110 and the destination device 130 may comprise any of a wide range of devices, including devices supporting over-the-air television broadcasts, cable television applications, internet video applications (including streaming) and applications where encoded video data is captured on some computer-readable storage medium, such as hard disk drives in a file server.
  • the source device 110 includes a video source 112, a video encoder 114 and a transmitter 116.
  • the video source 112 typically comprises a source of captured video frame data (shown as 113), such as an image capture sensor, a previously captured video sequence stored on a non-transitory recording medium, or a video feed from a remote image capture sensor.
  • the video source 112 may also be an output of a computer graphics card, for example displaying the video output of an operating system and various applications executing upon a computing device, for example a tablet computer.
  • Examples of source devices 110 that may include an image capture sensor as the video source 112 include smart-phones, video camcorders, professional video cameras, and network video cameras.
  • the video encoder 114 converts (or ‘encodes’) the captured frame data (indicated by an arrow 113) from the video source 112 into a bitstream (indicated by an arrow 115) as described further with reference to Fig. 3.
  • the bitstream 115 is transmitted by the transmitter 116 over the communication channel 120 as encoded video data (or “encoded video information”). It is also possible for the bitstream 115 to be stored in a non-transitory storage device 122, such as a “Flash” memory or a hard disk drive, until later being transmitted over the communication channel 120, or in-lieu of transmission over the communication channel 120.
  • the destination device 130 includes a receiver 132, a video decoder 134 and a display device 136.
  • the receiver 132 receives encoded video data from the communication channel 120 and passes received video data to the video decoder 134 as a bitstream (indicated by an arrow 133).
  • the video decoder 134 then outputs decoded frame data (indicated by an arrow 135) to the display device 136.
  • the decoded frame data 135 has the same chroma format as the frame data 113.
  • Examples of the display device 136 include a cathode ray tube, a liquid crystal display, such as in smart-phones, tablet computers, computer monitors or in stand-alone television sets. It is also possible for the functionality of each of the source device 110 and the destination device 130 to be embodied in a single device, examples of which include mobile telephone handsets and tablet computers.
  • each of the source device 110 and destination device 130 may be configured within a general purpose computing system, typically through a combination of hardware and software components.
  • Fig. 2A illustrates such a computer system 200, which includes: a computer module 201; input devices such as a keyboard 202, a mouse pointer device 203, a scanner 226, a camera 227, which may be configured as the video source 112, and a microphone 280; and output devices including a printer 215, a display device 214, which may be configured as the display device 136, and loudspeakers 217.
  • An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221.
  • the communications network 220 which may represent the communication channel 120, may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • the modem 216 may be a traditional “dial-up” modem.
  • the modem 216 may be a broadband modem.
  • a wireless modem may also be used for wireless connection to the communications network 220.
  • the transceiver device 216 may provide the functionality of the transmitter 116 and the receiver 132 and the communication channel 120 may be embodied in the connection 221.
  • the computer module 201 typically includes at least one processor unit 205, and a memory unit 206.
  • the memory unit 206 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • the computer module 201 also includes a number of input/output (I/O) interfaces including: an audio-video interface 207 that couples to the video display 214, loudspeakers 217 and microphone 280; an I/O interface 213 that couples to the keyboard 202, mouse 203, scanner 226, camera 227 and optionally a joystick or other human interface device (not illustrated); and an interface 208 for the external modem 216 and printer 215.
  • the signal from the audio-video interface 207 to the computer monitor 214 is generally the output of a computer graphics card.
  • the modem 216 may be incorporated within the computer module 201, for example within the interface 208.
  • the computer module 201 also has a local network interface 211, which permits coupling of the computer system 200 via a connection 223 to a local-area communications network 222, known as a Local Area Network (LAN).
  • LAN Local Area Network
  • the local communications network 222 may also couple to the wide network 220 via a connection 224, which would typically include a so-called “firewall” device or device of similar functionality.
  • the local network interface 211 may comprise an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 211.
  • the local network interface 211 may also provide the functionality of the transmitter 116 and the receiver 132 and communication channel 120 may also be embodied in the local communications network 222.
  • the I/O interfaces 208 and 213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 212 is typically provided to act as a non-volatile source of data.
  • Portable memory devices, such as optical disks (e.g. CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the computer system 200.
  • any of the HDD 210, optical drive 212, networks 220 and 222 may also be configured to operate as the video source 112, or as a destination for decoded video data to be stored for reproduction via the display 214.
  • the source device 110 and the destination device 130 of the system 100 may be embodied in the computer system 200.
  • the components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204 and in a manner that results in a conventional mode of operation of the computer system 200 known to those in the relevant art.
  • the processor 205 is coupled to the system bus 204 using a connection 218.
  • the memory 206 and optical disk drive 212 are coupled to the system bus 204 by connections 219. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun SPARCstations, Apple Mac™ or similar computer systems.
  • the video encoder 114 and the video decoder 134 may be implemented using the computer system 200.
  • the video encoder 114, the video decoder 134 and methods to be described may be implemented as one or more software application programs 233 executable within the computer system 200.
  • the video encoder 114, the video decoder 134 and the steps of the described methods are effected by instructions 231 (see Fig. 2B) in the software 233 that are carried out within the computer system 200.
  • the software instructions 231 may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • the software may be stored in a computer readable medium, including the storage devices described below, for example.
  • the software is loaded into the computer system 200 from the computer readable medium, and then executed by the computer system 200.
  • a computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • the use of the computer program product in the computer system 200 preferably effects an advantageous apparatus for implementing the video encoder 114, the video decoder 134 and the described methods.
  • the software 233 is typically stored in the HDD 210 or the memory 206.
  • the software is loaded into the computer system 200 from a computer readable medium, and executed by the computer system 200.
  • the software 233 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 225 that is read by the optical disk drive 212.
  • the application programs 233 may be supplied to the user encoded on one or more CD-ROMs 225 and read via the corresponding drive 212, or alternatively may be read by the user from the networks 220 or 222. Still further, the software can also be loaded into the computer system 200 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 200 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray DiscTM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 201.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of the software, application programs, instructions and/or video data or encoded video data to the computer module 201 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • GUIs graphical user interfaces
  • a user of the computer system 200 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 217 and user voice commands input via the microphone 280.
  • Fig. 2B is a detailed schematic block diagram of the processor 205 and a “memory” 234.
  • the memory 234 represents a logical aggregation of all the memory modules (including the HDD 209 and semiconductor memory 206) that can be accessed by the computer module 201 in Fig. 2A.
  • a power-on self-test (POST) program 250 executes.
  • the POST program 250 is typically stored in a ROM 249 of the semiconductor memory 206 of Fig. 2A.
  • a hardware device such as the ROM 249 storing software is sometimes referred to as firmware.
  • the POST program 250 examines hardware within the computer module 201 to ensure proper functioning and typically checks the processor 205, the memory 234 (209, 206), and a basic input-output systems software (BIOS) module 251, also typically stored in the ROM 249, for correct operation. Once the POST program 250 has run successfully, the BIOS 251 activates the hard disk drive 210 of Fig. 2A.
  • BIOS basic input-output systems software
  • Activation of the hard disk drive 210 causes a bootstrap loader program 252 that is resident on the hard disk drive 210 to execute via the processor 205.
  • the operating system 253 is a system level application, executable by the processor 205, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
  • the operating system 253 manages the memory 234 (209, 206) to ensure that each process or application running on the computer module 201 has sufficient memory in which to execute without colliding with memory allocated to another process.
  • the different types of memory available in the computer system 200 of Fig. 2A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 234 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 200 and how such is used.
  • the processor 205 includes a number of functional modules including a control unit 239, an arithmetic logic unit (ALU) 240, and a local or internal memory 248, sometimes called a cache memory.
  • the cache memory 248 typically includes a number of storage registers 244-246 in a register section.
  • One or more internal busses 241 functionally interconnect these functional modules.
  • the processor 205 typically also has one or more interfaces 242 for communicating with external devices via the system bus 204, using a connection 218.
  • the memory 234 is coupled to the bus 204 using a connection 219.
  • the application program 233 includes a sequence of instructions 231 that may include conditional branch and loop instructions.
  • the program 233 may also include data 232 which is used in execution of the program 233.
  • the instructions 231 and the data 232 are stored in memory locations 228, 229, 230 and 235, 236, 237, respectively.
  • a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 230.
  • an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 228 and 229.
  • the processor 205 is given a set of instructions which are executed therein.
  • the processor 205 waits for a subsequent input, to which the processor 205 reacts to by executing another set of instructions.
  • Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 202, 203, data received from an external source across one of the networks 220, 222, data retrieved from one of the storage devices 206, 209 or data retrieved from a storage medium 225 inserted into the corresponding reader 212, all depicted in Fig. 2A.
  • the execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 234.
  • the video encoder 114, the video decoder 134 and the described methods may use input variables 254, which are stored in the memory 234 in corresponding memory locations 255, 256, 257.
  • the video encoder 114, the video decoder 134 and the described methods produce output variables 261, which are stored in the memory 234 in corresponding memory locations 262, 263, 264.
  • Intermediate variables 258 may be stored in memory locations 259, 260, 266 and 267.
  • each fetch, decode, and execute cycle comprises: a fetch operation, which fetches or reads an instruction 231 from a memory location 228, 229, 230; a decode operation in which the control unit 239 determines which instruction has been fetched; and an execute operation in which the control unit 239 and/or the ALU 240 execute the instruction.
  • a further fetch, decode, and execute cycle for the next instruction may be executed.
  • a store cycle may be performed by which the control unit 239 stores or writes a value to a memory location 232.
  • Each step or sub-process in the methods of Figs. 9 to 16, to be described, is associated with one or more segments of the program 233 and is typically performed by the register section 244, 245, 247, the ALU 240, and the control unit 239 in the processor 205 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 233.
  • Fig. 3 shows a schematic block diagram showing functional modules of the video encoder 114.
  • Fig. 4 shows a schematic block diagram showing functional modules of the video decoder 134.
  • data passes between functional modules within the video encoder 114 and the video decoder 134 in groups of samples or coefficients, such as divisions of blocks into sub-blocks of a fixed size, or as arrays.
  • the video encoder 114 and video decoder 134 may be implemented using a general-purpose computer system 200, as shown in Figs. 2A and 2B, where the various functional modules may be implemented by dedicated hardware within the computer system 200, or by software executable within the computer system 200 such as one or more software code modules of the software application program 233 resident on the hard disk drive 210 and being controlled in its execution by the processor 205.
  • the video encoder 114 and video decoder 134 may be implemented by a combination of dedicated hardware and software executable within the computer system 200.
  • the video encoder 114, the video decoder 134 and the described methods may alternatively be implemented in dedicated hardware, such as one or more integrated circuits performing the functions or sub functions of the described methods.
  • dedicated hardware may include graphic processing units (GPUs), digital signal processors (DSPs), application-specific standard products (ASSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or one or more microprocessors and associated memories.
  • the video encoder 114 comprises modules 310-386 and the video decoder 134 comprises modules 420-496 which may each be implemented as one or more software code modules of the software application program 233.
  • Although the video encoder 114 of Fig. 3 is an example of a versatile video coding (VVC) video encoding pipeline, other video codecs may also be used to perform the processing stages described herein.
  • the video encoder 114 receives captured frame data 113, such as a series of frames, each frame including one or more colour channels.
  • the frame data 113 may be in any chroma format, for example 4:0:0, 4:2:0, 4:2:2, or 4:4:4 chroma format.
  • a block partitioner 310 firstly divides the frame data 113 into CTUs, generally square in shape and configured such that a particular size for the CTUs is used.
  • the size of the CTUs may be 64x64, 128x128, or 256x256 luma samples for example.
  • the block partitioner 310 further divides each CTU into one or more CBs according to a luma coding tree and a chroma coding tree.
  • the CBs have a variety of sizes, and may include both square and non-square aspect ratios. In the VVC standard, CBs, CUs, PUs, and TUs always have side lengths that are powers of two.
  • a current CB represented as 312 is output from the block partitioner 310, progressing in accordance with an iteration over the one or more blocks of the CTU, in accordance with the luma coding tree and the chroma coding tree of the CTU.
  • Options for partitioning CTUs into CBs are further described below with reference to Figs. 5 and 6.
  • the CTUs resulting from the first division of the frame data 113 may be scanned in raster scan order and may be grouped into one or more ‘slices’.
  • a slice may be an ‘intra’ (or ‘I’) slice.
  • An intra slice (I slice) indicates that every CU in the slice is intra predicted.
  • a slice may be uni- or bi-predicted (a ‘P’ or ‘B’ slice, respectively), indicating additional availability of uni- and bi-prediction in the slice, respectively.
  • the video encoder 114 For each CTU, the video encoder 114 operates in two stages. In the first stage (referred to as a ‘search’ stage), the block partitioner 310 tests various potential configurations of a coding tree. Each potential configuration of a coding tree has associated ‘candidate’ CBs. The first stage involves testing various candidate CBs to select CBs providing high compression efficiency with low distortion. The testing generally involves a Lagrangian optimisation whereby a candidate CB is evaluated based on a weighted combination of the rate (coding cost) and the distortion (error with respect to the input frame data 113). The ‘best’ candidate CBs (the CBs with the lowest evaluated rate/distortion) are selected for subsequent encoding into the bitstream 115.
  • search the block partitioner 310 tests various potential configurations of a coding tree. Each potential configuration of a coding tree has associated ‘candidate’ CBs.
  • the first stage involves testing various candidate CBs to select CBs providing high compression efficiency with low distortion.
  • candidate CBs include an option to use a CB for a given area or to further split the area according to various splitting options and code each of the smaller resulting areas with further CBs, or split the areas even further.
  • both the CBs and the coding tree themselves are selected in the search stage.
  • the video encoder 114 produces a prediction block (PB), indicated by an arrow 320, for each CB, for example the CB 312.
  • the PB 320 is a prediction of the contents of the associated CB 312.
  • a subtracter module 322 produces a difference, indicated as 324 (or ‘residual’, referring to the difference being in the spatial domain), between the PB 320 and the CB 312.
  • the residual 324 is a block-size difference between corresponding samples in the PB 320 and the CB 312.
  • the residual 324 is transformed, quantised and represented as a transform block (TB), indicated by an arrow 336.
  • the PB 320 and associated TB 336 are typically chosen from one of many possible candidate CBs, for example based on evaluated cost or distortion.
  • a candidate coding block is a CB resulting from one of the prediction modes available to the video encoder 114 for the associated PB and the resulting residual. Each candidate CB results in one or more corresponding TBs.
  • the TB 336 is a quantised and transformed representation of the residual 324. When combined with the predicted PB in the video decoder 134, the TB 336 reduces the difference between decoded CBs and the original CB 312 at the expense of additional signalling in a bitstream.
  • the rate is typically measured in bits.
  • the distortion of the CB is typically estimated as a difference in sample values, such as a sum of absolute differences (SAD) or a sum of squared differences (SSD).
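  • For illustration, SAD and SSD between a hypothetical block of four samples and its prediction can be computed as follows.

```python
# Simple distortion measures between an original block and its prediction.
def sad(block, pred):
    return sum(abs(a - b) for a, b in zip(block, pred))

def ssd(block, pred):
    return sum((a - b) ** 2 for a, b in zip(block, pred))

original   = [52, 55, 61, 66]
prediction = [50, 54, 63, 66]
print(sad(original, prediction))  # 5
print(ssd(original, prediction))  # 9
```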
  • the estimate resulting from each candidate PB may be determined by a mode selector 386 using the residual 324 to determine a prediction mode (represented by an arrow 388).
  • Estimation of the coding costs associated with each candidate prediction mode and corresponding residual coding can be performed at significantly lower cost than entropy coding of the residual. Accordingly, a number of candidate modes can be evaluated to determine an optimum mode in a rate-distortion sense.
  • Determining an optimum mode in terms of rate-distortion is typically achieved using a variation of Lagrangian optimisation.
  • Selection of the prediction mode 388 typically involves determining a coding cost for the residual data resulting from application of a particular prediction mode.
  • the coding cost may be approximated by using a ‘sum of absolute transformed differences’ (SATD) whereby a relatively simple transform, such as a Hadamard transform, is used to obtain an estimated transformed residual cost.
  • SATD sum of absolute transformed differences
  • the costs resulting from the simplified estimation method are monotonically related to the actual costs that would otherwise be determined from a full evaluation.
  • the simplified estimation method may therefore be used to make the same decision (i.e., select the same prediction mode) that a full evaluation would make, at a lower computational cost.
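  • A minimal sketch of an SATD estimate is shown below, using an unnormalised 4-point Hadamard transform over a hypothetical length-4 residual; practical encoders typically apply a 4x4 or 8x8 Hadamard transform to two-dimensional blocks.

```python
def hadamard4(v):
    """4-point (unnormalised) Hadamard transform of a length-4 sequence."""
    a, b, c, d = v
    s0, s1 = a + b, c + d
    d0, d1 = a - b, c - d
    return [s0 + s1, d0 + d1, s0 - s1, d0 - d1]

def satd4(block, pred):
    """Sum of absolute transformed differences for a length-4 residual."""
    residual = [o - p for o, p in zip(block, pred)]
    return sum(abs(t) for t in hadamard4(residual))

print(satd4([52, 55, 61, 66], [50, 54, 63, 66]))  # 10
```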
  • Prediction modes fall broadly into two categories.
  • a first category is ‘intra-frame prediction’ (also referred to as ‘intra prediction’).
  • intra-frame prediction a prediction for a block is generated, and the generation method may use other samples obtained from the current frame.
  • Types of intra prediction include intra planar, intra DC, intra angular, and matrix weighted intra prediction (MIP).
  • MIP matrix weighted intra prediction
  • For an intra-predicted PB, it is possible for different intra prediction modes to be used for luma and chroma, and thus intra prediction is described primarily in terms of operation upon PBs.
  • chroma CBs may be predicted from co-located luma samples by a cross-component linear model prediction.
  • inter-frame prediction also referred to as ‘inter prediction’.
  • inter-frame prediction a prediction for a block is produced using samples from one or two frames preceding the current frame in an order of coding frames in the bitstream.
  • a single coding tree is typically used for both the luma channel and the chroma channels.
  • the order of coding frames in the bitstream may differ from the order of the frames when captured or displayed.
  • When one frame is used for prediction, the block is said to be ‘uni-predicted’ and has one associated motion vector.
  • When two frames are used for prediction, the block is said to be ‘bi-predicted’ and has two associated motion vectors.
  • For a ‘P’ slice, each CU may be intra predicted or uni-predicted.
  • For a ‘B’ slice, each CU may be intra predicted, uni-predicted, or bi-predicted.
  • Frames are typically coded using a ‘group of pictures’ structure, enabling a temporal hierarchy of frames.
  • a temporal hierarchy of frames allows a frame to reference a preceding and a subsequent picture in the order of displaying the frames.
  • the images are coded in the order necessary to ensure the dependencies for decoding each frame are met.
  • Inter prediction and skip modes are described as two distinct modes. However, both inter prediction mode and skip mode involve motion vectors referencing blocks of samples from preceding frames.
  • Inter prediction involves a coded motion vector delta, specifying a motion vector relative to a motion vector predictor. The motion vector predictor is obtained from a list of one or more candidate motion vectors, selected with a ‘merge index’. The coded motion vector delta provides a spatial offset to a selected motion vector prediction.
  • Inter prediction also uses a coded residual in the bitstream 133.
  • Skip mode uses only an index (also named a ‘merge index’) to select one out of several motion vector candidates. The selected candidate is used without any further signalling.
  • skip mode does not support coding of any residual coefficients.
  • the absence of coded residual coefficients when the skip mode is used means that there is no need to perform transforms for the skip mode. Therefore, skip mode does not typically result in pipeline processing issues.
  • Pipeline processing issues may, however, arise for intra predicted CUs and inter predicted CUs. Due to the limited signalling of the skip mode, skip mode is useful for achieving very high compression performance when relatively high quality reference frames are available.
  • Bi-predicted CUs in higher temporal layers of a random-access group-of-picture structure typically have high quality reference pictures and motion vector candidates that accurately reflect underlying motion.
  • the samples are selected according to a motion vector and reference picture index.
  • inter prediction is described primarily in terms of operation upon PUs rather than PBs.
  • different techniques may be applied to generate the PU.
  • intra prediction may use values from adjacent rows and columns of previously reconstructed samples, in combination with a direction to generate a PU according to a prescribed filtering and generation process.
  • the PU may be described using a small number of parameters.
  • Inter prediction methods may vary in the number of motion parameters and their precision.
  • Motion parameters typically comprise a reference frame index, indicating which reference frame(s) from lists of reference frames are to be used plus a spatial translation for each of the reference frames, but may include more frames, special frames, or complex affine parameters such as scaling and rotation.
  • a predetermined motion refinement process may be applied to generate dense motion estimates based on referenced sample blocks.
  • Lagrangian or similar optimisation processing can be employed to both select an optimal partitioning of a CTU into CBs (by the block partitioner 310) as well as the selection of a best prediction mode from a plurality of possibilities.
  • the prediction mode with the lowest cost measurement is selected as the ‘best’ mode.
  • the lowest cost mode is the selected prediction mode 388 and is also encoded in the bitstream 115 by an entropy encoder 338.
  • the selection of the prediction mode 388 by operation of the mode selector module 386 extends to operation of the block partitioner 310.
  • candidates for selection of the prediction mode 388 may include modes applicable to a given block and additionally modes applicable to multiple smaller blocks that collectively are collocated with the given block.
  • the process of selection of candidates implicitly is also a process of determining the best hierarchical decomposition of the CTU into CBs.
  • a ‘coding’ stage In the second stage of operation of the video encoder 114 (referred to as a ‘coding’ stage), an iteration over the selected luma coding tree and the selected chroma coding tree, and hence each selected CB, is performed in the video encoder 114. In the iteration, the CBs are encoded into the bitstream 115, as described further herein.
  • the entropy encoder 338 supports both variable-length coding of syntax elements and arithmetic coding of syntax elements. Arithmetic coding is supported using a context-adaptive binary arithmetic coding (CABAC) process. Arithmetically coded syntax elements consist of sequences of one or more ‘bins’. Bins, like bits, have a value of ‘0’ or ‘1’. Bins are not encoded in the bitstream 115 as discrete bits. Bins have an associated predicted (or ‘likely’ or ‘most probable’) value and an associated probability, known as a ‘context’. When the actual bin to be coded matches the predicted value, a ‘most probable symbol’ (MPS) is coded.
  • MPS most probable symbol
  • Coding a most probable symbol is relatively inexpensive in terms of consumed bits.
  • When the actual bin does not match the predicted value, a ‘least probable symbol’ (LPS) is coded.
  • Coding a least probable symbol has a relatively high cost in terms of consumed bits.
  • the bin coding techniques enable efficient coding of bins where the probability of a ‘0’ versus a ‘1’ is skewed. For a syntax element with two possible values (that is, a ‘flag’), a single bin is adequate. For syntax elements with many possible values, a sequence of bins is needed.
  • each bin may be associated with more than one context.
  • the selection of a particular context can be dependent on earlier bins in the syntax element, the bin values of neighbouring syntax elements (i.e. those from neighbouring blocks) and the like.
  • Each time a context-coded bin is encoded the context that was selected for that bin (if any) is updated in a manner reflective of the new bin value.
  • the binary arithmetic coding scheme is said to be adaptive.
  • Some bins lack a context (‘bypass bins’). Bypass bins are coded assuming an equiprobable distribution between a ‘0’ and a ‘1’. Thus, each such bin occupies one bit in the bitstream 115. The absence of a context saves memory and reduces complexity, and thus bypass bins are used where the distribution of values for the particular bin is not skewed.
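  • The following toy model (an illustration only, not the normative CABAC state machine) shows the idea of an adapting context: it tracks a most probable symbol (MPS) and an estimate of its probability, updating the estimate after each bin and swapping the MPS when the other value becomes more likely.

```python
class BinContext:
    """Toy adaptive context: tracks the most probable symbol (MPS) and an
    estimate of its probability. Not the normative CABAC state machine."""
    def __init__(self):
        self.mps = 0
        self.p_mps = 0.5

    def update(self, bin_value):
        if bin_value == self.mps:
            # Observed the MPS: nudge its probability estimate upwards.
            self.p_mps = min(0.99, self.p_mps + 0.05 * (1 - self.p_mps))
        else:
            # Observed the LPS: reduce the MPS probability, swapping if needed.
            self.p_mps -= 0.05 * self.p_mps
            if self.p_mps < 0.5:
                self.mps, self.p_mps = 1 - self.mps, 1 - self.p_mps

ctx = BinContext()
for b in [0, 0, 0, 1, 0, 0]:
    ctx.update(b)
print(ctx.mps, round(ctx.p_mps, 3))
```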
  • the entropy encoder 338 encodes the prediction mode 388 using a combination of context-coded and bypass-coded bins. For example, when the prediction mode 388 is an intra prediction mode, a list of ‘most probable modes’ is generated in the video encoder 114.
  • the list of most probable modes is typically of a fixed length, such as three or six modes, and may include modes encountered in earlier blocks.
  • a context-coded bin encodes a flag indicating if the prediction mode is one of the most probable modes. If the intra prediction mode 388 is one of the most probable modes, further signalling, using bypass-coded bins, is encoded. The encoded further signalling is indicative of which most probable mode corresponds with the intra prediction mode 388, for example using a truncated unary bin string. Otherwise, the intra prediction mode 388 is encoded as a ‘remaining mode’. Encoding as a remaining mode uses an alternative syntax, such as a fixed-length code, also coded using bypass-coded bins, to express intra prediction modes other than those present in the most probable mode list.
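  • A simplified sketch of this mode signalling is shown below: one context-coded MPM flag, a truncated unary bin string for the MPM index, and a fixed-length bypass-coded string for remaining modes. The list contents, mode count, and helper function are illustrative assumptions, not the normative binarisation.

```python
def encode_intra_mode(mode, mpm_list, num_modes=67):
    """Return a list of (bin, is_context_coded) pairs for one intra mode.
    Simplified: one context-coded MPM flag, truncated-unary MPM index,
    fixed-length code for the remaining modes (illustrative only)."""
    bins = []
    if mode in mpm_list:
        bins.append((1, True))                       # mpm_flag = 1
        idx = mpm_list.index(mode)
        bins += [(1, False)] * idx                   # truncated unary prefix
        if idx < len(mpm_list) - 1:
            bins.append((0, False))                  # terminating zero
    else:
        bins.append((0, True))                       # mpm_flag = 0
        remaining = sorted(m for m in range(num_modes) if m not in mpm_list)
        idx = remaining.index(mode)
        width = (len(remaining) - 1).bit_length()
        bins += [((idx >> (width - 1 - i)) & 1, False) for i in range(width)]
    return bins

print(encode_intra_mode(18, mpm_list=[0, 1, 18, 50, 46, 54]))
```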
  • a multiplexer module 384 outputs the PB 320 according to the determined best prediction mode 388, selecting from the tested prediction mode of each candidate CB.
  • the candidate prediction modes need not include every conceivable prediction mode supported by the video encoder 114.
  • A residual with the lowest coding cost, represented as 324, is selected and subjected to lossy compression.
  • the lossy compression process comprises the steps of transformation, quantisation and entropy coding.
  • a forward primary transform module 326 applies a forward transform to the residual 324, converting the residual 324 from the spatial domain to the frequency domain, and producing primary transform coefficients represented by an arrow 328.
  • the primary transform coefficients 328 are passed to a forward secondary transform module 330 to produce transform coefficients represented by an arrow 332 by performing a non-separable secondary transform (NSST) operation.
  • NSST non-separable secondary transform
  • the forward primary transform is typically separable, transforming a set of rows and then a set of columns of each block, typically using a type-II discrete cosine transform (DCT-2), although a type-VII discrete sine transform (DST-7) and a type-VIII discrete cosine transform (DCT-8) may also be available, for example horizontally for block widths not exceeding 16 samples and vertically for block heights not exceeding 16 samples.
  • DCT-2 type-II discrete cosine transform
  • DST-7 type-VII discrete sine transform
  • DCT-8 type-VIII discrete cosine transform
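  • The separable nature of the primary transform can be illustrated with a small sketch that applies a 1-D DCT-2 to the rows and then to the columns of a block; the orthonormal floating-point scaling used here is illustrative, not the integer arithmetic used by VVC.

```python
import math

def dct2_1d(x):
    """Orthonormal type-II DCT of a 1-D sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def separable_dct2(block):
    """Apply the 1-D DCT first to each row, then to each column of a 2-D block."""
    rows = [dct2_1d(r) for r in block]
    cols = [dct2_1d([rows[i][j] for i in range(len(rows))]) for j in range(len(rows[0]))]
    # cols[j][i] holds coefficient (i, j); transpose back to row-major order.
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(cols[0]))]

residual = [[5, 3, 1, 0],
            [4, 2, 0, 0],
            [2, 1, 0, 0],
            [1, 0, 0, 0]]
coeffs = separable_dct2(residual)
print([round(c, 2) for c in coeffs[0]])  # most energy lands in the low-frequency corner
```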
  • the forward secondary transform is generally a non-separable transform, which is only applied for the residual of intra-predicted CUs and may nonetheless also be bypassed.
  • the forward secondary transform operates either on 16 samples (arranged as the upper-left 4x4 sub-block of the primary transform coefficients 328) or 64 samples (arranged as the upper-left 8x8 coefficients, arranged as four 4x4 sub-blocks of the primary transform coefficients 328).
  • the matrix coefficients of the forward secondary transform are selected from multiple sets according to the intra prediction mode of the CU such that two sets of coefficients are available for use.
  • the video encoder 114 may also choose to skip both the primary and secondary transforms, known as ‘transform skip’ mode. Skipping the transforms is suited to residual data that lacks adequate correlation to benefit from expression in terms of transform basis functions. Certain types of content, such as relatively simple computer generated graphics, may exhibit such behaviour. When transform skip mode is used, the transform coefficients 332 are the same as the residual coefficients 324.
  • the transform coefficients 332 are passed to a quantiser module 334.
  • quantisation in accordance with a ‘quantisation parameter’ is performed to produce quantised coefficients, represented by the arrow 336.
  • the quantisation parameter is constant for a given TB and thus results in a uniform scaling for the production of residual coefficients for a TB.
  • a non-uniform scaling is also possible by application of a ‘quantisation matrix’, whereby the scaling factor applied for each residual coefficient is derived from a combination of the quantisation parameter and the corresponding entry in a scaling matrix, typically having a size equal to that of the TB.
  • the scaling matrix may have a size that is smaller than the size of the TB, and when applied to the TB a nearest neighbour approach is used to provide scaling values for each residual coefficient from a scaling matrix smaller in size than the TB size.
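  • A minimal sketch of the scaling described above follows, assuming a simple power-of-two relation between the quantisation parameter and the step size, a flat scaling-matrix value of 16, and a nearest-neighbour expansion of a scaling matrix smaller than the TB; these choices are illustrative assumptions rather than the standardised derivation.

```python
import numpy as np

def scaling_for_tb(scaling_matrix, tb_h, tb_w):
    # Nearest-neighbour expansion of a small scaling matrix to the TB size.
    sm_h, sm_w = scaling_matrix.shape
    rows = (np.arange(tb_h) * sm_h) // tb_h
    cols = (np.arange(tb_w) * sm_w) // tb_w
    return scaling_matrix[rows][:, cols]

def quantise(coeffs, qp, scaling_matrix=None):
    # Uniform scaling derived from the quantisation parameter (step size doubles
    # every six QP here), optionally made non-uniform by a scaling matrix.
    step = 2.0 ** ((qp - 4) / 6.0)
    if scaling_matrix is not None:
        step = step * scaling_for_tb(scaling_matrix, *coeffs.shape) / 16.0
    return np.round(coeffs / step).astype(int)
```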
  • the quantised coefficients 336 are supplied to the entropy encoder 338 for encoding in the bitstream 115.
  • the quantised coefficients of each TB with at least one significant quantised coefficient are scanned to produce an ordered list of values, according to a scan pattern.
  • the scan pattern generally scans the TB as a sequence of 4x4 ‘sub-blocks’, providing a regular scanning operation at the granularity of 4x4 sets of residual coefficients, with the arrangement of sub-blocks dependent on the size of the TB.
  • the prediction mode 388 and the corresponding block partitioning are also encoded in the bitstream 115.
  • the video encoder 114 needs access to a frame representation corresponding to the frame representation seen by the video decoder 134.
  • the quantised coefficients 336 are also inverse quantised by a dequantiser module 340 to produce reconstructed transform coefficients, represented by an arrow 342.
  • the reconstructed transform coefficients 342 are passed through an inverse secondary transform module 344 to produce reconstructed primary transform coefficients, represented by an arrow 346.
  • the reconstructed primary transform coefficients 346 are passed to an inverse primary transform module 348 to produce reconstructed residual samples, represented by an arrow 350, of the TU.
  • the types of inverse transform performed by the inverse secondary transform module 344 correspond with the types of forward transform performed by the forward secondary transform module 330.
  • the types of inverse transform performed by the inverse primary transform module 348 correspond with the types of primary transform performed by the primary transform module 326.
  • a summation module 352 adds the reconstructed residual samples 350 and the PU 320 to produce reconstructed samples (indicated by an arrow 354) of the CU.
  • the reconstructed samples 354 are passed to a reference sample cache 356 and an in loop filters module 368.
  • the minimal dependencies typically include a ‘line buffer’ of samples along the bottom of a row of CTUs, for use by the next row of CTUs, and column buffering, the extent of which is set by the height of the CTU.
  • the reference sample cache 356 supplies reference samples (represented by an arrow 358) to a reference sample filter 360.
  • the sample filter 360 applies a smoothing operation to produce filtered reference samples (indicated by an arrow 362).
  • the filtered reference samples 362 are used by an intra-frame prediction module 364 to produce an intra-predicted block of samples, represented by an arrow 366. For each candidate intra prediction mode the intra-frame prediction module 364 produces a block of samples, that is, the block 366.
  • the in-loop filters module 368 applies several filtering stages to the reconstructed samples 354.
  • the filtering stages include a ‘deblocking filter’ (DBF) which applies smoothing aligned to the CU boundaries to reduce artefacts resulting from discontinuities.
  • Another filtering stage present in the in-loop filters module 368 is an ‘adaptive loop filter’ (ALF), which applies a Wiener-based adaptive filter to further reduce distortion.
  • a further available filtering stage in the in-loop filters module 368 is a ‘sample adaptive offset’ (SAO) filter.
  • the SAO filter operates by firstly classifying reconstructed samples into one or multiple categories and, according to the allocated category, applying an offset at the sample level.
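  • The classify-then-offset operation described above can be illustrated with the band-offset style of classification, sketched below under the assumption of 32 equal-width bands and a short list of signalled offsets starting at a signalled band; samples in unsignalled bands are left unchanged. This is an illustrative example of the idea, not the full SAO specification.

```python
def sao_band_offset(samples, band_offsets, start_band, bit_depth=8):
    # Classify each reconstructed sample into one of 32 equal bands by magnitude
    # and add the offset signalled for its band; clip back to the sample range.
    shift = bit_depth - 5                      # 32 bands over the sample range
    out = []
    for s in samples:
        band = s >> shift
        idx = band - start_band
        offset = band_offsets[idx] if 0 <= idx < len(band_offsets) else 0
        out.append(max(0, min((1 << bit_depth) - 1, s + offset)))
    return out
```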
  • Filtered samples are output from the in-loop filters module 368.
  • the filtered samples 370 are stored in a frame buffer 372.
  • the frame buffer 372 typically has the capacity to store several (for example up to 16) pictures and thus is stored in the memory 206.
  • the frame buffer 372 is not typically stored using on-chip memory due to the large memory consumption required. As such, access to the frame buffer 372 is costly in terms of memory bandwidth.
  • the frame buffer 372 provides reference frames (represented by an arrow 374) to a motion estimation module 376 and a motion compensation module 380.
  • the motion estimation module 376 estimates a number of ‘motion vectors’ (indicated as 378), each being a Cartesian spatial offset from the location of the present CB, referencing a block in one of the reference frames in the frame buffer 372.
  • a filtered block of reference samples (represented as 382) is produced for each motion vector.
  • the filtered reference samples 382 form further candidate modes available for potential selection by the mode selector 386.
  • the PU 320 may be formed using one reference block (‘uni-predicted’) or may be formed using two reference blocks (‘bi-predicted’).
  • For the selected motion vector, the motion compensation module 380 produces the PB 320 in accordance with a filtering process supportive of sub-pixel accuracy in the motion vectors. As such, the motion estimation module 376 (which operates on many candidate motion vectors) may perform a simplified filtering process compared to that of the motion compensation module 380 (which operates on the selected candidate only) to achieve reduced computational complexity.
  • the motion vector 378 is encoded into the bitstream 115.
  • although the video encoder 114 of Fig. 3 is described with reference to versatile video coding (VVC), other video coding standards or implementations may also employ the processing stages of modules 310-386.
  • the frame data 113 (and bitstream 115) may also be read from (or written to) memory 206, the hard disk drive 210, a CD-ROM, a Blu-ray disk™ or other computer readable storage medium. Additionally, the frame data 113 (and bitstream 115) may be received from (or transmitted to) an external source, such as a server connected to the communications network 220 or a radio-frequency receiver.
  • the video decoder 134 is shown in Fig. 4.
  • bitstream 133 is input to the video decoder 134.
  • the bitstream 133 may be read from memory 206, the hard disk drive 210, a CD-ROM, a Blu-ray disk™ or other non-transitory computer readable storage medium.
  • the bitstream 133 may be received from an external source such as a server connected to the communications network 220 or a radio- frequency receiver.
  • the bitstream 133 contains encoded syntax elements representing the captured frame data to be decoded.
  • the bitstream 133 is input to an entropy decoder module 420.
  • the entropy decoder module 420 extracts syntax elements from the bitstream 133 by decoding sequences of ‘bins’ and passes the values of the syntax elements to other modules in the video decoder 134.
  • One example of a syntax element extracted from the bitstream 133 is the quantised coefficients 424.
  • the entropy decoder module 420 uses an arithmetic decoding engine to decode each syntax element as a sequence of one or more bins. Each bin may use one or more ‘contexts’, with a context describing probability levels to be used for coding a ‘one’ and a ‘zero’ value for the bin.
  • a ‘context modelling’ or ‘context selection’ step is performed to choose one of the available contexts for decoding the bin.
  • the process of decoding bins forms a sequential feedback loop.
  • the number of operations in the feedback loop is preferably minimised to enable the entropy decoder 420 to achieve a high throughput in bins/second.
  • Context modelling depends on other properties of the bitstream known to the video decoder 134 at the time of selecting the context, that is, properties preceding the current bin.
  • a context may be selected based on the quad-tree depth of the current CU in the coding tree.
  • Dependencies are preferably based on properties that are known well in advance of decoding a bin, or are determined without requiring long sequential processes.
  • the quantised coefficients 424 are input to a dequantiser module 428.
  • the dequantiser module 428 performs inverse quantisation (or ‘scaling’) on the quantised coefficients 424 to create reconstructed intermediate transform coefficients, represented by an arrow 432, according to a quantisation parameter.
  • the video decoder 134 reads a quantisation matrix from the bitstream 133 as a sequence of scaling factors and arranges the scaling factors into a matrix.
  • the inverse scaling uses the quantisation matrix in combination with the quantisation parameter to create the reconstructed intermediate transform coefficients 432.
  • the reconstructed intermediate transform coefficients 432 are passed to an inverse secondary transform module 436 where a secondary transform may be applied, in accordance with a decoded “nsst index” syntax element.
  • the “nsst index” is decoded from the bitstream 133 by the entropy decoder 420, under execution of the processor 205.
  • the inverse secondary transform module 436 produces reconstructed transform coefficients 440.
  • the reconstructed transform coefficients 440 are passed to an inverse primary transform module 444.
  • the module 444 transforms the coefficients from the frequency domain back to the spatial domain.
  • the result of operation of the module 444 is a block of residual samples, represented by an arrow 448.
  • the block of residual samples 448 is equal in size to the corresponding CU.
  • the type of inverse primary transform may be a type-II discrete cosine transform (DCT-2), a type-VII discrete sine transform (DST-7), a type-VIII discrete cosine transform (DCT-8), or a ‘transform skip’ mode.
  • the use of transform skip mode is signalled by a transform skip flag, which may be decoded from the bitstream 133 or otherwise inferred.
  • When transform skip mode is used, the residual samples 448 are the same as the reconstructed transform coefficients 440.
  • the residual samples 448 are supplied to a summation module 450.
  • the residual samples 448 are added to a decoded PB (represented as 452) to produce a block of reconstructed samples, represented by an arrow 456.
  • the reconstructed samples 456 are supplied to a reconstructed sample cache 460 and an in-loop filtering module 488.
  • the in-loop filtering module 488 produces reconstructed blocks of frame samples, represented as 492.
  • the frame samples 492 are written to a frame buffer 496.
  • the reconstructed sample cache 460 operates similarly to the reconstructed sample cache 356 of the video encoder 114.
  • the reconstructed sample cache 460 provides storage for reconstructed samples needed to intra predict subsequent CBs without accessing the memory 206 (for example by using the data 232 instead, which is typically on-chip memory).
  • Reference samples, represented by an arrow 464, are obtained from the reconstructed sample cache 460 and supplied to a reference sample filter 468 to produce filtered reference samples indicated by arrow 472.
  • the filtered reference samples 472 are supplied to an intra-frame prediction module 476.
  • the module 476 produces a block of intra-predicted samples, represented by an arrow 480, in accordance with an intra prediction mode parameter 458 signalled in the bitstream 133 and decoded by the entropy decoder 420.
  • the intra-predicted samples 480 form the decoded PB 452 via a multiplexor module 484.
  • Intra prediction produces a prediction block (PB) of samples, that is, a block in one colour component, derived using ‘neighbouring samples’ in the same colour component. The neighbouring samples are samples adjacent to the current block and by virtue of being preceding in the block decoding order have already been reconstructed.
  • PB prediction block
  • the luma and chroma blocks may use different intra prediction modes. However, the two chroma channels each share the same intra prediction mode.
  • Intra prediction for luma blocks consists of four types. “DC intra prediction” involves populating a PB with a single value representing the average of the neighbouring samples. “Planar intra prediction” involves populating a PB with samples according to a plane, with a DC offset and a vertical and horizontal gradient being derived from the neighbouring samples. “Angular intra prediction” involves populating a PB with neighbouring samples filtered and propagated across the PB in a particular direction (or ‘angle’). In VVC a PB may select from up to 65 angles, with rectangular blocks able to utilise different angles not available to square blocks.
  • “Matrix intra prediction” involves populating a PB by multiplying a reduced set of neighbouring samples by one of a number of matrices available to the video decoder 134.
  • the reduced set of neighbouring samples is produced by filtering and subsampling the neighbouring samples.
  • a reduced set of prediction samples is produced by multiplying the reduced set of samples by a matrix, and adding an offset vector.
  • the matrix and associated offset vector are selected from a number of possible matrices depending on the size of the PB, with a particular selection of matrix and offset vector being indicated by a “MIP mode” syntax element. For example, for PBs with size greater than 8x8 there are 11 MIP modes, while for PBs of size 8x8 there are 19 MIP modes.
  • the PB produced by matrix intra prediction is populated from the reduced set of prediction samples by interpolation.
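  • A hedged sketch of the matrix intra prediction steps described above is given below; the particular matrix and offset vector are placeholders for one of the standard-defined sets, the reduction by pair-averaging is an illustrative simplification, and nearest-neighbour upsampling is used instead of the interpolation performed by the standard.

```python
import numpy as np

def matrix_intra_prediction(neighbours, mip_matrix, offset, pb_h, pb_w):
    # neighbours: 1-D numpy array of boundary samples (even length);
    # mip_matrix and offset: placeholders for one standard-defined matrix/offset set.
    reduced_in = neighbours.reshape(-1, 2).mean(axis=1)     # filter/subsample by averaging pairs
    reduced_pred = mip_matrix @ reduced_in + offset         # reduced set of prediction samples
    side = int(np.sqrt(reduced_pred.size))                  # e.g. 16 values -> a 4x4 block
    reduced_block = reduced_pred.reshape(side, side)
    # Upsample the reduced prediction to the PB size (nearest neighbour for simplicity).
    rows = (np.arange(pb_h) * side) // pb_h
    cols = (np.arange(pb_w) * side) // pb_w
    return reduced_block[rows][:, cols]
```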
  • a fifth type of intra prediction is available to chroma PBs, whereby the PB is generated from collocated luma reconstructed samples according to a ‘cross-component linear model’ (CCLM) mode.
  • CCLM cross-component linear model
  • Three different CCLM modes are available, each of which uses a different model derived from the neighbouring luma and chroma samples. The derived model is then used to generate a block of samples for the chroma PB from the collocated luma samples.
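  • One way a cross-component linear model of the form chroma = a * luma + b could be derived and applied is sketched below. A least-squares fit over the neighbouring samples is used here for illustration; the standard instead derives the parameters from selected minimum and maximum neighbour pairs.

```python
def cclm_parameters(neigh_luma, neigh_chroma):
    # Derive a linear model chroma = a * luma + b from neighbouring sample pairs.
    n = len(neigh_luma)
    sum_l = sum(neigh_luma)
    sum_c = sum(neigh_chroma)
    sum_lc = sum(l * c for l, c in zip(neigh_luma, neigh_chroma))
    sum_ll = sum(l * l for l in neigh_luma)
    denom = n * sum_ll - sum_l * sum_l
    a = (n * sum_lc - sum_l * sum_c) / denom if denom else 0.0
    b = (sum_c - a * sum_l) / n
    return a, b

def cclm_predict(collocated_luma, a, b):
    # Generate the chroma PB from the collocated (downsampled) luma samples.
    return [[a * l + b for l in row] for row in collocated_luma]
```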
  • a motion compensation module 434 produces a block of inter-predicted samples, represented as 438, using a motion vector and reference frame index to select and filter a block of samples 498 from the frame buffer 496.
  • the block of samples 498 is obtained from a previously decoded frame stored in the frame buffer 496. For bi-prediction, two blocks of samples are produced and blended together to produce samples for the decoded PB 452.
  • the frame buffer 496 is populated with filtered block data 492 from an in-loop filtering module 488.
  • the in-loop filtering module 488 applies any of the DBF, the ALF and SAO filtering operations.
  • the motion vector is applied to both the luma and chroma channels, although the filtering processes for sub-sample interpolation of the luma and chroma channels are different.
  • the frame buffer outputs the decoded video samples 135.
  • FIG. 5 is a schematic block diagram showing a collection 500 of available divisions or splits of a region into one or more sub-regions in the tree structure of versatile video coding.
  • the divisions shown in the collection 500 are available to the block partitioner 310 of the encoder 114 to divide each CTU into one or more CUs or CBs according to a coding tree, as determined by the Lagrangian optimisation, as described with reference to Fig. 3.
  • although the collection 500 shows only square regions being divided into other, possibly non-square, sub-regions, it should be understood that the diagram 500 shows the potential divisions without requiring the containing region to be square. If the containing region is non-square, the dimensions of the blocks resulting from the division are scaled according to the aspect ratio of the containing block. Once a region is not further split, that is, at a leaf node of the coding tree, a CU occupies that region.
  • the particular subdivision of a CTU into one or more CUs by the block partitioner 310 is referred to as the ‘coding tree’ of the CTU.
  • the process of subdividing regions into sub-regions must terminate when the resulting sub-regions reach a minimum CU size.
  • CUs are constrained to have a minimum width or height of four. Other minimums, either for both width and height together or for width or height individually, are also possible.
  • the process of subdivision may also terminate prior to the deepest level of decomposition, resulting in a CU larger than the minimum CU size. It is possible for no splitting to occur, resulting in a single CU occupying the entirety of the CTU. A single CU occupying the entirety of the CTU is the largest available coding unit size. Due to use of subsampled chroma formats, such as 4:2:0, arrangements of the video encoder 114 and the video decoder 134 may terminate splitting of regions in the chroma channels earlier than in the luma channels.
  • at leaf nodes of the coding tree exist CUs, with no further subdivision.
  • a leaf node 510 contains one CU.
  • a split into two or more further nodes, each of which could be a leaf node that forms one CU, or a non-leaf node containing further splits into smaller regions.
  • one coding block exists for each colour channel. Splitting terminating at the same depth for both luma and chroma results in three collocated CBs. Splitting terminating at a deeper depth for luma than for chroma results in a plurality of luma CBs being collocated with the CBs of the chroma channels.
  • a quad-tree split 512 divides the containing region into four equal-size regions as shown in Fig. 5.
  • VVC versatile video coding
  • Each of the splits 514 and 516 divides the containing region into two equal-size regions. The division is either along a horizontal boundary (514) or a vertical boundary (516) within the containing block.
  • a ternary horizontal split 518 and a ternary vertical split 520 divide the block into three regions, bounded either horizontally (518) or vertically (520) along ¼ and ¾ of the containing region width or height.
  • the combination of the quad tree, binary tree, and ternary tree is referred to as ‘QTBTTT’.
  • the root of the tree includes zero or more quadtree splits (the ‘QT’ section of the tree).
  • following the QT section, zero or more further splits (the ‘multi-tree’ or ‘MT’ section of the tree) may occur, finally ending in CBs or CUs at leaf nodes of the tree.
  • the tree leaf nodes are CUs.
  • the tree leaf nodes are CBs.
  • the QTBTTT results in many more possible CU sizes, particularly considering possible recursive application of binary tree and/or ternary tree splits.
  • the potential for unusual (non-square) block sizes can be reduced by constraining split options to eliminate splits that would result in a block width or height either being less than four samples or in not being a multiple of four samples.
  • the constraint would apply in considering luma samples.
  • the constraint can be applied separately to the blocks for the chroma channels.
  • Fig. 6 is a schematic flow diagram illustrating a data flow 600 of a QTBTTT (or ‘coding tree’) structure used in versatile video coding.
  • the QTBTTT structure is used for each CTU to define a division of the CTU into one or more CUs.
  • the QTBTTT structure of each CTU is determined by the block partitioner 310 in the video encoder 114 and encoded into the bitstream 115 or decoded from the bitstream 133 by the entropy decoder 420 in the video decoder 134.
  • the data flow 600 further characterises the permissible combinations available to the block partitioner 310 for dividing a CTU into one or more CUs, according to the divisions shown in Fig. 5.
  • Quad-tree (QT) split decision 610 is made by the block partitioner 310.
  • the decision at 610 returning a ‘1’ symbol indicates a decision to split the current node into four sub-nodes according to the quad-tree split 512.
  • the result is the generation of four new nodes, such as at 620, and for each new node, recursing back to the QT split decision 610.
  • Each new node is considered in raster (or Z-scan) order.
  • quad-tree partitioning ceases and multi-tree (MT) splits are subsequently considered.
  • an MT split decision 612 is made by the block partitioner 310.
  • at the MT split decision 612, a decision whether to perform an MT split is indicated. Returning a ‘0’ symbol at decision 612 indicates that no further splitting of the node into sub-nodes is to be performed. If no further splitting of a node is to be performed, then the node is a leaf node of the coding tree and corresponds to a CU. The leaf node is output at 622.
  • the MT split 612 indicates a decision to perform an MT split (returns a ‘1’ symbol)
  • the block partitioner 310 proceeds to a direction decision 614.
  • the direction decision 614 indicates the direction of the MT split as either horizontal (‘H’ or ‘0’) or vertical (‘V’ or ‘1’).
  • the block partitioner 310 proceeds to a decision 616 if the decision 614 returns a ‘0’ indicating a horizontal direction.
  • the block partitioner 310 proceeds to a decision 618 if the decision 614 returns a ‘1’ indicating a vertical direction.
  • the number of partitions for the MT split is indicated as either two (binary split or ‘BT’ node) or three (ternary split or ‘TT’) at the BT/TT split. That is, a BT/TT split decision 616 is made by the block partitioner 310 when the indicated direction from 614 is horizontal and a BT/TT split decision 618 is made by the block partitioner 310 when the indicated direction from 614 is vertical.
  • the BT/TT split decision 616 indicates whether the horizontal split is the binary split 514, indicated by returning a ‘0’, or the ternary split 518, indicated by returning a ‘1’.
  • the BT/TT split decision 616 indicates a binary split
  • two nodes are generated by the block partitioner 310, according to the binary horizontal split 514.
  • the BT/TT split 616 indicates a ternary split
  • three nodes are generated by the block partitioner 310, according to the ternary horizontal split 518.
  • the BT/TT split decision 618 indicates whether the vertical split is the binary split 516, indicated by returning a ‘0’, or the ternary split 520, indicated by returning a ‘1’.
  • the BT/TT split 618 indicates a binary split
  • at a generate VBT CTU nodes step 627 two nodes are generated by the block partitioner 310, according to the vertical binary split 516.
  • the BT/TT split 618 indicates a ternary split
  • three nodes are generated by the block partitioner 310, according to the vertical ternary split 520.
  • recursion of the data flow 600 back to the MT split decision 612 is applied, in a left-to-right or top-to-bottom order, depending on the direction 614.
  • the binary tree and ternary tree splits may be applied to generate CUs having a variety of sizes.
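  • The geometry of the splits in Fig. 5, as applied recursively by the data flow 600, can be sketched as follows; the region coordinates and the split-type labels (QT, HBT, VBT, HTT, VTT) are illustrative names rather than syntax elements.

```python
def split_region(x, y, w, h, split):
    # Child regions produced by each split type of Fig. 5, as (x, y, width, height).
    if split == 'QT':     # quad-tree: four equal quadrants
        return [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                (x, y + h // 2, w // 2, h // 2), (x + w // 2, y + h // 2, w // 2, h // 2)]
    if split == 'HBT':    # binary horizontal: two equal halves, one above the other
        return [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    if split == 'VBT':    # binary vertical: two equal halves, side by side
        return [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    if split == 'HTT':    # ternary horizontal: boundaries at 1/4 and 3/4 of the height
        return [(x, y, w, h // 4), (x, y + h // 4, w, h // 2), (x, y + 3 * h // 4, w, h // 4)]
    if split == 'VTT':    # ternary vertical: boundaries at 1/4 and 3/4 of the width
        return [(x, y, w // 4, h), (x + w // 4, y, w // 2, h), (x + 3 * w // 4, y, w // 4, h)]
    return [(x, y, w, h)]  # no split: the region becomes a CU
```

  • For example, split_region(0, 0, 128, 64, 'VTT') yields regions of widths 32, 64 and 32, matching the ¼ and ¾ boundaries of the containing region.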
  • Figs. 7A and 7B provide an example division 700 of a CTU 710 into a number of CUs or CBs.
  • An example CU 712 is shown in Fig. 7A.
  • Fig. 7A shows a spatial arrangement of CUs in the CTU 710.
  • the example division 700 is also shown as a coding tree 720 in Fig. 7B.
  • the contained nodes are scanned or traversed in a ‘Z-order’ to create lists of nodes, represented as columns in the coding tree 720.
  • Z-order scanning results in top left to right followed by bottom left to right order.
  • the Z-order scanning simplifies to a top-to-bottom scan and a left-to-right scan, respectively.
  • the coding tree 720 of Fig. 7B lists all nodes and CUs according to the applied scan order. Each split generates a list of two, three or four new nodes at the next level of the tree until a leaf node (CU) is reached.
  • the quantised coefficients 336 may be rearranged to a one-dimensional list by performing a two-level backward diagonal scan.
  • the quantised coefficients 424 may be rearranged from a one-dimensional list to a two-dimensional collection of sub-blocks by the same two-level backward diagonal scan.
  • Fig. 8A shows a two-level backward diagonal scan 810 of an example 8x8 TB 800.
  • the scan 810 is shown progressing from the bottom-right residual coefficient position of the TB 800 back to the top-left (DC) residual coefficient position of the TB 800.
  • the path of the scan 810 progresses with 4x4 regions, known as sub-blocks, and from one sub-block to the next.
  • For TBs with a width or height of two, sub-block sizes of 2x2, 2x8, or 8x2 are available. Scanning within a particular sub-block is either performed or the sub-block skipped, according to a ‘coded sub-block flag’.
  • Fig. 8B shows an alternative, two-level forward diagonal scan 860 of an example 8x8 TB 850, which is used when the transform skip mode is selected.
  • the quantised coefficients 336 are rearranged to a one-dimensional list by the scan 860.
  • the quantised coefficients 424 are rearranged from a one-dimensional list to a two-dimensional collection of sub-blocks by the scan 860.
  • the scan 860 is shown progressing from the top-left (DC) residual coefficient position of the TB 850 to the bottom-right residual coefficient position of the TB 850. Unlike the scan 810, the scan 860 does not terminate at a ‘last significant coefficient’.
  • Figs. 8A and 8B show scan patterns typically used in VVC.
  • the examples described herein use the scan pattern 810 for encoding residual coefficients that have been transformed by the module 326 and the scan pattern 860 is used for transform-skipped transform blocks. However, in some implementations other scan patterns can be used.
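  • The two-level diagonal scans of Figs. 8A and 8B can be generated as below; the sketch assumes each anti-diagonal is traversed from its lower-left position, and takes the backward scan 810 simply as the reverse of the forward scan 860.

```python
def diagonal_scan_4x4():
    # Positions (row, col) of one 4x4 sub-block in forward diagonal order:
    # anti-diagonals from the top-left, each traversed bottom-left to top-right.
    positions = [(y, x) for x in range(4) for y in range(4)]
    return sorted(positions, key=lambda p: (p[0] + p[1], -p[0]))

def sub_block_scan(tb_h, tb_w):
    # Forward two-level scan: sub-blocks in diagonal order, then positions within each.
    sb = sorted([(y, x) for x in range(tb_w // 4) for y in range(tb_h // 4)],
                key=lambda p: (p[0] + p[1], -p[0]))
    inner = diagonal_scan_4x4()
    return [(4 * sy + y, 4 * sx + x) for (sy, sx) in sb for (y, x) in inner]

# The backward scan of Fig. 8A is the reverse of this forward order, e.g.
# forward = sub_block_scan(8, 8); backward = list(reversed(forward))
```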
  • Fig. 9 shows a method 900 for encoding a transform block of quantised coefficients 336.
  • the method 900 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 900 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 900 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 900 is implemented in some arrangements by the video encoder 114 at the entropy encoder 338 on receiving the transform coefficients 336.
  • the method 900 begins at an encode last position step 910.
  • the entropy encoder determines if the video encoder 114 applied a transform to produce the current transform block of quantised coefficients 336. If a transform was applied, the video encoder 114 finds the position of the last significant coefficient in the quantised coefficients 336. The last significant coefficient is determined in relation to the forward direction of an appropriate scan pattern, for example in the direction of the two-level forward diagonal scan 860. A quantised coefficient is significant if it has any value other than zero. The position of the last significant coefficient is written to the bitstream 133.
  • If the video encoder 114 selected transform skip mode, determining the position of the last residual coefficient at step 910 is not implemented, as indicated using dotted lines.
  • the method 900 proceeds under control of the processor 205 from step 910 to a select first sub-block step 920.
  • At the select first sub-block step 920, if the video encoder 114 did not select transform skip mode, the sub-block containing the last significant coefficient is selected.
  • Otherwise, if the video encoder 114 selected transform skip mode, the top-left sub-block is selected.
  • the method 900 proceeds under control of the processor 205 from step 920 to a determine coded sub-block flag step 930.
  • TRUE means that the flag value indicates a mode is selected or a requirement is met.
  • FALSE means that the flag value indicates a mode is not selected or a requirement is not met.
  • the video encoder 114 sets a coded sub-block flag. If the video encoder 114 did not select transform skip mode and the current selected sub-block is the first sub-block selected in the select first sub-block step 920, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 133. If the video encoder 114 did not select transform skip mode and the current selected sub-block is identified as a last sub-block as described below in relation to a last sub-block test 960, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 133.
  • If the video encoder 114 selected transform skip mode, the current selected sub-block is identified as the last sub-block, and all the coded sub-block flags for previous sub-blocks in the current transform block were “FALSE”, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 133. Otherwise, the video encoder 114 sets the coded sub-block flag to (i) “TRUE” if there is at least one significant coefficient in the 4x4 quantised coefficients belonging to the selected sub-block, or (ii) “FALSE” if there are no significant coefficients, and encodes the coded sub-block flag to the bitstream 133.
  • the method 900 proceeds under control of the processor 205 from step 930 to a coded sub-block flag test step 940.
  • the method 900 determines whether the value of the coded sub-block flag is “TRUE” or not. The method 900 proceeds to an encode sub-block step 950 if the coded sub-block flag is set to “TRUE”. Otherwise, if the coded sub block flag is set to “FALSE” the method 900 proceeds to the last sub-block test step 960.
  • the entropy encoder 338 encodes the quantised coefficients in the selected sub-block to the bitstream 133. If the video encoder 114 did not select transform skip mode, the step 950 invokes a method 1100, described below in relation to Fig. 11. If the video encoder 114 selected transform skip mode, the step 950 invokes a method 1300 or a method 1500, described below in relation to Fig. 13 and Fig. 15, respectively.
  • the method 900 proceeds under control of the processor 205 from step 950 to the last sub-block test 960. At the last sub-block test 960, the method 900 operates to determine if the selected sub-block is the last sub-block in the current transform block. If the video encoder 114 did not select transform skip mode, the last sub-block is the top-left sub-block of the transform block.
  • If the video encoder 114 selected transform skip mode, the last sub-block is the bottom-right sub-block of the transform block. If the current selected sub-block is the last sub-block, the step 960 returns “YES” and the method 900 terminates. Otherwise, if the current selected sub-block is not the last sub-block in the transform block, the step 960 returns “NO” and the method 900 proceeds to a select next sub-block step 970.
  • At the select next sub-block step 970, a next sub-block in the transform block is selected. If the video encoder 114 did not select transform skip mode, the next sub-block in the corresponding scan pattern, typically the backward diagonal scan order 810, is selected. If the video encoder 114 selected transform skip mode, the next sub-block in the corresponding scan pattern, typically the forward diagonal scan order 860, is selected. The method 900 proceeds from step 970 to the determine coded sub-block flag step 930.
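  • A compact sketch of the sub-block loop of the method 900 follows; sub_blocks is assumed to be a list of 4x4 coefficient arrays already arranged in the relevant scan order, and encode_sub_block and write_flag stand in for the per-sub-block coefficient coding and the context-coded flag writer of the entropy encoder 338.

```python
def encode_transform_block(sub_blocks, transform_skip, encode_sub_block, write_flag):
    # Coded sub-block flag determination and sub-block loop of method 900 (sketch).
    any_coded_so_far = False
    last = len(sub_blocks) - 1
    for i, sb in enumerate(sub_blocks):
        significant = any(c != 0 for row in sb for c in row)
        if not transform_skip and (i == 0 or i == last):
            coded = True                       # inferred "TRUE", not written to the bitstream
        elif transform_skip and i == last and not any_coded_so_far:
            coded = True                       # inferred "TRUE", not written to the bitstream
        else:
            coded = significant
            write_flag(coded)                  # coded sub-block flag, context coded
        any_coded_so_far = any_coded_so_far or coded
        if coded:
            encode_sub_block(sb)               # methods 1100, 1300 or 1500 as appropriate
```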
  • Fig. 10 shows a method 1000 for decoding a transform block of quantised coefficients 424.
  • the method 1000 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1000 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1000 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1000 is implemented in some arrangements by the video decoder 134 at the entropy decoder 420 on receiving the bitstream 133.
  • the method 1000 begins at a decode last position step 1010.
  • a last significant coefficient position may be determined based on the transform skip flag.
  • the transform skip flag for the transform block is decoded from the bitstream or can be inferred by the entropy decoder 420. If the video decoder 134 decoded or inferred the transform skip flag for the current transform block to be “FALSE”, that is a transform was applied, the last significant coefficient position is decoded from the bitstream 133 at step 1010. If the video decoder 134 decoded or inferred the transform skip flag for the current transform block to be “TRUE” (transform skip mode was applied), determining the last significant coefficient position at step 1010 is not implemented, as indicated using dotted lines.
  • the method 1000 proceeds under control of the processor 205 from step 1010 to a select first sub-block step 1020.
  • the video decoder 134 selects a first sub-block of the transform block. If the video decoder 134 did decode or infer that transform skip mode is not used (transform skip flag is “FALSE”), the sub-block containing the last significant coefficient position is selected. If the video decoder 134 decoded or inferred that transform skip mode is used at step 1010, the top-left sub-block is selected at step 1020.
  • the method 1000 proceeds under control of the processor 205 from step 1020 to a determine coded sub block flag step 1030.
  • the video decoder 134 determines a coded sub-block flag. If the transform skip flag was decoded or inferred as “FALSE” and the current selected sub-block is the first sub-block selected in the select first sub-block step 1020, the coded sub-block flag is set to “TRUE” (that is, the coded sub-block flag is inferred to be “TRUE”). If the transform skip flag was decoded or inferred as “FALSE” and the current selected sub-block is identified as a last sub-block as described below in a last sub-block test 1060, the coded sub-block flag is inferred as “TRUE”.
  • Otherwise, the video decoder 134 decodes the coded sub-block flag from the bitstream 133.
  • the method 1000 proceeds under control of the processor 205 from step 1030 to a coded sub-block flag test 1040.
  • the method 1000 tests the value of the coded sub-block flag determined at step 1030.
  • the method 1000 proceeds to a decode sub-block step 1050 if the coded sub-block flag is determined to have a value of “TRUE” at step 1040. Otherwise if the coded sub-block flag is determined to have a value of “FALSE” at step 1040, all the quantised coefficients in the current selected sub-block are assigned a value of zero, and the method 1000 proceeds to a last sub-block test 1060.
  • the entropy decoder 420 decodes quantised coefficients for the selected sub-block from the bitstream 133. If the video decoder 134 determines that transform skip mode is not used, the step 1050 invokes a method 1200, described below in relation to Fig. 12. If the video decoder 134 determines that transform skip mode is used, in some implementations the step 1050 invokes a method 1400 or a method 1600, described below in relation to Fig. 14 and Fig. 16 respectively. The method 1000 proceeds under control of the processor 205 to the last sub-block test 1060.
  • At the last sub-block test 1060, if the video decoder 134 determined that transform skip mode is not used, the last sub-block is the top-left sub-block of the transform block. If the video decoder 134 determined transform skip mode is used, the last sub-block is the bottom-right sub-block of the transform block. If the current selected sub-block is the last sub-block, the step 1060 returns “YES” and the method 1000 terminates. Otherwise, the step 1060 returns “NO” and the method 1000 proceeds to a select next sub-block step 1070.
  • At the select next sub-block step 1070, if the video decoder 134 determined at step 1010 that transform skip mode is not used, the next sub-block in the backward diagonal scan order 810 is selected. If the video decoder 134 determined at step 1010 that transform skip mode is used, the next sub-block in the forward diagonal scan order 860 is selected. The method 1000 proceeds under control of the processor 205 from step 1070 to the determine coded sub-block flag step 1030.
  • the quantised coefficients are binarised by the video encoder 114 (typically by the entropy encoder 338) into a number of syntax elements prior to encoding. For example, because the quantised coefficients 336 often have a value of zero, one syntax element is a significance flag, which is set to “FALSE” for a quantised coefficient with a value of zero. If the significance flag is set to “FALSE”, no further syntax elements for the associated quantised coefficient are signalled.
  • the significance flag may be encoded to the bitstream 133 by using the context-adaptive binary arithmetic coding (CABAC) entropy coder.
  • CABAC context-adaptive binary arithmetic coding
  • Although the CABAC coder encodes syntax elements relatively efficiently, limiting the use of the CABAC coder is generally desirable to minimise computational requirements and cost for hardware implementations. Therefore, after the quantised coefficients 336 are binarised into a number of syntax elements by the entropy encoder 338, some syntax elements are CABAC coded to the bitstream 133, while other syntax elements are bypass coded to the bitstream 133.
  • the total number of syntax element bins processed by CABAC is limited per transform block. In the VVC standard the limit is set at 1.75 bins per sample. For example, for an 8x8 transform block which consists of sixty-four samples, a CABAC bin budget is set at one hundred and twelve (112) bins.
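  • The per-block budget follows directly from the 1.75 bins-per-sample limit, for example:

```python
def cabac_bin_budget(tb_width, tb_height, bins_per_sample=1.75):
    # Context-coded bin budget for one transform block, e.g. an 8x8 TB -> 112 bins.
    return int(tb_width * tb_height * bins_per_sample)

assert cabac_bin_budget(8, 8) == 112
```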
  • Fig. 11 shows the method 1100 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133.
  • the method 1100 is implemented at step 950 of the method 900 if the sub-block belongs to a transform block for which transform skip mode has not been selected.
  • the method 1100 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP.
  • the method 1100 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1100 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1100 begins at a select first coefficient step 1110.
  • a quantised coefficient of the current sub-block is selected. If the current sub-block contains the last significant coefficient position, a current selected coefficient is set to the last significant coefficient. Otherwise, if the current sub-block does not contain the last significant coefficient position, the current selected coefficient is set to the bottom-right coefficient of the current sub-block.
  • the method 1100 proceeds under control of the processor 205 from step 1110 to a use CABAC check 1120.
  • the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to four. If the remaining CABAC bin budget is greater than or equal to four, the step 1120 returns “YES” and the method 1100 proceeds to a significant check step 1130. Otherwise, if the current CABAC bin budget is less than four, the step 1120 returns “NO” and the method 1100 proceeds to an encode remainder pass step 1180.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than zero. If the current coefficient is the last significant coefficient, a significance flag is set to “TRUE” at step 1130 but is not encoded to the bitstream 133. If the current selected sub-block is not the first or last sub-block in the backward scan order 810, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1160, and all the significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is set to “TRUE” at step 1130 but not encoded to the bitstream 133.
  • Otherwise, if the current coefficient has a magnitude greater than zero, the significance flag is set to “TRUE” at step 1130 and encoded using the CABAC coder to the bitstream 133. Whenever a flag is encoded by the CABAC coder to the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is set to “TRUE”, the method 1100 proceeds to a greater than one check 1140. Otherwise, if the current coefficient has a magnitude of zero, the significance flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1100 proceeds to the final coefficient check 1160.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, then a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. Upon returning “TRUE” at step 1140, the method 1100 proceeds to an encode greater than three and parity flags step 1150. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” at step 1140 and encoded using the CABAC coder to the bitstream 133. The method 1100 proceeds to the final coefficient check 1160 if step 1140 returns “FALSE”.
  • the video encoder 114 encodes a parity flag and a greater than three flag for the current sub-block.
  • Step 1150 can be implemented by the entropy encoder 338 for example. Execution of step 1150 sets the parity flag to “FALSE” if the current coefficient has an even magnitude or sets the parity flag to “TRUE” if the current coefficient has an odd magnitude.
  • the parity flag is encoded using the CABAC coder to the bitstream 133.
  • the video encoder 114 sets the greater than three flag to “TRUE” if the current coefficient has a magnitude greater than three, or sets the greater than three flag to “FALSE” otherwise.
  • the greater than three flag is encoded using the CABAC coder to the bitstream 133.
  • the method 1100 proceeds under control of the processor 205 from step 1150 to the final coefficient check 1160.
  • the video encoder 114 checks whether the current selected coefficient is the top-left coefficient of the current selected sub-block. If the current selected coefficient is the top-left coefficient of the current selected sub-block, the step 1160 returns “YES” and the method 1100 proceeds to the encode remainder pass step 1180. Otherwise, if the current coefficient is not the top-left coefficient, the step 1160 returns “NO” and the method 1100 proceeds to a select next coefficient step 1170.
  • the next coefficient in the backward diagonal scan order 810 is selected.
  • the method 1100 proceeds from the step 1170 to the use CABAC check 1120.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133, for example by the entropy encoder 338.
  • the quantised coefficients are processed in the backward diagonal scan order 810. If a quantised coefficient was encoded by the CABAC coder (that is, the use CABAC check 1120 was passed (returned “YES”)), the quantised coefficient at scan position n has a remaining magnitude r[n] if the greater than three flag is “TRUE”.
  • the magnitude r[n] is binarised and then bypass coded to the bitstream 133. If a quantised coefficient was not encoded by the CABAC coder (the use CABAC check 1120 was not passed/returned “NO”), the absolute magnitude x[n] is binarised and bypass coded to the bitstream 133.
  • the method 1100 proceeds under control of the processor 205 from step 1180 to an encode signs pass step 1190.
  • At step 1190, sign bits for any significant coefficients of the current selected sub-block are bypass coded to the bitstream 133.
  • a quantised coefficient that was encoded by the CABAC coder is significant if the significance flag is “TRUE”.
  • a quantised coefficient that was not encoded by the CABAC coder is significant if the absolute magnitude x[n] is greater than zero.
  • the sign bits are bypass coded to the bitstream 133 in the backward diagonal scan order 810. The method 1100 terminates upon execution of step 1190.
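  • The per-coefficient context-coded flags of the method 1100 can be sketched as follows; write_ctx_bin stands in for the CABAC coder, the bin budget check is omitted, and the mapping of the remaining magnitude r[n] is an assumption chosen to be consistent with the flag definitions above rather than a statement of the standardised binarisation.

```python
def encode_coefficient_flags(magnitude, write_ctx_bin):
    # Emit significance, greater-than-one, parity and greater-than-three flags for one
    # coefficient; return the remainder left for the bypass-coded remainder pass
    # (None if the coefficient is not significant).
    write_ctx_bin(magnitude > 0)                   # significance flag
    if magnitude == 0:
        return None
    write_ctx_bin(magnitude > 1)                   # greater than one flag
    if magnitude == 1:
        return 0
    write_ctx_bin(magnitude & 1)                   # parity flag (TRUE for odd magnitude)
    write_ctx_bin(magnitude > 3)                   # greater than three flag
    if magnitude <= 3:
        return 0
    return (magnitude - 4) >> 1                    # remaining magnitude r[n] (assumed mapping)
```

  • For example, a magnitude of 7 produces flag values 1 (significant), 1 (greater than one), 1 (odd parity), 1 (greater than three) and a remainder of 1 under this assumed mapping.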
  • Fig. 12 shows the method 1200 for decoding quantised coefficients (424) for the current selected sub-block from the bitstream 133, when the sub-block belongs to a transform block for which transform skip mode has not been selected.
  • the method 1200 can be implemented at step 1050 if transform skip mode has been decoded or inferred not to be used.
  • the method 1200 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1200 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1200 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1200 begins at a select first coefficient step 1210.
  • the method 1200 selects a first quantised coefficient of the current sub-block. If the current sub-block contains the last significant coefficient position, then a current selected coefficient is set to the last significant coefficient. Otherwise, the current selected coefficient is set to the bottom-right coefficient of the current sub-block. The method 1200 proceeds from step 1210 to a use CABAC check step 1220.
  • the video decoder 134 checks whether the remaining CABAC bin budget satisfies a threshold, that is whether the remaining CABAC bin budget for the transform block is greater than or equal to four bins. If the remaining budget is greater than or equal to four, the step 1220 returns “YES” and the method 1200 proceeds to a significant check step 1230. Otherwise, if the remaining CABAC budget is less than four bins, the step 1220 returns “NO” and the method 1200 proceeds to a decode remainder pass step 1280.
  • the video decoder 134 checks whether the current selected coefficient has a magnitude greater than zero. If the current coefficient is the last significant coefficient, then a significance flag is inferred to be “TRUE”. If the current selected sub-block is not the first or last sub-block in the backward scan order 810, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1260, and all the significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is inferred to be “TRUE” at step 1230. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder.
  • Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the value of the inferred or decoded significance flag is “TRUE”, the method 1200 proceeds to a greater than one check 1240. Otherwise if the significance flag is inferred or decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1200 proceeds to the final coefficient check 1260.
  • the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the greater than one flag is decoded as “TRUE”, the method 1200 proceeds to a decode greater than three and parity flags step 1250. Otherwise if the greater than one flag is decoded as “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1200 proceeds to the final coefficient check 1260.
  • the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133.
  • the video decoder 134 also decodes a greater than three flag using the CABAC coder from the bitstream 133.
  • the method 1200 proceeds under control of the processor 205 from step 1250 to the final coefficient check step 1260.
  • the video decoder 134 checks whether the current selected coefficient is the top-left coefficient of the current selected sub-block. If the current selected coefficient is the top-left coefficient of the current selected sub-block, the step 1260 returns “YES” and the method 1200 proceeds to the decode remainder pass step 1280. Otherwise, if the current selected coefficient is not the top-left coefficient of the current selected sub-block, the step 1260 returns “NO” and the method 1200 proceeds to a select next coefficient step 1270.
  • At the select next coefficient step 1270, the next coefficient in the backward diagonal scan order 810 is selected.
  • the method 1200 proceeds under control of the processor 205 from step 1270 to the use CABAC check 1220.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder.
  • the remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded using bypass decoding.
  • the quantised coefficients are processed in the backward diagonal scan order 810. If a quantised coefficient was decoded by the CABAC coder (the use CABAC check 1220 was passed or returned “YES”), and the greater than three flag was decoded with a value of “TRUE”, then a remaining magnitude r[n] is bypass decoded from the bitstream 133, where n is the scan position of the quantised coefficient.
  • Otherwise, if a quantised coefficient was not decoded by the CABAC coder, the absolute magnitude x[n] is bypass decoded from the bitstream 133.
  • the method 1200 proceeds under control of the processor 205 from step 1280 to a decode signs pass step 1290.
  • At step 1290, sign bits for any significant coefficients of the current selected sub-block are bypass decoded from the bitstream 133.
  • a quantised coefficient is significant if the absolute magnitude x[n] is greater than zero.
  • the sign bits are bypass decoded from the bitstream 133 in the backward diagonal scan order 810.
  • the value of a quantised coefficient is set to -x[n] if the associated sign bit has a value of one.
  • the value of a quantised coefficient is set to x[n] if the associated sign bit has a value of zero.
  • the method 1200 terminates upon execution of the step 1290.
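  • Conversely, a magnitude can be rebuilt from the flags and remainder decoded by the method 1200; the weighting of the remainder below mirrors the encoding sketch given earlier and is likewise an assumption consistent with the flag definitions, not the standardised derivation.

```python
def reconstruct_magnitude(sig, gt1=0, parity=0, gt3=0, remainder=0):
    # Combine decoded significance, greater-than-one, parity and greater-than-three
    # flags with the bypass-decoded remainder into a coefficient magnitude.
    if not sig:
        return 0
    if not gt1:
        return 1
    if not gt3:
        return 2 + parity          # magnitude 2 or 3, selected by the parity flag
    return 4 + parity + 2 * remainder

# e.g. sig=1, gt1=1, parity=1, gt3=1, remainder=1 -> magnitude 7; the sign pass then
# negates the value when the associated sign bit is one.
```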
  • Fig. 13 shows the method 1300 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133.
  • the method 1300 is implemented at step 950 of the method 900 if the sub-block belongs to a transform block for which transform skip mode has been selected.
  • the method 1300 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1300 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1300 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1300 begins at a select first coefficient step 1310.
  • a current selected coefficient is set to the top-left quantised coefficient of the current sub-block.
  • the method 1300 proceeds under control of the processor 205 from step 1310 to a use CABAC check 1320.
  • the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to four. If the remaining budget is greater than or equal to four, the step 1320 returns “YES” and the method 1300 proceeds to a significant check step 1330. Otherwise, if the remaining budget is less than four, the step 1320 returns “NO” and the method 1300 proceeds to an encode remainder pass step 1390.
  • the video encoder 114 sets a significance flag value.
  • the encoder 114 checks whether the current selected coefficient has a magnitude greater than zero. If the coded sub-block flag associated with the current selected sub-block was set to “TRUE” and encoded to the bitstream 133, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1370, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, then the significance flag is set to “TRUE” but not encoded to the bitstream 133. If the current coefficient has a magnitude greater than zero, then the significance flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133.
  • Whenever a flag is encoded by the CABAC coder to the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is set to “TRUE”, the method 1300 proceeds under control of the processor 205 from step 1330 to an encode sign flag step 1335. Otherwise, if the current coefficient has a magnitude of zero, the significance flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds in this event from the step 1330 to the final coefficient check 1370.
  • the video encoder 114 encodes a sign bit of the current selected coefficient using the CABAC coder to the bitstream 133.
  • the sign bit has a value of zero if the value of the current selected coefficient is positive.
  • the sign bit has a value of one if the value of the current selected coefficient is negative.
  • the method 1300 proceeds under control of the processor 205 from step 1335 to a greater than one check 1340.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds to an encode parity flag step 1345 if step 1340 returns “TRUE”. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds to the final coefficient check 1370 if step 1340 returns “FALSE”.
  • the video encoder 114 encodes a parity flag for the current quantised residual coefficient.
  • the parity flag is set to “FALSE” if the current coefficient has an even magnitude.
  • the parity flag is set to “TRUE” if the current coefficient has an odd magnitude.
  • the parity flag is encoded using the CABAC coder to the bitstream 133.
  • the method 1300 proceeds under control of the processor 205 from step 1345 to a use CABAC check 1350.
  • the video encoder 114 checks whether the remaining CABAC bin budget meets the threshold (is greater than or equal to four). If the remaining budget is greater than or equal to four, the step 1350 returns “YES” and method 1300 proceeds to a greater than gtk check step 1360. Otherwise, if the remaining budget is less than four, the step 1350 returns “NO” and the method 1300 proceeds to the encode remainder pass step 1390.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than 2 * k + 1. k is set to one the first time the method 1300 reaches the greater than gtk check 1360 for the current selected coefficient. If the selected coefficient has a magnitude greater than 2 * k + 1, then a greater than 2k + 1 flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. If k is less than four, the step 1360 returns “YES” and the method 1300 stays at the greater than gtk check 1360 and k is increased by one.
  • Otherwise, if k equals four, the step 1360 returns “NO” and the method 1300 proceeds to the final coefficient check 1370. If the selected coefficient has a magnitude less than or equal to 2 * k + 1, the greater than 2k + 1 flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. Accordingly, up to four flags can be encoded in step 1360, relating to whether the current selected coefficient has a magnitude greater than 3, 5, 7 and 9 (for k values of 1, 2, 3 and 4 respectively). If the greater than 2k + 1 flag is set to “FALSE” the step 1360 returns “NO” and the method 1300 proceeds to the final coefficient check 1370.
  • the video encoder 114 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1370 returns “YES” and the method 1300 proceeds to the encode remainder pass step 1390. Otherwise, if the current selected coefficient is not the bottom-right coefficient, the step 1370 returns “NO” and the method 1300 proceeds to a select next coefficient step 1380.
  • At the select next coefficient step 1380, the next coefficient in the forward diagonal scan order 860 is selected.
  • the method 1300 proceeds under control of the processor 205 from the step 1380 to the use CABAC check 1320.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133.
  • r[n] is binarised in a process described further below, and then bypass coded to the bitstream 133. If a quantised coefficient was encoded by the CABAC coder and both the use CABAC check 1320 and the use CABAC check 1350 were passed (returned “YES”), but the greater than nine flag was not encoded, or has a value of “FALSE”, there is no remaining magnitude that needs to be encoded to the bitstream 133.
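To make the per-coefficient flag derivation of the method 1300 concrete, the following sketch derives the context-coded flags described in steps 1330 to 1360 (significance, sign, greater than one, parity, and the greater than 3, 5, 7 and 9 flags) from a single signed quantised coefficient. It is an illustrative sketch only: the CABAC coder, the bin budget checks and the remainder pass are not modelled, and the function and variable names are hypothetical rather than taken from any reference implementation.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: derive the flags of steps 1330-1360 of the method 1300
 * for one coefficient. Flags are printed rather than arithmetically coded;
 * the CABAC bin budget and the remainder pass (step 1390) are omitted. */
static void derive_tsrc_flags(int coeff)
{
    int mag = abs(coeff);

    bool sig = (mag > 0);                     /* step 1330: significance flag    */
    printf("sig=%d\n", sig);
    if (!sig)
        return;                               /* zero coefficient: no more flags */

    printf("sign=%d\n", coeff < 0);           /* step 1335: sign bit             */

    bool gt1 = (mag > 1);                     /* step 1340: greater than one     */
    printf("gt1=%d\n", gt1);
    if (!gt1)
        return;                               /* magnitude is exactly one        */

    printf("parity=%d\n", mag & 1);           /* step 1345: odd magnitude        */

    /* step 1360: up to four greater-than-(2k + 1) flags for k = 1..4, i.e.
     * greater than 3, 5, 7 and 9; stop after the first FALSE flag. */
    for (int k = 1; k <= 4; k++) {
        bool gtk = (mag > 2 * k + 1);
        printf("gt%d=%d\n", 2 * k + 1, gtk);
        if (!gtk)
            break;
    }
}

int main(void)
{
    derive_tsrc_flags(-6);  /* sig=1 sign=1 gt1=1 parity=0 gt3=1 gt5=1 gt7=0 */
    return 0;
}
```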
  • Fig. 14 shows the method 1400 for decoding the quantised coefficients (424) of the current selected sub-block from the bitstream 133.
  • the method 1400 can be implemented at the step 1050 if the sub-block belongs to a transform block for which transform skip mode has been selected.
  • the method 1400 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1400 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1400 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1400 begins at a select first coefficient step 1410.
  • a current selected coefficient is set to the top-left coefficient of the current sub-block.
  • the method 1400 proceeds under control of the processor 205 to a use CABAC check 1420.
  • the video decoder 134 checks whether the remaining CABAC bin budget meets a threshold.
  • the threshold relates to whether the remaining CABAC bin budget is greater than or equal to four bins. If the remaining budget is greater than or equal to four, the step 1420 returns “YES” and the method 1400 proceeds to a significant check step 1430. Otherwise, if the remaining budget is less than four the method 1400 proceeds to a decode remainder pass step 1490.
  • the video decoder 134 checks whether the current selected coefficient has a magnitude greater than zero and sets a significance flag accordingly.
  • if the coded sub-block flag associated with the current selected sub-block was decoded as “TRUE” from the bitstream 133 and not inferred, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1470, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, then the significance flag is inferred to be “TRUE”. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder. Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is inferred or decoded as “TRUE”, the method 1400 proceeds to a decode sign flag step 1435. Otherwise, if the significance flag is inferred or decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1400 proceeds to the final coefficient check step 1470.
  • the video decoder 134 decodes a sign bit of the current selected coefficient using the CABAC coder from the bitstream 133.
  • the method 1400 proceeds under control of the processor 205 from step 1435 to a greater than one check step 1440.
  • the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the decoded greater than one flag has a value of “TRUE”, the method 1400 proceeds to a decode parity flag step 1445. Otherwise if the decoded greater than one flag has a value of “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1400 proceeds to the final coefficient check step 1470.
  • the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133.
  • the method 1400 proceeds under control of the processor 205 from step 1445 to a use CABAC check step 1450.
  • the video decoder 134 checks whether the remaining CABAC bin budget meets the threshold (is greater than or equal to four). If the remaining budget is greater than or equal to four, the step 1450 returns “YES” and the method 1400 proceeds to a greater than gtk check step 1460. Otherwise, if the remaining budget is less than four, the method 1400 proceeds to the decode remainder pass step 1490.
  • the video decoder 134 decodes a greater than 2k + 1 flag using the CABAC coder from the bitstream 133.
  • the variable k is set to one the first time the method 1400 reaches the greater than gtk check 1460 for the current selected coefficient. If the decoded greater than 2k + 1 flag has a value of “TRUE”, and k is less than four, step 1460 returns “YES”, the method 1400 remains at the greater than gtk check step 1460 and k is increased by one. Otherwise if k is equal to four, the step 1460 returns “NO” and the method 1400 proceeds to the final coefficient check 1470.
  • the current selected coefficient is assigned a magnitude of 2k + p.
  • the variable p has a value of zero if the parity flag was decoded as “FALSE”, and p has a value of one if the parity flag was decoded as “TRUE”. Accordingly, up to four flags can be decoded at step 1460, relating to whether the current selected coefficient has a magnitude greater than 3, 5, 7 and 9 (for k values of 1, 2, 3 and 4 respectively). If the decoded greater than 2k + 1 flag has a value of “FALSE”, the step 1460 returns “NO” and the method 1400 proceeds to the final coefficient check step 1470.
  • the video decoder 134 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1470 returns “YES” and the method 1400 proceeds to the decode remainder pass step 1490. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current selected sub-block, the step 1470 returns “NO” and the method 1400 proceeds to a select next coefficient step 1480.
  • at the select next coefficient step 1480, the next coefficient in the forward diagonal scan order 860 is selected.
  • the method 1400 proceeds under control of the processor 205 from step 1480 to the use CABAC check step 1420.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder.
  • the remaining magnitudes of the quantised coefficients of the current selected sub-block are bypass decoded.
  • the quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was decoded by the CABAC coder and both the use CABAC check 1420 and the use CABAC check 1450 were passed (returned “YES”), and the greater than nine flag was decoded as “TRUE”, then a remaining magnitude r[n] is decoded from the bitstream 133, where n is the scan position of the quantised coefficient.
  • the value of the quantised coefficient is set to −x[n] if the associated sign bit has a value of one.
  • the value of the quantised coefficient is set to x[n] if the associated sign bit has a value of zero.
  • the method 1400 terminates upon execution of step 1490.
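The flag-to-magnitude mapping described at steps 1440 to 1460 (a coefficient is assigned a magnitude of 2k + p when the greater-than-(2k + 1) flags stop at some k with a “FALSE” value) can be sketched as follows. The sketch assumes the relevant flags have already been decoded into plain booleans; it does not model the CABAC coder or the remainder pass of step 1490, and the names are illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative reconstruction of the flag-coded portion of a magnitude for
 * the method 1400. gtk[i] holds the decoded greater-than-(2*(i+1) + 1) flag,
 * i.e. the greater than 3, 5, 7 and 9 flags in order. Returns the magnitude
 * 2k + p of step 1460, or -1 when all four flags are TRUE and a remaining
 * magnitude must be bypass decoded at step 1490. */
static int magnitude_from_flags(bool gt1, bool parity, const bool gtk[4])
{
    if (!gt1)
        return 1;                       /* step 1440: magnitude is one          */

    int p = parity ? 1 : 0;             /* step 1445: parity flag               */
    int k = 1;
    while (k <= 4 && gtk[k - 1])        /* step 1460: advance while flags TRUE  */
        k++;

    if (k > 4)
        return -1;                      /* greater than nine: remainder needed  */
    return 2 * k + p;                   /* step 1460: magnitude is 2k + p       */
}

int main(void)
{
    bool gtk_a[4] = { true, false, false, false };  /* gt3 TRUE, gt5 FALSE       */
    printf("%d\n", magnitude_from_flags(true, false, gtk_a));  /* prints 4       */

    bool gtk_b[4] = { false, false, false, false }; /* gt3 FALSE                 */
    printf("%d\n", magnitude_from_flags(true, true, gtk_b));   /* prints 3       */
    return 0;
}
```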
  • Methods 1100 and 1200 describe a regular residual coding (RRC) process which is used when transform skip mode has not been selected for the transform block.
  • Methods 1300 and 1400 describe a transform skip residual coding (TSRC) process which is used when transform skip mode has been selected for the transform block. Having the TSRC process differ from the RRC process can be advantageous because quantised coefficients produced in a transform skip TB have different statistical properties to quantised coefficients produced in a non-transform skip TB. A different residual coding process is therefore needed to exploit the statistical properties of quantised coefficients produced in a transform skip TB.
  • when a transform is applied, the resulting coefficients represent the characteristics of the residual signal in the frequency domain, and the coefficients at or near the DC frequency (the top-left corner of the TB) typically have the greatest magnitude.
  • Coefficients corresponding to higher frequencies will typically have relatively small or zero magnitude. Signalling a last significant position is therefore an efficient way to represent the many zero-valued high frequency coefficients. In contrast, if a transform is skipped, the resulting coefficients are representative of the residual signal in the spatial domain. The magnitudes of spatial residual coefficients typically do not depend on each residual coefficient’s location within the transform block, so there is no benefit in signalling a last significant position.
  • methods 1300 and 1400 describe a working TSRC process.
  • however, for the TSRC process of methods 1300 and 1400 the number of syntax elements per quantised coefficient coded by the CABAC coder is eight, compared to four syntax elements per quantised coefficient coded by the CABAC coder for the RRC process of methods 1100 and 1200.
  • the remaining CABAC bin budget is potentially checked twice per quantised coefficient for the TSRC process of methods 1300 and 1400, compared with just one check per quantised coefficient for the RRC process of methods 1100 and 1200. It is desirable for the RRC and TSRC processes to be similar in complexity for hardware implementations, to avoid one process being a bottleneck (that is, a cause of overall delay) for the overall coding process. Requiring a different residual coding process with higher complexity can be disadvantageous in terms of hardware implementation.
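The bin budget accounting referred to in methods 1300 and 1400 can be summarised as follows: the remaining budget is compared against a threshold of four bins before context coding (the “use CABAC” checks), and each flag actually coded by the CABAC coder reduces the budget by one. The sketch below is a hypothetical bookkeeping helper only; the initial budget value is illustrative and not taken from the description.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical bin-budget bookkeeping. The initial value below is
 * illustrative only; the description does not set it here. */
static int remaining_bins = 10;

static bool use_cabac(void)          /* the "use CABAC" checks, e.g. 1320/1350 */
{
    return remaining_bins >= 4;
}

static void code_flag(const char *name)   /* one bin per context-coded flag */
{
    remaining_bins -= 1;
    printf("coded %s, %d bins left\n", name, remaining_bins);
}

int main(void)
{
    /* Worst case for one TSRC coefficient (methods 1300/1400): the budget is
     * checked twice and up to eight flags are context coded. An RRC
     * coefficient (methods 1100/1200) checks once and codes up to four. */
    if (use_cabac()) {
        code_flag("sig"); code_flag("sign"); code_flag("gt1"); code_flag("par");
    }
    if (use_cabac()) {
        code_flag("gt3"); code_flag("gt5"); code_flag("gt7"); code_flag("gt9");
    }
    return 0;
}
```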
  • the step 950 invokes a method 1500 described below in relation to Fig. 15.
  • the step 1050 invokes a method 1600 described below in relation to Fig. 16.
  • Each of the methods 1500 and 1600 relates to a TSRC implementation that has similar complexity to the RRC implementation of methods 1100 and 1200.
  • Fig. 15 shows the method 1500 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133.
  • the method 1500 can be implemented at step 950 if the sub-block belongs to a transform block for which transform skip mode has been selected.
  • the method 1500 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1500 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1500 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1500 begins at a select first coefficient step 1510.
  • a current selected coefficient is set to the top-left coefficient of the current sub-block of the transform block.
  • the method 1500 proceeds under control of the processor 205 from step 1510 to a use CABAC check step 1520.
  • the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to a threshold of four bins, similarly to step 1320. If the remaining budget is greater than or equal to four, the step 1520 returns “YES” and the method 1500 proceeds to a significant check step 1530. Otherwise, if the remaining CABAC bin budget is less than four, the method 1500 proceeds to an encode remainder pass step 1590.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than zero and sets a significance flag for the current selected coefficient.
  • the step 1530 operates as follows:
  • the video encoder 114 encodes a sign bit of the current selected coefficient using the CABAC coder to the bitstream 133.
  • the sign bit has a value of zero if the value of the current selected coefficient is positive.
  • the sign bit has a value of one if the value of the current selected coefficient is negative.
  • the method 1500 proceeds under control of the processor 205 from step 1540 to a greater than one check step 1550.
  • the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, then a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133.
  • the method 1500 proceeds to an encode parity flag step 1560 if step 1550 returns “TRUE”. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1500 proceeds to the final coefficient check 1570 if execution of step 1550 returns “FALSE”.
  • the video encoder 114 sets a parity flag for the current selected coefficient.
  • the parity flag is set to “FALSE” if the current coefficient has an even magnitude.
  • the parity flag is set to “TRUE” if the current coefficient has an odd magnitude.
  • the parity flag is encoded using the CABAC coder to the bitstream 133.
  • the method 1500 proceeds under control of the processor 205 from step 1560 to the final coefficient check 1570.
  • the video encoder 114 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1570 returns “YES” and the method 1500 proceeds to the encode remainder pass step 1590. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current selected sub-block, the step 1570 returns “NO” and the method 1500 proceeds to a select next coefficient step 1580.
  • at the select next coefficient step 1580, the next coefficient in the forward diagonal scan order 860 is selected.
  • the method 1500 proceeds under control of the processor 205 from step 1580 to the use CABAC check step 1520.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133.
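The simplified encoder pass of the method 1500 codes at most four flags per coefficient with the CABAC coder before deferring everything else to the remainder pass of step 1590. The sketch below collects those flags for one coefficient; it is illustrative only, the CABAC coder and bin budget are not modelled, and the type and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Flags produced for one coefficient by steps 1530-1560 of the method 1500.
 * Any magnitude not covered by the flags is left to the remainder pass
 * (step 1590), which bypass codes it and is not modelled here. */
typedef struct {
    bool sig, sign, gt1, parity;
    bool has_remainder;   /* true when step 1590 must code a remaining magnitude */
} Tsrc1500Flags;

static Tsrc1500Flags encode_coefficient_flags(int coeff)
{
    Tsrc1500Flags f = { 0 };
    int mag = abs(coeff);

    f.sig = (mag > 0);                 /* step 1530: significance flag        */
    if (!f.sig)
        return f;

    f.sign = (coeff < 0);              /* step 1540: sign bit                 */
    f.gt1 = (mag > 1);                 /* step 1550: greater than one flag    */
    if (!f.gt1)
        return f;                      /* magnitude one: the flags suffice    */

    f.parity = (mag & 1);              /* step 1560: parity flag              */
    f.has_remainder = true;            /* rest is bypass coded at step 1590   */
    return f;
}

int main(void)
{
    Tsrc1500Flags f = encode_coefficient_flags(-7);
    printf("sig=%d sign=%d gt1=%d parity=%d remainder=%d\n",
           f.sig, f.sign, f.gt1, f.parity, f.has_remainder);
    return 0;
}
```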
  • Fig. 16 shows the method 1600 for decoding the quantised coefficients (424) of the current selected sub-block from the bitstream 133.
  • the method 1600 can be implemented at the step 950 if the sub-block belongs to a transform block for which transform skip mode has been selected.
  • the method 1600 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1600 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1600 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
  • the method 1600 implements decoding complementing the encoding of the method 1500.
  • the method 1600 begins at a select first coefficient step 1610.
  • a current selected coefficient is set to the top-left coefficient of the current sub-block.
  • the method 1600 proceeds under execution of the processor 205 from step 1610 to a use CABAC check 1620.
  • the video decoder 134 checks whether the remaining CABAC bin budget meets a threshold. The threshold is whether the remaining bin budget is greater than or equal to four. If the remaining budget is greater than or equal to four, the step 1620 returns “YES” and the method 1600 proceeds to a significant check step 1630.
  • otherwise, if the remaining budget is less than four, the step 1620 returns “NO” and the method 1600 proceeds to a decode remainder pass 1690.
  • the video decoder 134 checks whether the current selected coefficient has a magnitude greater than zero and sets a significance flag accordingly. If the coded sub-block flag associated with the current selected sub-block was decoded as “TRUE” from the bitstream 133 and not inferred, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1670, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is inferred to be “TRUE”. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder. Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is reduced by one.
  • if the significance flag is inferred or decoded as “TRUE”, the method 1600 proceeds to a decode sign flag step 1640. Otherwise if the significance flag is decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1600 proceeds to the final coefficient check step 1670.
  • the video decoder 134 decodes a sign bit of the current selected coefficient using the CABAC coder from the bitstream 133.
  • the method 1600 proceeds under control of the processor 205 from step 1640 to a greater than one check step 1650.
  • the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the decoded greater than one flag has a value of “TRUE”, the method 1600 proceeds to a decode parity flag step 1660. Otherwise if the decoded greater than one flag has a value of “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1600 proceeds to the final coefficient check step 1670.
  • the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133.
  • the method 1600 proceeds under control of the processor 205 from step 1660 to the final coefficient check step 1670.
  • the video decoder 134 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1670 returns “YES” and the method 1600 proceeds to the decode remainder pass step 1690. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current selected sub-block, the step 1670 returns “NO” and the method 1600 proceeds to a select next coefficient step 1680.
  • at the select next coefficient step 1680, the next coefficient in the forward diagonal scan order 860 is selected.
  • the method 1600 proceeds under control of the processor 205 from step 1680 to the use CABAC check step 1620.
  • any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder.
  • the remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded using bypass decoding.
  • the quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was decoded by the CABAC coder (the use CABAC check 1620 was passed or returned “YES”), and the greater than one flag was decoded and set to “TRUE”, a remaining magnitude r[n] is bypass decoded from the bitstream 133, where n is the scan position of the quantised coefficient.
  • the value of the quantised coefficient is set to −x[n] if the associated sign bit has a value of one.
  • the value of the quantised coefficient is set to x[n] if the associated sign bit has a value of zero.
  • the method 1600 terminates upon execution of step 1690.
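A decoder-side sketch of one coefficient of the method 1600 is given below. The sign handling and the behaviour of assigning a magnitude of one when the greater than one flag is “FALSE” follow the description above; however, the exact combination of the parity flag with the bypass-decoded remainder r is not restated in this passage, so the formula x = 2 + 2r + p used here is an assumption that is merely consistent with the flag semantics (parity indicating an odd magnitude, a remainder being present whenever the greater than one flag is TRUE). All names are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative reconstruction of one coefficient for the method 1600.
 * ASSUMPTION: the flag-coded portion and the bypass-decoded remainder r are
 * combined as x = 2 + 2*r + parity when the greater than one flag is TRUE;
 * the document does not restate the exact formula at this point. */
static int reconstruct_coefficient(bool sig, bool sign, bool gt1,
                                   bool parity, int r)
{
    if (!sig)
        return 0;                            /* insignificant coefficient     */

    int x;                                   /* magnitude x[n]                */
    if (!gt1)
        x = 1;                               /* greater than one flag FALSE   */
    else
        x = 2 + 2 * r + (parity ? 1 : 0);    /* assumed flag/remainder split  */

    return sign ? -x : x;                    /* sign bit applied last         */
}

int main(void)
{
    printf("%d\n", reconstruct_coefficient(true, true, false, false, 0));  /* -1 */
    printf("%d\n", reconstruct_coefficient(true, false, true, true, 1));   /*  5 */
    return 0;
}
```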
  • r[n] is binarised depending on an associated Rice parameter R.
  • r[n] is binarised as a concatenation of a prefix code and a suffix code.
  • the prefix code is a bit string of length six with all bits equal to one.
  • the suffix code is derived by binarising r[n] − c_max with an exponential Golomb order-k code, with k set equal to R + 1.
  • the overall binarisation for r[n] may be referred to as a Rice-EG code.
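The Rice-EG binarisation just described can be sketched as follows. Note that c_max is referenced above but not defined in this passage, so its value in the example is an assumption, and the classical exponential Golomb order-k construction is used for the suffix; both should be read as illustrative rather than normative.

```c
#include <stdio.h>

/* Illustrative Rice-EG binarisation of a remaining magnitude r, following
 * the description above: a prefix of six one-bits followed by an exponential
 * Golomb order-k suffix of (r - c_max), with k = R + 1. The value of c_max
 * and the exact EG variant are assumptions. Bits are printed as characters. */
static void put_bit(int b) { putchar(b ? '1' : '0'); }

static void exp_golomb_k(unsigned v, unsigned k)
{
    unsigned x = v + (1u << k);
    int n = 0;
    for (unsigned t = x; t != 0; t >>= 1)
        n++;                              /* n = bit length of x              */
    for (int i = 0; i < n - (int)k - 1; i++)
        put_bit(0);                       /* leading zeros                    */
    for (int i = n - 1; i >= 0; i--)
        put_bit((x >> i) & 1u);           /* x in binary, MSB first           */
}

static void rice_eg_binarise(unsigned r, unsigned R, unsigned c_max)
{
    for (int i = 0; i < 6; i++)
        put_bit(1);                       /* prefix: six one-bits             */
    exp_golomb_k(r - c_max, R + 1);       /* suffix: EG code of order R + 1   */
    putchar('\n');
}

int main(void)
{
    /* Example with a Rice parameter of 0 and an assumed c_max of 6:
     * prints 111111 followed by the order-1 EG code of 2, i.e. 1111110100. */
    rice_eg_binarise(8, 0, 6);
    return 0;
}
```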
  • the methods 900 to 1600 encode or decode quantised coefficients 336 or 424 respectively of a selected sub-block.
  • the quantised coefficients may also be referred to generally as quantised transform coefficients, quantised residual coefficients, or residual coefficients.
  • Steps 1630 to 1660 operate so that only a significance flag, a sign flag, greater than one flag and a parity flag are decoded using a CABAC decoder (if the use CABAC check 1620 returns “YES”).
  • the decoded flags can represent at least a portion of the magnitude of the residual coefficient or the magnitude of the residual coefficient in full. Any remaining portion of the residual coefficient can be decoded at step 1690.
  • the binarising and decoding at step 1690 using Rice-EG is implemented using a fixed Rice parameter of 0 (zero). Implementations in this regard reduce the complexity required for TSRC in terms of bin budgeting compared to RRC. Further, the number of steps required to implement TSRC is reduced as a second CABAC budget check (such as 1450) is avoided. Rather, the method 1600 operates using a single CABAC check without adversely affecting coding gain compared to applying RRC methods directly.
  • the method 1600 also operates to use binarising and bypass decoding to decode the magnitude of the residual coefficient in full, potentially with a Rice parameter of 0, if the CABAC budget is determined to have been exhausted at step 1620.
  • the methods 1500 and 1600 allow the complexity of RRC and TSRC to be similar, thereby reducing the complexity of hardware implementation.
  • Use of a Rice parameter R equal to 0 has also been found to improve coding gain in some instances, particularly in relation to bypass coding any remainder resulting from operation of the methods 1500 and 1600 (steps 1590 and 1690 respectively).

Abstract

A system and method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream. The method comprises determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.

Description

METHOD, APPARATUS AND SYSTEM FOR ENCODING AND DECODING A
BLOCK OF VIDEO SAMPLES
REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit under 35 U.S.C. §119 of the filing date of Australian Patent Application No. 2019284053, filed 23 December 2019, hereby incorporated by reference in its entirety as if fully set forth herein.
TECHNICAL FIELD
[0002] The present invention relates generally to digital video signal processing and, in particular, to a method, apparatus and system for encoding and decoding a block of video samples. The present invention also relates to a computer program product including a computer readable medium having recorded thereon a computer program for encoding and decoding a block of video samples.
BACKGROUND
[0003] Many applications for video coding currently exist, including applications for transmission and storage of video data. Many video coding standards have also been developed and others are currently in development. Recent developments in video coding standardisation have led to the formation of a group called the “Joint Video Experts Team” (JVET). The Joint Video Experts Team (JVET) includes members of Study Group 16, Question 6 (SG16/Q6) of the Telecommunication Standardisation Sector (ITU-T) of the International Telecommunication Union (ITU), also known as the “Video Coding Experts Group” (VCEG), and members of the International Organisations for Standardisation / International Electrotechnical Commission Joint Technical Committee 1 / Subcommittee 29 / Working Group 11 (ISO/IEC JTC1/SC29/WG11), also known as the “Moving Picture Experts Group” (MPEG).
[0004] The Joint Video Experts Team (JVET) issued a Call for Proposals (CfP), with responses analysed at its 10th meeting in San Diego, USA. The submitted responses demonstrated video compression capability significantly outperforming that of the current state-of-the-art video compression standard, i.e.: “high efficiency video coding” (HEVC). On the basis of this outperformance it was decided to commence a project to develop a new video compression standard, to be named ‘versatile video coding’ (VVC). VVC is anticipated to address ongoing demand for ever-higher compression performance, especially as video formats increase in capability (e.g., with higher resolution and higher frame rate) and address increasing market demand for service delivery over WANs, where bandwidth costs are relatively high. At the same time, VVC must be implementable in contemporary silicon processes and offer an acceptable trade-off between the achieved performance versus the implementation cost (for example, in terms of silicon area, CPU processor load, memory utilisation and bandwidth).
[0005] Video data includes a sequence of frames of image data, each of which include one or more colour channels. Generally, one primary colour channel and two secondary colour channels are needed. The primary colour channel is generally referred to as the ‘luma’ channel and the secondary colour channel(s) are generally referred to as the ‘chroma’ channels.
Although video data is typically displayed in an RGB (red-green-blue) colour space, this colour space has a high degree of correlation between the three respective components. The video data representation seen by an encoder or a decoder is often using a colour space such as YCbCr. YCbCr concentrates luminance, mapped to ‘luma’ according to a transfer function, in a Y (primary) channel and chroma in Cb and Cr (secondary) channels. Moreover, the Cb and Cr channels may be sampled spatially at a lower rate (subsampled) compared to the luma channel, for example half horizontally and half vertically - known as a ‘4:2:0 chroma format’. The 4:2:0 chroma format is commonly used in ‘consumer’ applications, such as internet video streaming, broadcast television, and storage on Blu-Ray™ disks. Subsampling the Cb and Cr channels at half-rate horizontally and not subsampling vertically is known as a ‘4:2:2 chroma format’. The 4:2:2 chroma format is typically used in professional applications, including capture of footage for cinematic production and the like. The higher sampling rate of the 4:2:2 chroma format makes the resulting video more resilient to editing operations such as colour grading. Prior to distribution to consumers, 4:2:2 chroma format material is often converted to the 4:2:0 chroma format and then encoded for distribution to consumers. In addition to chroma format, video is also characterised by resolution and frame rate. Example resolutions are ultra-high definition (UHD) with a resolution of 3840x2160 or ‘8K’ with a resolution of 7680x4320 and example frame rates are 60 or 120Hz. Luma sample rates may range from approximately 500 mega samples per second to several giga samples per second. For the 4:2:0 chroma format, the sample rate of each chroma channel is one quarter the luma sample rate and for the 4:2:2 chroma format, the sample rate of each chroma channel is one half the luma sample rate.
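As a worked example of the sample rates quoted above (the figures are those already given in this paragraph and the arithmetic is purely illustrative): a UHD 3840x2160 sequence at 60 frames per second carries 3840 x 2160 x 60 = 497,664,000 luma samples per second, i.e. approximately 500 mega samples per second; in the 4:2:0 chroma format each chroma channel then carries one quarter of that, roughly 124 mega samples per second; and an 8K 7680x4320 sequence at 120 Hz carries 7680 x 4320 x 120 = 3,981,312,000 luma samples per second, i.e. several giga samples per second.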
[0006] The VVC standard is a ‘block based’ codec, in which frames are firstly divided into a square array of regions known as ‘coding tree units’ (CTUs). CTUs generally occupy a relatively large area, such as 128x128 luma samples. However, CTUs at the right and bottom edge of each frame may be smaller in area. Associated with each CTU is a ‘coding tree’ for the luma channel and an additional coding tree for the chroma channels. A coding tree defines a decomposition of the area of the CTU into a set of blocks, also referred to as ‘coding blocks’ (CBs). It is also possible for a single coding tree to specify blocks both for the luma channel and the chroma channels, in which case the collections of collocated coding blocks are referred to as ‘coding units’ (CUs), i.e., each CU having a coding block for each colour channel. The CBs are processed for encoding or decoding in a particular order. As a consequence of the use of the 4:2:0 chroma format, a CTU with a luma coding tree for a 128x128 luma sample area has a corresponding chroma coding tree for a 64x64 chroma sample area, collocated with the 128x128 luma sample area. When a single coding tree is in use for the luma channel and the chroma channels, the collections of collocated blocks for a given area are generally referred to as ‘units’, for example the above-mentioned CUs, as well as ‘prediction units’ (PUs), and ‘transform units’ (TUs). When separate coding trees are used for a given area, the above- mentioned CBs, as well as ‘prediction blocks’ (PBs), and ‘transform blocks’ (TBs) are used.
[0007] Notwithstanding the above distinction between ‘units’ and ‘blocks’, the term ‘block’ may be used as a general term for areas or regions of a frame for which operations are applied to all colour channels.
[0008] For each CU a prediction unit (PU) of the contents (sample values) of the corresponding area of frame data is generated (a ‘prediction unit’). If the PU is generated from sample values in a previously signalled frame, the prediction is called inter prediction. If the PU is generated from previous samples in the same frame, the prediction is called intra prediction. Further, a representation of the difference (or ‘residual’ in the spatial domain) between the prediction and the contents of the area as seen at input to the encoder is formed. The difference in each colour channel may be transformed and coded as a block of residual coefficients, forming one or more TUs for a given CU. The residual coefficients may be transformed by a transform such as a Discrete Cosine Transform (DCT), a Discrete Sine Transform (DST), or other transform, to produce a final block of transform coefficients that substantially decorrelates the residual samples. The transform coefficients are then traversed in an order such as a backward diagonal scan, and each coefficient is encoded by an entropy encoder. Entropy coding a transform coefficient consists of expressing the coefficient in terms of syntax elements, each of which is binarised. The binarised syntax elements may then be further encoded by a context adaptive binary arithmetic coder (CABAC), or passed on to the bitstream (“bypass coding”). [0009] In some classes of video content such as screen content, it may be advantageous to avoid performing a transform. Then the residual coefficients are traversed and encoded. Because the statistics for the residual coefficients are not the same as the statistics for the transform coefficients, it is generally advantageous for the residual coefficients to be encoded in a different process from the encoding process for transform coefficients. Existing solutions typically use different hardware implementations for encoding residual coefficients and transform coefficients. However, hardware implementations for performing the two processes of encoding transform coefficients and residual coefficients respectively are preferably similar in complexity. In particular, it is desirable that the two processes are similar in their utilisation of the CABAC coder.
SUMMARY
[00010] It is an object of the present invention to substantially overcome, or at least ameliorate, one or more disadvantages of existing arrangements.
[00011] One aspect of the present invention provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00012] According to another aspect, the method further comprises determining that a CABAC coding budget for the transform block has been exhausted, and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
[00013] Another aspect of the present disclosure provides a method of decoding a transform- skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining that a CABAC coding budget for the transform block has been exhausted; and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
[00014] Another aspect of the present disclosure provides a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: upon selecting the residual coefficient from the transform block, determining whether the CABAC coding budget is exhausted; if the CABAC coding budget is not exhausted, decoding the residual coefficient in full by: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0; and if the CABAC coding budget is exhausted, decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
[00015] Another aspect of the present disclosure provides a non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00016] Another aspect of the present disclosure provides a system, comprising: a memory; and a processor, wherein the processor is configured to execute code stored on the memory for implementing a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00017] Another aspect of the present disclosure provides a video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, greater than one flag and a parity flag using a CABAC decoder; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00018] Another aspect of the present disclosure provides a method of decoding a transform- skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00019] Another aspect of the present disclosure provides a non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00020] Another aspect of the present disclosure provides a system, comprising: a memory; and a processor, wherein the processor is configured to execute code stored on the memory for implementing a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00021] Another aspect of the present disclosure provides a video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine significance of the residual coefficient by decoding or inferring a significance flag; determine a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
[00022] Another aspect of the present disclosure provides a method of encoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: encoding a significance flag indicating whether the residual coefficient has a magnitude greater than zero to the bitstream; encoding a portion of a magnitude of the residual coefficient by further encoding a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag to the bitstream; and encoding any remaining portion of the magnitude of the residual coefficient to the bitstream using Rice-EG encoding with a Rice parameter of 0.
[00023] Other aspects are also described.
BRIEF DESCRIPTION OF THE DRAWINGS
[00024] At least one embodiment of the present invention will now be described with reference to the following drawings and appendices, in which:
[00025] Fig. 1 is a schematic block diagram showing a video encoding and decoding system;
[00026] Figs. 2A and 2B form a schematic block diagram of a general purpose computer system upon which one or both of the video encoding and decoding system of Fig. 1 may be practiced;
[00027] Fig. 3 is a schematic block diagram showing functional modules of a video encoder;
[00028] Fig. 4 is a schematic block diagram showing functional modules of a video decoder;
[00029] Fig. 5 is a schematic block diagram showing the available divisions of a block into one or more blocks in the tree structure of versatile video coding;
[00030] Fig. 6 is a schematic illustration of a dataflow to achieve permitted divisions of a block into one or more blocks in a tree structure of versatile video coding;
[00031] Figs. 7 A and 7B show an example division of a coding tree unit (CTU) into a number of coding units (CUs);
[00032] Fig. 8A shows a two-level backward diagonal scan;
[00033] Fig. 8B shows a two-level forward diagonal scan;
[00034] Fig. 9 shows a method for encoding a transform block of quantised coefficients;
[00035] Fig. 10 shows a method for decoding a transform block of quantised coefficients;
[00036] Fig. 11 shows a method for encoding a sub-block of quantised transform coefficients as performed by the method of Fig. 9;
[00037] Fig. 12 shows a method for decoding a sub-block of quantised transform coefficients as performed by the method of Fig. 10;
[00038] Fig. 13 shows a method for encoding a sub-block of quantised transform skip coefficients as performed by the method of Fig. 9;
[00039] Fig. 14 shows a method for decoding a sub-block of quantised transform skip coefficients as performed by the method of Fig. 10;
[00040] Fig. 15 shows an alternative method for encoding a sub-block of quantised transform skip coefficients; and
[00041] Fig. 16 shows an alternative method for decoding a sub-block of quantised transform skip coefficients.
DETAILED DESCRIPTION INCLUDING BEST MODE
[00042] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[00043] As described above, it may be advantageous to encode residual coefficients with a different residual coding process when a transform has not been performed. However, if two different residual coding processes are used, allowing complexity of the two processes to be similar is desirable.
[00044] Fig. l is a schematic block diagram showing functional modules of a video encoding and decoding system 100. The system 100 may utilise constraints on the secondary transform kernel, such that the non-separable secondary transform may be performed with reduced complexity, while achieving similar coding performance to an unconstrained secondary transform kernel. [00045] The system 100 includes a source device 110 and a destination device 130. A communication channel 120 is used to communicate encoded video information from the source device 110 to the destination device 130. In some arrangements, the source device 110 and destination device 130 may either or both comprise respective mobile telephone handsets or “smartphones”, in which case the communication channel 120 is a wireless channel. In other arrangements, the source device 110 and destination device 130 may comprise video conferencing equipment, in which case the communication channel 120 is typically a wired channel, such as an internet connection. Moreover, the source device 110 and the destination device 130 may comprise any of a wide range of devices, including devices supporting over- the-air television broadcasts, cable television applications, internet video applications (including streaming) and applications where encoded video data is captured on some computer-readable storage medium, such as hard disk drives in a file server.
[00046] As shown in Fig. 1, the source device 110 includes a video source 112, a video encoder 114 and a transmitter 116. The video source 112 typically comprises a source of captured video frame data (shown as 113), such as an image capture sensor, a previously captured video sequence stored on a non-transitory recording medium, or a video feed from a remote image capture sensor. The video source 112 may also be an output of a computer graphics card, for example displaying the video output of an operating system and various applications executing upon a computing device, for example a tablet computer. Examples of source devices 110 that may include an image capture sensor as the video source 112 include smart-phones, video camcorders, professional video cameras, and network video cameras.
[00047] The video encoder 114 converts (or ‘encodes’) the captured frame data (indicated by an arrow 113) from the video source 112 into a bitstream (indicated by an arrow 115) as described further with reference to Fig. 3. The bitstream 115 is transmitted by the transmitter 116 over the communication channel 120 as encoded video data (or “encoded video information”). It is also possible for the bitstream 115 to be stored in a non-transitory storage device 122, such as a “Flash” memory or a hard disk drive, until later being transmitted over the communication channel 120, or in-lieu of transmission over the communication channel 120.
[00048] The destination device 130 includes a receiver 132, a video decoder 134 and a display device 136. The receiver 132 receives encoded video data from the communication channel 120 and passes received video data to the video decoder 134 as a bitstream (indicated by an arrow 133). The video decoder 134 then outputs decoded frame data (indicated by an arrow 135) to the display device 136. The decoded frame data 135 has the same chroma format as the frame data 113. Examples of the display device 136 include a cathode ray tube, a liquid crystal display, such as in smart-phones, tablet computers, computer monitors or in stand-alone television sets. It is also possible for the functionality of each of the source device 110 and the destination device 130 to be embodied in a single device, examples of which include mobile telephone handsets and tablet computers.
[00049] Notwithstanding the example devices mentioned above, each of the source device 110 and destination device 130 may be configured within a general purpose computing system, typically through a combination of hardware and software components. Fig. 2A illustrates such a computer system 200, which includes: a computer module 201; input devices such as a keyboard 202, a mouse pointer device 203, a scanner 226, a camera 227, which may be configured as the video source 112, and a microphone 280; and output devices including a printer 215, a display device 214, which may be configured as the display device 136, and loudspeakers 217. An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221. The communications network 220, which may represent the communication channel 120, may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 221 is a telephone line, the modem 216 may be a traditional “dial-up” modem. Alternatively, where the connection 221 is a high capacity (e.g., cable or optical) connection, the modem 216 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 220. The transceiver device 216 may provide the functionality of the transmitter 116 and the receiver 132 and the communication channel 120 may be embodied in the connection 221.
[00050] The computer module 201 typically includes at least one processor unit 205, and a memory unit 206. For example, the memory unit 206 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 201 also includes a number of input/output (I/O) interfaces including: an audio-video interface 207 that couples to the video display 214, loudspeakers 217 and microphone 280; an I/O interface 213 that couples to the keyboard 202, mouse 203, scanner 226, camera 227 and optionally a joystick or other human interface device (not illustrated); and an interface 208 for the external modem 216 and printer 215. The signal from the audio-video interface 207 to the computer monitor 214 is generally the output of a computer graphics card. In some implementations, the modem 216 may be incorporated within the computer module 201, for example within the interface 208. The computer module 201 also has a local network interface 211, which permits coupling of the computer system 200 via a connection 223 to a local-area communications network 222, known as a Local Area Network (LAN). As illustrated in Fig. 2A, the local communications network 222 may also couple to the wide network 220 via a connection 224, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 211 may comprise an Ethernet™ circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 211. The local network interface 211 may also provide the functionality of the transmitter 116 and the receiver 132 and communication channel 120 may also be embodied in the local communications network 222.
[00051] The I/O interfaces 208 and 213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 212 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g. CD-ROM, DVD, Blu ray Disc™), USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the computer system 200. Typically, any of the HDD 210, optical drive 212, networks 220 and 222 may also be configured to operate as the video source 112, or as a destination for decoded video data to be stored for reproduction via the display 214. The source device 110 and the destination device 130 of the system 100 may be embodied in the computer system 200.
[00052] The components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204 and in a manner that results in a conventional mode of operation of the computer system 200 known to those in the relevant art. For example, the processor 205 is coupled to the system bus 204 using a connection 218. Likewise, the memory 206 and optical disk drive 212 are coupled to the system bus 204 by connections 219. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun SPARCstations, Apple Mac™ or alike computer systems. [00053] Where appropriate or desired, the video encoder 114 and the video decoder 134, as well as methods described below, may be implemented using the computer system 200. In particular, the video encoder 114, the video decoder 134 and methods to be described, may be implemented as one or more software application programs 233 executable within the computer system 200. In particular, the video encoder 114, the video decoder 134 and the steps of the described methods are effected by instructions 231 (see Fig. 2B) in the software 233 that are carried out within the computer system 200. The software instructions 231 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[00054] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 200 from the computer readable medium, and then executed by the computer system 200. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 200 preferably effects an advantageous apparatus for implementing the video encoder 114, the video decoder 134 and the described methods.
[00055] The software 233 is typically stored in the HDD 210 or the memory 206. The software is loaded into the computer system 200 from a computer readable medium, and executed by the computer system 200. Thus, for example, the software 233 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 225 that is read by the optical disk drive 212.
[00056] In some instances, the application programs 233 may be supplied to the user encoded on one or more CD-ROMs 225 and read via the corresponding drive 212, or alternatively may be read by the user from the networks 220 or 222. Still further, the software can also be loaded into the computer system 200 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 200 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray Disc™, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 201. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of the software, application programs, instructions and/or video data or encoded video data to the computer module 401 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[00057] The second part of the application program 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214. Through manipulation of typically the keyboard 202 and the mouse 203, a user of the computer system 200 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 217 and user voice commands input via the microphone 280.
[00058] Fig. 2B is a detailed schematic block diagram of the processor 205 and a “memory” 234. The memory 234 represents a logical aggregation of all the memory modules (including the HDD 209 and semiconductor memory 206) that can be accessed by the computer module 201 in Fig. 2A.
[00059] When the computer module 201 is initially powered up, a power-on self-test (POST) program 250 executes. The POST program 250 is typically stored in a ROM 249 of the semiconductor memory 206 of Fig. 2A. A hardware device such as the ROM 249 storing software is sometimes referred to as firmware. The POST program 250 examines hardware within the computer module 201 to ensure proper functioning and typically checks the processor 205, the memory 234 (209, 206), and a basic input-output systems software (BIOS) module 251, also typically stored in the ROM 249, for correct operation. Once the POST program 250 has run successfully, the BIOS 251 activates the hard disk drive 210 of Fig. 2A. Activation of the hard disk drive 210 causes a bootstrap loader program 252 that is resident on the hard disk drive 210 to execute via the processor 205. This loads an operating system 253 into the RAM memory 206, upon which the operating system 253 commences operation. The operating system 253 is a system level application, executable by the processor 205, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface. [00060] The operating system 253 manages the memory 234 (209, 206) to ensure that each process or application running on the computer module 201 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the computer system 200 of Fig. 2 A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 234 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 200 and how such is used.
[00061] As shown in Fig. 2B, the processor 205 includes a number of functional modules including a control unit 239, an arithmetic logic unit (ALU) 240, and a local or internal memory 248, sometimes called a cache memory. The cache memory 248 typically includes a number of storage registers 244-246 in a register section. One or more internal busses 241 functionally interconnect these functional modules. The processor 205 typically also has one or more interfaces 242 for communicating with external devices via the system bus 204, using a connection 218. The memory 234 is coupled to the bus 204 using a connection 219.
[00062] The application program 233 includes a sequence of instructions 231 that may include conditional branch and loop instructions. The program 233 may also include data 232 which is used in execution of the program 233. The instructions 231 and the data 232 are stored in memory locations 228, 229, 230 and 235, 236, 237, respectively. Depending upon the relative size of the instructions 231 and the memory locations 228-230, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 230. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 228 and 229.
[00063] In general, the processor 205 is given a set of instructions which are executed therein. The processor 205 waits for a subsequent input, to which the processor 205 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 202, 203, data received from an external source across one of the networks 220, 202, data retrieved from one of the storage devices 206, 209 or data retrieved from a storage medium 225 inserted into the corresponding reader 212, all depicted in Fig. 2A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 234.
[00064] The video encoder 114, the video decoder 134 and the described methods may use input variables 254, which are stored in the memory 234 in corresponding memory locations 255, 256, 257. The video encoder 114, the video decoder 134 and the described methods produce output variables 261, which are stored in the memory 234 in corresponding memory locations 262, 263, 264. Intermediate variables 258 may be stored in memory locations 259, 260, 266 and 267.
[00065] Referring to the processor 205 of Fig. 2B, the registers 244, 245, 246, the arithmetic logic unit (ALU) 240, and the control unit 239 work together to perform sequences of micro operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 233. Each fetch, decode, and execute cycle comprises: a fetch operation, which fetches or reads an instruction 231 from a memory location 228, 229, 230; a decode operation in which the control unit 239 determines which instruction has been fetched; and an execute operation in which the control unit 239 and/or the ALU 240 execute the instruction.
[00066] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 239 stores or writes a value to a memory location 232.
[00067] Each step or sub-process in the method of Figs. 9 to 16, to be described, is associated with one or more segments of the program 233 and is typically performed by the register section 244, 245, 247, the ALU 240, and the control unit 239 in the processor 205 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 233.

[00068] Fig. 3 shows a schematic block diagram showing functional modules of the video encoder 114. Fig. 4 shows a schematic block diagram showing functional modules of the video decoder 134. Generally, data passes between functional modules within the video encoder 114 and the video decoder 134 in groups of samples or coefficients, such as divisions of blocks into sub-blocks of a fixed size, or as arrays. The video encoder 114 and video decoder 134 may be implemented using a general-purpose computer system 200, as shown in Figs. 2A and 2B, where the various functional modules may be implemented by dedicated hardware within the computer system 200, or by software executable within the computer system 200 such as one or more software code modules of the software application program 233 resident on the hard disk drive 210 and being controlled in its execution by the processor 205. Alternatively, the video encoder 114 and video decoder 134 may be implemented by a combination of dedicated hardware and software executable within the computer system 200. The video encoder 114, the video decoder 134 and the described methods may alternatively be implemented in dedicated hardware, such as one or more integrated circuits performing the functions or sub-functions of the described methods. Such dedicated hardware may include graphic processing units (GPUs), digital signal processors (DSPs), application-specific standard products (ASSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or one or more microprocessors and associated memories. In particular, the video encoder 114 comprises modules 310-386 and the video decoder 134 comprises modules 420-496 which may each be implemented as one or more software code modules of the software application program 233.
[00069] Although the video encoder 114 of Fig. 3 is an example of a versatile video coding (VVC) video encoding pipeline, other video codecs may also be used to perform the processing stages described herein. The video encoder 114 receives captured frame data 113, such as a series of frames, each frame including one or more colour channels. The frame data 113 may be in any chroma format, for example 4:0:0, 4:2:0, 4:2:2, or 4:4:4 chroma format. A block partitioner 310 firstly divides the frame data 113 into CTUs, generally square in shape and configured such that a particular size for the CTUs is used. The size of the CTUs may be 64x64, 128x128, or 256x256 luma samples for example. The block partitioner 310 further divides each CTU into one or more CBs according to a luma coding tree and a chroma coding tree. The CBs have a variety of sizes, and may include both square and non-square aspect ratios. In the VVC standard, CBs, CUs, PUs, and TUs always have side lengths that are powers of two. Thus, a current CB, represented as 312, is output from the block partitioner 310, progressing in accordance with an iteration over the one or more blocks of the CTU, in accordance with the luma coding tree and the chroma coding tree of the CTU. Options for partitioning CTUs into CBs are further described below with reference to Figs. 5 and 6.
[00070] The CTUs resulting from the first division of the frame data 113 may be scanned in raster scan order and may be grouped into one or more ‘slices’. A slice may be an ‘intra’ (or ‘I’) slice. An intra slice (I slice) indicates that every CU in the slice is intra predicted. Alternatively, a slice may be uni- or bi-predicted (‘P’ or ‘B’ slice, respectively), indicating additional availability of uni- and bi-prediction in the slice, respectively.
[00071] For each CTU, the video encoder 114 operates in two stages. In the first stage (referred to as a ‘search’ stage), the block partitioner 310 tests various potential configurations of a coding tree. Each potential configuration of a coding tree has associated ‘candidate’ CBs. The first stage involves testing various candidate CBs to select CBs providing high compression efficiency with low distortion. The testing generally involves a Lagrangian optimisation whereby a candidate CB is evaluated based on a weighted combination of the rate (coding cost) and the distortion (error with respect to the input frame data 113). The ‘best’ candidate CBs (the CBs with the lowest evaluated rate/distortion) are selected for subsequent encoding into the bitstream 115. Included in evaluation of candidate CBs is an option to use a CB for a given area or to further split the area according to various splitting options and code each of the smaller resulting areas with further CBs, or split the areas even further. As a consequence, both the CBs and the coding tree themselves are selected in the search stage.
[00072] The video encoder 114 produces a prediction block (PB), indicated by an arrow 320, for each CB, for example the CB 312. The PB 320 is a prediction of the contents of the associated CB 312. A subtracter module 322 produces a difference, indicated as 324 (or ‘residual’, referring to the difference being in the spatial domain), between the PB 320 and the CB 312. The residual 324 is a block-size difference between corresponding samples in the PB 320 and the CB 312. The residual 324 is transformed, quantised and represented as a transform block (TB), indicated by an arrow 336. The PB 320 and associated TB 336 are typically chosen from one of many possible candidate CBs, for example based on evaluated cost or distortion.
[00073] A candidate coding block (CB) is a CB resulting from one of the prediction modes available to the video encoder 114 for the associated PB and the resulting residual. Each candidate CB results in one or more corresponding TBs. The TB 336 is a quantised and transformed representation of the residual 324. When combined with the predicted PB in the video decoder 134, the TB 336 reduces the difference between decoded CBs and the original CB 312 at the expense of additional signalling in a bitstream.
[00074] Each candidate coding block (CB), that is prediction block (PB) in combination with a transform block (TB), thus has an associated coding cost (or ‘rate’) and an associated difference (or ‘distortion’). The rate is typically measured in bits. The distortion of the CB is typically estimated as a difference in sample values, such as a sum of absolute differences (SAD) or a sum of squared differences (SSD). The estimate resulting from each candidate PB may be determined by a mode selector 386 using the residual 324 to determine a prediction mode (represented by an arrow 388). Estimation of the coding costs associated with each candidate prediction mode and corresponding residual coding can be performed at significantly lower cost than entropy coding of the residual. Accordingly, a number of candidate modes can be evaluated to determine an optimum mode in a rate-distortion sense.
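By way of illustration only, the rate/distortion comparison described above can be sketched as follows; the block representation (two-dimensional lists of samples) and the cost weighting are assumptions made for this example rather than the exact computation of the mode selector 386:

    def sad(original, prediction):
        # Sum of absolute differences between corresponding samples.
        return sum(abs(o - p)
                   for orig_row, pred_row in zip(original, prediction)
                   for o, p in zip(orig_row, pred_row))

    def ssd(original, prediction):
        # Sum of squared differences between corresponding samples.
        return sum((o - p) ** 2
                   for orig_row, pred_row in zip(original, prediction)
                   for o, p in zip(orig_row, pred_row))

    def rd_cost(rate_bits, distortion, lagrange_multiplier):
        # Weighted combination of rate and distortion used to compare candidate CBs.
        return distortion + lagrange_multiplier * rate_bits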
[00075] Determining an optimum mode in terms of rate-distortion is typically achieved using a variation of Lagrangian optimisation. Selection of the prediction mode 388 typically involves determining a coding cost for the residual data resulting from application of a particular prediction mode. The coding cost may be approximated by using a ‘sum of absolute transformed differences’ (SATD) whereby a relatively simple transform, such as a Hadamard transform, is used to obtain an estimated transformed residual cost. In some implementations using relatively simple transforms, the costs resulting from the simplified estimation method are monotonically related to the actual costs that would otherwise be determined from a full evaluation. In implementations with monotonically related estimated costs, the simplified estimation method may be used to make the same decision (i.e. prediction mode) with a reduction in complexity in the video encoder 114. To allow for possible non-monotonicity in the relationship between estimated and actual costs, the simplified estimation method may be used to generate a list of best candidates. The non-monotonicity may result from further mode decisions available for the coding of residual data, for example. The list of best candidates may be of an arbitrary number. A more complete search may be performed using the best candidates to establish optimal mode choices for coding the residual data for each of the candidates, allowing a final selection of the prediction mode 388 along with other mode decisions. [00076] Prediction modes fall broadly into two categories. A first category is ‘intra-frame prediction’ (also referred to as ‘intra prediction’). In intra-frame prediction, a prediction for a block is generated, and the generation method may use other samples obtained from the current frame. Types of intra prediction include intra planar, intra DC, intra angular, and matrix weighted intra prediction (MIP). For an intra-predicted PB, it is possible for different intra prediction modes to be used for luma and chroma, and thus intra prediction is described primarily in terms of operation upon PBs. Additionally, chroma CBs may be predicted from co-located luma samples by a cross-component linear model prediction.
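A minimal sketch of the ‘sum of absolute transformed differences’ (SATD) estimate described in paragraph [00075] is given below; the 4x4 Hadamard matrix and the absence of normalisation are illustrative assumptions rather than the exact cost function used by the video encoder 114:

    import numpy as np

    # 4x4 Hadamard matrix used to approximate a transform-domain cost.
    H4 = np.array([[1,  1,  1,  1],
                   [1, -1,  1, -1],
                   [1,  1, -1, -1],
                   [1, -1, -1,  1]])

    def satd_4x4(original, prediction):
        # Transform the spatial-domain residual with a Hadamard transform
        # and sum the absolute values of the transformed differences.
        residual = np.asarray(original, dtype=np.int64) - np.asarray(prediction, dtype=np.int64)
        transformed = H4 @ residual @ H4.T
        return int(np.abs(transformed).sum())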
[00077] The second category of prediction modes is ‘inter-frame prediction’ (also referred to as ‘inter prediction’). In inter-frame prediction a prediction for a block is produced using samples from one or two frames preceding the current frame in an order of coding frames in the bitstream. Moreover, for inter-frame prediction, a single coding tree is typically used for both the luma channel and the chroma channels. The order of coding frames in the bitstream may differ from the order of the frames when captured or displayed. When one frame is used for prediction, the block is said to be ‘uni-predicted’ and has one associated motion vector. When two frames are used for prediction, the block is said to be ‘bi-predicted’ and has two associated motion vectors. For a P slice, each CU may be intra predicted or uni-predicted. For a B slice, each CU may be intra predicted, uni-predicted, or bi-predicted. Frames are typically coded using a ‘group of pictures’ structure, enabling a temporal hierarchy of frames. A temporal hierarchy of frames allows a frame to reference a preceding and a subsequent picture in the order of displaying the frames. The images are coded in the order necessary to ensure the dependencies for decoding each frame are met.
[00078] A subcategory of inter prediction is referred to as ‘skip mode’. Inter prediction and skip modes are described as two distinct modes. However, both inter prediction mode and skip mode involve motion vectors referencing blocks of samples from preceding frames. Inter prediction involves a coded motion vector delta, specifying a motion vector relative to a motion vector predictor. The motion vector predictor is obtained from a list of one or more candidate motion vectors, selected with a ‘merge index’. The coded motion vector delta provides a spatial offset to a selected motion vector prediction. Inter prediction also uses a coded residual in the bitstream 133. Skip mode uses only an index (also named a ‘merge index’) to select one out of several motion vector candidates. The selected candidate is used without any further signalling. Also, skip mode does not support coding of any residual coefficients. The absence of coded residual coefficients when the skip mode is used means that there is no need to perform transforms for the skip mode. Therefore, skip mode does not typically result in pipeline processing issues. Pipeline processing issues may, in contrast, arise for intra predicted CUs and inter predicted CUs. Due to the limited signalling of the skip mode, skip mode is useful for achieving very high compression performance when relatively high quality reference frames are available. Bi-predicted CUs in higher temporal layers of a random-access group-of-picture structure typically have high quality reference pictures and motion vector candidates that accurately reflect underlying motion.
[00079] The samples are selected according to a motion vector and reference picture index.
The motion vector and reference picture index apply to all colour channels and thus inter prediction is described primarily in terms of operation upon PUs rather than PBs. Within each category (that is, intra- and inter-frame prediction), different techniques may be applied to generate the PU. For example, intra prediction may use values from adjacent rows and columns of previously reconstructed samples, in combination with a direction to generate a PU according to a prescribed filtering and generation process. Alternatively, the PU may be described using a small number of parameters. Inter prediction methods may vary in the number of motion parameters and their precision. Motion parameters typically comprise a reference frame index, indicating which reference frame(s) from lists of reference frames are to be used plus a spatial translation for each of the reference frames, but may include more frames, special frames, or complex affine parameters such as scaling and rotation. In addition, a predetermined motion refinement process may be applied to generate dense motion estimates based on referenced sample blocks.
[00080] Lagrangian or similar optimisation processing can be employed to both select an optimal partitioning of a CTU into CBs (by the block partitioner 310) as well as the selection of a best prediction mode from a plurality of possibilities. Through application of a Lagrangian optimisation process of the candidate modes in the mode selector module 386, the prediction mode with the lowest cost measurement is selected as the ‘best’ mode. The lowest cost mode is the selected prediction mode 388 and is also encoded in the bitstream 115 by an entropy encoder 338. The selection of the prediction mode 388 by operation of the mode selector module 386 extends to operation of the block partitioner 310. For example, candidates for selection of the prediction mode 388 may include modes applicable to a given block and additionally modes applicable to multiple smaller blocks that collectively are collocated with the given block. In cases including modes applicable to a given block and smaller collocated blocks, the process of selection of candidates implicitly is also a process of determining the best hierarchical decomposition of the CTU into CBs.
[00081] In the second stage of operation of the video encoder 114 (referred to as a ‘coding’ stage), an iteration over the selected luma coding tree and the selected chroma coding tree, and hence each selected CB, is performed in the video encoder 114. In the iteration, the CBs are encoded into the bitstream 115, as described further herein.
[00082] The entropy encoder 338 supports both variable-length coding of syntax elements and arithmetic coding of syntax elements. Arithmetic coding is supported using a context-adaptive binary arithmetic coding (CABAC) process. Arithmetically coded syntax elements consist of sequences of one or more ‘bins’. Bins, like bits, have a value of ‘0’ or ‘1’. Bins are not encoded in the bitstream 115 as discrete bits. Bins have an associated predicted (or ‘likely’ or ‘most probable’) value and an associated probability, known as a ‘context’. When the actual bin to be coded matches the predicted value, a ‘most probable symbol’ (MPS) is coded. Coding a most probable symbol is relatively inexpensive in terms of consumed bits. When the actual bin to be coded mismatches the likely value, a ‘least probable symbol’ (LPS) is coded. Coding a least probable symbol has a relatively high cost in terms of consumed bits. The bin coding techniques enable efficient coding of bins where the probability of a ‘0’ versus a ‘1’ is skewed. For a syntax element with two possible values (that is, a ‘flag’), a single bin is adequate. For syntax elements with many possible values, a sequence of bins is needed.
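The following toy model illustrates how a context tracks a most probable symbol (MPS) and adapts as bins are coded; it is a simplified sketch under assumed parameters, not the CABAC engine itself, and the probability representation and adaptation step are arbitrary:

    class BinContext:
        """Simplified adaptive context for a single context-coded bin."""

        def __init__(self, p_mps=0.5, step=0.05):
            self.mps = 0          # current most probable symbol value
            self.p_mps = p_mps    # estimated probability of the MPS
            self.step = step      # illustrative adaptation step size

        def update(self, bin_value):
            # Coding an MPS is cheap; coding an LPS is expensive.
            if bin_value == self.mps:
                self.p_mps = min(self.p_mps + self.step, 0.99)
            else:
                self.p_mps = max(self.p_mps - self.step, 0.01)
                if self.p_mps < 0.5:
                    # Swap MPS and LPS when the estimate crosses one half.
                    self.mps = 1 - self.mps
                    self.p_mps = 1.0 - self.p_mps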
[00083] The presence of later bins in the sequence may be determined based on the value of earlier bins in the sequence. Additionally, each bin may be associated with more than one context. The selection of a particular context can be dependent on earlier bins in the syntax element, the bin values of neighbouring syntax elements (i.e. those from neighbouring blocks) and the like. Each time a context-coded bin is encoded, the context that was selected for that bin (if any) is updated in a manner reflective of the new bin value. As such, the binary arithmetic coding scheme is said to be adaptive.
[00084] Also supported by the video encoder 114 are bins that lack a context (‘bypass bins’). Bypass bins are coded assuming an equiprobable distribution between a ‘0’ and a ‘1’. Thus, each bin occupies one bit in the bitstream 115. The absence of a context saves memory and reduces complexity, and thus bypass bins are used where the distribution of values for the particular bin is not skewed.

[00085] The entropy encoder 338 encodes the prediction mode 388 using a combination of context-coded and bypass-coded bins. For example, when the prediction mode 388 is an intra prediction mode, a list of ‘most probable modes’ is generated in the video encoder 114. The list of most probable modes is typically of a fixed length, such as three or six modes, and may include modes encountered in earlier blocks. A context-coded bin encodes a flag indicating if the prediction mode is one of the most probable modes. If the intra prediction mode 388 is one of the most probable modes, further signalling, using bypass-coded bins, is encoded. The encoded further signalling is indicative of which most probable mode corresponds with the intra prediction mode 388, for example using a truncated unary bin string. Otherwise, the intra prediction mode 388 is encoded as a ‘remaining mode’. Encoding as a remaining mode uses an alternative syntax, such as a fixed-length code, also coded using bypass-coded bins, to express intra prediction modes other than those present in the most probable mode list.
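A sketch of the intra prediction mode binarisation described in paragraph [00085] follows; the most probable mode list handling, the six-bit fixed-length remaining-mode code, and the bin ordering are illustrative assumptions only:

    def binarise_intra_mode(mode, mpm_list, remaining_bits=6):
        """Return (context_coded_bins, bypass_coded_bins) for an intra mode."""
        if mode in mpm_list:
            index = mpm_list.index(mode)
            # MPM flag (context coded) then a truncated unary index (bypass coded).
            truncated_unary = [1] * index
            if index < len(mpm_list) - 1:
                truncated_unary.append(0)
            return [1], truncated_unary
        # Remaining mode: MPM flag of 0 then a fixed-length bypass-coded string.
        remaining_modes = sorted(m for m in range(2 ** remaining_bits) if m not in mpm_list)
        index = remaining_modes.index(mode)
        fixed_length = [(index >> b) & 1 for b in reversed(range(remaining_bits))]
        return [0], fixed_length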
[00086] A multiplexer module 384 outputs the PB 320 according to the determined best prediction mode 388, selecting from the tested prediction mode of each candidate CB. The candidate prediction modes need not include every conceivable prediction mode supported by the video encoder 114.
[00087] Having determined and selected the PB 320, and subtracted the PB 320 from the original sample block at the subtractor 322, a residual with lowest coding cost, represented as 324, is obtained and subjected to lossy compression. The lossy compression process comprises the steps of transformation, quantisation and entropy coding. A forward primary transform module 326 applies a forward transform to the residual 324, converting the residual 324 from the spatial domain to the frequency domain, and producing primary transform coefficients represented by an arrow 328. The primary transform coefficients 328 are passed to a forward secondary transform module 330 to produce transform coefficients represented by an arrow 332 by performing a non-separable secondary transform (NSST) operation. The forward primary transform is typically separable, transforming a set of rows and then a set of columns of each block, typically using a type-II discrete cosine transform (DCT-2), although a type-VII discrete sine transform (DST-7) and a type-VIII discrete cosine transform (DCT-8) may also be available, for example horizontally for block widths not exceeding 16 samples and vertically for block heights not exceeding 16 samples. The transformation of each set of rows and columns is performed by applying one-dimensional transforms firstly to each row of a block to produce an intermediate result and then to each column of the intermediate result to produce a final result. The forward secondary transform is generally a non-separable transform, which is only applied for the residual of intra-predicted CUs and may nonetheless also be bypassed. The forward secondary transform operates either on 16 samples (arranged as the upper-left 4x4 sub-block of the primary transform coefficients 328) or 64 samples (arranged as the upper-left 8x8 coefficients, arranged as four 4x4 sub-blocks of the primary transform coefficients 328). Moreover, the matrix coefficients of the forward secondary transform are selected from multiple sets according to the intra prediction mode of the CU such that two sets of coefficients are available for use. The use of one of the sets of matrix coefficients, or the bypassing of the forward secondary transform, is signalled with an “nsst index” syntax element, coded using a truncated unary binarisation to express the values zero (secondary transform not applied), one (first set of matrix coefficients selected), or two (second set of matrix coefficients selected).
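Purely as an illustration of the separable row-then-column application of the primary transform, the sketch below builds an orthonormal DCT-2 matrix from first principles; the integer transform matrices and scaling of the actual codec are not reproduced:

    import numpy as np

    def dct2_matrix(n):
        # Orthonormal type-II DCT basis of size n x n.
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        basis[0, :] *= 1.0 / np.sqrt(2.0)
        return basis * np.sqrt(2.0 / n)

    def forward_primary_transform(residual):
        # Apply a one-dimensional transform to each row, then to each
        # column of the intermediate result, as described above.
        block = np.asarray(residual, dtype=float)
        rows, cols = block.shape
        row_transformed = block @ dct2_matrix(cols).T
        return dct2_matrix(rows) @ row_transformed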
[00088] The video encoder 114 may also choose to skip both the primary and secondary transforms, known as ‘transform skip’ mode. Skipping the transforms is suited to residual data that lacks adequate correlation for reduced coding cost via expression as transform basis functions. Certain types of content, such as relatively simple computer generated graphics may exhibit similar behaviour. When transform skip mode is used, the transform coefficients 332 are the same as the residual coefficients 324.
[00089] The transform coefficients 332 are passed to a quantiser module 334. At the module 334, quantisation in accordance with a ‘quantisation parameter’ is performed to produce quantised coefficients, represented by the arrow 336. The quantisation parameter is constant for a given TB and thus results in a uniform scaling for the production of residual coefficients for a TB. A non-uniform scaling is also possible by application of a ‘quantisation matrix’, whereby the scaling factor applied for each residual coefficient is derived from a combination of the quantisation parameter and the corresponding entry in a scaling matrix, typically having a size equal to that of the TB. The scaling matrix may have a size that is smaller than the size of the TB, and when applied to the TB a nearest neighbour approach is used to provide scaling values for each residual coefficient from a scaling matrix smaller in size than the TB size. The quantised coefficients 336 are supplied to the entropy encoder 338 for encoding in the bitstream 115. Typically, the quantised coefficients of each TB with at least one significant quantised coefficient are scanned to produce an ordered list of values, according to a scan pattern. The scan pattern generally scans the TB as a sequence of 4x4 ‘sub-blocks’, providing a regular scanning operation at the granularity of 4x4 sets of residual coefficients, with the arrangement of sub-blocks dependent on the size of the TB. Additionally, the prediction mode 388 and the corresponding block partitioning are also encoded in the bitstream 115. [00090] As described above, the video encoder 114 needs access to a frame representation corresponding to the frame representation seen by the video decoder 134. Thus, the quantised coefficients 336 are also inverse quantised by a dequantiser module 340 to produce reconstructed transform coefficients, represented by an arrow 342. The reconstructed transform coefficients 342 are passed through an inverse secondary transform module 344 to produce reconstructed primary transform coefficients, represented by an arrow 346. The reconstructed primary transform coefficients 346 are passed to an inverse primary transform module 348 to produce reconstructed residual samples, represented by an arrow 350, of the TU. The types of inverse transform performed by the inverse secondary transform module 344 correspond with the types of forward transform performed by the forward secondary transform module 330.
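A minimal sketch of uniform scalar quantisation driven by a quantisation parameter is shown below; the mapping of the quantisation parameter to a step size (the step approximately doubling for every increase of six) follows the usual convention of such codecs, but the exact scaling and rounding here are assumptions:

    def quantisation_step(qp):
        # Step size approximately doubles for every increase of 6 in QP.
        return 2.0 ** ((qp - 4) / 6.0)

    def quantise(transform_coefficients, qp):
        # Uniform scaling of all coefficients in the TB by a single step size.
        step = quantisation_step(qp)
        return [[int(round(c / step)) for c in row] for row in transform_coefficients]

    def dequantise(quantised_coefficients, qp):
        # Inverse scaling, as performed by the dequantiser module.
        step = quantisation_step(qp)
        return [[q * step for q in row] for row in quantised_coefficients]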
The types of inverse transform performed by the inverse primary transform module 348 correspond with the types of primary transform performed by the primary transform module 326. A summation module 352 adds the reconstructed residual samples 350 and the PU 320 to produce reconstructed samples (indicated by an arrow 354) of the CU.
[00091] The reconstructed samples 354 are passed to a reference sample cache 356 and an in-loop filters module 368. The reference sample cache 356, typically implemented using static RAM on an ASIC (thus avoiding costly off-chip memory access), provides minimal sample storage needed to satisfy the dependencies for generating intra-frame PBs for subsequent CUs in the frame. The minimal dependencies typically include a ‘line buffer’ of samples along the bottom of a row of CTUs, for use by the next row of CTUs, and column buffering, the extent of which is set by the height of the CTU. The reference sample cache 356 supplies reference samples (represented by an arrow 358) to a reference sample filter 360. The sample filter 360 applies a smoothing operation to produce filtered reference samples (indicated by an arrow 362). The filtered reference samples 362 are used by an intra-frame prediction module 364 to produce an intra-predicted block of samples, represented by an arrow 366. For each candidate intra prediction mode the intra-frame prediction module 364 produces a block of samples, that is, the block 366.
[00092] The in-loop filters module 368 applies several filtering stages to the reconstructed samples 354. The filtering stages include a ‘deblocking filter’ (DBF) which applies smoothing aligned to the CU boundaries to reduce artefacts resulting from discontinuities. Another filtering stage present in the in-loop filters module 368 is an ‘adaptive loop filter’ (ALF), which applies a Wiener-based adaptive filter to further reduce distortion. A further available filtering stage in the in-loop filters module 368 is a ‘sample adaptive offset’ (SAO) filter. The SAO filter operates by firstly classifying reconstructed samples into one or multiple categories and, according to the allocated category, applying an offset at the sample level.
[00093] Filtered samples, represented by an arrow 370, are output from the in-loop filters module 368. The filtered samples 370 are stored in a frame buffer 372. The frame buffer 372 typically has the capacity to store several (for example up to 16) pictures and thus is stored in the memory 206. The frame buffer 372 is not typically stored using on-chip memory due to the large memory consumption required. As such, access to the frame buffer 372 is costly in terms of memory bandwidth. The frame buffer 372 provides reference frames (represented by an arrow 374) to a motion estimation module 376 and a motion compensation module 380.
[00094] The motion estimation module 376 estimates a number of ‘motion vectors’ (indicated as 378), each being a Cartesian spatial offset from the location of the present CB, referencing a block in one of the reference frames in the frame buffer 372. A filtered block of reference samples (represented as 382) is produced for each motion vector. The filtered reference samples 382 form further candidate modes available for potential selection by the mode selector 386. Moreover, for a given CU, the PU 320 may be formed using one reference block (‘uni-predicted’) or may be formed using two reference blocks (‘bi-predicted’). For the selected motion vector, the motion compensation module 380 produces the PB 320 in accordance with a filtering process supportive of sub-pixel accuracy in the motion vectors. As such, the motion estimation module 376 (which operates on many candidate motion vectors) may perform a simplified filtering process compared to that of the motion compensation module 380 (which operates on the selected candidate only) to achieve reduced computational complexity. When the video encoder 114 selects inter prediction for a CU the motion vector 378 is encoded into the bitstream 115.
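An illustrative full-search block-matching sketch follows; the SAD cost, the integer-pel search and the fixed search range are simplifying assumptions relative to the sub-pixel-accurate estimation performed by the motion estimation module 376:

    def motion_estimate(current_block, reference_frame, block_x, block_y, search_range=8):
        """Return the integer (dx, dy) motion vector with the lowest SAD."""
        height, width = len(current_block), len(current_block[0])
        ref_height, ref_width = len(reference_frame), len(reference_frame[0])
        best_cost, best_mv = None, (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                ry, rx = block_y + dy, block_x + dx
                if ry < 0 or rx < 0 or ry + height > ref_height or rx + width > ref_width:
                    continue  # candidate reference block falls outside the frame
                cost = sum(abs(current_block[y][x] - reference_frame[ry + y][rx + x])
                           for y in range(height) for x in range(width))
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
        return best_mv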
[00095] Although the video encoder 114 of Fig. 3 is described with reference to versatile video coding (VVC), other video coding standards or implementations may also employ the processing stages of modules 310-386. The frame data 113 (and bitstream 115) may also be read from (or written to) memory 206, the hard disk drive 210, a CD-ROM, a Blu-ray disk™ or other computer readable storage medium. Additionally, the frame data 113 (and bitstream 115) may be received from (or transmitted to) an external source, such as a server connected to the communications network 220 or a radio-frequency receiver.

[00096] The video decoder 134 is shown in Fig. 4. Although the video decoder 134 of Fig. 4 is an example of a versatile video coding (VVC) video decoding pipeline, other video codecs may also be used to perform the processing stages described herein. As shown in Fig. 4, the bitstream 133 is input to the video decoder 134. The bitstream 133 may be read from memory 206, the hard disk drive 210, a CD-ROM, a Blu-ray disk™ or other non-transitory computer readable storage medium. Alternatively, the bitstream 133 may be received from an external source such as a server connected to the communications network 220 or a radio-frequency receiver. The bitstream 133 contains encoded syntax elements representing the captured frame data to be decoded.
[00097] The bitstream 133 is input to an entropy decoder module 420. The entropy decoder module 420 extracts syntax elements from the bitstream 133 by decoding sequences of ‘bins’ and passes the values of the syntax elements to other modules in the video decoder 134. One example of a syntax element extracted from the bitstream 133 is the quantised coefficients 424. The entropy decoder module 420 uses an arithmetic decoding engine to decode each syntax element as a sequence of one or more bins. Each bin may use one or more ‘contexts’, with a context describing probability levels to be used for coding a ‘one’ and a ‘zero’ value for the bin. Where multiple contexts are available for a given bin, a ‘context modelling’ or ‘context selection’ step is performed to choose one of the available contexts for decoding the bin. The process of decoding bins forms a sequential feedback loop. The number of operations in the feedback loop is preferably minimised to enable the entropy decoder 420 to achieve a high throughput in bins/second. Context modelling depends on other properties of the bitstream known to the video decoder 134 at the time of selecting the context, that is, properties preceding the current bin. For example, a context may be selected based on the quad-tree depth of the current CU in the coding tree. Dependencies are preferably based on properties that are known well in advance of decoding a bin, or are determined without requiring long sequential processes.
[00098] The quantised coefficients 424 are input to a dequantiser module 428. The dequantiser module 428 performs inverse quantisation (or ‘scaling’) on the quantised coefficients 424 to create reconstructed intermediate transform coefficients, represented by an arrow 432, according to a quantisation parameter. Should use of a non-uniform inverse quantisation matrix be indicated in the bitstream 133, the video decoder 134 reads a quantisation matrix from the bitstream 133 as a sequence of scaling factors and arranges the scaling factors into a matrix.
The inverse scaling uses the quantisation matrix in combination with the quantisation parameter to create the reconstructed intermediate transform coefficients 432. The reconstructed intermediate transform coefficients 432 are passed to an inverse secondary transform module 436 where a secondary transform may be applied, in accordance with a decoded “nsst index” syntax element. The “nsst index” is decoded from the bitstream 133 by the entropy decoder 420, under execution of the processor 205. The inverse secondary transform module 436 produces reconstructed transform coefficients 440.
[000099] The reconstructed transform coefficients 440 are passed to an inverse primary transform module 444. The module 444 transforms the coefficients from the frequency domain back to the spatial domain. The result of operation of the module 444 is a block of residual samples, represented by an arrow 448. The block of residual samples 448 is equal in size to the corresponding CU. The type of inverse primary transform may be a type-II discrete cosine transform (DCT-2), a type-VII discrete sine transform (DST-7), a type-VIII discrete cosine transform (DCT-8), or a ‘transform skip’ mode. The use of transform skip mode is signalled by a transform skip flag, which may be decoded from the bitstream 133 or otherwise inferred. When transform skip mode is used, the residual samples 448 are the same as the reconstructed transform coefficients 440.
[000100] The residual samples 448 are supplied to a summation module 450. At the summation module 450 the residual samples 448 are added to a decoded PB (represented as 452) to produce a block of reconstructed samples, represented by an arrow 456. The reconstructed samples 456 are supplied to a reconstructed sample cache 460 and an in-loop filtering module 488. The in-loop filtering module 488 produces reconstructed blocks of frame samples, represented as 492. The frame samples 492 are written to a frame buffer 496.
[000101] The reconstructed sample cache 460 operates similarly to the reference sample cache 356 of the video encoder 114. The reconstructed sample cache 460 provides storage for reconstructed samples needed to intra predict subsequent CBs without accessing the memory 206 (for example by using the data 232 instead, which is typically on-chip memory). Reference samples, represented by an arrow 464, are obtained from the reconstructed sample cache 460 and supplied to a reference sample filter 468 to produce filtered reference samples indicated by arrow 472. The filtered reference samples 472 are supplied to an intra-frame prediction module 476. The module 476 produces a block of intra-predicted samples, represented by an arrow 480, in accordance with an intra prediction mode parameter 458 signalled in the bitstream 133 and decoded by the entropy decoder 420.

[000102] When the prediction mode of a CB is indicated to be intra prediction in the bitstream 133, the intra-predicted samples 480 form the decoded PB 452 via a multiplexor module 484. Intra prediction produces a prediction block (PB) of samples, that is, a block in one colour component, derived using ‘neighbouring samples’ in the same colour component. The neighbouring samples are samples adjacent to the current block and, by virtue of preceding the current block in the block decoding order, have already been reconstructed. Where luma and chroma blocks are collocated, the luma and chroma blocks may use different intra prediction modes. However, the two chroma channels each share the same intra prediction mode.
[000103] Intra prediction for luma blocks consists of four types. “DC intra prediction” involves populating a PB with a single value representing the average of the neighbouring samples. “Planar intra prediction” involves populating a PB with samples according to a plane, with a DC offset and a vertical and horizontal gradient being derived from the neighbouring samples. “Angular intra prediction” involves populating a PB with neighbouring samples filtered and propagated across the PB in a particular direction (or ‘angle’). In VVC a PB may select from up to 65 angles, with rectangular blocks able to utilise different angles not available to square blocks. “Matrix intra prediction” involves populating a PB by multiplying a reduced set of neighbouring samples by one of a number of matrices available to the video decoder 134. The reduced set of neighbouring samples is produced by filtering and subsampling the neighbouring samples. Then, a reduced set of prediction samples is produced by multiplying the reduced set of samples by a matrix, and adding an offset vector. The matrix and associated offset vector are selected from a number of possible matrices depending on the size of the PB, with a particular selection of matrix and offset vector being indicated by a “MIP mode” syntax element. For example, for PBs with size greater than 8x8 there are 11 MIP modes, while for PBs of size 8x8 there are 19 MIP modes. Finally, the PB produced by matrix intra prediction is populated from the reduced set of prediction samples by interpolation.
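A simplified sketch of DC intra prediction, the first of the four luma types listed above, is given below; the border handling and rounding are assumptions made for illustration:

    def dc_intra_prediction(above_samples, left_samples, width, height):
        # Populate the prediction block with the average of the
        # neighbouring reconstructed samples above and to the left.
        neighbours = list(above_samples[:width]) + list(left_samples[:height])
        dc_value = (sum(neighbours) + len(neighbours) // 2) // len(neighbours)
        return [[dc_value] * width for _ in range(height)]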
[000104] A fifth type of intra prediction is available to chroma PBs, whereby the PB is generated from collocated luma reconstructed samples according to a ‘cross-component linear model’ (CCLM) mode. Three different CCLM modes are available, each of which uses a different model derived from the neighbouring luma and chroma samples. The derived model is then used to generate a block of samples for the chroma PB from the collocated luma samples.

[000105] When the prediction mode of a CB is indicated to be inter prediction in the bitstream 133, a motion compensation module 434 produces a block of inter-predicted samples, represented as 438, using a motion vector and reference frame index to select and filter a block of samples 498 from the frame buffer 496. The block of samples 498 is obtained from a previously decoded frame stored in the frame buffer 496. For bi-prediction, two blocks of samples are produced and blended together to produce samples for the decoded PB 452. The frame buffer 496 is populated with filtered block data 492 from an in-loop filtering module 488. As with the in-loop filtering module 368 of the video encoder 114, the in-loop filtering module 488 applies any of the DBF, the ALF and SAO filtering operations. Generally, the motion vector is applied to both the luma and chroma channels, although the filtering processes for sub-sample interpolation of the luma and chroma channels are different. The frame buffer 496 outputs the decoded video samples 135.
[000106] Fig. 5 is a schematic block diagram showing a collection 500 of available divisions or splits of a region into one or more sub-regions in the tree structure of versatile video coding.
The divisions shown in the collection 500 are available to the block partitioner 310 of the encoder 114 to divide each CTU into one or more CUs or CBs according to a coding tree, as determined by the Lagrangian optimisation, as described with reference to Fig. 3.
[000107] Although the collection 500 shows only square regions being divided into other, possibly non-square sub-regions, it should be understood that the diagram 500 is showing the potential divisions but not requiring the containing region to be square. If the containing region is non-square, the dimensions of the blocks resulting from the division are scaled according to the aspect ratio of the containing block. Once a region is not further split, that is, at a leaf node of the coding tree, a CU occupies that region. The particular subdivision of a CTU into one or more CUs by the block partitioner 310 is referred to as the ‘coding tree’ of the CTU.
[000108] The process of subdividing regions into sub-regions must terminate when the resulting sub-regions reach a minimum CU size. In addition to constraining CUs to prohibit block areas smaller than a predetermined minimum size, for example 16 samples, CUs are constrained to have a minimum width or height of four. Other minimums, either in terms of both width and height or in terms of width or height alone, are also possible. The process of subdivision may also terminate prior to the deepest level of decomposition, resulting in a CU larger than the minimum CU size. It is possible for no splitting to occur, resulting in a single CU occupying the entirety of the CTU. A single CU occupying the entirety of the CTU is the largest available coding unit size. Due to use of subsampled chroma formats, such as 4:2:0, arrangements of the video encoder 114 and the video decoder 134 may terminate splitting of regions in the chroma channels earlier than in the luma channels.
[000109] At the leaf nodes of the coding tree exist CUs, with no further subdivision. For example, a leaf node 510 contains one CU. At the non-leaf nodes of the coding tree exists a split into two or more further nodes, each of which could be a leaf node that forms one CU, or a non-leaf node containing further splits into smaller regions. At each leaf node of the coding tree, one coding block exists for each colour channel. Splitting terminating at the same depth for both luma and chroma results in three collocated CBs. Splitting terminating at a deeper depth for luma than for chroma results in a plurality of luma CBs being collocated with the CBs of the chroma channels.
[000110] A quad-tree split 512 divides the containing region into four equal-size regions as shown in Fig. 5. Compared to HEVC, versatile video coding (VVC) achieves additional flexibility with the addition of a horizontal binary split 514 and a vertical binary split 516. Each of the splits 514 and 516 divides the containing region into two equal-size regions. The division is either along a horizontal boundary (514) or a vertical boundary (516) within the containing block.
[000111] Further flexibility is achieved in versatile video coding with addition of a ternary horizontal split 518 and a ternary vertical split 520. The ternary splits 518 and 520 divide the block into three regions, bounded either horizontally (518) or vertically (520) along ¼ and ¾ of the containing region width or height. The combination of the quad tree, binary tree, and ternary tree is referred to as ‘QTBTTT’ . The root of the tree includes zero or more quadtree splits (the ‘QT’ section of the tree). Once the QT section terminates, zero or more binary or ternary splits may occur (the ‘multi-tree’ or ‘MT’ section of the tree), finally ending in CBs or CUs at leaf nodes of the tree. Where the tree describes all colour channels, the tree leaf nodes are CUs. Where the tree describes the luma channel or the chroma channels, the tree leaf nodes are CBs.
[000112] Compared to HEVC, which supports only the quad tree and thus only supports square blocks, the QTBTTT results in many more possible CU sizes, particularly considering possible recursive application of binary tree and/or ternary tree splits. The potential for unusual (non-square) block sizes can be reduced by constraining split options to eliminate splits that would result in a block width or height either being less than four samples or in not being a multiple of four samples. Generally, the constraint would apply in considering luma samples. However, in the arrangements described, the constraint can be applied separately to the blocks for the chroma channels. Application of the constraint to split options to chroma channels can result in differing minimum block sizes for luma versus chroma, for example when the frame data is in the 4:2:0 chroma format or the 4:2:2 chroma format. Each split produces sub-regions with a side dimension either unchanged, halved or quartered, with respect to the containing region. Then, since the CTU size is a power of two, the side dimensions of all CUs are also powers of two.
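To illustrate the relationship between a containing region and its sub-regions, the following sketch computes the sub-region sizes produced by each split type of Fig. 5; it omits the minimum-size constraints discussed above, and the split-type names are labels chosen for this example:

    def split_region(width, height, split_type):
        """Return a list of (width, height) pairs for the sub-regions."""
        if split_type == 'QT':            # quad-tree split 512
            return [(width // 2, height // 2)] * 4
        if split_type == 'HBT':           # horizontal binary split 514
            return [(width, height // 2)] * 2
        if split_type == 'VBT':           # vertical binary split 516
            return [(width // 2, height)] * 2
        if split_type == 'HTT':           # horizontal ternary split 518 (1/4, 1/2, 1/4)
            return [(width, height // 4), (width, height // 2), (width, height // 4)]
        if split_type == 'VTT':           # vertical ternary split 520 (1/4, 1/2, 1/4)
            return [(width // 4, height), (width // 2, height), (width // 4, height)]
        return [(width, height)]          # no split: the region becomes a CU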
[000113] Fig. 6 is a schematic flow diagram illustrating a data flow 600 of a QTBTTT (or ‘coding tree’) structure used in versatile video coding. The QTBTTT structure is used for each CTU to define a division of the CTU into one or more CUs. The QTBTTT structure of each CTU is determined by the block partitioner 310 in the video encoder 114 and encoded into the bitstream 115 or decoded from the bitstream 133 by the entropy decoder 420 in the video decoder 134. The data flow 600 further characterises the permissible combinations available to the block partitioner 310 for dividing a CTU into one or more CUs, according to the divisions shown in Fig. 5.
[000114] Starting from the top level of the hierarchy, that is at the CTU, zero or more quad-tree divisions are first performed. Specifically, a Quad-tree (QT) split decision 610 is made by the block partitioner 310. The decision at 610 returning a ‘1’ symbol indicates a decision to split the current node into four sub-nodes according to the quad-tree split 512. The result is the generation of four new nodes, such as at 620, and for each new node, recursing back to the QT split decision 610. Each new node is considered in raster (or Z-scan) order. Alternatively, if the QT split decision 610 indicates that no further split is to be performed (returns a ‘0’ symbol), quad-tree partitioning ceases and multi-tree (MT) splits are subsequently considered.
[000115] Firstly, an MT split decision 612 is made by the block partitioner 310. At 612, a decision to perform an MT split is indicated. Returning a ‘0’ symbol at decision 612 indicates that no further splitting of the node into sub-nodes is to be performed. If no further splitting of a node is to be performed, then the node is a leaf node of the coding tree and corresponds to a CU. The leaf node is output at 622. Alternatively, if the MT split 612 indicates a decision to perform an MT split (returns a ‘1’ symbol), the block partitioner 310 proceeds to a direction decision 614.

[000116] The direction decision 614 indicates the direction of the MT split as either horizontal (‘H’ or ‘0’) or vertical (‘V’ or ‘1’). The block partitioner 310 proceeds to a decision 616 if the decision 614 returns a ‘0’ indicating a horizontal direction. The block partitioner 310 proceeds to a decision 618 if the decision 614 returns a ‘1’ indicating a vertical direction.
[000117] At each of the decisions 616 and 618, the number of partitions for the MT split is indicated as either two (binary split or ‘BT’ node) or three (ternary split or ‘TT’) at the BT/TT split. That is, a BT/TT split decision 616 is made by the block partitioner 310 when the indicated direction from 614 is horizontal and a BT/TT split decision 618 is made by the block partitioner 310 when the indicated direction from 614 is vertical.
[000118] The BT/TT split decision 616 indicates whether the horizontal split is the binary split 514, indicated by returning a ‘0’, or the ternary split 518, indicated by returning a ‘1’. When the BT/TT split decision 616 indicates a binary split, at a generate HBT CTU nodes step 625 two nodes are generated by the block partitioner 310, according to the binary horizontal split 514. When the BT/TT split 616 indicates a ternary split, at a generate HTT CTU nodes step 626 three nodes are generated by the block partitioner 310, according to the ternary horizontal split 518.
[000119] The BT/TT split decision 618 indicates whether the vertical split is the binary split 516, indicated by returning a ‘0’, or the ternary split 520, indicated by returning a ‘1’. When the BT/TT split 618 indicates a binary split, at a generate VBT CTU nodes step 627 two nodes are generated by the block partitioner 310, according to the vertical binary split 516. When the BT/TT split 618 indicates a ternary split, at a generate VTT CTU nodes step 628 three nodes are generated by the block partitioner 310, according to the vertical ternary split 520. For each node resulting from steps 625-628 recursion of the data flow 600 back to the MT split decision 612 is applied, in a left-to-right or top-to-bottom order, depending on the direction 614. As a consequence, the binary tree and ternary tree splits may be applied to generate CUs having a variety of sizes.
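The decision flow of Fig. 6 can be summarised by the recursive sketch below, which collects the leaf CU regions of a coding tree given an object supplying the QT/MT split decisions; the decision-provider interface (qt_split, mt_split, mt_type) is an assumption made for illustration, and the split_region helper is the one sketched after paragraph [000112]:

    def decode_coding_tree(x, y, width, height, decisions, allow_qt=True):
        """Recursively apply QT/MT split decisions, returning leaf CU regions."""
        if allow_qt and decisions.qt_split(x, y, width, height):
            half_w, half_h = width // 2, height // 2
            leaves = []
            for ox, oy in ((0, 0), (half_w, 0), (0, half_h), (half_w, half_h)):
                leaves += decode_coding_tree(x + ox, y + oy, half_w, half_h, decisions)
            return leaves
        if not decisions.mt_split(x, y, width, height):
            return [(x, y, width, height)]                    # leaf node: one CU
        split_type = decisions.mt_type(x, y, width, height)   # 'HBT', 'VBT', 'HTT' or 'VTT'
        horizontal = split_type.startswith('H')
        leaves, offset = [], 0
        for w, h in split_region(width, height, split_type):
            cx, cy = (x, y + offset) if horizontal else (x + offset, y)
            # No further quad-tree splits once the multi-tree section has started.
            leaves += decode_coding_tree(cx, cy, w, h, decisions, allow_qt=False)
            offset += h if horizontal else w
        return leaves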
[000120] Figs. 7A and 7B provide an example division 700 of a CTU 710 into a number of CUs or CBs. An example CU 712 is shown in Fig. 7A. Fig. 7A shows a spatial arrangement of CUs in the CTU 710. The example division 700 is also shown as a coding tree 720 in Fig. 7B.
[000121] At each non-leaf node in the CTU 710 of Fig. 7A, for example nodes 714, 716 and 718, the contained nodes (which may be further divided or may be CUs) are scanned or traversed in a ‘Z-order’ to create lists of nodes, represented as columns in the coding tree 720. For a quad-tree split, the Z-order scanning results in top left to right followed by bottom left to right order. For horizontal and vertical splits, the Z-order scanning (traversal) simplifies to a top-to-bottom scan and a left-to-right scan, respectively. The coding tree 720 of Fig. 7B lists all nodes and CUs according to the applied scan order. Each split generates a list of two, three or four new nodes at the next level of the tree until a leaf node (CU) is reached.
[000122] Having decomposed the image into CTUs and further into CUs by the block partitioner 310, and using the CUs to generate each residual block (324) as described with reference to Fig. 3, residual blocks are subject to forward transformation by the video encoder 114. An equivalent inverse transform process is performed in the video decoder 134 to obtain TBs from the bitstream 133.
[000123] In the video encoder 114, the quantised coefficients 336 may be rearranged to a one-dimensional list by performing a two-level backward diagonal scan. Similarly, in the video decoder 134, the quantised coefficients 424 may be rearranged from a one-dimensional list to a two-dimensional collection of sub-blocks by the same two-level backward diagonal scan.
[000124] Fig. 8A shows a two-level backward diagonal scan 810 of an example 8x8 TB 800. The scan 810 is shown progressing from the bottom-right residual coefficient position of the TB 800 back to the top-left (DC) residual coefficient position of the TB 800. The path of the scan 810 progresses within 4x4 regions, known as sub-blocks, and from one sub-block to the next. For TBs of width or height of two, sub-block sizes of 2x2, 2x8, or 8x2 are available. Scanning within a particular sub-block is either performed or the sub-block skipped, according to a ‘coded sub-block flag’. When scanning of a sub-block is skipped all residual coefficients within the sub-block are inferred to have a value of zero. Although the scan 810 is shown commencing from the bottom-right residual coefficient position of the TB 800, for a given set of residual coefficients scanning commences from the position of the ‘last significant coefficient’, the coefficient being ‘last’ when the order of coefficients is considered as progressing from the DC coefficient, rather than in the scan order.
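A sketch of generating a two-level backward diagonal scan order for a transform block built from 4x4 sub-blocks is shown below; the direction taken along each anti-diagonal and the handling of block sizes that are not multiples of four are simplifying assumptions:

    def diagonal_positions(width, height):
        # Forward diagonal order: anti-diagonals starting at the top-left position.
        positions = []
        for d in range(width + height - 1):
            for x in range(d + 1):
                y = d - x
                if x < width and y < height:
                    positions.append((x, y))
        return positions

    def backward_diagonal_scan(tb_width, tb_height, sub=4):
        # Two-level scan: diagonal order over sub-blocks, diagonal order
        # inside each sub-block, then reversed to give the backward scan.
        order = []
        for sx, sy in diagonal_positions(tb_width // sub, tb_height // sub):
            for px, py in diagonal_positions(sub, sub):
                order.append((sx * sub + px, sy * sub + py))
        return list(reversed(order))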
[000125] Fig. 8B shows an alternative, two-level forward diagonal scan 860 of an example 8x8 TB 850, which is used when the transform skip mode is selected. When the transform skip mode is used in the video encoder 114, the quantised coefficients 336 are rearranged to a one-dimensional list by the scan 860. Similarly, if the transform skip mode has been signalled for the current TB in the video decoder 134, the quantised coefficients 424 are rearranged from a one-dimensional list to a two-dimensional collection of sub-blocks by the scan 860. The scan 860 is shown progressing from the top-left (DC) residual coefficient position of the TB 850 to the bottom-right residual coefficient position of the TB 850. Unlike the scan 810, the scan 860 does not terminate at a ‘last significant coefficient’.
[000126] Figs. 8A and 8B show scan patterns typically used in VVC. The examples described herein use the scan pattern 810 for encoding residual coefficients that have been transformed by the module 326 and the scan pattern 860 is used for transform-skipped transform blocks. However, in some implementations other scan patterns can be used.
[000127] Fig. 9 shows a method 900 for encoding a transform block of quantised coefficients 336. The method 900 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 900 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 900 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
[000128] The method 900 is implemented in some arrangements by the video encoder 114 at the entropy encoder 338 on receiving the transform coefficients 336. The method 900 begins at an encode last position step 910.
[00129] At the encode last position step 910, the entropy encoder 338 determines if the video encoder 114 applied a transform to produce the current transform block of quantised coefficients 336. If a transform was applied, the video encoder 114 finds the position of the last significant coefficient in the quantised coefficients 336. The last significant coefficient is determined in relation to the forward direction of an appropriate scan pattern, for example in the direction of the two-level forward diagonal scan 860. A quantised coefficient is significant if it has any value other than zero. The position of the last significant coefficient is written to the bitstream 115. If the video encoder 114 did not apply a transform to produce the current transform block of quantised coefficients 336 (transform skip mode was selected), determining the position of the last residual coefficient at step 910 is not implemented, as indicated using dotted lines. The method 900 proceeds under control of the processor 205 from step 910 to a select first sub-block step 920.
[000130] At the select first sub-block step 920, if the video encoder 114 did not select transform skip mode, then the sub-block containing the last significant coefficient is selected.
If the video encoder 114 selected transform skip mode, the top-left sub-block is selected. The method 900 proceeds under control of the processor 205 from step 920 to a determine coded sub-block flag step 930.
[000131] The description herein refers to some flags being “TRUE” or “FALSE”. Setting to “TRUE” means that the flag value indicates a mode is selected or a requirement is met. Setting to “FALSE” means that the flag value indicates a mode is not selected or a requirement is not met.
[00132] At the determine coded sub-block flag step 930, the video encoder 114 sets a coded sub-block flag. If the video encoder 114 did not select transform skip mode and the current selected sub-block is the first sub-block selected in the select first sub-block step 920, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 115. If the video encoder 114 did not select transform skip mode and the current selected sub-block is identified as a last sub-block as described below in relation to a last sub-block test 960, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 115. If the video encoder 114 selected transform skip mode, the current selected sub-block is identified as the last sub-block, and all the coded sub-block flags for previous sub-blocks in the current transform block were “FALSE”, the coded sub-block flag is set to “TRUE” but is not encoded to the bitstream 115. Otherwise, the video encoder 114 sets the coded sub-block flag to (i) “TRUE” if there is at least one significant coefficient in the 4x4 quantised coefficients belonging to the selected sub-block, or (ii) “FALSE” if there are no significant coefficients, and encodes the coded sub-block flag to the bitstream 115. The method 900 proceeds under control of the processor 205 from step 930 to a coded sub-block flag test step 940.
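The rules of step 930 can be restated by the following sketch, which returns both the flag value and whether the flag is actually encoded; it is an illustrative restatement of the paragraph above under assumed inputs rather than normative pseudo-code:

    def coded_sub_block_flag(sub_block, transform_skip, is_first, is_last, previous_flags):
        """Return (flag_value, is_signalled) for the current sub-block."""
        has_significant = any(c != 0 for row in sub_block for c in row)
        if not transform_skip and (is_first or is_last):
            # The sub-block containing the last significant coefficient, and the
            # final (top-left) sub-block: flag inferred TRUE and not encoded.
            return True, False
        if transform_skip and is_last and not any(previous_flags):
            # Last (bottom-right) sub-block when all previous flags were FALSE:
            # flag inferred TRUE and not encoded.
            return True, False
        # Otherwise the flag reflects whether any coefficient is significant,
        # and it is encoded to the bitstream.
        return has_significant, True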
[000133] At the coded sub-block flag test step 940, the method 900 determines whether the value of the coded sub-block flag is “TRUE” or not. The method 900 proceeds to an encode sub-block step 950 if the coded sub-block flag is set to “TRUE”. Otherwise, if the coded sub-block flag is set to “FALSE”, the method 900 proceeds to the last sub-block test step 960.
[000134] At the encode sub-block step 950, the entropy encoder 338 encodes the quantised coefficients in the selected sub-block to the bitstream 133. If the video encoder 114 did not select transform skip mode, the step 950 invokes a method 1100, described below in relation to Fig. 11. If the video encoder 114 selected transform skip mode, the step 950 invokes a method 1300 or a method 1500, described below in relation to Fig. 13 and Fig. 15, respectively. The method 900 proceeds under control of the processor 205 from step 950 to the last sub-block test 960.
[000135] At the last sub-block test 960, the method 900 operates to determine if the selected sub-block is the last sub-block in the current transform block. If the video encoder 114 did not select transform skip mode, the last sub-block is the top-left sub-block of the transform block.
If the video encoder 114 selected transform skip mode, the last sub-block is the bottom-right sub-block of the transform block. If the current selected sub-block is the last sub-block, the step 960 returns “YES” and the method 900 terminates. Otherwise, if the current selected sub-block is not the last sub-block in the transform block, the step 960 returns “NO” and the method 900 proceeds to a select next sub-block step 970.
[000136] At the select next sub-block step 970, a next sub-block in the transform block is selected. If the video encoder 114 did not select transform skip mode, the next sub-block in the corresponding scan pattern, typically the backward diagonal scan order 810, is selected. If the video encoder 114 selected transform skip mode, the next sub-block in the corresponding scan pattern, typically the forward diagonal scan order 860, is selected. The method 900 proceeds from step 970 to the determine coded sub-block flag step 930.
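By way of illustration only, the sub-block loop of the method 900 may be sketched as follows in Python. The sketch assumes the sub-blocks are supplied already ordered according to the scan used for the current mode, and the helper names (encode_cabac_flag, encode_sub_block) are hypothetical placeholders rather than part of the described arrangements; the coded sub-block flag inference cases follow the determine coded sub-block flag step 930 described above.

```python
def encode_transform_block(sub_blocks, transform_skip, encode_cabac_flag, encode_sub_block):
    """Illustrative sketch of the sub-block loop of the method 900.

    sub_blocks is assumed to already be in the scan order used for the current
    mode: the backward diagonal scan 810, starting at the sub-block containing
    the last significant coefficient, when a transform was applied; the forward
    diagonal scan 860, starting at the top-left sub-block, for transform skip.
    """
    all_previous_false = True
    for index, sub_block in enumerate(sub_blocks):
        first = index == 0
        last = index == len(sub_blocks) - 1
        has_significant = any(coeff != 0 for coeff in sub_block)

        if not transform_skip and (first or last):
            coded_flag = True                    # inferred, not written (step 930)
        elif transform_skip and last and all_previous_false:
            coded_flag = True                    # inferred for the last sub-block
        else:
            coded_flag = has_significant
            encode_cabac_flag(coded_flag)        # coded sub-block flag to bitstream 133

        all_previous_false = all_previous_false and not coded_flag
        if coded_flag:                           # coded sub-block flag test 940
            encode_sub_block(sub_block, transform_skip)   # method 1100, 1300 or 1500
```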
[000137] Fig. 10 shows a method 1000 for decoding a transform block of quantised coefficients 424. The method 1000 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1000 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1000 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206.
[000138] The method 1000 is implemented in some arrangements by the video decoder 134 at the entropy decoder 420 on receiving the bitstream 133. The method 1000 begins at a decode last position step 1010.
[000139] At the decode last position step 1010, a last significant coefficient position may be determined based on the transform skip flag. The transform skip flag for the transform block is decoded from the bitstream or can be inferred by the entropy decoder 420. If the video decoder 134 decoded or inferred the transform skip flag for the current transform block to be “FALSE”, that is a transform was applied, the last significant coefficient position is decoded from the bitstream 133 at step 1010. If the video decoder 134 decoded or inferred the transform skip flag for the current transform block to be “TRUE” (transform skip mode was applied), determining the last significant coefficient position at step 1010 is not implemented, as indicated using dotted lines. The method 1000 proceeds under control of the processor 205 from step 1010 to a select first sub-block step 1020.
[000140] At the select first sub-block step 1020, the video decoder 134 selects a first sub-block of the transform block. If the video decoder 134 decoded or inferred that transform skip mode is not used (transform skip flag is “FALSE”), the sub-block containing the last significant coefficient position is selected. If the video decoder 134 decoded or inferred that transform skip mode is used at step 1010, the top-left sub-block is selected at step 1020. The method 1000 proceeds under control of the processor 205 from step 1020 to a determine coded sub-block flag step 1030.
[000141] At the determine coded sub-block flag step 1030, the video decoder 134 determines a coded sub-block flag. If the transform skip flag was decoded or inferred as “FALSE” and the current selected sub-block is the first sub-block selected in the select first sub-block step 1020, the coded sub-block flag is set to “TRUE” (that is, the coded sub-block flag is inferred to be “TRUE”). If the transform skip flag was decoded or inferred as “FALSE” and the current selected sub-block is identified as a last sub-block as described below in a last sub-block test 1060, the coded sub-block flag is inferred as “TRUE”. If the transform skip flag was decoded or inferred as “TRUE”, the current selected sub-block is identified as the last sub-block, and all the coded sub-block flags for previous sub-blocks in the current transform block were “FALSE”, the coded sub-block flag is inferred as “TRUE”. Otherwise, the video decoder 134 decodes the coded sub-block flag from the bitstream 133. The method 1000 proceeds under control of the processor 205 from step 1030 to a coded sub-block flag test 1040.
[000142] At the coded sub-block flag test 1040, the method 1000 tests the value of the coded sub-block flag determined at step 1030. The method 1000 proceeds to a decode sub-block step 1050 if the coded sub-block flag is determined to have a value of “TRUE” at step 1040. Otherwise if the coded sub-block flag is determined to have a value of “FALSE” at step 1040, all the quantised coefficients in the current selected sub-block are assigned a value of zero, and the method 1000 proceeds to a last sub-block test 1060.
[000143] At the decode sub-block step 1050, the entropy decoder 420 decodes quantised coefficients for the selected sub-block from the bitstream 133. If the video decoder 134 determines that transform skip mode is not used, the step 1050 invokes a method 1200, described below in relation to Fig. 12. If the video decoder 134 determines that transform skip mode is used, in some implementations the step 1050 invokes a method 1400 or a method 1600, described below in relation to Fig. 14 and Fig. 16 respectively. The method 1000 proceeds under control of the processor 205 to the last sub-block test 1060.
[000144] At the last sub-block test 1060, if the video decoder 134 determined that transform skip mode is not used, the last sub-block is the top-left sub-block of the transform block. If the video decoder 134 determined transform skip mode is used, the last sub-block is the bottom-right sub-block of the transform block. If the current selected sub-block is the last sub-block, the step 1060 returns “YES” and the method 1000 terminates. Otherwise, the step 1060 returns “NO” and the method 1000 proceeds to a select next sub-block step 1070.
[000145] At the select next sub-block step 1070, if the video decoder 134 determined at step 1010 that transform skip mode is not used, the next sub-block in the backward diagonal scan order 810 is selected. If the video decoder 134 determined at step 1010 that transform skip mode is used, the next sub-block in the forward diagonal scan order 860 is selected. The method 1000 proceeds under control of the processor 205 from step 1070 to the determine coded sub-block flag step 1030.
[000146] In order to exploit the statistical characteristics of the quantised coefficients 336, the quantised coefficients are binarised by the video encoder 114 (typically by the entropy encoder 338) into a number of syntax elements prior to encoding. For example, because the quantised coefficients 336 often have a value of zero, one syntax element is a significance flag, which is set to “FALSE” for a quantised coefficient with a value of zero. If the significance flag is set to “FALSE”, no further syntax elements for the associated quantised coefficient are signalled.
The significance flag may be encoded to the bitstream 133 by using the context-adaptive binary arithmetic coding (CABAC) entropy coder.
[000147] Although the CABAC coder encodes syntax elements relatively efficiently, limiting the use of the CABAC coder is generally desirable to minimise computational requirements and cost for hardware implementations. Therefore, after the quantised coefficients 336 are binarised into a number of syntax elements by the entropy encoder 338, some syntax elements are CABAC coded to the bitstream 133, while other syntax elements are bypass coded to the bitstream 133. The total number of syntax element bins processed by CABAC is limited per transform block. In the VVC standard the limit is set at 1.75 bins per sample. For example, for an 8x8 transform block which consists of sixty-four samples, a CABAC bin budget is set at one hundred and twelve (112) bins. Over the course of encoding a TB to the bitstream 133, the remaining CABAC bin budget is tracked and decremented whenever a syntax element is CABAC coded. When the remaining CABAC bin budget is exhausted, any remaining quantised coefficients and the associated syntax elements must be bypass coded.
[000148] Fig. 11 shows the method 1100 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133. The method 1100 is implemented at step 950 of the method 900 if the sub-block belongs to a transform block for which transform skip mode has not been selected. The method 1100 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1100 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1100 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1100 begins at a select first coefficient step 1110.
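Before stepping through the individual steps of the method 1100, the CABAC bin budget bookkeeping described in paragraph [000147] above can be illustrated with a minimal sketch. The function name and the rounding behaviour for block sizes whose budget is not an integer are assumptions made only for this illustration.

```python
def cabac_bin_budget(width, height, bins_per_sample=1.75):
    """Illustrative per-transform-block CABAC bin budget (rounding is an assumption)."""
    return int(width * height * bins_per_sample)

budget = cabac_bin_budget(8, 8)   # 8x8 block: 64 samples * 1.75 = 112 bins
assert budget == 112

budget -= 1  # decremented whenever a syntax element bin is CABAC coded; once the
             # budget is exhausted, remaining quantised coefficients are bypass coded
```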
[000149] At the select first coefficient step 1110, a quantised coefficient of the current sub-block is selected. If the current sub-block contains the last significant coefficient position, a current selected coefficient is set to the last significant coefficient. Otherwise, if the current sub-block does not contain the last significant coefficient position, the current selected coefficient is set to the bottom-right coefficient of the current sub-block. The method 1100 proceeds under control of the processor 205 from step 1110 to a use CABAC check 1120.
[000150] At the use CABAC check 1120, the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to four. If the remaining CABAC bin budget is greater than or equal to four, the step 1120 returns “YES” and the method 1100 proceeds to a significant check step 1130. Otherwise, if the current CABAC bin budget is less than four, the step 1120 returns “NO” and the method 1100 proceeds to an encode remainder pass step 1180.
[000151] At the significant check step 1130, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than zero. If the current coefficient is the last significant coefficient, a significance flag is set to “TRUE” at step 1130 but is not encoded to the bitstream 133. If the current selected sub-block is not the first or last sub-block in the backward scan order 810, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1160, and all the significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is set to “TRUE” at step 1130 but not encoded to the bitstream 133. If the current coefficient has a magnitude greater than zero, the significance flag is set to “TRUE” at step 1130 and encoded using the CABAC coder to the bitstream 133. Whenever a flag is encoded by the CABAC coder to the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is set to “TRUE”, the method 1100 proceeds to a greater than one check 1140. Otherwise, if the current coefficient has a magnitude of zero, the significance flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1100 proceeds to the final coefficient check 1160.
[000152] At the greater than one check 1140, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, then a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. Upon returning “TRUE” at step 1140, the method 1100 proceeds to an encode greater than three and parity flags step 1150. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” at step 1140 and encoded using the CABAC coder to the bitstream 133. The method 1100 proceeds to the final coefficient check 1160 if step 1140 returns “FALSE”.
[000153] At the encode greater than three and parity flags step 1150, the video encoder 114 encodes a parity flag and a greater than three flag for the current sub-block. Step 1150 can be implemented by the entropy encoder 338 for example. Execution of step 1150 sets the parity flag to “FALSE” if the current coefficient has an even magnitude or sets the parity flag to “TRUE” if the current coefficient has an odd magnitude. The parity flag is encoded using the CABAC coder to the bitstream 133. The video encoder 114 sets the greater than three flag to “TRUE” if the current coefficient has a magnitude greater than three, or sets the greater than three flag to “FALSE” otherwise. The greater than three flag is encoded using the CABAC coder to the bitstream 133. The method 1100 proceeds under control of the processor 205 from step 1150 to the final coefficient check 1160.
[000154] At the final coefficient check 1160, the video encoder 114 checks whether the current selected coefficient is the top-left coefficient of the current selected sub-block. If the current selected coefficient is the top-left coefficient of the current selected sub-block, the step 1160 returns “YES” and the method 1100 proceeds to the encode remainder pass step 1180. Otherwise, if the current coefficient is not the top-left coefficient, the step 1160 returns “NO” and the method 1100 proceeds to a select next coefficient step 1170.
[000155] At the select next coefficient step 1170, the next coefficient in the backward diagonal scan order 810 is selected. The method 1100 proceeds from the step 1170 to the use CABAC check 1120.
[000156] At the encode remainder pass step 1180, any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133, for example by the entropy encoder 338. The quantised coefficients are processed in the backward diagonal scan order 810. If a quantised coefficient was encoded by the CABAC coder (that is, the use CABAC check 1120 was passed (returned “YES”)), the quantised coefficient at scan position n has a remaining magnitude r[n] if the greater than three flag is “TRUE”. The remaining magnitude is determined as r[n] = x[n] - 4, where x[n] is the absolute magnitude of the quantised coefficient at scan position n. The magnitude r[n] is binarised and then bypass coded to the bitstream 133. If a quantised coefficient was not encoded by the CABAC coder (the use CABAC check 1120 was not passed/returned “NO”), the absolute magnitude x[n] is binarised and bypass coded to the bitstream 133. The method 1100 proceeds under control of the processor 205 from step 1180 to an encode signs pass step 1190.
[000157] At the encode signs pass step 1190, sign bits for any significant coefficients of the current selected sub-block are bypass coded to the bitstream 133. A quantised coefficient that was encoded by the CABAC coder is significant if the significance flag is “TRUE”. A quantised coefficient that was not encoded by the CABAC coder is significant if the absolute magnitude x[n] is greater than zero. The sign bits are bypass coded to the bitstream 133 in the backward diagonal scan order 810. The method 1100 terminates upon execution of step 1190.
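The per-coefficient flag structure of the method 1100 can be summarised with the following illustrative sketch. The significance flag inference cases of step 1130, the scan ordering, context selection and the sign pass are omitted, and the helper name cabac_encode is an assumption introduced purely for the sketch.

```python
def encode_rrc_coefficient_flags(x, budget, cabac_encode):
    """Sketch of steps 1120 to 1150: up to four CABAC coded flags per coefficient.

    x is the absolute magnitude of the quantised coefficient. Returns the
    updated CABAC bin budget and whether the coefficient was CABAC coded.
    """
    if budget < 4:                    # use CABAC check 1120
        return budget, False          # handled entirely in the remainder pass 1180

    cabac_encode(x > 0)               # significance flag (step 1130)
    budget -= 1
    if x > 0:
        cabac_encode(x > 1)           # greater than one flag (step 1140)
        budget -= 1
        if x > 1:
            cabac_encode(x % 2 == 1)  # parity flag (step 1150)
            cabac_encode(x > 3)       # greater than three flag (step 1150)
            budget -= 2
    return budget, True


def rrc_remaining_magnitude(x, cabac_coded):
    """Sketch of the remainder derivation used by the encode remainder pass 1180."""
    if not cabac_coded:
        return x            # full magnitude x[n] is bypass coded
    if x > 3:
        return x - 4        # r[n] = x[n] - 4 when the greater than three flag is TRUE
    return None             # nothing further to bypass code for this coefficient
```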
[000158] Fig. 12 shows the method 1200 for decoding quantised coefficients (424) for the current selected sub-block from the bitstream 133, when the sub-block belongs to a transform block for which transform skip mode has not been selected. The method 1200 can be implemented at step 1050 if transform skip mode has been decoded or inferred not to be used. The method 1200 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1200 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1200 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1200 begins at a select first coefficient step 1210.
[000159] At the select first coefficient step 1210, the method 1200 selects a first quantised coefficient of the current sub-block. If the current sub-block contains the last significant coefficient position, then a current selected coefficient is set to the last significant coefficient. Otherwise, the current selected coefficient is set to the bottom-right coefficient of the current sub-block. The method 1200 proceeds from step 1210 to a use CABAC check step 1220.
[000160] At the use CABAC check 1220, the video decoder 134 checks whether the remaining CABAC bin budget satisfies a threshold, that is whether the remaining CABAC bin budget for the transform block is greater than or equal to four bins. If the remaining budget is greater than or equal to four, the step 1220 returns “YES” and the method 1200 proceeds to a significant check step 1230. Otherwise, if the remaining CABAC budget is less than four bins, the step 1220 returns “NO” and the method 1200 proceeds to a decode remainder pass step 1280.
[000161] At the significant check step 1230, the video decoder 134 checks whether the current selected coefficient has a magnitude greater than zero. If the current coefficient is the last significant coefficient, then a significance flag is inferred to be “TRUE”. If the current selected sub-block is not the first or last sub-block in the backward scan order 810, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1260, and all the significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is inferred to be “TRUE” at step 1230. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder. Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the value of the inferred or decoded significance flag is “TRUE”, the method 1200 proceeds to a greater than one check 1240. Otherwise if the significance flag is inferred or decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1200 proceeds to the final coefficient check 1260.
[000162] At the greater than one check 1240, the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the greater than one flag is decoded as “TRUE”, the method 1200 proceeds to a decode greater than three and parity flags step 1250. Otherwise if the greater than one flag is decoded as “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1200 proceeds to the final coefficient check 1260.
[000163] At the decode greater than three and parity flags step 1250, the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133. The video decoder 134 also decodes a greater than three flag using the CABAC coder from the bitstream 133. The method 1200 proceeds under control of the processor 205 from step 1250 to the final coefficient check step 1260.
[000164] At the final coefficient check step 1260, the video decoder 134 checks whether the current selected coefficient is the top-left coefficient of the current selected sub-block. If the current selected coefficient is the top-left coefficient of the current selected sub-block, the step 1260 returns “YES” and the method 1200 proceeds to the decode remainder pass step 1280. Otherwise, if the current selected coefficient is not the top-left coefficient of the current selected sub-block, the step 1260 returns “NO” and the method 1200 proceeds to a select next coefficient step 1270.
[000165] At the select next coefficient step 1270, the next coefficient in the backward diagonal scan order 810 is selected. The method 1200 proceeds under control of the processor 205 from step 1270 to the use CABAC check 1220.
[000166] At the decode remainder pass step 1280, any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder. The remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded using bypass decoding. The quantised coefficients are processed in the backward diagonal scan order 810. If a quantised coefficient was decoded by the CABAC coder (the use CABAC check 1220 was passed or returned “YES”), and the greater than three flag was decoded with a value of “TRUE”, then a remaining magnitude r[n] is bypass decoded from the bitstream 133, where n is the scan position of the quantised coefficient. The absolute magnitude x[n] of the quantised coefficient is then determined as x[n] = 4 + p[n] + r[n], where p[n] has a value of zero if the parity flag was decoded as “FALSE”, and p[n] has a value of one if the parity flag was decoded as “TRUE”. If a quantised coefficient was decoded by the CABAC coder and the greater than one flag was decoded as “TRUE”, but the greater than three flag was not decoded, or was decoded as “FALSE”, the absolute magnitude is determined as x[n] = 2 + p[n]. If a quantised coefficient was not decoded by the CABAC coder (the use CABAC check 1220 was not passed and returned “NO”), the absolute magnitude x[n] is bypass decoded from the bitstream 133. The method 1200 proceeds under control of the processor 205 from step 1280 to a decode signs pass step 1290.
[000167] At the decode signs pass step 1290, sign bits for any significant coefficients of the current selected sub-block are bypass decoded from the bitstream 133. A quantised coefficient is significant if the absolute magnitude x[n] is greater than zero. The sign bits are bypass decoded from the bitstream 133 in the backward diagonal scan order 810. The value of a quantised coefficient is set to -x[n] if the associated sign bit has a value of one. The value of a quantised coefficient is set to x[n] if the associated sign bit has a value of zero. The method 1200 terminates upon execution of the step 1290.
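Pulling together the decoding rules of steps 1230 to 1280, the magnitude reconstruction of the method 1200 may be sketched as below. The flag values are assumed to have already been decoded or inferred as described above, and the function name is illustrative only.

```python
def rrc_reconstruct_magnitude(sig, gt1, parity, gt3, remainder, bypass_magnitude=None):
    """Sketch of how the method 1200 recovers the absolute magnitude x[n]."""
    if bypass_magnitude is not None:   # use CABAC check 1220 returned "NO"
        return bypass_magnitude        # x[n] was bypass decoded in full
    if not sig:
        return 0
    if not gt1:
        return 1
    p = 1 if parity else 0
    if gt3:
        return 4 + p + remainder       # x[n] = 4 + p[n] + r[n]
    return 2 + p                       # x[n] = 2 + p[n]

# The decode signs pass 1290 then negates x[n] when the bypass decoded sign bit is one.
print(rrc_reconstruct_magnitude(sig=True, gt1=True, parity=True, gt3=True, remainder=3))  # 8
```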
[000168] Fig. 13 shows the method 1300 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133. The method 1300 is implemented at step 950 of the method 900 if the sub-block belongs to a transform block for which transform skip mode has been selected. The method 1300 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1300 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1300 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1300 begins at a select first coefficient step 1310.
[000169] At the select first coefficient step 1310, a current selected coefficient is set to the top-left quantised coefficient of the current sub-block. The method 1300 proceeds under control of the processor 205 from step 1310 to a use CABAC check 1320.
[000170] At the use CABAC check 1320, the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to four. If the remaining budget is greater than or equal to four, the step 1320 returns “YES” and the method 1300 proceeds to a significant check step 1330. Otherwise, if the remaining budget is less than four, the step 1320 returns “NO” and the method 1300 proceeds to an encode remainder pass step 1390.
[000171] At the significant check step 1330, the video encoder 114 sets a significance flag value. In executing step 1330, the encoder 114 checks whether the current selected coefficient has a magnitude greater than zero. If the coded sub-block flag associated with the current selected sub-block was set to “TRUE” and encoded to the bitstream 133, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1370, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, then the significance flag is set to “TRUE” but not encoded to the bitstream 133. If the current coefficient has a magnitude greater than zero, then the significance flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. Whenever a flag is encoded by the CABAC coder to the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is set to “TRUE”, the method 1300 proceeds under control of the processor 205 from step 1330 to an encode sign flag step 1335. Otherwise, if the current coefficient has a magnitude of zero, the significance flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds in this event from the step 1330 to the final coefficient check 1370.
[000172] At the encode sign flag step 1335, the video encoder 114 encodes a sign bit of the current selected coefficient using the CABAC coder to the bitstream 133. The sign bit has a value of zero if the value of the current selected coefficient is positive. The sign bit has a value of one if the value of the current selected coefficient is negative. The method 1300 proceeds under control of the processor 205 from step 1335 to a greater than one check 1340.
[000173] At the greater than one check 1340, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds to an encode parity flag step 1345 if step 1340 returns “TRUE”. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds to the final coefficient check 1370 if step 1340 returns “FALSE”.
[000174] At the encode parity flag step 1345, the video encoder 114 encodes a parity flag for the current quantised residual coefficient. The parity flag is set to “FALSE” if the current coefficient has an even magnitude. The parity flag is set to “TRUE” if the current coefficient has an odd magnitude. The parity flag is encoded using the CABAC coder to the bitstream 133. The method 1300 proceeds under control of the processor 205 from step 1345 to a use CABAC check 1350.
[000175] At the use CABAC check 1350, the video encoder 114 checks whether the remaining CABAC bin budget meets the threshold (is greater than or equal to four). If the remaining budget is greater than or equal to four, the step 1350 returns “YES” and method 1300 proceeds to a greater than gtk check step 1360. Otherwise, if the remaining budget is less than four, the step 1350 returns “NO” and the method 1300 proceeds to the encode remainder pass step 1390.
[000176] At the greater than gtk check step 1360, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than 2 * k + 1. The variable k is set to one the first time the method 1300 reaches the greater than gtk check 1360 for the current selected coefficient. If the selected coefficient has a magnitude greater than 2 * k + 1, then a greater than 2k + 1 flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. If k is less than four, the step 1360 returns “YES” and the method 1300 stays at the greater than gtk check 1360 and k is increased by one. Otherwise if k is equal to four, the step 1360 returns “NO” and the method 1300 proceeds to the final coefficient check 1370. If the selected coefficient has a magnitude less than or equal to 2 * k + 1, the greater than 2k + 1 flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. Accordingly, up to four flags can be encoded in step 1360, relating to whether the current selected coefficient has a magnitude greater than 3, 5, 7 and 9 (for k values of 1, 2, 3 and 4 respectively). If the greater than 2k + 1 flag is set to “FALSE”, the step 1360 returns “NO” and the method 1300 proceeds to the final coefficient check 1370.
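The greater than gtk loop of step 1360 can be sketched as follows, again for illustration only. The per-iteration check of the remaining CABAC bin budget and the context selection are omitted, and cabac_encode is an assumed helper.

```python
def encode_gtk_flags(x, cabac_encode):
    """Sketch of step 1360: greater than 2k + 1 flags for k = 1 to 4 (thresholds 3, 5, 7, 9).

    x is the absolute magnitude; the significance, sign, greater than one and
    parity flags are assumed to have already been coded for this coefficient.
    """
    for k in range(1, 5):
        flag = x > 2 * k + 1
        cabac_encode(flag)
        if not flag:
            break              # magnitude is fully described by the flags coded so far
    # If the k = 4 flag (greater than nine) is TRUE, the remainder
    # r[n] = x[n] - 10 is bypass coded later in the remainder pass 1390.
```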
[000177] At the final coefficient check step 1370, the video encoder 114 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1370 returns “YES” and the method 1300 proceeds to the encode remainder pass step 1390. Otherwise, if the current selected coefficient is not the bottom-right coefficient, the step 1370 returns “NO” and the method 1300 proceeds to a select next coefficient step 1380.
[000178] At the select next coefficient step 1380, the next coefficient in the forward diagonal scan order 860 is selected. The method 1300 proceeds under control of the processor 205 from the step 1380 to the use CABAC check 1320.
[000179] At the encode remainder pass step 1390, any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133. The quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was encoded by the CABAC coder and both the use CABAC check 1320 and the use CABAC check 1350 were passed (returned “YES”), the quantised coefficient at scan position n has a remaining magnitude r[n] if the greater than nine flag is “TRUE”. The remaining magnitude is determined as r[n] = x[n] - 10, where x[n] is the absolute magnitude of the quantised coefficient at scan position n. r[n] is binarised in a process described further below, and then bypass coded to the bitstream 133. If a quantised coefficient was encoded by the CABAC coder and both the use CABAC check 1320 and the use CABAC check 1350 were passed (returned “YES”), but the greater than nine flag was not encoded, or has a value of “FALSE”, there is no remaining magnitude that needs to be encoded to the bitstream 133.
[000180] If a quantised coefficient was encoded by the CABAC coder and the use CABAC check 1320 was passed (returned “YES”) but the use CABAC check 1350 was not passed (returned “NO”), then the quantised coefficient has a remaining magnitude if the greater than one flag is “TRUE”. The remaining magnitude is determined as r[n] = x[n] - 2. r[n] is binarised in a process described further below, and then bypass coded to the bitstream 133. If a quantised coefficient was encoded by the CABAC coder and the use CABAC check 1320 was passed (returned “YES”) but the use CABAC check 1350 was not passed (returned “NO”), if the greater than one flag was not encoded or has a value of “FALSE”, there is no remaining magnitude that needs to be encoded to the bitstream 133.
[000181] If a quantised coefficient was not encoded by the CABAC coder (the use CABAC check 1320 was not passed or returned “NO”), the absolute magnitude x[n] is binarised in a process described further below, and then x[n] and the sign bit for the quantised coefficient are bypass coded to the bitstream 133. The method 1300 terminates upon execution of step 1390.
[000182] Fig. 14 shows the method 1400 for decoding the quantised coefficients (424) of the current selected sub-block from the bitstream 133. The method 1400 can be implemented at the step 1050 if the sub-block belongs to a transform block for which transform skip mode has been selected. The method 1400 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1400 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1400 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1400 begins at a select first coefficient step 1410.
[000183] At the select first coefficient step 1410, a current selected coefficient is set to the top-left coefficient of the current sub-block. The method 1400 proceeds under control of the processor 205 to a use CABAC check 1420.
[000184] At the use CABAC check 1420, the video decoder 134 checks whether the remaining CABAC bin budget meets a threshold. The threshold relates to whether the remaining CABAC bin budget is greater than or equal to four bins. If the remaining budget is greater than or equal to four, the step 1420 returns “YES” and the method 1400 proceeds to a significant check step 1430. Otherwise, if the remaining budget is less than four, the method 1400 proceeds to a decode remainder pass step 1490.
[000185] At the significant check 1430, the video decoder 134 checks whether the current selected coefficient has a magnitude greater than zero and sets a significance flag accordingly.
If the coded sub-block flag associated with the current selected sub-block was decoded as “TRUE” from the bitstream 133 and not inferred, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1470, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, then the significance flag is inferred to be “TRUE”. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder. Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is inferred or decoded as “TRUE”, the method 1400 proceeds to a decode sign flag step 1435. Otherwise, if the significance flag is inferred or decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1400 proceeds to the final coefficient check step 1470.
[000186] At the decode sign flag step 1435, the video decoder 134 decodes a sign bit of the current selected coefficient using the CABAC coder from the bitstream 133. The method 1400 proceeds under control of the processor 205 from step 1435 to a greater than one check step 1440.
[000187] At the greater than one check step 1440, the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the decoded greater than one flag has a value of “TRUE”, the method 1400 proceeds to a decode parity flag step 1445. Otherwise if the decoded greater than one flag has a value of “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1400 proceeds to the final coefficient check step 1470.
[000188] At the decode parity flag step 1445, the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133. The method 1400 proceeds under control of the processor 205 from step 1445 to a use CABAC check step 1450.
[000189] At the use CABAC check step 1450, the video decoder 134 checks whether the remaining CABAC bin budget meets the threshold (is greater than or equal to four). If the remaining budget is greater than or equal to four, the step 1450 returns “YES” and the method 1400 proceeds to a greater than gtk check step 1460. Otherwise, if the remaining budget is less than four, the method 1400 proceeds to the decode remainder pass step 1490.
[000190] At the greater than gtk check 1460, the video decoder 134 decodes a greater than 2k + 1 flag using the CABAC coder from the bitstream 133. The variable k is set to one the first time the method 1400 reaches the greater than gtk check 1460 for the current selected coefficient. If the decoded greater than 2k + 1 flag has a value of “TRUE”, and k is less than four, step 1460 returns “YES”, the method 1400 remains at the greater than gtk check step 1460 and k is increased by one. Otherwise if k is equal to four, the step 1460 returns “NO” and the method 1400 proceeds to the final coefficient check 1470. If the decoded greater than 2k + 1 flag has a value of “FALSE”, the current selected coefficient is assigned a magnitude of 2k + p. The variable p has a value of zero if the parity flag was decoded as “FALSE”, and p has a value of one if the parity flag was decoded as “TRUE”. Accordingly, up to four flags can be decoded at step 1460, relating to whether the current selected coefficient has a magnitude greater than 3, 5, 7 and 9 (for k values of 1, 2, 3 and 4 respectively). If the decoded greater than 2k + 1 flag has a value of “FALSE”, the step 1460 returns “NO” and the method 1400 proceeds to the final coefficient check step 1470.
[000191] At the final coefficient check step 1470, the video decoder 134 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1470 returns “YES” and the method 1400 proceeds to the decode remainder pass step 1490. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current selected sub-block, the step 1470 returns “NO” and the method 1400 proceeds to a select next coefficient step 1480.
[000192] At the select next coefficient step 1480, the next coefficient in the forward diagonal scan order 860 is selected. The method 1400 proceeds under control of the processor 205 from step 1480 to the use CABAC check step 1420.
[000193] At the decode remainder pass step 1490, any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder. The remaining magnitudes of the quantised coefficients of the current selected sub-block are bypass decoded. The quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was decoded by the CABAC coder and both the use CABAC check 1420 and the use CABAC check 1450 were passed (returned “YES”), and the greater than nine flag was decoded as “TRUE”, then a remaining magnitude r[n] is decoded from the bitstream 133, where n is the scan position of the quantised coefficient. The absolute magnitude x[n] of the quantised coefficient is determined as x[n] = 10 + p[n] + r[n], where p[n] has a value of zero if the parity flag was decoded as “FALSE”, and p[n] has a value of one if the parity flag was decoded as “TRUE”. If a quantised coefficient was decoded by the CABAC coder and both the use CABAC check 1420 and the use CABAC check 1450 were passed (returned “YES”), but the greater than nine flag was not decoded, or was decoded as “FALSE”, then there is no remaining magnitude that needs to be decoded from the bitstream 133.
[000194] If a quantised coefficient was decoded by the CABAC coder and the use CABAC check 1420 was passed (returned “YES”) but the use CABAC check 1450 was not passed (returned “NO”), and the greater than one flag was decoded as “TRUE”, a remaining magnitude r[n] is bypass decoded from the bitstream 133. The absolute magnitude is determined as x[n] = 2 + p[n] + r[n]. If a quantised coefficient was decoded by the CABAC coder and the use CABAC check 1420 was passed (returned “YES”) but the use CABAC check 1450 was not passed (returned “NO”), but the greater than one flag was not decoded or was decoded as “FALSE”, there is no remaining magnitude that needs to be decoded from the bitstream 133.
[000195] If a quantised coefficient was not decoded by the CABAC coder (the use CABAC check 1420 returned “NO”), then the absolute magnitude x[n] and the sign bit for the quantised coefficient are bypass decoded from the bitstream 133.
[000196] For each of the quantised coefficients of the current selected sub-block, the value of the quantised coefficient is set to -x[n] if the associated sign bit has a value of one. The value of the quantised coefficient is set to x[n] if the associated sign bit has a value of zero. The method 1400 terminates upon execution of step 1490.
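Combining the cases of steps 1430 to 1490, the magnitude reconstruction of the method 1400 can be sketched as below. The flag values are assumed to have been decoded or inferred already; gtk_flags is an illustrative list of the greater than 2k + 1 flags decoded at step 1460, and is None when the use CABAC check 1450 returned “NO”.

```python
def tsrc_reconstruct_magnitude(sig, gt1, parity, gtk_flags, remainder, bypass_magnitude=None):
    """Sketch of magnitude reconstruction for the TSRC method 1400."""
    if bypass_magnitude is not None:   # use CABAC check 1420 returned "NO"
        return bypass_magnitude
    if not sig:
        return 0
    if not gt1:
        return 1
    p = 1 if parity else 0
    if gtk_flags is None:              # use CABAC check 1450 returned "NO"
        return 2 + p + remainder       # x[n] = 2 + p[n] + r[n]
    for k, flag in enumerate(gtk_flags, start=1):
        if not flag:
            return 2 * k + p           # first FALSE greater than 2k + 1 flag
    return 10 + p + remainder          # greater than nine flag was TRUE

print(tsrc_reconstruct_magnitude(True, True, False, [True, True, False], None))  # prints 6
```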
[000197] Methods 1100 and 1200 describe a regular residual coding (RRC) process which is used when transform skip mode has not been selected for the transform block. Methods 1300 and 1400 describe a transform skip residual coding (TSRC) process which is used when transform skip mode has been selected for the transform block. Having the TSRC process differ from the RRC process can be advantageous because quantised coefficients produced in a transform skip TB have different statistical properties to quantised coefficients produced in a non-transform skip TB. A different residual coding process is therefore needed to exploit the statistical properties of quantised coefficients produced in a transform skip TB.
[000198] For example, when a transform is applied the resulting coefficients represent the characteristics of the residual signal in the frequency domain, and the coefficients at or near the DC frequency (the top-left corner of the TB) typically have the greatest magnitude.
Coefficients corresponding to higher frequencies will typically have relatively small or zero magnitude. Signalling a last significant position, rather than coding many zero-valued high frequency coefficients individually, is therefore efficient. In contrast, if a transform is skipped, the resulting coefficients are representative of the residual signal in the spatial domain. The magnitudes of spatial residual coefficients typically do not depend on each residual coefficient’s location within the transform block, so there is no benefit in signalling a last significant position.
[000199] Although methods 1300 and 1400 describe a working TSRC process, the number of syntax elements per quantised coefficient coded by the CABAC coder is eight, compared to four syntax elements per quantised coefficient coded by the CABAC coder for the RRC process of methods 1100 and 1200. Additionally, the remaining CABAC bin budget is potentially checked twice per quantised coefficient for the TSRC process of methods 1300 and 1400, compared with just one check per quantised coefficient for the RRC process of methods 1100 and 1200. It is desirable for the RRC and TSRC processes to be similar in complexity for hardware implementations, to avoid one process being a bottleneck, that is causing overall delay, for the overall coding process. Requiring a different residual coding process with higher complexity can be disadvantageous in terms of hardware implementation.
[000200] In another arrangement of the encode sub-block step 950, if the video encoder 114 selected transform skip mode, the step 950 invokes a method 1500 described below in relation to Fig. 15. In an associated arrangement of the decode sub-block step 1050, if the video decoder 134 determined that transform skip mode is used, the step 1050 invokes a method 1600 described below in relation to Fig. 16. Each of the methods 1500 and 1600 relates to a TSRC implementation that has similar complexity to the RRC implementation of the methods 1100 and 1200.
[000201] Fig. 15 shows the method 1500 for encoding the quantised coefficients (336) of the current selected sub-block to the bitstream 133. The method 1500 can be implemented at step 950 if the sub-block belongs to a transform block for which transform skip mode has been selected. The method 1500 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1500 may be performed by the video encoder 114 under execution of the processor 205. As such, the method 1500 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1500 begins at a select first coefficient step 1510.
[000202] At the select first coefficient step 1510, a current selected coefficient is set to the top-left coefficient of the current sub-block of the transform block. The method 1500 proceeds under control of the processor 205 from step 1510 to a use CABAC check step 1520.
[000203] At the use CABAC check step 1520, the video encoder 114 checks whether the remaining CABAC bin budget is greater than or equal to a threshold of four bins, similarly to step 1320. If the remaining budget is greater than or equal to four, the step 1520 returns “YES” and the method 1500 proceeds to a significant check step 1530. Otherwise, if the remaining CABAC bin budget is less than four, the method 1500 proceeds to an encode remainder pass step 1590.
[000204] At the significant check step 1530, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than zero and sets a significance flag for the current selected coefficient. The step 1530 operates as follows:
(a) If the coded sub-block flag associated with the current selected sub-block was set to “TRUE” and encoded to the bitstream 133, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1570, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, then the significance flag is set to “TRUE” but not encoded to the bitstream 133.
(b) If the current coefficient has a magnitude greater than zero, then the significance flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133.
(c) Otherwise, if the current coefficient has a magnitude of zero, the significance flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133.
[000205] Whenever a flag is encoded by the CABAC coder to the bitstream 133, the remaining CABAC bin budget is also reduced by one. If the significance flag is set to “TRUE”, the method 1500 proceeds to an encode sign flag step 1540. If the significance flag is set to “FALSE” at step 1530, the method 1500 proceeds to the final coefficient check 1570.
[000206] At the encode sign flag step 1540, the video encoder 114 encodes a sign bit of the current selected coefficient using the CABAC coder to the bitstream 133. The sign bit has a value of zero if the value of the current selected coefficient is positive. The sign bit has a value of one if the value of the current selected coefficient is negative. The method 1500 proceeds under control of the processor 205 from step 1540 to a greater than one check step 1550.
[000207] At the greater than one check step 1550, the video encoder 114 checks whether the current selected coefficient has a magnitude greater than one. If the current coefficient has a magnitude greater than one, then a greater than one flag is set to “TRUE” and encoded using the CABAC coder to the bitstream 133. The method 1500 proceeds to an encode parity flag step 1560 if step 1550 returns “TRUE”. Otherwise, if the current coefficient has a magnitude of one, the greater than one flag is set to “FALSE” and encoded using the CABAC coder to the bitstream 133. The method 1500 proceeds to the final coefficient check 1570 if execution of step 1550 returns “FALSE”.
[000208] At the encode parity flag step 1560, the video encoder 114 sets a parity flag for the current selected coefficient. The parity flag is set to “FALSE” if the current coefficient has an even magnitude. The parity flag is set to “TRUE” if the current coefficient has an odd magnitude. The parity flag is encoded using the CABAC coder to the bitstream 133. The method 1500 proceeds under control of the processor 205 from step 1560 to the final coefficient check 1570.
[000209] At the final coefficient check 1570, the video encoder 114 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1570 returns “YES” and the method 1500 proceeds to the encode remainder pass step 1590. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current sub-block, the step 1570 returns “NO” and the method 1500 proceeds to a select next coefficient step 1580.
[000210] At the select next coefficient step 1580, the next coefficient in the forward diagonal scan order 860 is selected. The method 1500 proceeds under control of the processor 205 from step 1580 to the use CABAC check step 1520.
[000211] At the encode remainder pass step 1590, any remaining magnitudes of the quantised coefficients of the current selected sub-block are binarised and bypass coded to the bitstream 133. The quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was encoded by the CABAC coder (the use CABAC check 1520 was passed by returning “YES”), the quantised coefficient at scan position n has a remaining magnitude r[n] if the greater than one flag is “TRUE”. The remaining magnitude is calculated as r[n] = x[n] - 2, where x[n] is the absolute magnitude of the quantised coefficient at scan position n. r[n] is binarised as described further below, and bypass coded to the bitstream 133.
[000212] If a quantised coefficient was encoded by the CABAC coder, but the greater than one flag was not encoded, or has a value of “FALSE”, there is no remaining magnitude that needs to be encoded to the bitstream 133. If a quantised coefficient was not encoded by the CABAC coder (the use CABAC check 1520 returned “NO” so was not passed), then the absolute magnitude x[n] is binarised in a process described further below, and then x[n] and the sign bit for the quantised coefficient are bypass coded to the bitstream 133. The method 1500 terminates upon execution of step 1590.
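A minimal sketch of the per-coefficient behaviour of the method 1500 is given below, highlighting that at most four flags (significance, sign, greater than one and parity) are CABAC coded per coefficient, mirroring the RRC structure. The inference case of step 1530, context selection and the remainder binarisation are omitted, and the helper name cabac_encode is an assumption of the sketch.

```python
def encode_tsrc_simplified(value, budget, cabac_encode):
    """Sketch of steps 1520 to 1560 of the method 1500 for one quantised coefficient.

    value is the signed quantised coefficient. Returns the updated budget,
    whether the coefficient was CABAC coded, and any remaining magnitude to be
    bypass coded in the encode remainder pass 1590.
    """
    x = abs(value)
    if budget < 4:                    # use CABAC check 1520
        return budget, False, x       # x[n] and the sign bit are bypass coded later

    cabac_encode(x > 0)               # significance flag (step 1530)
    budget -= 1
    if x == 0:
        return budget, True, None
    cabac_encode(value < 0)           # sign flag (step 1540)
    cabac_encode(x > 1)               # greater than one flag (step 1550)
    budget -= 2
    if x == 1:
        return budget, True, None
    cabac_encode(x % 2 == 1)          # parity flag (step 1560)
    budget -= 1
    return budget, True, x - 2        # r[n] = x[n] - 2, bypass coded at step 1590
```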
[000213] Fig. 16 shows the method 1600 for decoding the quantised coefficients (424) of the current selected sub-block from the bitstream 133. The method 1600 can be implemented at the step 1050 if the sub-block belongs to a transform block for which transform skip mode has been selected. The method 1600 may be embodied by apparatus such as a configured FPGA, an ASIC, or an ASSP. Additionally, the method 1600 may be performed by the video decoder 134 under execution of the processor 205. As such, the method 1600 may be implemented as modules of the software 233 stored on computer-readable storage medium and/or in the memory 206. The method 1600 implements decoding complementing the encoding of the method 1500. The method 1600 begins at a select first coefficient step 1610.
[000214] At the select first coefficient step 1610, a current selected coefficient is set to the top-left coefficient of the current sub-block. The method 1600 proceeds under execution of the processor 205 from step 1610 to a use CABAC check 1620.
[000215] At the use CABAC check 1620, the video decoder 134 checks whether the remaining CABAC bin budget satisfies a threshold. The threshold is whether the remaining bin budget is greater than or equal to four. If the remaining budget is greater than or equal to four, the step 1620 returns “YES” and the method 1600 proceeds to a significant check step 1630.
Otherwise, if the remaining budget is less than four, the step 1620 returns “NO” and the method 1600 proceeds to a decode remainder pass 1690.
[000216] At the significant check step 1630, the video decoder 134 determines a significance flag indicating whether the current selected coefficient has a magnitude greater than zero. If the coded sub-block flag associated with the current selected sub-block was decoded as “TRUE” from the bitstream 133 and not inferred, and the current selected coefficient is the final coefficient as described below in a final coefficient check 1670, and all significance flags for previous coefficients in the current selected sub-block were “FALSE”, the significance flag is inferred to be “TRUE”. Otherwise, the significance flag is decoded from the bitstream 133 by the CABAC coder. Whenever a flag is decoded by the CABAC coder from the bitstream 133, the remaining CABAC bin budget is reduced by one. If the significance flag is inferred or decoded as “TRUE”, the method 1600 proceeds to a decode sign flag step 1640. Otherwise if the significance flag is decoded as “FALSE”, the current selected coefficient is assigned a value of zero and the method 1600 proceeds to the final coefficient check step 1670.
[000217] At the decode sign flag step 1640, the video decoder 134 decodes a sign bit of the current selected coefficient using the CABAC coder from the bitstream 133. The method 1600 proceeds under control of the processor 205 from step 1640 to a greater than one check step 1650.
[000218] At the greater than one check step 1650, the video decoder 134 decodes a greater than one flag using the CABAC coder from the bitstream 133. If the decoded greater than one flag has a value of “TRUE”, the method 1600 proceeds to a decode parity flag step 1660. Otherwise if the decoded greater than one flag has a value of “FALSE”, the current selected coefficient is assigned a magnitude of one and the method 1600 proceeds to the final coefficient check step 1670.
[000219] At the decode parity flag step 1660, the video decoder 134 decodes a parity flag using the CABAC coder from the bitstream 133. The method 1600 proceeds under control of the processor 205 from step 1660 to the final coefficient check step 1670.
[000220] At the final coefficient check step 1670, the video decoder 134 checks whether the current selected coefficient is the bottom-right coefficient of the current selected sub-block. If the current selected coefficient is the bottom-right coefficient of the current selected sub-block, the step 1670 returns “YES” and the method 1600 proceeds to the decode remainder pass step 1690. Otherwise, if the current selected coefficient is not the bottom-right coefficient of the current selected sub-block, the step 1670 returns “NO” and the method 1600 proceeds to a select next coefficient step 1680.
[000221] At the select next coefficient step 1680, the next coefficient in the forward diagonal scan order 860 is selected. The method 1600 proceeds under control of the processor 205 from step 1680 to the use CABAC check step 1620.
[000222] At the decode remainder pass step 1690, any remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded from the bitstream 133 without using the CABAC coder. The remaining magnitudes of the quantised coefficients of the current selected sub-block are decoded using bypass decoding. The quantised coefficients are processed in the forward diagonal scan order 860. If a quantised coefficient was decoded by the CABAC coder (the use CABAC check 1620 was passed or returned “YES”), and the greater than one flag was decoded and set to “TRUE”, a remaining magnitude r[n] is bypass decoded from the bitstream 133, where n is the scan position of the quantised coefficient. The absolute magnitude x[n] of the quantised coefficient is determined as x[n] = 2 + p[n] + r[n], where p[n] has a value of zero if the parity flag was decoded as “FALSE”, and p[n] has a value of one if the parity flag was decoded as “TRUE”. If a quantised coefficient was decoded by the CABAC coder, but the greater than one flag was not decoded, or has a value of “FALSE”, then there is no remaining magnitude that needs to be decoded from the bitstream 133. If a quantised coefficient was not decoded by the CABAC coder (the use CABAC check 1620 was not passed/returned “NO”), the absolute magnitude x[n] and the sign bit for the quantised coefficient are bypass decoded from the bitstream 133.
[000223] For each of the quantised coefficients of the current selected sub-block, the value of the quantised coefficient is set to -x[n] if the associated sign bit has a value of one. The value of the quantised coefficient is set to x[n] if the associated sign bit has a value of zero. The method 1600 terminates upon execution of step 1690.
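For completeness, the corresponding magnitude reconstruction of the method 1600 can be sketched as follows; the flag values are assumed to have been decoded or inferred as described above, and the function name is illustrative.

```python
def tsrc_simplified_reconstruct(sig, gt1, parity, remainder, bypass_magnitude=None):
    """Sketch of magnitude reconstruction for the simplified TSRC method 1600."""
    if bypass_magnitude is not None:   # use CABAC check 1620 returned "NO"
        return bypass_magnitude
    if not sig:
        return 0
    if not gt1:
        return 1
    p = 1 if parity else 0
    return 2 + p + remainder           # x[n] = 2 + p[n] + r[n]
```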
[000224] The remaining magnitude r[n] is binarised depending on an associated Rice parameter R. A maximum Rice code value cmax is determined as cmax = 6 * 2^R. If r[n] is less than cmax, then r[n] is entirely binarised as a Rice code with Rice parameter R. Table 1 shows an example Rice binarisation when R = 0. Table 2 shows an example Rice binarisation when R = 1.
In each table, the bin string comprises a unary prefix of ones terminated by a zero, followed by R suffix bits.

    r[n]   Bin string
    0      0
    1      10
    2      110
    3      1110
    4      11110
    5      111110

Table 1: Rice code binarisation for R = 0

    r[n]   Bin string
    0      00
    1      01
    2      100
    3      101
    4      1100
    5      1101
    6      11100
    7      11101
    8      111100
    9      111101
    10     1111100
    11     1111101

Table 2: Rice code binarisation for R = 1
[000225] If r[n] is greater than or equal to cmax, then r[n] is binarised as a concatenation of a prefix code and a suffix code. The prefix code is a bit string of length six with all bits equal to one. The suffix code is derived by binarising r[n] - cmax with an exponential Golomb order-k code, with k set equal to R + 1. The overall binarisation for r[n] may be referred to as a Rice-EG code.
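A compact sketch of the Rice-EG binarisation described above is given below in Python, purely as an illustration. The function name is hypothetical, and the exponential Golomb construction shown is one common escape-style formulation assumed for the sketch rather than quoted from the described arrangement.

    def rice_eg_binarise(value, R):
        # Illustrative Rice-EG binarisation of a non-negative remaining magnitude.
        cmax = 6 << R                        # cmax = 6 * 2^R
        bins = []
        if value < cmax:
            q = value >> R                   # Rice prefix: q ones then a terminating zero
            bins += [1] * q + [0]
            for i in range(R - 1, -1, -1):   # R-bit Rice suffix
                bins.append((value >> i) & 1)
            return bins
        bins += [1] * 6                      # prefix of six ones signals the escape
        v, k = value - cmax, R + 1           # suffix: exponential Golomb of order k = R + 1
        while v >= (1 << k):                 # escape-style exp-Golomb construction (assumed)
            bins.append(1)
            v -= 1 << k
            k += 1
        bins.append(0)
        for i in range(k - 1, -1, -1):
            bins.append((v >> i) & 1)
        return bins

For example, rice_eg_binarise(3, 0) yields the bin string 1110, matching Table 1, and rice_eg_binarise(5, 1) yields 1101, matching Table 2.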
[000226] When a quantised coefficient is not decoded by the CABAC coder, the absolute magnitude x[n] is bypass decoded from the bitstream 133. x[n] is binarised by the same Rice-EG code described above.
[000227] In one arrangement, the remaining magnitude r[n] and absolute magnitude x[n] are binarised using a fixed Rice parameter of R = 1. In another arrangement, the remaining magnitude r[n] and absolute magnitude x[n] are binarised using a fixed Rice parameter of zero, that is R = 0. In a preferred arrangement, steps 1590 and 1690 use a Rice parameter of R = 0 for binarisation prior to bypass encoding and decoding respectively. Tests conducted have found that using the methods 1500 and 1600 in conjunction with a Rice parameter of R = 0 can provide improved coding gain compared to using a structure similar to the method 1400 for TSRC (that is, using RRC methods even when the transform is skipped) with a traditional Rice parameter of R = 1. Tests conducted have indicated that a Rice parameter of R = 0 more efficiently encodes r[n] if the remaining magnitude syntax element frequently takes the value of zero.
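The benefit of R = 0 for frequently-zero remainders can be made concrete with a small worked comparison. The snippet below evaluates only the Rice-code length for values below cmax, namely (r >> R) + 1 + R bins, and is an illustration rather than part of the described methods.

    def rice_code_length(r, R):
        # Bins used by a Rice code for r < cmax = 6 * 2^R: unary prefix,
        # terminating zero, then R suffix bits.
        return (r >> R) + 1 + R

    for r in (0, 1, 2, 3):
        print(r, rice_code_length(r, 0), rice_code_length(r, 1))
    # A remainder of zero costs 1 bin with R = 0 but 2 bins with R = 1, so a
    # remainder that is frequently zero favours R = 0; larger remainders
    # (r >= 3) begin to favour R = 1.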
[000228] The methods 900 to 1600 encode or decode quantised coefficients 336 or 424 respectively of a selected sub-block. The quantised coefficients may also be referred to generally as quantised transform coefficients, quantised residual coefficients, or residual coefficients.
[000229] Operation of the method 1600 allows a transform-skipped residual coefficient of a transform block of a video bitstream to be decoded without requiring up to 8 bins to decode the magnitude of the residual coefficient in full. Steps 1630 to 1660 operate so that only a significance flag, a sign flag, a greater than one flag and a parity flag are decoded using a CABAC decoder (if the use CABAC check 1620 returns “YES”). The decoded flags can represent at least a portion of the magnitude of the residual coefficient or the magnitude of the residual coefficient in full. Any remaining portion of the residual coefficient can be decoded at step 1690. In a preferred implementation, the binarising and decoding at step 1690 using Rice-EG is implemented using a fixed Rice parameter of 0 (zero). Implementations in this regard reduce the complexity required for TSRC in terms of bin budgeting compared to RRC. Further, the number of steps required to implement TSRC is reduced as a second CABAC budget check (such as 1450) is avoided. Rather, the method 1600 operates using a single CABAC check without adversely affecting coding gain compared to applying RRC methods directly.
[000230] The method 1600 also operates to decode the magnitude of the residual coefficient in full using binarisation and bypass decoding, potentially with a Rice parameter of 0, if the CABAC budget is determined to have been exhausted at step 1620.
INDUSTRIAL APPLICABILITY
[000231] The arrangements described are applicable to the computer and data processing industries and particularly to digital signal processing for the encoding and decoding of signals such as video and image signals, achieving high compression efficiency.
[000232] The methods 1500 and 1600 allow the complexity of RRC and TSRC to be similar, thereby reducing the complexity of hardware implementation. Use of a Rice parameter R = 0 has also been found to improve coding gain in some instances, particularly in relation to bypass coding any remainder resulting from operation of the methods 1500 and 1600 (steps 1590 and 1690 respectively).
[000233] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

Claims

1. A method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
2. The method according to claim 1, further comprising determining that a CABAC coding budget for the transform block has been exhausted, and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
3. A method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining that a CABAC coding budget for the transform block has been exhausted; and decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
4. A method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: upon selecting the residual coefficient from the transform block, determining whether the CABAC coding budget is exhausted; if the CABAC coding budget is not exhausted, decoding the residual coefficient in full by: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0; and if the CABAC coding budget is exhausted, decoding the magnitude of the residual coefficient in full using Rice-EG decoding with a Rice parameter of 0.
5. A non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
6. A system, comprising: a memory; and a processor, wherein the processor is configured to execute code stored on the memory for implementing a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
7. A video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine at least a portion of a magnitude of the residual coefficient by decoding only a significance flag, a sign flag, a greater than one flag and a parity flag using a CABAC decoder; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
8. A method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
9. A non-transitory computer readable medium having a computer program stored thereon to implement a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
10. A system, comprising: a memory; and a processor, wherein the processor is configured to execute code stored on the memory for implementing a method of decoding a transform-skipped residual coefficient of a transform block of a video bitstream, the method comprising: determining significance of the residual coefficient by decoding or inferring a significance flag; determining a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decoding any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
11. A video decoder, configured to: receive a transform-skipped residual coefficient of a transform block of a video bitstream; determine significance of the residual coefficient by decoding or inferring a significance flag; determine a portion of a magnitude of the residual coefficient by further determining a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag; and decode any remaining portion of the magnitude of the residual coefficient using Rice-EG decoding with a Rice parameter of 0.
12. A method of encoding a transform-skipped residual coefficient of a transform block to a video bitstream, the method comprising: encoding a significance flag indicating whether the residual coefficient has a magnitude greater than zero to the bitstream; encoding a portion of a magnitude of the residual coefficient by further encoding a sign flag, a greater than one flag, a parity flag, a greater than three flag, a greater than five flag, a greater than seven flag, and a greater than nine flag to the bitstream; and encoding any remaining portion of the magnitude of the residual coefficient to the bitstream using Rice-EG encoding with a Rice parameter of 0.
PCT/AU2020/051233 2019-12-23 2020-11-13 Method, apparatus and system for encoding and decoding a block of video samples WO2021127723A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019284053A AU2019284053A1 (en) 2019-12-23 2019-12-23 Method, apparatus and system for encoding and decoding a block of video samples
AU2019284053 2019-12-23

Publications (1)

Publication Number Publication Date
WO2021127723A1 true WO2021127723A1 (en) 2021-07-01

Family

ID=76572831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2020/051233 WO2021127723A1 (en) 2019-12-23 2020-11-13 Method, apparatus and system for encoding and decoding a block of video samples

Country Status (3)

Country Link
AU (1) AU2019284053A1 (en)
TW (1) TW202126050A (en)
WO (1) WO2021127723A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023023608A3 (en) * 2021-08-18 2023-04-13 Innopeak Technology, Inc. History-based rice parameter derivations for video coding
WO2023132993A1 (en) * 2022-01-10 2023-07-13 Innopeak Technology, Inc. Signaling general constraints information for video coding

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160373788A1 (en) * 2013-07-09 2016-12-22 Sony Corporation Data encoding and decoding
US20150016537A1 (en) * 2013-07-12 2015-01-15 Qualcomm Incorporated Rice parameter initialization for coefficient level coding in video coding process
US20160353113A1 (en) * 2015-05-29 2016-12-01 Qualcomm Incorporated Coding data using an enhanced context-adaptive binary arithmetic coding (cabac) design
US20170064336A1 (en) * 2015-09-01 2017-03-02 Qualcomm Incorporated Coefficient level coding in video coding
WO2020060867A1 (en) * 2018-09-21 2020-03-26 Interdigital Vc Holdings, Inc. Scalar quantizer decision scheme for dependent scalar quantization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GARY SULLIVAN; OHM JENS-RAINER: "Meeting Report of the 15th Meeting of the Joint Video Experts Team (JVET), Gothenburg, SE, 3- 12 July 2019", JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 15TH MEETING; DOCUMENT: JVET-0 NOTES DE, 3 July 2019 (2019-07-03), CH, pages 1 - 447, XP009529684 *
H. SCHWARZ (FRAUNHOFER), T. NGUYEN (FRAUNHOFER), D. MARPE (FRAUNHOFER), T. WIEGAND (FRAUNHOFER HHI), M. KARCZEWICZ (QUALCOMM), M. : "CE7: Transform coefficient coding with reduced number of regular-coded bins (tests 7.1.3a, 7.1.3b)", 12. JVET MEETING; 20181003 - 20181012; MACAO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 3 October 2018 (2018-10-03), Macao, CN, pages 1 - 19, XP030194467 *

Also Published As

Publication number Publication date
TW202126050A (en) 2021-07-01
AU2019284053A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
US11910028B2 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
AU2020201753B2 (en) Method, apparatus and system for encoding and decoding a block of video samples
US20230037302A1 (en) Method, apparatus and system for encoding and decoding a coding tree unit
US20220394311A1 (en) Method apparatus and system for encoding and decoding a coding tree unit
US11949857B2 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
AU2021273633B2 (en) Method, apparatus and system for encoding and decoding a block of video samples
WO2021127723A1 (en) Method, apparatus and system for encoding and decoding a block of video samples
AU2021254642A1 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
WO2020033992A1 (en) Method, apparatus and system for encoding and decoding a transformed block of video samples
US20240146912A1 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
US20240146914A1 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
US20240146913A1 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
US20240146915A1 (en) Method, apparatus and system for encoding and decoding a tree of blocks of video samples
AU2020202057A1 (en) Method, apparatus and system for encoding and decoding a block of video samples
AU2020202285A1 (en) Method, apparatus and system for encoding and decoding a block of video samples
AU2019203981A1 (en) Method, apparatus and system for encoding and decoding a block of video samples
AU2019232802A1 (en) Method, apparatus and system for encoding and decoding an Image frame from a bistream

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20906976

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20906976

Country of ref document: EP

Kind code of ref document: A1