WO2023160470A1 - Encoding and decoding method and apparatus - Google Patents

Encoding and decoding method and apparatus

Info

Publication number
WO2023160470A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
decoding
unit
data packets
block
Prior art date
Application number
PCT/CN2023/076726
Other languages
English (en)
French (fr)
Inventor
罗忆
杨海涛
冯俊凯
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023160470A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43 Hardware specially adapted for motion estimation or compensation
    • H04N19/433 Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Definitions

  • the embodiments of the present application relate to the field of media technologies, and in particular, to encoding and decoding methods and devices.
  • a media device uses a display interface when transmitting media content.
  • the media content may be compressed through an encoding operation to reduce the amount of bandwidth during the transmission of the media content.
  • the receiving end needs to decode the compressed media content through a decoding operation to restore the media content.
  • the time required for the decoder to decode the compressed media content and restore the media content depends on the decoding performance of the decoder. How to improve the decoding performance of the decoder is therefore a problem that urgently needs to be solved by those skilled in the art.
  • Embodiments of the present application provide a codec method and device, which can improve the decoding performance of a decoder.
  • the embodiment of the present application adopts the following technical solutions:
  • the embodiment of the present application provides a decoding method, the method includes: first acquiring a bit stream. Then a plurality of data packets are obtained according to the bit stream. Then, according to the identifiers of the multiple data packets, the multiple data packets are sent to multiple entropy decoders for decoding to obtain multiple syntax elements. Finally, the media content is restored according to the plurality of syntax elements.
  • the decoding method provided by the embodiment of the present application does not use a single entropy decoder for decoding during the decoding process, but sends data packets to multiple entropy decoders for parallel decoding according to the identifier of the data packets.
  • using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • each data packet carries an identifier
  • the decoder can use the identifier to quickly determine the entropy decoder corresponding to the data packet, so as to achieve the effect of parallel decoding by multiple entropy decoders with relatively low complexity.
  • the foregoing media content may include at least one of an image, an image slice, or a video.
  • the sending of the multiple data packets to multiple entropy decoders for decoding according to the identifiers of the multiple data packets to obtain multiple syntax elements may include: determining, according to the identifiers of the multiple data packets, the substream to which each of the multiple data packets belongs; sending each data packet to the decoding buffer of the substream to which it belongs; and sending the data packets in each decoding buffer to the entropy decoder corresponding to that buffer for decoding to obtain the multiple syntax elements.
  • a bit stream may be composed of data packets of multiple sub-streams.
  • each entropy decoder corresponds to one substream.
  • for each data packet in the bit stream, the substream to which the packet belongs can be determined from the packet's identifier, the packet can then be sent to the decoding buffer of the entropy decoder corresponding to that substream, and the entropy decoder decodes the packets in its buffer to obtain syntax elements. That is, every data packet obtained by segmenting the bit stream can be sent to its corresponding entropy decoder for decoding, realizing parallel decoding by multiple entropy decoders.
  • Using multiple entropy decoders to decode in parallel can increase the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • the above-mentioned decoding buffer may be a first-in-first-out (FIFO) buffer.
  • the data packet that enters the decoding buffer first leaves the buffer first, and therefore also enters the entropy decoder first for entropy decoding.
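  • As a non-normative illustration of the routing described above, the following Python sketch appends each packet body to the FIFO decoding buffer selected by its substream identifier; the function name, the (identifier, body) packet representation and the example values are assumptions for illustration, not part of this application.

    # Hedged sketch: route each data packet, by its identifier, to the FIFO
    # decoding buffer of its substream; each buffer feeds one entropy decoder.
    from collections import deque

    def route_packets(packets, num_substreams):
        buffers = [deque() for _ in range(num_substreams)]   # one FIFO decoding buffer per entropy decoder
        for substream_id, body in packets:
            buffers[substream_id].append(body)               # the identifier selects the buffer
        return buffers

    # Each entropy decoder then drains its own buffer in first-in-first-out order,
    # so the entropy decoders can decode different substreams in parallel.
    bufs = route_packets([(0, "body0"), (2, "body1"), (0, "body2")], num_substreams=3)
    print([list(b) for b in bufs])   # [['body0', 'body2'], [], ['body1']]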
  • the restoring of the media content according to the multiple syntax elements may include: performing inverse quantization on the multiple syntax elements to obtain multiple residuals, and performing prediction and reconstruction on the multiple residuals to restore the media content.
  • the media content can be obtained by performing inverse quantization, prediction, reconstruction and restoration on the multiple syntax elements.
  • dequantization and predictive reconstruction can be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • uniform inverse quantization may be used for inverse quantization.
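  • As a minimal sketch of these operations, the following Python fragment assumes uniform inverse quantization with a fixed step size and simple additive reconstruction (prediction plus residual); the application does not mandate these exact formulas, so this is illustrative only.

    def dequantize(levels, step):
        # uniform inverse quantization: residual = quantized level * quantization step
        return [lvl * step for lvl in levels]

    def reconstruct(residuals, predictions):
        # prediction/reconstruction: reconstructed sample = prediction + residual
        return [p + r for p, r in zip(predictions, residuals)]

    # Example: levels [-2, 0, 3] with step 4 give residuals [-8, 0, 12]
    print(reconstruct(dequantize([-2, 0, 3], 4), [128, 130, 125]))  # [120, 130, 137]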
  • the obtaining multiple data packets according to the bit stream may include: segmenting the bit stream into multiple data packets according to a preset segmentation length.
  • the bit stream may be divided into multiple data packets according to the division length of N bits.
  • N is a positive integer.
  • the data packet may include a data header and a data body.
  • the data header is used to store the identifier of the data packet.
  • a data packet with a length of N bits may include a data header with a length of M bits and a data body with a length of N-M bits.
  • M and N are agreed fixed values in the encoding and decoding process.
  • the above data packets may have the same length.
  • the above data packets may all have a length of N bits.
  • a data header with a length of M bits can be extracted, and the remaining data of N-M bits can be used as a data body.
  • the data in the data header can be parsed to obtain the identifier of the data packet (also referred to as a substream tag value).
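  • The fixed-length packet layout described above can be illustrated by the following Python sketch, which segments a bit string into N-bit packets and parses the M-bit header of each packet; the concrete values of N and M and the bit-string representation are assumptions for illustration only.

    N = 16   # assumed packet length in bits
    M = 2    # assumed header length in bits (agreed between encoder and decoder)

    def split_bitstream(bits):
        """Segment a bit string into N-bit packets and parse each packet's header."""
        packets = []
        for i in range(0, len(bits), N):
            packet = bits[i:i + N]
            tag = int(packet[:M], 2)     # substream tag value from the M-bit data header
            body = packet[M:]            # N-M bit data body
            packets.append((tag, body))
        return packets

    print(split_bitstream("01" + "1" * 14 + "10" + "0" * 14))
    # [(1, '11111111111111'), (2, '00000000000000')]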
  • the embodiment of the present application further provides an encoding method. The method includes: first obtaining multiple syntax elements according to media content; then sending the multiple syntax elements to an entropy encoder for encoding to obtain multiple substreams; and then interleaving the multiple substreams into a bit stream, where the bit stream includes multiple data packets, and each data packet includes an identifier indicating the substream to which it belongs.
  • since each data packet in the encoded bit stream contains an identifier indicating the substream to which the data packet belongs, the decoding end can send the data packets in the bit stream to multiple entropy decoders for parallel decoding according to the identifiers.
  • using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • the foregoing media content may include at least one of an image, an image slice, or a video.
  • the input image may be divided into image blocks, and multiple syntax elements are then obtained from the image blocks. The multiple syntax elements are sent to entropy encoders for encoding to obtain multiple substreams, and the multiple substreams are then interleaved into a bit stream.
  • the sending the multiple syntax elements to an entropy encoder for encoding to obtain multiple substreams may include: sending the multiple syntax elements to multiple entropy encoders for encoding to obtain multiple substreams.
  • the multiple syntax elements may be sent to three entropy encoders for encoding to obtain three substreams.
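  • As an illustration of partitioning syntax elements among several entropy encoders, the following Python sketch assigns each element to one of three substreams and applies a trivial stand-in "entropy code" (a fixed 8-bit code per element); the assignment rule and the stand-in code are assumptions and do not represent the entropy coding actually used.

    def encode_substreams(syntax_elements, element_to_substream, num_substreams=3):
        subsets = [[] for _ in range(num_substreams)]
        for elem in syntax_elements:
            subsets[element_to_substream(elem)].append(elem)   # each subset of syntax elements forms one substream
        # stand-in "entropy coder": fixed 8-bit code per element value
        return ["".join(format(v & 0xFF, "08b") for v in subset) for subset in subsets]

    # Example: route elements to substreams by value range (purely illustrative)
    subs = encode_substreams([3, 200, 17, 90], lambda v: 0 if v < 16 else (1 if v < 128 else 2))
    print([len(s) // 8 for s in subs])   # [1, 2, 1] elements per substream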
  • the interleaving of the multiple substreams into a bit stream may include: segmenting each of the multiple substreams into multiple data packets according to a preset segmentation length, and obtaining the bit stream from the multiple data packets.
  • substream interleaving may also be performed according to the following steps. Step 1: Select a coded substream buffer. Step 2: Record the size of the remaining data in the current coded substream buffer as S; if S is greater than or equal to N-M bits, take data with a length of N-M bits out of the current coded substream buffer. Step 3: If S is less than N-M bits: if the current image block is the last block of the input image or input image slice, take all the data out of the current coded substream buffer and append zeros after the data until the data length is N-M bits; if the current image block is not the last block, skip to step 6.
  • Step 4: Use the data obtained in step 2 or step 3 as the data body and add a data header; the length of the data header is M bits, and the content of the data header is the substream tag value corresponding to the current coded substream buffer, constructing the data packet shown in Figure 5.
  • Step 5: Enter the data packet into the bit stream.
  • Step 7: Input K data packets into the bit stream in sequence.
  • Step 8: Select the next coded substream buffer according to the preset order and return to step 2; if all coded substream buffers have been processed, end the substream interleaving operation.
  • each substream can be segmented to obtain multiple data packets through a preset segmentation length, and then a bit stream can be obtained according to the multiple data packets. Since each data packet in the encoded bit stream includes an identifier for indicating the substream to which the data packet belongs, the decoder can send the data packets in the bit stream to multiple entropy decoders for parallel decoding according to the identifier. Compared with using a single entropy decoder for decoding, using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
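  • The packet-construction loop described in the steps above can be sketched as follows in Python: take up to N-M bits from a coded substream buffer, pad with zeros at the end of the image or slice if fewer bits remain, prepend an M-bit header carrying the substream tag value, and append the resulting N-bit packet to the bit stream. The values of N and M, the bit-string buffer representation and the function name are illustrative assumptions.

    N = 16   # assumed packet length in bits
    M = 2    # assumed header length in bits

    def interleave(substream_buffers, is_last_block=True):
        """substream_buffers: one bit string of coded data per substream."""
        bitstream = ""
        for tag, data in enumerate(substream_buffers):      # preset order: substream 0, 1, 2, ...
            while data:
                if len(data) >= N - M:
                    body, data = data[:N - M], data[N - M:]
                elif is_last_block:
                    body, data = data.ljust(N - M, "0"), ""  # pad with zeros for the last block
                else:
                    break                                    # keep the remainder for later image blocks
                header = format(tag, "0{}b".format(M))       # M-bit header carries the substream tag value
                bitstream += header + body                   # the N-bit packet enters the bit stream
        return bitstream

    stream = interleave(["1" * 20, "0" * 5])
    print(len(stream) // N, "packets")   # 3 packets: two for substream 0, one for substream 1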
  • the above encoding buffer may be a FIFO buffer.
  • the data that enters the encoding buffer first will leave the buffer first, and at the same time enter the entropy encoder first for entropy encoding.
  • the specific method for obtaining the bit stream according to multiple data packets may be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • the data packets of the substreams may be written into the bit stream in substream order. For example, the data packets of the first substream are written into the bit stream first; after all data packets of the first substream have been written, the data packets of the second substream are written, and so on, until the data packets of all substreams have been written into the bit stream.
  • the data packets of the substreams may also be written into the bit stream in a round-robin (polling) order. If there are three substreams, one data packet of the first substream may be written first, then one data packet of the second substream, then one data packet of the third substream, and so on in turn until the data packets of all substreams have been written into the bit stream.
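  • The two packet-ordering policies mentioned above (substream order and round-robin polling) can be illustrated with the short Python sketch below; the packet labels are placeholders.

    def sequential_order(substream_packets):
        # write all packets of substream 0, then all packets of substream 1, and so on
        return [p for packets in substream_packets for p in packets]

    def round_robin_order(substream_packets):
        # take one packet from each substream in turn until every substream is exhausted
        ordered, queues = [], [list(p) for p in substream_packets]
        while any(queues):
            for q in queues:
                if q:
                    ordered.append(q.pop(0))
        return ordered

    subs = [["A0", "A1"], ["B0"], ["C0", "C1", "C2"]]
    print(sequential_order(subs))   # ['A0', 'A1', 'B0', 'C0', 'C1', 'C2']
    print(round_robin_order(subs))  # ['A0', 'B0', 'C0', 'A1', 'C1', 'C2']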
  • the obtaining of multiple syntax elements according to the media content may include: performing prediction on the media content to obtain multiple prediction data, and quantizing the multiple prediction data to obtain the multiple syntax elements.
  • the specific method of quantification and prediction may be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • uniform quantization can be used for quantization.
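  • A minimal sketch of uniform quantization with a fixed step size (rounding to the nearest level) is shown below; the application only states that uniform quantization can be used, so the exact formula is an assumption.

    def quantize(values, step):
        # uniform quantization: level = round(value / quantization step)
        return [round(v / step) for v in values]

    print(quantize([-9, 0, 13], 4))   # [-2, 0, 3]; inverse quantization with the same step gives [-8, 0, 12]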
  • multiple syntax elements can be obtained according to the media content, and then a bit stream can be obtained by encoding the multiple syntax elements. Since each data packet in the bit stream includes an identifier for indicating the substream to which the data packet belongs, the decoder can send the data packets in the bit stream to multiple entropy decoders for parallel decoding according to the identifier. Compared with using a single entropy decoder for decoding, using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • the data packet may include a data header and a data body.
  • the data header is used to store the identifier of the data packet.
  • a data packet with a length of N bits may include a data header with a length of M bits and a data body with a length of N-M bits.
  • M and N are agreed fixed values in the encoding and decoding process.
  • the above data packets may have the same length.
  • the above data packets may all have a length of N bits.
  • a data header with a length of M bits can be extracted, and the remaining data of N-M bits can be used as a data body.
  • the data in the data header can be parsed to obtain the identifier of the data packet (also referred to as a substream tag value).
  • the embodiment of the present application further provides a decoding device, which includes: an obtaining unit, a substream deinterleaving unit, a decoding unit, and a restoring unit.
  • the acquiring unit is used to acquire a bit stream.
  • the sub-stream deinterleaving unit is configured to obtain multiple data packets according to the bit stream.
  • the decoding unit is configured to send the multiple data packets to multiple entropy decoders for decoding according to the identifiers of the multiple data packets to obtain multiple syntax elements.
  • the restoration unit is configured to restore media content according to the plurality of syntax elements.
  • the decoding unit is specifically configured to: determine, according to the identifiers of the multiple data packets, the substream to which each of the multiple data packets belongs; send each data packet to the decoding buffer of the substream to which it belongs; and send the data packets in each decoding buffer to the entropy decoder corresponding to that buffer for decoding to obtain the multiple syntax elements.
  • the restoring unit is specifically configured to: perform inverse quantization on the multiple syntax elements to obtain multiple residuals; predict and reconstruct the multiple residuals to restore media content.
  • the substream deinterleaving unit is specifically configured to: segment the bit stream into multiple data packets according to a preset segmentation length.
  • the data packet may include a data header and a data body.
  • the data header is used to store the identifier of the data packet.
  • a data packet with a length of N bits may include a data header with a length of M bits and a data body with a length of N-M bits.
  • M and N are agreed fixed values in the encoding and decoding process.
  • the above data packets may have the same length.
  • the above data packets may all have a length of N bits.
  • a data header with a length of M bits can be extracted, and the remaining data of N-M bits can be used as a data body.
  • the data in the data header can be parsed to obtain the identifier of the data packet (also referred to as a substream tag value).
  • the embodiment of the present application further provides an encoding device, which includes: a syntax unit, an encoding unit, and a substream interleaving unit.
  • the syntax unit is configured to obtain a plurality of syntax elements according to media content;
  • the encoding unit is configured to send the plurality of syntax elements to an entropy encoder for encoding to obtain a plurality of substreams;
  • the substream interleaving unit is configured to interleave the multiple substreams into a bit stream, where the bit stream includes multiple data packets, and each data packet includes an identifier indicating the substream to which it belongs.
  • the substream interleaving unit is specifically configured to: segment each of the multiple substreams into multiple data packets according to a preset segmentation length, and obtain the bit stream from the multiple data packets.
  • the syntax unit is specifically configured to: predict the media content to obtain a plurality of prediction data; quantize the plurality of prediction data to obtain the plurality of syntax elements.
  • the data packet may include a data header and a data body.
  • the data header is used to store the identifier of the data packet.
  • a data packet with a length of N bits may include a data header with a length of M bits and a data body with a length of N-M bits.
  • M and N are agreed fixed values in the encoding and decoding process.
  • the above data packets may have the same length.
  • the above data packets may all have a length of N bits.
  • a data header with a length of M bits can be extracted, and the remaining data of N-M bits can be used as a data body.
  • the data in the data header can be parsed to obtain the identifier of the data packet (also referred to as a substream tag value).
  • the embodiment of the present application also provides a decoding device, which includes at least one processor; when the at least one processor executes program code or instructions, the method described in the above first aspect or any possible implementation thereof is realized.
  • the device may further include at least one memory, and the at least one memory is used to store the program code or instructions.
  • an embodiment of the present application further provides an encoding device, which includes at least one processor; when the at least one processor executes program code or instructions, the method described in the above second aspect or any possible implementation thereof is realized.
  • the device may further include at least one memory, and the at least one memory is used to store the program code or instructions.
  • the embodiment of the present application further provides a chip, including: an input interface, an output interface, and at least one processor.
  • the chip also includes a memory.
  • the at least one processor is used to execute the code in the memory, and when the at least one processor executes the code, the chip implements the method described in the above first aspect or any possible implementation thereof.
  • the aforementioned chip may also be an integrated circuit.
  • the embodiment of the present application further provides a computer-readable storage medium for storing a computer program, where the computer program is used to implement the method described in the above first aspect or any possible implementation thereof.
  • the embodiments of the present application further provide a computer program product containing instructions, which, when run on a computer, enable the computer to implement the method described in the above first aspect or any possible implementation thereof.
  • the encoding and decoding devices, computer storage medium, computer program product and chip provided in this embodiment are all used to execute the encoding and decoding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the encoding and decoding methods provided above, which are not repeated here.
  • Figure 1a is an exemplary block diagram of a decoding system provided by an embodiment of the present application.
  • FIG. 1b is an exemplary block diagram of a video decoding system provided by an embodiment of the present application.
  • FIG. 2 is an exemplary block diagram of a video encoder provided in an embodiment of the present application
  • FIG. 3 is an exemplary block diagram of a video decoder provided in an embodiment of the present application.
  • FIG. 4 is an exemplary schematic diagram of a candidate image block provided by an embodiment of the present application.
  • FIG. 5 is an exemplary block diagram of a video decoding device provided in an embodiment of the present application.
  • FIG. 6 is an exemplary block diagram of a device provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a coding framework provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an encoding method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a sub-stream interleaving provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another substream interleaving provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a decoding framework provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a decoding method provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an encoding device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a decoding device provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the terms "first" and "second" in the description of the embodiments of the present application and in the drawings are used to distinguish different objects, or to distinguish different treatments of the same object, rather than to describe a specific order of the objects.
  • Interface compression: when media devices transmit images and videos, the display interface is used to compress and decompress the image and video data passing through it; this is abbreviated as interface compression.
  • Bit stream: a binary stream generated by encoding media content (such as image content or video content).
  • Syntax elements: the data obtained after typical encoding operations such as prediction and transformation of the media content, and the main input of entropy encoding.
  • Entropy encoder: an encoder module that converts input syntax elements into a bit stream.
  • Entropy decoder: a decoder module that converts an input bit stream into syntax elements.
  • Substream: a code stream obtained by entropy coding a subset of the syntax elements.
  • Substream tag value: an identifier indicating the index of the substream to which the tagged data packet belongs.
  • Substream interleaving: the operation of combining multiple substreams into one bit stream, i.e., multiplexing.
  • Substream deinterleaving: the operation of splitting different substreams out of a bit stream, i.e., demultiplexing.
  • Data encoding and decoding includes two parts: data encoding and data decoding.
  • Data encoding is performed on the source side (or commonly referred to as the encoder side), and typically involves processing (eg, compressing) raw data to reduce the amount of data needed to represent that raw data (and thus more efficient storage and/or transmission).
  • Data decoding is performed on the destination side (or commonly referred to as the decoder side), and usually involves inverse processing relative to the encoder side to reconstruct the original data.
  • the "codec" of data involved in the embodiments of the present application should be understood as “encoding” or "decoding" of data.
  • the encoding part and the decoding part are also collectively referred to as codec (encoding and decoding, CODEC).
  • the original data can be reconstructed, i.e. the reconstructed original data has the same quality as the original data (assuming no transmission loss or other data loss during storage or transmission).
  • further compression is performed by quantization and the like to reduce the amount of data required to represent the original data, and the decoder side cannot completely reconstruct the original data; that is, the quality of the reconstructed data is lower or worse than that of the original data.
  • the embodiments of the present application may be applied to video data and other data that require compression/decompression.
  • the encoding of video data is referred to as video encoding for short.
  • for other types of data, reference may be made to the following description, which is not detailed in this embodiment of the present application. It should be noted that, compared with video coding, data such as audio data and integer data do not need to be divided into blocks during encoding; such data can be encoded directly.
  • Video coding generally refers to the processing of sequences of images that form a video or video sequence.
  • in the field of video coding, the terms "picture", "frame" and "image" may be used as synonyms.
  • Video coding standards belong to "lossy hybrid video codecs" (ie, combining spatial and temporal prediction in the pixel domain with 2D transform coding in the transform domain for applying quantization).
  • Each image in a video sequence is usually partitioned into a non-overlapping set of blocks, usually encoded at the block level.
  • encoders usually process, i.e. encode, video at the block (video block) level, e.g., generating a prediction block through spatial (intra) prediction and temporal (inter) prediction, subtracting the prediction block from the current block (the block currently being processed or to be processed) to obtain a residual block, and transforming and quantizing the residual block in the transform domain to reduce the amount of data to be transmitted (compression); the decoder side applies the inverse processing relative to the encoder to the encoded or compressed block to reconstruct the current block for representation.
  • the encoder needs to repeat the decoder's processing steps such that the encoder and decoder generate the same predicted (eg, intra and inter) and/or reconstructed pixels for processing, ie encoding, subsequent blocks.
  • the encoder 20 and the decoder 30 are described with reference to FIGS. 1 a to 3 .
  • FIG. 1a is an exemplary block diagram of a decoding system 10 provided by an embodiment of the present application, for example, a video decoding system 10 (or simply referred to as the decoding system 10 ) that can utilize the technology of the embodiment of the present application.
  • the video encoder 20 (or simply referred to as the encoder 20) and the video decoder 30 (or simply referred to as the decoder 30) in the video decoding system 10 represent devices that can be used to perform the techniques according to the various examples described in the embodiments of the present application.
  • the decoding system 10 includes a source device 12 for providing coded image data 21 such as coded images to a destination device 14 for decoding the coded image data 21 .
  • the source device 12 includes an encoder 20 , and optionally, an image source 16 , a preprocessor (or a preprocessing unit) 18 such as an image preprocessor, and a communication interface (or a communication unit) 22 .
  • Image source 16 may include or be any type of image capture device for capturing real-world images and the like, and/or any type of image generation device, such as a computer graphics processor, or any type of device for acquiring and/or providing real-world images or computer-generated images (e.g., screen content, virtual reality (VR) images, and/or any combination thereof, such as augmented reality (AR) images).
  • the image source may be any type of memory or storage that stores any of the above images.
  • the image (or image data) 17 may also be referred to as an original image (or original image data) 17 .
  • the preprocessor 18 is used to receive the original image data 17 and perform preprocessing on the original image data 17 to obtain a preprocessed image (or preprocessed image data) 19 .
  • preprocessing performed by preprocessor 18 may include cropping, color format conversion (eg, from RGB to YCbCr), color grading, or denoising. It can be understood that the preprocessing unit 18 can be an optional component.
  • a video encoder (or encoder) 20 is used to receive preprocessed image data 19 and provide encoded image data 21 (further described below with reference to FIG. 2 etc.).
  • the communication interface 22 in the source device 12 may be used to receive the encoded image data 21 and send the encoded image data 21 (or any other processed version) via the communication channel 13 to another device such as the destination device 14 or any other device for storage Or rebuild directly.
  • the destination device 14 includes a decoder 30 , and may also optionally include a communication interface (or communication unit) 28 , a post-processor (or post-processing unit) 32 and a display device 34 .
  • the communication interface 28 in the destination device 14 is used to receive the coded image data 21 (or any other processed version) directly from the source device 12 or from any other source device such as a storage device, for example, the storage device is a coded image data storage device, And the coded image data 21 is supplied to the decoder 30 .
  • the communication interface 22 and the communication interface 28 can be used to pass through a direct communication link between the source device 12 and the destination device 14, such as a direct wired or wireless connection, etc., or through any type of network, such as a wired network, a wireless network, or any other Combination, any type of private network and public network or any combination thereof, send or receive coded image data (or coded data) 21 .
  • the communication interface 22 can be used to encapsulate the encoded image data 21 into a suitable format such as a message, and/or use any type of transmission encoding or processing to process the encoded image data, so that it can be transmitted over a communication link or communication network on the transmission.
  • the communication interface 28 corresponds to the communication interface 22, eg, can be used to receive the transmission data and process the transmission data using any type of corresponding transmission decoding or processing and/or decapsulation to obtain the encoded image data 21 .
  • Both the communication interface 22 and the communication interface 28 can be configured as a one-way communication interface, as indicated by the arrow pointing from the source device 12 to the destination device 14 for the communication channel 13 in FIG. 1a, or as a two-way communication interface, and can be used to send and receive messages and the like, to establish a connection, and to acknowledge and exchange any other information related to the communication link and/or data transmission, such as the transmission of encoded image data.
  • Video decoder (or decoder) 30 is used to receive encoded image data 21 and provide decoded image data (or decoded image data) 31 (further described below with reference to FIG. 3 etc.).
  • the post-processor 32 is used to perform post-processing on decoded image data 31 (also referred to as reconstructed image data) such as a decoded image to obtain post-processed image data 33 such as a post-processed image.
  • Post-processing performed by post-processing unit 32 may include, for example, color format conversion (e.g., from YCbCr to RGB), color grading, cropping, or resampling, or any other processing for preparing the decoded image data 31 for display by the display device 34 or the like.
  • the display device 34 is used to receive the post-processed image data 33 to display the image to a user or viewer or the like.
  • Display device 34 may be or include any type of display for representing the reconstructed image, eg, an integrated or external display screen or display.
  • the display screen may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS) display, a digital light processor (DLP), or any other type of display.
  • the decoding system 10 also includes a training engine 25. The training engine 25 is used to train the encoder 20 (especially the entropy encoding unit 270 in the encoder 20) or the decoder 30 (especially the entropy decoding unit 304 in the decoder 30), so that entropy coding is performed on the image block to be coded using the estimated probability distribution.
  • for details about the training engine 25, refer to the following method examples.
  • FIG. 1a shows the source device 12 and the destination device 14 as independent devices.
  • device embodiments may also include both the source device 12 and the destination device 14, or the functions of both, that is, the source device 12 or its corresponding function and the destination device 14 or its corresponding function.
  • the source device 12 or its corresponding function and the destination device 14 or its corresponding function may be implemented using the same hardware and/or software, by separate hardware and/or software, or by any combination thereof.
  • FIG. 1b is an exemplary block diagram of a video decoding system 40 provided by an embodiment of the present application. The encoder 20 (such as the video encoder 20) or the decoder 30 (such as the video decoder 30), or both, can be implemented by processing circuits in the video decoding system 40 shown in FIG. 1b, such as one or more microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), discrete logic, hardware, dedicated video encoding processors, or any combination thereof.
  • FIG. 2 is an exemplary block diagram of a video encoder provided in an embodiment of the present application
  • FIG. 3 is an exemplary block diagram of a video decoder provided in an embodiment of the present application.
  • Encoder 20 may be implemented by processing circuitry 46 to include the various modules discussed with reference to encoder 20 of FIG. 2 and/or any other encoder system or subsystem described herein.
  • Decoder 30 may be implemented by processing circuitry 46 to include the various modules discussed with reference to decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
  • the processing circuitry 46 may be used to perform various operations discussed below.
  • the device can store the instructions of the software in a suitable non-transitory computer-readable storage medium, and use one or more processors to execute the instructions in hardware, thereby executing the techniques of the embodiments of this application.
  • One of the video encoder 20 and the video decoder 30 may be integrated in a single device as part of a combined codec (encoder/decoder, CODEC), as shown in FIG. 1b.
  • Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, cell phone, smartphone, tablet or tablet computer, camera, desktop computer, set-top box, television, display device, digital media player, video game console, video streaming device (such as a content service server or content distribution server), broadcast receiving device, broadcast transmitting device, or monitoring device, and may use no operating system or any type of operating system.
  • the source device 12 and the destination device 14 may also be devices in a cloud computing scenario, such as virtual machines in a cloud computing scenario.
  • source device 12 and destination device 14 may be equipped with components for wireless communication. Accordingly, source device 12 and destination device 14 may be wireless communication devices.
  • the source device 12 and the destination device 14 may install a virtual scene application (application, APP) such as a virtual reality (virtual reality, VR) application, an augmented reality (augmented reality, AR) application or a mixed reality (mixed reality, MR) application, and A VR application, an AR application or an MR application may be run based on user operations (such as clicking, touching, sliding, shaking, voice control, etc.).
  • the source device 12 and the destination device 14 can collect images/videos of any objects in the environment through cameras and/or sensors, and then display virtual objects on the display device according to the collected images/videos.
  • the virtual objects can be virtual objects in a VR scene, an AR scene or an MR scene (that is, objects in the virtual environment).
  • the virtual scene applications in the source device 12 and the destination device 14 may be applications built into the source device 12 and the destination device 14, or may be applications provided by a third-party service provider and installed by the user; this is not specifically limited.
  • source device 12 and destination device 14 may install real-time video transmission applications, such as live broadcast applications.
  • the source device 12 and the destination device 14 can collect images/videos through cameras, and then display the collected images/videos on a display device.
  • the video decoding system 10 shown in FIG. 1a is only exemplary, and the techniques provided by the embodiments of the present application can be applied to video encoding settings (for example, video encoding or video decoding) that do not necessarily include any data communication between the encoding device and the decoding device.
  • data is retrieved from local storage, sent over a network, and so on.
  • a video encoding device may encode and store data into memory, and/or a video decoding device may retrieve and decode data from memory.
  • encoding and decoding are performed by devices that do not communicate with each other but simply encode data to memory and/or retrieve and decode data from memory.
  • FIG. 1b is an exemplary block diagram of a video decoding system 40 provided by an embodiment of the present application.
  • the video decoding system 40 may include an imaging device 41, a video encoder 20, a video decoder 30 (and/or a video codec implemented by the processing circuit 46), an antenna 42, one or more processors 43, one or more memory storages 44, and/or a display device 45.
  • imaging device 41, antenna 42, processing circuit 46, video encoder 20, video decoder 30, processor 43, memory storage 44 and/or display device 45 are capable of communicating with each other.
  • the video coding system 40 may include only the video encoder 20 or only the video decoder 30 .
  • antenna 42 may be used to transmit or receive an encoded bitstream of video data.
  • display device 45 may be used to present video data.
  • the processing circuit 46 may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the video decoding system 40 may also include an optional processor 43 , and the optional processor 43 may similarly include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, and the like.
  • the memory storage 44 can be any type of memory, such as volatile memory (for example, static random access memory (SRAM), dynamic random access memory (DRAM), etc.) or non-volatile memory (for example, flash memory, etc.).
  • memory storage 44 can be implemented by cache memory.
  • processing circuitry 46 may include memory (eg, cache, etc.) for implementing an image buffer or the like.
  • video encoder 20 implemented by logic circuitry may include an image buffer (eg, implemented by processing circuitry 46 or memory storage 44 ) and a graphics processing unit (eg, implemented by processing circuitry 46 ).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • Graphics processing unit may include video encoder 20 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 2 and/or any other encoder system or subsystem described herein.
  • Logic circuits may be used to perform the various operations discussed herein.
  • video decoder 30 may be implemented by processing circuitry 46 in a similar manner to implement the various modules discussed with reference to the video decoder 30 of FIG. 3 and/or any other decoder system or subsystem described herein.
  • logic circuit implemented video decoder 30 may include an image buffer (implemented by processing circuit 46 or memory storage 44 ) and a graphics processing unit (eg, implemented by processing circuit 46 ).
  • a graphics processing unit may be communicatively coupled to the image buffer.
  • Graphics processing unit may include video decoder 30 implemented by processing circuitry 46 to implement the various modules discussed with reference to FIG. 3 and/or any other decoder system or subsystem described herein.
  • antenna 42 may be used to receive an encoded bitstream of video data.
  • an encoded bitstream may contain data related to encoded video frames, indicators, index values, mode selection data, etc., as discussed herein, such as data related to encoding partitions (e.g., transform coefficients or quantized transform coefficients, an optional indicator (as discussed), and/or data defining the encoding partitions).
  • Video coding system 40 may also include video decoder 30 coupled to antenna 42 and used to decode the encoded bitstream.
  • a display device 45 is used to present video frames.
  • the video decoder 30 may be used to perform a reverse process.
  • the video decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly.
  • video encoder 20 may entropy encode the syntax elements into an encoded video bitstream.
  • video decoder 30 may parse such syntax elements and decode the related video data accordingly.
  • Abbreviations: VVC (versatile video coding); VCEG (video coding experts group); MPEG (motion picture experts group); HEVC (high-efficiency video coding).
  • the video encoder 20 includes an input terminal (or input interface) 201, a residual calculation unit 204, a transformation processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transformation processing unit 212, a reconstruction unit 214, Loop filter 220 , decoded picture buffer (decoded picture buffer, DPB) 230 , mode selection unit 260 , entropy coding unit 270 and output terminal (or output interface) 272 .
  • Mode selection unit 260 may include inter prediction unit 244 , intra prediction unit 254 , and partition unit 262 .
  • Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown).
  • the video encoder 20 shown in FIG. 2 may also be called a hybrid video encoder or a video encoder based on a hybrid video codec.
  • the inter-frame prediction unit is a trained target model (also called a neural network), and the neural network is used to process an input image or an image region or an image block to generate a prediction value of the input image block.
  • the neural network is used to receive an input image, image region or image block, and to generate a prediction value of the input image, image region or image block.
  • the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, and the mode selection unit 260 constitute the forward signal path of the encoder 20, while the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the decoded picture buffer (DPB) 230, the inter prediction unit 244, and the intra prediction unit 254 form the backward signal path of the encoder, wherein the backward signal path of the encoder 20 corresponds to the decoding signal path of the decoder (see decoder 30 in FIG. 3).
  • Inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, loop filter 220, decoded picture buffer 230, inter prediction unit 244, and intra prediction unit 254 also make up the "built-in decoder" of video encoder 20 .
  • the encoder 20 is operable to receive, via an input 201 or the like, an image (or image data) 17, eg an image in a sequence of images forming a video or a video sequence.
  • the received image or image data may also be a preprocessed image (or preprocessed image data) 19 .
  • image 17 may also be referred to as the current image or the image to be encoded (especially in video encoding, when the current image is to be distinguished from other images, such as previously encoded and/or decoded images of the same video sequence, that is, the video sequence that also includes the current image).
  • a (digital) image is or can be viewed as a two-dimensional array or matrix of pixel points with intensity values. Pixels in the array may also be referred to as pixels (pixel or pel, short for picture element). The number of pixels in the array or image in the horizontal and vertical directions (or axes) determines the size and/or resolution of the image. In order to represent a color, three color components are usually used, that is, an image can be represented as or include three pixel arrays. In the RGB format or color space, an image includes corresponding red, green and blue pixel arrays.
  • each pixel is usually expressed in a luminance/chroma format or color space, such as YCbCr, including a luminance component indicated by Y (sometimes also indicated by L) and two chrominance components indicated by Cb and Cr.
  • the luminance (luma) component Y represents brightness or grayscale level intensity (e.g., both are the same in a grayscale image), while the two chrominance (chroma) components Cb and Cr represent chrominance or color information components .
  • an image in the YCbCr format includes a luminance pixel point array of luminance pixel point values (Y) and two chrominance pixel point arrays of chrominance values (Cb and Cr).
  • Images in RGB format can be converted or transformed to YCbCr format and vice versa, a process also known as color transformation or conversion. If the image is black and white, the image may only include an array of luminance pixels. Correspondingly, the image can be, for example, an array of luma pixels in monochrome format, or an array of luma pixels and two corresponding arrays of chrominance pixels in the 4:2:0, 4:2:2 and 4:4:4 color formats.
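  • As an illustration of such a color conversion, the following Python sketch converts an 8-bit RGB pixel to YCbCr using the BT.601 full-range coefficients; the specific matrix is an assumed example, since the text above does not fix a particular conversion.

    def clamp(x):
        return max(0, min(255, round(x)))

    def rgb_to_ycbcr(r, g, b):
        y  = 0.299 * r + 0.587 * g + 0.114 * b                  # luminance
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b      # blue-difference chrominance
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b      # red-difference chrominance
        return clamp(y), clamp(cb), clamp(cr)

    print(rgb_to_ycbcr(255, 0, 0))   # pure red -> (76, 85, 255)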
  • an embodiment of the video encoder 20 may include an image segmentation unit (not shown in FIG. 2 ) for segmenting the image 17 into a plurality of (typically non-overlapping) image blocks 203 .
  • These blocks may also be called root blocks, macroblocks (H.264/AVC), or coding tree blocks (CTB) or coding tree units (CTU) in the H.265/HEVC and VVC standards.
  • the segmentation unit can be used to use the same block size for all images in a video sequence and the corresponding grid defining the block size, or to vary the block size between images or subsets or groups of images, and to segment each image into corresponding blocks.
  • the video encoder may be adapted to directly receive the blocks 203 of an image 17 , for example one, several or all blocks making up said image 17 .
  • the image block 203 may also be referred to as a current image block or an image block to be encoded.
  • the image block 203 is also, or can be regarded as, a two-dimensional array or matrix of pixels with intensity values (pixel values), but the image block 203 has smaller dimensions than the image 17.
  • block 203 may comprise one pixel array (e.g., a luminance array in the case of a monochrome image 17, or a luminance or chrominance array in the case of a color image), or three pixel arrays (e.g., one luma array and two chroma arrays in the case of a color image 17), or any other number and/or type of arrays depending on the color format employed.
  • a block may be an array of M ⁇ N (M columns ⁇ N rows) pixel points, or an array of M ⁇ N transform coefficients, and the like.
  • the video encoder 20 shown in FIG. 2 is used to encode the image 17 block by block, eg, performing encoding and prediction on each block 203 .
  • the video encoder 20 shown in FIG. 2 can also be used to segment and/or encode an image using slices (also called video slices), where an image can use one or more slices (typically non-overlapping ) for segmentation or encoding.
  • Each slice may include one or more blocks (for example, coding tree units CTU) or one or more block groups (for example, a coding block (tile) in the H.265/HEVC/VVC standard and a brick in the VVC standard).
  • the video encoder 20 shown in FIG. 2 can also be configured to use slices/coded block groups (also called video coded block groups) and/or coded blocks (also called video coded blocks) to segment and/or encode an image, where an image may be segmented or encoded using one or more slices/coded block groups (usually non-overlapping); each slice/coded block group may consist of one or more blocks (such as CTUs) or one or more coded blocks, and each coded block may be rectangular or the like and may include one or more complete or partial blocks (such as CTUs).
  • the residual calculation unit 204 is used to calculate the residual block 205 from the image block (or original block) 203 and the prediction block 265 (the prediction block 265 is described in detail later), for example, by subtracting the pixel values of the prediction block 265 from the pixel values of the image block 203 pixel by pixel, to obtain the residual block 205 in the pixel domain.
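  • A minimal sketch of this pixel-by-pixel residual computation, with blocks represented as flat lists of pixel values for illustration only:

    def residual_block(original, prediction):
        # residual = original pixel value - predicted pixel value, pixel by pixel
        return [o - p for o, p in zip(original, prediction)]

    print(residual_block([120, 130, 137], [128, 130, 125]))   # [-8, 0, 12]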
  • the transform processing unit 206 is configured to perform discrete cosine transform (discrete cosine transform, DCT) or discrete sine transform (discrete sine transform, DST) etc. on the pixel point values of the residual block 205 to obtain transform coefficients 207 in the transform domain.
  • the transform coefficients 207 may also be referred to as transform residual coefficients, representing the residual block 205 in the transform domain.
  • Transform processing unit 206 may be configured to apply an integer approximation of DCT/DST, such as the transform specified for H.265/HEVC. This integer approximation is usually scaled by some factor compared to the orthogonal DCT transform. To maintain the norm of the forward and inverse transformed residual blocks, other scaling factors are used as part of the transformation process. The scaling factor is usually chosen according to certain constraints, such as the scaling factor being a power of 2 for the shift operation, the bit depth of the transform coefficients, the trade-off between accuracy and implementation cost, etc.
  • For example, a specific scaling factor may be specified for the inverse transform at the encoder 20 side by the inverse transform processing unit 212 (and for the corresponding inverse transform at the decoder 30 side by, for example, the inverse transform processing unit 312), and correspondingly, a corresponding scaling factor may be specified for the forward transform at the encoder 20 side through the transform processing unit 206.
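  • As an illustration of the forward and inverse transform pair, the following Python sketch applies a floating-point orthonormal 2-D DCT-II to a residual block using scipy (assumed to be available); codecs such as H.265/HEVC instead use scaled integer approximations of this transform, which are not reproduced here.

    import numpy as np
    from scipy.fft import dctn, idctn

    residual = np.array([[-8, 0, 12, 4],
                         [ 0, 0,  0, 0],
                         [ 4, 0, -4, 0],
                         [ 0, 0,  0, 0]], dtype=float)

    coeffs = dctn(residual, type=2, norm="ortho")     # forward transform -> transform coefficients
    restored = idctn(coeffs, type=2, norm="ortho")    # inverse transform reproduces the residual
    print(np.allclose(restored, residual))            # True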
  • the video encoder 20 (correspondingly, the transform processing unit 206) can be used to output transform parameters such as one or more transform types, for example, directly output or output after encoding or compression by the entropy encoding unit 270 , for example, so that the video decoder 30 can receive and use the transformation parameters for decoding.
  • the quantization unit 208 is configured to quantize the transform coefficient 207 by, for example, scalar quantization or vector quantization, to obtain a quantized transform coefficient 209 .
  • Quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209 .
  • the quantization process may reduce the bit depth associated with some or all of the transform coefficients 207 .
  • n-bit transform coefficients may be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • a suitable quantization step size can be indicated by a quantization parameter (quantization parameter, QP).
  • a quantization parameter may be an index to a predefined set of suitable quantization step sizes.
  • Quantization may include dividing by a quantization step size, while corresponding or inverse dequantization performed by the inverse quantization unit 210 or the like may include multiplying by a quantization step size.
  • Embodiments according to some standards such as HEVC may be used to determine the quantization step size using quantization parameters.
  • the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation involving division.
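  • As a sketch of the scalar quantization and inverse quantization described above, the snippet below divides by a quantization step size selected by the quantization parameter (QP) and multiplies by the same step size at dequantization. The step-size table and the rounding rule are illustrative assumptions, not the values defined by any particular standard.

```python
import numpy as np

# Illustrative QP -> quantization step size table (an assumption for this sketch only).
QSTEP = {0: 0.625, 6: 1.25, 12: 2.5, 18: 5.0, 24: 10.0, 30: 20.0}

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Scalar quantization: divide the transform coefficients by the step size and round."""
    return np.round(coeffs / QSTEP[qp]).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Inverse quantization: multiply by the same step size; the quantization loss is not recovered."""
    return levels.astype(np.float64) * QSTEP[qp]

# Example: dequantize(quantize(np.array([13.0, -7.0, 2.0]), qp=12), qp=12) -> array([12.5, -7.5, 2.5])
```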
  • the video encoder 20 (correspondingly, the quantization unit 208) can be used to output a quantization parameter (QP), for example, directly or after being encoded or compressed by the entropy encoding unit 270, for example, so that the video decoder 30 can receive and use the quantization parameter for decoding.
  • the inverse quantization unit 210 is used to apply the inverse quantization of the quantization unit 208 to the quantized coefficients to obtain the dequantized coefficients 211, for example, to apply the inverse of the quantization scheme performed by the quantization unit 208 according to or using the same quantization step size as the quantization unit 208.
  • the dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 , corresponding to the transform coefficients 207 , but due to loss caused by quantization, the dequantized coefficients 211 are usually not exactly the same as the transform coefficients.
  • the inverse transform processing unit 212 is configured to apply the inverse of the transform performed by the transform processing unit 206, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain a reconstructed residual block 213 (or corresponding dequantized coefficients 213) in the pixel domain.
  • the reconstructed residual block 213 may also be referred to as a transform block 213 .
  • the reconstruction unit 214 (e.g., summer 214) is used to add the transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the pixel domain, for example, by adding the pixel values of the reconstructed residual block 213 to the pixel values of the prediction block 265.
  • the loop filter unit 220 (or “loop filter” 220 for short) is used to filter the reconstructed block 215 to obtain the filtered block 221, or generally used to filter the reconstructed pixels to obtain filtered pixel values.
  • a loop filter unit is used to smooth pixel transitions or improve video quality.
  • the loop filter unit 220 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • the loop filter unit 220 may include a deblocking filter, an SAO filter, and an ALF filter.
  • the order of the filtering process may be deblocking filter, SAO filter and ALF filter.
  • In another example, a process called luma mapping with chroma scaling (LMCS) (i.e., an adaptive in-loop reshaper) is added.
  • This process is performed before deblocking.
  • the deblocking filtering process can also be applied to internal sub-block edges, such as affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges.
  • although the loop filter unit 220 is shown in FIG. 2 as an in-loop filter, in other configurations the loop filter unit 220 may be implemented as a post-loop filter.
  • the filtering block 221 may also be referred to as a filtering reconstruction block 221 .
  • the video encoder 20 (correspondingly, the loop filter unit 220) can be used to output loop filter parameters (such as SAO filter parameters, ALF filter parameters or LMCS parameters), for example, directly or after entropy encoding by the entropy encoding unit 270, for example, so that the decoder 30 can receive and use the same or different loop filter parameters for decoding.
  • a decoded picture buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by the video encoder 20 when encoding video data.
  • the DPB 230 may be formed from any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (synchronous DRAM, SDRAM), magnetoresistive RAM (magnetoresistive RAM, MRAM), Resistive RAM (resistive RAM, RRAM) or other types of storage devices.
  • the decoded picture buffer 230 may be used to store one or more filter blocks 221 .
  • the decoded picture buffer 230 may also be used to store other previously filtered blocks, such as the previously reconstructed and filtered block 221, of the same current picture or a different picture such as a previous reconstructed picture, and may provide the complete previously reconstructed, i.e. decoded picture (and the corresponding reference blocks and pixels) and/or a partially reconstructed current image (and corresponding reference blocks and pixels), for example for inter-frame prediction.
  • the decoded picture buffer 230 can also be used to store one or more unfiltered reconstructed blocks 215, or in general unfiltered reconstructed pixels, for example, reconstructed blocks 215 that have not been filtered by the loop filter unit 220, or any other reconstructed blocks or reconstructed pixels that have not undergone any further processing.
  • the mode selection unit 260 includes a partitioning unit 262, an inter prediction unit 244, and an intra prediction unit 254, and is used to receive or obtain raw image data, such as block 203 (the current block 203 of the current image 17), and reconstructed image data, e.g., filtered and/or unfiltered reconstructed pixels or reconstructed blocks of the same (current) image and/or of one or more previously decoded images.
  • the reconstructed image data is used as reference image data required for prediction such as inter-frame prediction or intra-frame prediction to obtain a prediction block 265 or a prediction value 265 .
  • the mode selection unit 260 can be used to determine or select a partitioning for the current block (including no partitioning) and a prediction mode (such as an intra or inter prediction mode), and generate a corresponding prediction block 265, which is used to calculate the residual block 205 and to reconstruct the reconstructed block 215.
  • mode selection unit 260 is operable to select a partitioning and prediction mode (e.g., from among the prediction modes supported or available by mode selection unit 260) that provides the best match or the smallest residual (minimum Residual refers to better compression in transmission or storage), or provides minimum signaling overhead (minimum signaling overhead refers to better compression in transmission or storage), or considers or balances both of the above.
  • the mode selection unit 260 may be configured to determine the partition and prediction mode according to rate distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate distortion optimization.
  • the partitioning unit 262 can be used to divide an image in the video sequence into a sequence of coding tree units (CTUs), and the CTU 203 can be further divided into smaller block parts or sub-blocks (which again form blocks), for example, by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning or triple-tree (TT) partitioning, or any combination thereof, and, for example, to perform prediction on each of the block parts or sub-blocks, where the mode selection includes selecting the tree structure of the partitioned block 203 and selecting the prediction mode applied to each of the block parts or sub-blocks.
  • In the following, the partitioning (e.g., performed by the partitioning unit 262) and the prediction processing (e.g., performed by the inter prediction unit 244 and the intra prediction unit 254) are described in more detail.
  • the segmentation unit 262 may divide (or divide) an image block (or CTU) 203 into smaller parts, such as square or rectangular shaped small blocks.
  • a CTU consists of N ⁇ N luma pixel blocks and two corresponding chrominance pixel blocks.
  • the maximum allowed size of a luma block in a CTU is specified as 128 ⁇ 128 in the developing Versatile Video Coding (VVC) standard, but may be specified in the future to a value other than 128 ⁇ 128, such as 256 ⁇ 256.
  • the CTUs of an image can be pooled/grouped into slices/coded block groups, coded blocks or bricks.
  • a coding block covers a rectangular area of an image, and a coding block can be divided into one or more bricks.
  • a brick consists of multiple CTU rows within an encoded block.
  • a coded block that is not partitioned into multiple bricks may be called a brick.
  • however, a brick that is a true subset of a coding block is not referred to as a coding block.
  • VVC supports two coded block group modes, namely raster scan slice/coded block group mode and rectangular slice mode.
  • in the raster scan slice/coding block group mode, a slice/coding block group contains a sequence of coding blocks in the coding block raster scan of an image.
  • in the rectangular slice mode, a slice contains multiple coding blocks of an image that together form a rectangular area of the image, and the coding blocks within the rectangular slice are arranged in the coding block raster scan order of the image.
  • These smaller blocks can be further divided into smaller parts.
  • This is also known as tree splitting or hierarchical tree splitting, where the root block at root tree level 0 (hierarchy level 0, depth 0) etc. can be recursively split into two or more blocks at the next lower tree level, For example a node at tree level 1 (hierarchy level 1, depth 1).
  • These blocks can in turn be split into two or more blocks at the next lower level, e.g. tree level 2 (hierarchy level 2, depth 2), etc., until the end of the split (because the end criteria are met, e.g. maximum tree depth or minimum block size).
  • Blocks that are not further divided are also called leaf blocks or leaf nodes of the tree.
  • a tree divided into two parts is called a binary-tree (BT)
  • a tree divided into three parts is called a ternary-tree (TT)
  • a tree divided into four parts is called a quad-tree (QT).
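  • The hierarchical tree splitting described above can be summarized by the small recursive sketch below. The split-decision callback, the maximum depth and the minimum block size are placeholders assumed only for illustration, and only quad-tree and binary-tree splits are shown.

```python
def split_block(x, y, w, h, depth, decide_split, max_depth=3, min_size=8):
    """Recursively split a block until a termination criterion is met.

    decide_split(x, y, w, h, depth) returns one of 'none', 'qt', 'bt_h', 'bt_v'
    and stands in for the encoder's partition decision.
    """
    mode = decide_split(x, y, w, h, depth)
    if mode == 'none' or depth >= max_depth or w <= min_size or h <= min_size:
        return [(x, y, w, h)]                                   # leaf block, not split further
    if mode == 'qt':                                            # quad-tree: four equal parts
        hw, hh = w // 2, h // 2
        children = [(x, y, hw, hh), (x + hw, y, hw, hh),
                    (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    elif mode == 'bt_h':                                        # binary-tree: horizontal split
        children = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:                                                       # 'bt_v': binary-tree, vertical split
        children = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]
    leaves = []
    for cx, cy, cw, ch in children:
        leaves += split_block(cx, cy, cw, ch, depth + 1, decide_split, max_depth, min_size)
    return leaves

# Example: split a 64x64 root block with a quad-tree split at depth 0 only.
leaves = split_block(0, 0, 64, 64, 0, lambda x, y, w, h, d: 'qt' if d == 0 else 'none')
# -> [(0, 0, 32, 32), (32, 0, 32, 32), (0, 32, 32, 32), (32, 32, 32, 32)]
```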
  • a coding tree unit (CTU) may be or include a CTB of luma pixels and two corresponding CTBs of chroma pixels of an image having three pixel arrays, or a CTB of pixels of a monochrome image or of an image coded using three separate color planes and syntax structures (for coding the pixels).
  • a coding tree block (CTB) can be an N×N block of pixels, where N can be set to a certain value such that a component is divided into CTBs, which is the partitioning.
  • a coding unit (CU) may be or include a coding block of luma pixels and two corresponding coding blocks of chroma pixels of an image having three pixel arrays, or a coding block of pixels of a monochrome image or of an image coded using three separate color planes and syntax structures (for coding the pixels).
  • a coding block can be an M×N block of pixels, where M and N can be set to certain values such that a CTB is divided into coding blocks, which is the partitioning.
  • a coding tree unit may be divided into a plurality of CUs according to HEVC by using a quadtree structure represented as a coding tree.
  • the decision whether to encode an image region using inter (temporal) prediction or intra (spatial) prediction is made at the leaf-CU level.
  • Each leaf CU can be further divided into one, two, or four PUs.
  • the same prediction process is used within a PU, and relevant information is transmitted to the decoder in units of PUs.
  • the leaf CU can be partitioned into transform units (TUs) according to other quadtree structures similar to the coding tree used for the CU.
  • for example, in the developing Versatile Video Coding (VVC) standard, a combined quad-tree with nested multi-type tree (using binary and ternary splits) segmentation structure is used to partition the coding tree unit.
  • in the coding tree structure within a coding tree unit, a CU can be square or rectangular. The coding tree unit (CTU) is first partitioned by a quad-tree structure, and the quad-tree leaf nodes are then further partitioned by a multi-type tree structure.
  • the multi-type tree leaf nodes are called coding units (CUs); unless the CU is too large for the maximum transform length, this segmentation is used for prediction and transform processing without any further partitioning. In most cases, this means that the CU, PU and TU have the same block size in the quad-tree with nested multi-type tree coding block structure. An exception occurs when the maximum supported transform length is smaller than the width or height of a color component of the CU.
  • VVC uses a signaling mechanism for the partition splitting information of the quad-tree with nested multi-type tree coding tree structure. In this signaling mechanism, the coding tree unit (CTU), as the root of the quad-tree, is first partitioned by the quad-tree structure, and each quad-tree leaf node (when sufficiently large) can then be further partitioned into a multi-type tree structure.
  • in the multi-type tree structure, a first flag (mtt_split_cu_flag) indicates whether the node is further partitioned; when the node is further partitioned, a second flag (mtt_split_cu_vertical_flag) indicates the splitting direction.
  • the decoder can derive the multi-type tree division mode (MttSplitMode) of the CU based on predefined rules or tables.
  • for example, when the width or height of a luma coding block is greater than 64, TT division is not allowed; when the width or height of a chroma coding block is greater than 32, TT division is also not allowed.
  • the pipeline design divides the image into multiple virtual pipeline data units (VPDUs), and the VPDUs are defined as mutually non-overlapping units in the image.
  • consecutive VPDUs are processed simultaneously in multiple pipeline stages.
  • the VPDU size is roughly proportional to the buffer size, so VPDUs need to be kept small.
  • the VPDU size can be set to the maximum transform block (TB) size.
  • the tree node block is forced to be divided until all pixels of each coded CU are located within the image boundary.
  • the intra sub-partitions (intra sub-partitions, ISP) tool can vertically or horizontally divide the luma intra prediction block into two or four sub-parts according to the block size.
  • mode selection unit 260 of video encoder 20 may be configured to perform any combination of the segmentation techniques described above.
  • the video encoder 20 is configured to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes.
  • the set of prediction modes may include, for example, intra prediction modes and/or inter prediction modes.
  • the set of intra prediction modes can include 35 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined by HEVC, or can include 67 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in VVC.
  • For example, several conventional angular intra prediction modes are adaptively replaced with wide-angle intra prediction modes for the non-square blocks defined in VVC. As another example, to avoid the division operation of DC prediction, only the longer side is used to calculate the average value for non-square blocks.
  • the intra prediction result of the planar mode can also be modified using a position dependent intra prediction combination (PDPC) method.
  • the intra prediction unit 254 is configured to generate the intra prediction block 265 using reconstructed pixels of adjacent blocks of the same current image according to an intra prediction mode in the intra prediction mode set.
  • Intra prediction unit 254 (or generally mode selection unit 260) is also configured to output intra prediction parameters (or generally information indicating the selected intra prediction mode for a block) in the form of syntax elements 266 to entropy encoding unit 270 , to be included in the encoded image data 21, so that the video decoder 30 can perform operations such as receiving and using prediction parameters for decoding.
  • the intra prediction modes in HEVC include DC prediction mode, planar prediction mode and 33 angle prediction modes, a total of 35 candidate prediction modes.
  • the current block can be intra-predicted using the pixels of the reconstructed image blocks on the left and above as references.
  • An image block in the peripheral area of the current block that is used for performing intra prediction on the current block is called a reference block, and the pixels in the reference block are called reference pixels.
  • the DC prediction mode is suitable for areas with flat texture in the current block; all pixels in such an area use the average value of the reference pixels in the reference block as their prediction;
  • the planar prediction mode is suitable for image blocks with smoothly changing texture; a current block that meets this condition uses bilinear interpolation of the reference pixels in the reference block as the prediction of all pixels in the current block;
  • the angle prediction modes use the characteristic that the texture of the current block is highly correlated with the texture of an adjacent reconstructed image block, and copy the values of the reference pixels in the corresponding reference block along a certain angle as the prediction of all pixels in the current block.
  • the HEVC encoder selects an optimal intra prediction mode from 35 candidate prediction modes for the current block, and writes the optimal intra prediction mode into the video stream.
  • the encoder/decoder derives the three most probable modes from the optimal intra prediction modes of the reconstructed image blocks in the surrounding area that use intra prediction. If the optimal intra prediction mode selected for the current block is one of the three most probable modes, a first index is encoded indicating that the selected optimal intra prediction mode is one of the three most probable modes; if the selected optimal intra prediction mode is not one of the three most probable modes, a second index is encoded indicating that the selected optimal intra prediction mode is one of the other 32 modes (the modes among the 35 candidate prediction modes other than the above three most probable modes).
  • the HEVC standard uses a 5-bit fixed-length code as the aforementioned second index.
  • the method for the HEVC encoder to derive the three most probable modes includes: adding the optimal intra prediction modes of the left adjacent image block and the upper adjacent image block of the current block to the set, where, if the two optimal intra prediction modes are the same, only one of them is kept in the set; if the two optimal intra prediction modes are the same and both are angle prediction modes, two angle prediction modes adjacent in angular direction are additionally selected and added to the set; otherwise, the planar prediction mode, the DC mode and the vertical prediction mode are selected in turn and added to the set until the number of modes in the set reaches 3. A sketch of this derivation is given below.
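  • Below is a minimal sketch of this derivation. The mode numbering (0 = planar, 1 = DC, 2 to 34 = angular, 26 = vertical) follows the usual HEVC convention, and the wrap-around used for the two adjacent angular directions is a simplified illustration rather than the exact specification text.

```python
PLANAR, DC, VERTICAL = 0, 1, 26          # HEVC intra prediction mode numbering

def derive_three_mpm(left_mode: int, above_mode: int) -> list:
    """Derive the three most probable intra modes from the left and above neighbor modes."""
    if left_mode != above_mode:
        mpm = [left_mode, above_mode]
    elif left_mode >= 2:                 # same mode for both neighbors and it is an angular mode:
        mpm = [left_mode,
               2 + ((left_mode - 2 - 1) % 33),   # previous angular direction (wraps around)
               2 + ((left_mode - 2 + 1) % 33)]   # next angular direction (wraps around)
    else:                                # same mode for both neighbors, non-angular
        mpm = [left_mode]
    # Fill with planar, DC and vertical (in this order) until the set holds three modes.
    for m in (PLANAR, DC, VERTICAL):
        if len(mpm) == 3:
            break
        if m not in mpm:
            mpm.append(m)
    return mpm[:3]

# Example: derive_three_mpm(10, 10) -> [10, 9, 11]; derive_three_mpm(0, 1) -> [0, 1, 26]
```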
  • After the HEVC decoder performs entropy decoding on the code stream, it obtains the mode information of the current block, which includes an indicator of whether the optimal intra prediction mode of the current block is among the three most probable modes, and the optimal intra prediction mode of the current block.
  • the set of inter prediction modes depends on the available reference pictures (i.e., for example, at least part of the previously decoded pictures stored in the DPB 230) and on other inter prediction parameters, e.g., on whether the entire reference picture or only a part of the reference picture (e.g., a search window area around the area of the current block) is used to search for the best matching reference block, and/or, e.g., on whether half-pixel, quarter-pixel and/or 1/16-pixel interpolation is performed.
  • skip mode and/or direct mode may also be employed.
  • the merge candidate list for this mode consists of the following five candidate types in order: spatial MVPs from spatially adjacent CUs, temporal MVPs from collocated CUs, history-based MVPs from a FIFO table, pairwise average MVPs, and zero MVs.
  • Decoder side motion vector refinement (DMVR) based on bilateral matching can be used to increase the accuracy of MV in merge mode.
  • Merge mode with MVD (merge mode with MVD, MMVD) comes from merge mode with motion vector difference. Send the MMVD flag immediately after sending the skip flag and the merge flag to specify whether the CU uses MMVD mode.
  • a CU-level adaptive motion vector resolution (AMVR) scheme may be used. AMVR supports CU's MVD encoding at different precisions.
  • the MVD of the current CU is adaptively selected.
  • a combined inter/intra prediction (CIIP) mode can be applied to the current CU.
  • a weighted average is performed on the inter-frame and intra-frame prediction signals to obtain CIIP prediction.
  • the affine motion field of a block is described by the motion vectors of two control points (4 parameters) or three control points (6 parameters).
  • subblock-based temporal motion vector prediction (SbTMVP), which is similar to temporal motion vector prediction (TMVP) but applied at the sub-block level, may also be used.
  • Bi-directional optical flow (BDOF), formerly known as BIO, is a simplified version that reduces computation, especially in terms of the number of multiplications and the size of the multiplier.
  • the triangular partition mode the CU is evenly divided into two triangular parts in two ways: diagonal division and anti-diagonal division.
  • the bidirectional prediction mode extends simple averaging to support weighted averaging of two prediction signals.
  • the inter prediction unit 244 may include a motion estimation (motion estimation, ME) unit and a motion compensation (motion compensation, MC) unit (both are not shown in FIG. 2 ).
  • the motion estimation unit is operable to receive or acquire image block 203 (current image block 203 of current image 17) and decoded image 231, or at least one or more previously reconstructed blocks, e.g., of one or more other/different previously decoded images 231 Reconstruct blocks for motion estimation.
  • a video sequence may comprise a current picture and a previous decoded picture 231, or in other words, the current picture and a previous decoded picture 231 may be part of or form a sequence of pictures forming the video sequence.
  • the encoder 20 may be configured to select a reference block from a plurality of reference blocks in the same or different images of a plurality of other images, and provide the reference image (or reference image index) and/or the offset (spatial offset) between the position (x, y coordinates) of the reference block and the position of the current block to the motion estimation unit as inter prediction parameters.
  • This offset is also called a motion vector (MV).
  • the motion compensation unit is configured to obtain, for example, receive, inter-frame prediction parameters, and perform inter-frame prediction according to or using the inter-frame prediction parameters to obtain an inter-frame prediction block 246 .
  • Motion compensation performed by the motion compensation unit may include extracting or generating a prediction block from the motion/block vector determined by motion estimation, and may include performing interpolation to sub-pixel precision. Interpolation filtering can generate additional pixel values from the values of known pixels, thereby potentially increasing the number of candidate prediction blocks that can be used to encode an image block.
  • the motion compensation unit can locate the prediction block pointed to by the motion vector in one of the reference picture lists.
  • the motion compensation unit may also generate block- and video-slice-related syntax elements for use by video decoder 30 when decoding image blocks of video slices. Additionally, or instead of slices and corresponding syntax elements, coding block groups and/or coding blocks and corresponding syntax elements may be generated or used.
  • the motion vectors (MVs) that can be added to the candidate motion vector list as candidates include the MVs of the spatially adjacent and temporally adjacent image blocks of the current block, where the MVs of the spatially adjacent image blocks may include the MV of the left candidate image block to the left of the current block and the MV of the upper candidate image block above the current block.
  • FIG. 4 is an exemplary schematic diagram of candidate image blocks provided by an embodiment of the present application. As shown in FIG. 4, the set of candidate image blocks on the left includes {A0, A1}, the set of candidate image blocks above includes {B0, B1, B2}, and the set of temporally adjacent candidate image blocks includes {C, T}.
  • the order can be to first consider the set {A0, A1} of left candidate image blocks of the current block (consider A0 first; if A0 is not available, consider A1), then the set {B0, B1, B2} of candidate image blocks above the current block (consider B0 first; if B0 is not available, consider B1; if B1 is not available, consider B2), and finally the set {C, T} of temporally adjacent candidate image blocks (consider T first; if T is not available, consider C).
  • the optimal MV is determined from the candidate motion vector list through the rate distortion cost (RD cost), and the candidate motion vector with the smallest RD cost is used as the motion vector predictor (motion vector predictor, MVP).
  • the RD cost may be computed as J = SAD + λR, where J represents the RD cost, SAD is the sum of absolute differences (SAD) between the pixel values of the prediction block obtained by motion estimation using the candidate motion vector and the pixel values of the current block, R represents the code rate, and λ represents the Lagrangian multiplier.
  • the encoding end transmits the index of the determined MVP in the candidate motion vector list to the decoding end. Further, a motion search can be performed in a neighborhood centered on the MVP to obtain the actual motion vector of the current block; the encoding end calculates the motion vector difference (MVD) between the MVP and the actual motion vector and also transmits the MVD to the decoding end.
  • the decoding end parses the index, finds the corresponding MVP in the candidate motion vector list according to the index, parses the MVD, and adds the MVD and the MVP to obtain the actual motion vector of the current block.
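  • A minimal sketch of this AMVP-style procedure is given below: the encoder evaluates each candidate with the RD cost J = SAD + λR and signals the index of the best candidate plus the MVD, and the decoder reconstructs the actual MV as MVP + MVD. The helper predict_block, the rate model rate_of and the value of λ are assumptions for illustration only.

```python
import numpy as np

def rd_cost(current_block, candidate_mv, reference, lam, rate_bits, predict_block):
    """J = SAD + lambda * R for one candidate motion vector."""
    pred = predict_block(reference, candidate_mv)   # assumed helper: motion-compensated prediction
    sad = int(np.abs(current_block.astype(np.int32) - pred.astype(np.int32)).sum())
    return sad + lam * rate_bits

def select_mvp(current_block, candidates, reference, lam, rate_of, predict_block):
    """Encoder side: pick the candidate MV with the smallest RD cost as the MVP."""
    costs = [rd_cost(current_block, mv, reference, lam, rate_of(i), predict_block)
             for i, mv in enumerate(candidates)]
    best = int(np.argmin(costs))
    return best, candidates[best]                   # index written to the bit stream, and the MVP

def reconstruct_mv(mvp, mvd):
    """Decoder side: actual MV = MVP + MVD (component-wise)."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```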
  • the motion information that can be added to the candidate motion information list as an alternative includes the motion information of the image blocks adjacent to the current block in the spatial domain or in the temporal domain, where the spatial domain Adjacent image blocks and temporally adjacent image blocks can refer to Figure 4.
  • the candidate motion information corresponding to the spatial domain in the candidate motion information list comes from five spatially adjacent blocks (A0, A1, B0, B1, and B2) , if the neighboring blocks in space are unavailable or are intra-frame predicted, their motion information will not be added to the candidate motion information list.
  • the temporal candidate motion information of the current block is obtained by scaling the MV of the block at the corresponding position in the reference frame according to the picture order count (POC) of the reference frame and of the current frame. It is first determined whether the block at position T in the reference frame is available; if it is not available, the block at position C is selected. After the above candidate motion information list is obtained, the optimal motion information is determined from the candidate motion information list through the RD cost and used as the motion information of the current block.
  • the encoding end transmits the index value (denoted as merge index) of the position of the optimal motion information in the candidate motion information list to the decoding end.
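  • The following sketch assembles a merge candidate list from the spatial neighbors and a POC-scaled temporal candidate, as described above. The POC-distance scaling formula shown here is the commonly used one and is given only as an assumed illustration of the scaling step.

```python
def build_merge_candidates(spatial_neighbors, temporal_mv,
                           poc_cur, poc_ref_cur, poc_col, poc_ref_col):
    """spatial_neighbors: motion info of A0, A1, B0, B1, B2 in order, with None marking blocks
    that are unavailable or intra predicted; temporal_mv: MV of the co-located block (T, or C
    as a fallback), or None if neither is available."""
    candidates = [mv for mv in spatial_neighbors if mv is not None]   # skip unavailable/intra blocks
    if temporal_mv is not None:
        # Scale the co-located MV by the ratio of POC distances (current picture vs. co-located picture).
        scale = (poc_cur - poc_ref_cur) / (poc_col - poc_ref_col)
        candidates.append((temporal_mv[0] * scale, temporal_mv[1] * scale))
    return candidates

# Example: build_merge_candidates([(1, 2), None, (3, 0), None, None], (8, -4), 10, 8, 6, 2)
# -> [(1, 2), (3, 0), (4.0, -2.0)]
```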
  • the entropy coding unit 270 is used to use an entropy coding algorithm or scheme (for example, a variable length coding (variable length coding, VLC) scheme, a context adaptive VLC scheme (context adaptive VLC, CALVC), an arithmetic coding scheme, a binarization algorithm, Context Adaptive Binary Arithmetic Coding (CABAC), Syntax-based context-adaptive Binary Arithmetic Coding (SBAC), Probability Interval Partitioning Entropy (PIPE) ) encoding or other entropy encoding methods or techniques) are applied to the quantized residual coefficient 209, inter prediction parameters, intra prediction parameters, loop filter parameters and/or other syntax elements, and the obtained bit stream can be encoded by the output terminal 272 21 etc., so that the video decoder 30 etc. can receive and use parameters for decoding.
  • Encoded bitstream 21 may be transmitted to video decoder 30 or stored in memory for later transmission or retrieval by video decoder 30 .
  • a non-transform based encoder 20 may directly quantize the residual signal without a transform processing unit 206 for certain blocks or frames.
  • encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
  • the video decoder 30 is used to receive the encoded image data 21 (eg encoded bit stream 21 ) encoded by the encoder 20 to obtain a decoded image 331 .
  • the coded image data or bitstream comprises information for decoding said coded image data, eg data representing image blocks of a coded video slice (and/or coded block group or coded block) and associated syntax elements.
  • the decoder 30 includes an entropy decoding unit 304, an inverse quantization unit 310, an inverse transform processing unit 312, a reconstruction unit 314 (such as a summer 314), a loop filter 320, a decoded picture buffer (DPB) 330, a mode application unit 360, an inter prediction unit 344, and an intra prediction unit 354.
  • Inter prediction unit 344 may be or include a motion compensation unit.
  • video decoder 30 may perform a decoding process that is substantially inverse to the encoding process described with reference to video encoder 100 of FIG. 2 .
  • the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer DPB 230, the inter prediction unit 344 and the intra prediction unit 354 also constitute a video encoder 20's "built-in decoder".
  • the inverse quantization unit 310 may be functionally the same as the inverse quantization unit 210
  • the inverse transform processing unit 312 may be functionally the same as the inverse transform processing unit 212
  • the reconstruction unit 314 may be functionally the same as the reconstruction unit 214
  • the loop filter 320 may be functionally the same as the loop filter 220
  • the decoded picture buffer 330 may be functionally the same as the decoded picture buffer 230 . Therefore, the explanation of the corresponding elements and functions of the video encoder 20 applies to the corresponding elements and functions of the video decoder 30 accordingly.
  • the entropy decoding unit 304 is used to parse the bit stream 21 (or, in general, the encoded image data 21) and perform entropy decoding on the encoded image data 21 to obtain quantized coefficients 309 and/or decoded coding parameters (not shown in FIG. 3), etc., for example, any or all of inter prediction parameters (such as a reference picture index and a motion vector), intra prediction parameters (such as an intra prediction mode or index), transform parameters, quantization parameters, loop filter parameters and/or other syntax elements.
  • the entropy decoding unit 304 may be configured to apply a decoding algorithm or scheme corresponding to the encoding scheme of the entropy encoding unit 270 of the encoder 20 .
  • Entropy decoding unit 304 may also be configured to provide inter prediction parameters, intra prediction parameters, and/or other syntax elements to mode application unit 360 , as well as other parameters to other units of decoder 30 .
  • Video decoder 30 may receive syntax elements at the video slice level and/or the video block level. In addition to or instead of slices and corresponding syntax elements, coding block groups and/or coding blocks and corresponding syntax elements may be received or used.
  • the inverse quantization unit 310 may be configured to receive a quantization parameter (quantization parameter, QP) (or generally information related to inverse quantization) and quantization coefficients from the encoded image data 21 (for example, parsed and/or decoded by the entropy decoding unit 304), and based on The quantization parameter performs inverse quantization on the decoded quantization coefficient 309 to obtain an inverse quantization coefficient 311 , and the inverse quantization coefficient 311 may also be called a transform coefficient 311 .
  • the inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization, as well as the degree of inverse quantization that needs to be performed.
  • the inverse transform processing unit 312 is operable to receive dequantized coefficients 311 , also referred to as transform coefficients 311 , and apply a transform to the dequantized coefficients 311 to obtain a reconstructed residual block 213 in the pixel domain.
  • the reconstructed residual block 213 may also be referred to as a transform block 313 .
  • the transform may be an inverse transform, such as an inverse DCT, an inverse DST, an inverse integer transform, or a conceptually similar inverse transform process.
  • the inverse transform processing unit 312 may also be configured to receive transform parameters or corresponding information from the encoded image data 21 (eg, parsed and/or decoded by the entropy decoding unit 304 ) to determine the transform to apply to the dequantized coefficients 311 .
  • the reconstruction unit 314 (for example, the summer 314) is used to add the reconstruction residual block 313 to the prediction block 365 to obtain the reconstruction block 315 in the pixel domain, for example, the pixel value of the reconstruction residual block 313 and the prediction block 365 pixel values are added.
  • the loop filter unit 320 is used (in the coding loop or afterwards) to filter the reconstructed block 315 to obtain the filtered block 321, so as to smooth pixel transitions or otherwise improve video quality, etc.
  • the loop filter unit 320 may include one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or one or more other filters, such as an adaptive loop filter (ALF), a noise suppression filter (NSF), or any combination thereof.
  • the loop filter unit 220 may include a deblocking filter, an SAO filter, and an ALF filter. The order of the filtering process may be deblocking filter, SAO filter and ALF filter.
  • In another example, a process called luma mapping with chroma scaling (LMCS) (i.e., an adaptive in-loop reshaper) is added.
  • This process is performed before deblocking.
  • the deblocking filtering process can also be applied to internal sub-block edges, such as affine sub-block edges, ATMVP sub-block edges, sub-block transform (SBT) edges and intra sub-partition (ISP) edges.
  • although the loop filter unit 320 is shown in FIG. 3 as an in-loop filter, in other configurations the loop filter unit 320 may be implemented as a post-loop filter.
  • the decoded video block 321 from one picture is then stored in a decoded picture buffer 330 which stores the decoded picture 331 as a reference picture for subsequent motion compensation in other pictures and/or for respective output display.
  • the decoder 30 is used to output the decoded image 331 through the output terminal 312 and so on, for displaying or viewing by the user.
  • the inter prediction unit 344 may be functionally the same as the inter prediction unit 244 (in particular the motion compensation unit), and the intra prediction unit 354 may be functionally the same as the intra prediction unit 254, and they determine the partitioning and perform prediction based on the partitioning and/or prediction parameters or corresponding information received from the encoded image data 21 (e.g., parsed and/or decoded by the entropy decoding unit 304).
  • the mode application unit 360 can be used to perform prediction (intra-frame or inter-frame prediction) of each block according to the reconstructed image, block or corresponding pixels (filtered or unfiltered), to obtain the predicted block 365 .
  • the intra prediction unit 354 in the mode application unit 360 is used to generate a prediction block 365 for an image block of the current video slice based on the indicated intra prediction mode and data from previously decoded blocks of the current picture.
  • the inter prediction unit 344 (e.g., the motion compensation unit) of the mode application unit 360 is used to generate a prediction block 365 for a video block of the current video slice.
  • the predicted blocks may be generated from one of the reference pictures in one of the reference picture lists.
  • Video decoder 30 may construct reference frame list 0 and list 1 from the reference pictures stored in DPB 330 using a default construction technique.
  • In addition to or instead of slices (e.g., video slices), the same or a similar process can be applied to embodiments that use coding block groups (e.g., video coding block groups) and/or coding blocks (e.g., video coding blocks); for example, video may be encoded using I, P or B coding block groups and/or coding blocks.
  • the mode application unit 360 is configured to determine prediction information for a video block of the current video slice by parsing the motion vectors and other syntax elements, and use the prediction information to generate a prediction block for the current video block being decoded. For example, the mode application unit 360 uses some of the received syntax elements to determine the prediction mode (such as intra prediction or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (such as a B slice, P slice or GPB slice), construction information for one or more of the reference picture lists of the slice, the motion vector of each inter-coded video block of the slice, the inter prediction status of each inter-coded video block of the slice, and other information, to decode the video blocks within the current video slice.
  • the video decoder 30 of FIG. 3 can also be used to segment and/or decode an image using slices (also called video slices), where an image can be segmented or decoded using one or more slices (typically non-overlapping).
  • Each slice may include one or more blocks (e.g., CTUs) or one or more block groups (e.g., coding blocks (tiles) in the H.265/HEVC/VVC standard and bricks in the VVC standard).
  • the video decoder 30 shown in FIG. 3 can also be configured to use slices/coding block groups (also called video coding block groups) and/or coding blocks (also called video coding blocks) to segment and/or decode an image, where an image may be segmented or decoded using one or more slices/coding block groups (usually non-overlapping); each slice/coding block group may consist of one or more blocks (such as CTUs) or one or more coding blocks, etc., and each coding block may be rectangular or the like in shape and may include one or more complete or partial blocks (such as CTUs).
  • video decoder 30 may be used to decode encoded image data 21 .
  • decoder 30 may generate an output video stream without loop filter unit 320 .
  • for certain blocks or frames, the non-transform based decoder 30 can directly inverse quantize the residual signal without the inverse transform processing unit 312.
  • video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
  • the processing result of the current step can be further processed, and then output to the next step.
  • further operations such as clipping or shifting operations, may be performed on the processing results of interpolation filtering, motion vector derivation or loop filtering.
  • the value of the motion vector is limited to a predefined range according to the representation bits of the motion vector. If the representation bit depth of the motion vector is bitDepth, the range is -2^(bitDepth-1) to 2^(bitDepth-1)-1, where "^" represents exponentiation. For example, if bitDepth is set to 16, the range is -32768 to 32767; if bitDepth is set to 18, the range is -131072 to 131071.
  • the values of the derived motion vectors (e.g., the MVs of the four 4x4 sub-blocks in an 8x8 block) are constrained such that the maximum difference between the integer parts of the four 4x4 sub-block MVs is no more than N pixels, for example, no more than 1 pixel.
  • Two ways of limiting the motion vector according to bitDepth may be used; one of them, clamping the value to the allowed range, is sketched below.
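  • A minimal sketch of the clamping variant is shown below: each motion vector component is clipped to [-2^(bitDepth-1), 2^(bitDepth-1)-1]. The tuple representation of the MV is an assumption for illustration.

```python
def clamp_mv(mv, bit_depth=16):
    """Clip each motion vector component to the range allowed by its representation bit depth."""
    lo = -(1 << (bit_depth - 1))        # e.g. -32768 for bitDepth = 16
    hi = (1 << (bit_depth - 1)) - 1     # e.g.  32767 for bitDepth = 16
    return tuple(max(lo, min(hi, int(c))) for c in mv)

# Example: clamp_mv((40000, -500), bit_depth=16) -> (32767, -500)
```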
  • All other functions (also referred to as tools or techniques) of video encoder 20 and video decoder 30 are equally applicable to still image processing, such as residual calculation 204/304, transform 206, quantization 208, inverse quantization 210/310, (inverse ) transformation 212/312, segmentation 262/362, intra prediction 254/354 and/or loop filtering 220/320, entropy encoding 270 and entropy decoding 304.
  • FIG. 5 is an exemplary block diagram of a video decoding device 500 provided in an embodiment of the present application.
  • the video coding apparatus 500 is suitable for implementing the disclosed embodiments described herein.
  • the video decoding device 500 may be a decoder, such as the video decoder 30 in FIG. 1a, or an encoder, such as the video encoder 20 in FIG. 1a.
  • the video decoding device 500 includes: an input port 510 (or ingress port 510) for receiving data and a receiving unit (receiver unit, Rx) 520; a processor, logic unit or central processing unit (CPU) 530 for processing data;
  • the processor 530 here can be a neural network processor 530; a sending unit (transmitter unit, Tx) 540 and an output port 550 (or egress port 550) for transmitting data; and a memory 560.
  • the video decoding device 500 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the input port 510, the receiving unit 520, the transmitting unit 540 and the output port 550, serving as the egress or ingress of optical or electrical signals.
  • the processor 530 is realized by hardware and software.
  • Processor 530 may be implemented as one or more processor chips, cores (eg, multi-core processors), FPGAs, ASICs, and DSPs.
  • Processor 530 is in communication with ingress port 510 , receiving unit 520 , transmitting unit 540 , egress port 550 and memory 560 .
  • Processor 530 includes a decoding module 570 (eg, a neural network based decoding module 570).
  • the decoding module 570 implements the embodiments disclosed above. For example, the decode module 570 performs, processes, prepares, or provides for various encoding operations.
  • the decoding module 570 is implemented as instructions stored in the memory 560 and executed by the processor 530 .
  • Memory 560, which may include one or more disks, tape drives, and solid-state drives, may be used as an overflow data storage device for storing programs when such programs are selected for execution, and for storing instructions and data that are read during program execution.
  • the memory 560 may be volatile and/or non-volatile, and may be a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a ternary content addressable memory (ternary content-addressable memory, TCAM) and/or static random-access memory (static random-access memory, SRAM).
  • FIG. 6 is an exemplary block diagram of an apparatus 600 provided in an embodiment of the present application.
  • the apparatus 600 may be used as either or both of the source device 12 and the destination device 14 in FIG. 1 a .
  • Processor 602 in apparatus 600 may be a central processing unit.
  • processor 602 may be any other type of device or devices, existing or to be developed in the future, capable of manipulating or processing information. While the disclosed implementations can be implemented using a single processor, such as processor 602 as shown, it is faster and more efficient to use more than one processor.
  • memory 604 in apparatus 600 may be a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may be used as memory 604 .
  • Memory 604 may include code and data 606 accessed by processor 602 via bus 612 .
  • Memory 604 may also include an operating system 608 and application programs 610, including at least one program that allows processor 602 to perform the methods described herein.
  • application programs 610 may include applications 1 through N, and also include a video coding application that performs the methods described herein.
  • Apparatus 600 may also include one or more output devices, such as display 618 .
  • display 618 may be a touch-sensitive display that combines the display with touch-sensitive elements that may be used to sense touch input.
  • Display 618 may be coupled to processor 602 via bus 612 .
  • Although the bus 612 in the apparatus 600 is described herein as a single bus, the bus 612 may include multiple buses. Additionally, secondary storage may be directly coupled to other components of the apparatus 600 or accessed over a network, and may include a single integrated unit such as a memory card or multiple units such as multiple memory cards. Accordingly, the apparatus 600 may have a wide variety of configurations.
  • FIG. 7 is a coding framework provided by an embodiment of the present application. It can be seen from FIG. 7 that the encoding framework provided by the present application includes a prediction module, a quantization module, an entropy encoder, an encoding substream buffer, and a substream interleaving module.
  • the input media content can be predicted, quantized, entropy coded and sub-stream interleaved to generate a bit stream.
  • a coding framework can include more or fewer modules.
  • an encoding framework can include more entropy encoders, for example, five entropy encoders.
  • FIG. 8 shows an encoding method provided by an embodiment of the present application.
  • This encoding method can be applied to the encoding framework shown in FIG. 7 .
  • the encoding method may include:
  • the foregoing media content may include at least one of an image, an image slice, or a video.
  • the media content may be firstly predicted to obtain a plurality of prediction data, and then the obtained plurality of prediction data may be quantized to obtain a plurality of syntax elements.
  • the input image can be divided into multiple image blocks, and then the multiple image blocks are input to the prediction module for prediction to obtain multiple prediction data, and then the obtained multiple prediction data are input to the quantization module for quantization to obtain multiple syntax elements .
  • multiple syntax elements may be classified first, and then according to the classification results, the syntax elements of different categories are sent to different entropy encoders for encoding to obtain different substreams.
  • the multiple syntax elements may be divided into syntax elements of the R channel, syntax elements of the G channel, and syntax elements of the B channel according to the channel to which each element syntax of the multiple syntax elements belongs. Then the syntax elements of the R channel are sent to the entropy encoder 1 for encoding to obtain substream 1; the syntax elements of the G channel are sent to the entropy encoder 2 for encoding to obtain substream 2; the syntax elements of the B channel are sent to the entropy encoder 3 is encoded to obtain substream 3.
  • a polling method may be used to send multiple syntax elements to multiple entropy encoders for encoding to obtain multiple substreams.
  • X syntax elements among the plurality of syntax elements can be sent to entropy encoder 1 for encoding, then X syntax elements among the remaining syntax elements can be sent to entropy encoder 2 for encoding, and then X syntax elements among the remaining syntax elements are sent to entropy encoder 3 for encoding. This is repeated until all syntax elements have been sent to an entropy encoder; a sketch of this polling distribution is given after this item.
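  • The sketch below illustrates this polling (round-robin) distribution. The group size X, the number of entropy encoders and the representation of syntax elements as a plain list are assumptions made only for the example.

```python
def distribute_round_robin(syntax_elements, num_encoders, group_size):
    """Send consecutive groups of `group_size` syntax elements to encoders 0..num_encoders-1 in turn."""
    per_encoder = [[] for _ in range(num_encoders)]
    for i in range(0, len(syntax_elements), group_size):
        encoder_index = (i // group_size) % num_encoders
        per_encoder[encoder_index].extend(syntax_elements[i:i + group_size])
    return per_encoder      # each list would then be entropy encoded into one substream

# Example: distribute_round_robin(list(range(10)), num_encoders=3, group_size=2)
# -> [[0, 1, 6, 7], [2, 3, 8, 9], [4, 5]]
```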
  • the bit stream includes a plurality of data packets, and each of the plurality of data packets includes an identifier for indicating the substream to which it belongs.
  • each sub-stream of the multiple sub-streams may be segmented into multiple data packets according to a preset segmentation length, and then a bit stream may be obtained according to the multiple data packets obtained through the segmentation.
  • Step 7 Input K data packets into the bit stream in sequence.
  • Step 8 Select the next coded substream buffer according to the preset order, and return to step 2; if all coded substream buffers have been processed, end the substream interleaving operation.
  • substream interleaving may also be performed according to the following steps: Step 1: select a coded substream buffer. Step 2: record the size of the remaining data in the current coded substream buffer as S; if S is greater than or equal to N-M bits, take out data with a length of N-M bits from the current coded substream buffer. Step 3: if S is less than N-M bits: if the current image block is the last block of the input image or input image slice, take out all the data from the current coded substream buffer and append zeros after the data until the data length is N-M bits; if the current image block is not the last block, skip to step 6. Step 4: use the data obtained in step 2 or step 3 as the data body and add a data header; the length of the data header is M bits, and the content of the data header is the substream tag value corresponding to the current coded substream buffer, so that a data packet as shown in Figure 5 is constructed. Step 5: enter the data packet into the bit stream.
  • the above encoding buffer (ie the encoding substream buffer) may be a FIFO buffer.
  • the data that enters the encoding buffer first will leave the buffer first, and at the same time enter the entropy encoder first for entropy encoding.
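  • The packet construction used in the interleaving steps above (an M-bit header carrying the substream tag followed by an (N-M)-bit body, zero-padded only for the last block of an image or slice) can be sketched as follows. Representing the coded data as a '0'/'1' bit string is an illustrative choice made only for this example.

```python
def make_packet(substream_tag: int, body_bits: str, n_bits: int, m_bits: int, is_last_block: bool) -> str:
    """Build one N-bit packet: an M-bit header (substream tag value) plus an (N-M)-bit data body."""
    body_len = n_bits - m_bits
    if len(body_bits) < body_len:
        if not is_last_block:
            raise ValueError("only the last block of an image or image slice may be zero padded")
        body_bits = body_bits + "0" * (body_len - len(body_bits))   # append zeros up to N-M bits
    header = format(substream_tag, "0{}b".format(m_bits))           # M-bit substream tag
    return header + body_bits[:body_len]

# Example with assumed sizes N = 16, M = 2, for substream tag 1:
# make_packet(1, "1010111100", 16, 2, is_last_block=True) -> "0110101111000000"
```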
  • the specific method for obtaining the bit stream according to multiple data packets may be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • the sub-stream 1 in the coded sub-stream buffer 1 is divided into 4 data packets, namely 1-1, 1-2, 1-3 and 1-4; the sub-stream 2 in the coded sub-stream buffer 2 is divided into 2 data packets, namely 2-1 and 2-2; and the sub-stream 3 in the coded sub-stream buffer 3 is divided into 3 data packets, namely 3-1, 3-2 and 3-3.
  • the sub-stream interleaving module can first encode the data packets 1-1, 1-2, 1-3 and 1-4 in the encoded sub-stream buffer 1 into the bit stream, and then encode the data packets in the encoded sub-stream buffer 2 2-1 and 2-2 are encoded into the bitstream, and finally the data packets 3-1, 3-2 and 3-3 in the encoded substream buffer 3 are encoded into the bitstream.
  • the sub-stream 1 in the coded sub-stream buffer 1 is divided into 4 data packets, namely 1-1, 1-2, 1-3 and 1-4; the sub-stream 2 in the coded sub-stream buffer 2 is divided into 2 data packets, namely 2-1 and 2-2; and the sub-stream 3 in the coded sub-stream buffer 3 is divided into 3 data packets, namely 3-1, 3-2 and 3-3.
  • the sub-stream interleaving module can first compile the data packet 1-1 in the coded sub-stream buffer 1 into the bit stream, then compile the data packet 2-1 in the coded sub-stream buffer 2 into the bit stream, and then compile the data packet 3-1 in the coded sub-stream buffer 3 into the bit stream.
  • the data packets are then compiled into the bit stream in the order 1-2, 2-2, 3-2, 1-3, 3-3, 1-4.
  • the data packet may include a data header and a data body.
  • the data header is used to store the identifier of the data packet.
  • a data packet with a length of N bits may include a data header with a length of M bits and a data body with a length of N-M bits.
  • M and N are agreed fixed values in the encoding and decoding process.
  • the above data packets may have the same length.
  • the above data packets may all have a length of N bits.
  • when splitting a data packet with a length of N bits, the data header with a length of M bits can be extracted, and the remaining N-M bits of data serve as the data body.
  • the data in the data header can be parsed to obtain the identifier of the data packet (also referred to as the substream tag value); a parsing sketch is given after this item.
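  • a minimal sketch of building and parsing such a fixed-length packet is given below. The concrete values M = 2 and N = 16 and the bit-string representation are illustrative assumptions; in practice M and N are the fixed values agreed between encoder and decoder.

```python
# Sketch of the packet format above: an N-bit packet = an M-bit data header holding the
# substream tag value + an (N-M)-bit data body. M = 2 and N = 16 are illustrative values.
M, N = 2, 16

def build_packet(substream_tag, body_bits):
    """Assemble an N-bit packet from a tag value and an (N-M)-bit data body."""
    assert 0 <= substream_tag < (1 << M) and len(body_bits) == N - M
    return format(substream_tag, f'0{M}b') + body_bits

def parse_packet(packet_bits):
    """Split an N-bit packet into its substream tag value and its (N-M)-bit data body."""
    assert len(packet_bits) == N
    return int(packet_bits[:M], 2), packet_bits[M:]

pkt = build_packet(2, '01' * 7)
assert parse_packet(pkt) == (2, '01' * 7)
```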
  • each data packet in the encoded bitstream contains an identifier indicating the substream to which the data packet belongs, so the decoding end can use this identifier to send the data packets in the bitstream to multiple entropy decoders for parallel decoding.
  • using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • FIG. 11 is a decoding framework provided by an embodiment of the present application. It can be seen from FIG. 11 that the decoding framework provided by the present application includes a substream deinterleaving module, a decoding substream buffer, an entropy decoder, an inverse quantization module, and a prediction reconstruction module.
  • a decoding framework may include more or fewer modules.
  • the decoding framework can include more entropy decoders, such as five entropy decoders.
  • FIG. 12 is a decoding method provided by an embodiment of the present application. This decoding method can be applied to the decoding framework shown in FIG. 11. As shown in FIG. 12, the decoding method may include:
  • for example, the bitstream may be obtained by receiving it over a display link.
  • the substream deinterleaving module may divide the bit stream into multiple data packets according to a preset segmentation length.
  • for example, the bitstream may be divided into multiple data packets according to a division length of N bits, where N is an integer.
  • the substream to which each of the multiple data packets belongs may first be determined according to the identifiers of the multiple data packets. Each data packet is then fed into the decoding buffer of the substream to which it belongs, and the data packets in each decoding buffer are sent to the entropy decoder corresponding to that buffer for decoding to obtain the multiple syntax elements.
  • the substream deinterleaving module may first determine, according to the identifiers of the multiple data packets, the substream to which each data packet belongs; it then sends the data packets of substream 1 into decoding substream buffer 1, the data packets of substream 2 into decoding substream buffer 2, and the data packets of substream 3 into decoding substream buffer 3. Entropy decoder 1 then decodes the data in decoding substream buffer 1 to obtain syntax elements, entropy decoder 2 decodes the data in decoding substream buffer 2 to obtain syntax elements, and entropy decoder 3 decodes the data in decoding substream buffer 3 to obtain syntax elements.
  • the above decoding buffer may be a FIFO buffer.
  • the data packet that enters the decoding buffer first leaves the buffer first and is the first to enter the entropy decoder for entropy decoding; a dispatch sketch is given after this item.
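  • the dispatch described above can be sketched as follows. The FIFO queues, the values M = 2 and N = 16, the number of substreams, and the bit-string form of the bitstream are illustrative assumptions; the entropy decoders themselves are not modeled.

```python
from collections import deque

# Sketch of substream de-interleaving: cut the bitstream into N-bit packets, read the
# M-bit header of each packet, and queue its (N-M)-bit body in the FIFO decoding buffer
# of the substream it belongs to. Each buffer would then feed its own entropy decoder,
# so the substreams can be entropy-decoded in parallel.
M, N = 2, 16

def deinterleave(bitstream_bits, num_substreams=3):
    decode_buffers = {tag: deque() for tag in range(num_substreams)}  # one FIFO per substream
    for i in range(0, len(bitstream_bits), N):                        # preset split length N
        packet = bitstream_bits[i:i + N]
        tag = int(packet[:M], 2)                                      # identifier in the header
        decode_buffers[tag].append(packet[M:])                        # body joins its substream
    return decode_buffers

bitstream = ('00' + '1' * 14) + ('01' + '0' * 14) + ('10' + '10' * 7)
print({tag: list(q) for tag, q in deinterleave(bitstream).items()})
```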
  • the multiple syntax elements may first be dequantized to obtain multiple residuals, and the multiple residuals may then be predicted and reconstructed to restore the media content.
  • the dequantization module may dequantize multiple syntax elements first to obtain multiple residuals, and then the prediction and reconstruction module predicts and reconstructs the multiple residuals to restore the media content.
  • dequantization and predictive reconstruction can be processed by any method conceivable by those skilled in the art, which is not specifically limited in this embodiment of the present application.
  • for example, uniform inverse quantization may be used for the inverse quantization; a small sketch is given after this item.
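  • as a concrete illustration of uniform inverse quantization, the sketch below maps each decoded quantization level back to a residual by multiplying it by the quantization step; the step size and the example levels are illustrative assumptions, and the actual quantization scheme is not limited by this embodiment.

```python
# Sketch of uniform inverse quantization: each syntax element parsed by the entropy
# decoders is treated as a quantization level and multiplied by the quantization step.
def uniform_dequantize(levels, q_step):
    return [level * q_step for level in levels]

levels = [3, -1, 0, 2]                       # quantization levels from the syntax elements
print(uniform_dequantize(levels, q_step=4))  # [12, -4, 0, 8] -> residuals for reconstruction
```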
  • the foregoing media content may include at least one of an image, an image slice, or a video.
  • the decoding method provided by the embodiment of the present application does not use a single entropy decoder for decoding during the decoding process, but sends data packets to multiple entropy decoders for parallel decoding according to the identifier of the data packets.
  • using multiple entropy decoders for parallel decoding can improve the throughput of the decoder, thereby improving the decoding performance of the decoder.
  • the decoder can use the identifier to quickly determine the entropy decoder corresponding to the data packet, thereby reducing the complexity of the parallel decoding process of multiple entropy decoders.
  • An encoding device for performing the above-mentioned encoding method will be introduced below with reference to FIG. 13 .
  • the encoding device includes hardware and/or software modules corresponding to each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementations should not be considered to exceed the scope of the embodiments of the present application.
  • the embodiment of the present application may divide the functional modules of the encoding device according to the above method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 13 shows a schematic diagram of a possible composition of the encoding device involved in the above embodiments. As shown in FIG. 13, the encoding device 1300 may include: a syntax unit 1301, an encoding unit 1302, and a substream interleaving unit 1303.
  • the syntax unit 1301 is configured to obtain multiple syntax elements according to media content.
  • the syntax unit 1301 may be used to execute S801 in the above encoding method.
  • the encoding unit 1302 is configured to send the multiple syntax elements to an entropy encoder for encoding to obtain multiple substreams.
  • the encoding unit 1302 may be configured to perform S802 in the above encoding method.
  • the substream interleaving unit 1303 is configured to interleave multiple substreams into a bitstream, where the bitstream includes multiple data packets, and the data packets include an identifier for indicating the substream to which they belong.
  • the substream interleaving unit 1303 may be configured to perform S803 in the above encoding method.
  • the syntax unit 1301 is specifically configured to: predict the media content to obtain multiple prediction data; quantize the multiple prediction data to obtain multiple syntax elements.
  • the sub-stream interleaving unit 1303 is specifically configured to: segment each sub-stream of the multiple sub-streams into multiple data packets according to a preset segmentation length; obtain a bit stream according to the multiple data packets.
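  • how the three units cooperate can be pictured with the minimal skeleton below; the class and attribute names and the treatment of each unit as a plain Python callable are illustrative assumptions rather than the actual apparatus, which may also be realized in hardware.

```python
# Skeleton of how the units of encoding device 1300 chain together (S801-S803).
class EncodingDevice1300:
    def __init__(self, syntax_unit, encoding_unit, interleaving_unit):
        self.syntax_unit = syntax_unit               # unit 1301: media content -> syntax elements
        self.encoding_unit = encoding_unit           # unit 1302: syntax elements -> substreams
        self.interleaving_unit = interleaving_unit   # unit 1303: substreams -> bitstream packets

    def encode(self, media_content):
        syntax_elements = self.syntax_unit(media_content)   # S801: predict and quantize
        substreams = self.encoding_unit(syntax_elements)    # S802: entropy encode into substreams
        return self.interleaving_unit(substreams)           # S803: packetize and interleave
```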
  • a decoding device for performing the above decoding method will be introduced below with reference to FIG. 14 .
  • the decoding device includes hardware and/or software modules corresponding to each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementations should not be considered to exceed the scope of the embodiments of the present application.
  • the functional modules of the decoding device may be divided according to the above method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 14 shows a possible composition diagram of the decoding device involved in the above embodiment.
  • the decoding device 1400 may include: an acquisition unit 1401 , a substream deinterleaving unit 1402 , a decoding unit 1403 and a restoring unit 1404 .
  • the obtaining unit 1401 is configured to obtain a bit stream.
  • the obtaining unit 1401 may be configured to execute S1201 in the above decoding method.
  • the substream deinterleaving unit 1402 is configured to obtain multiple data packets according to the bitstream.
  • the substream deinterleaving unit 1402 may be used to execute S1202 in the above decoding method.
  • the decoding unit 1403 is configured to send the multiple data packets to multiple entropy decoders for decoding according to the identifiers of the multiple data packets to obtain multiple syntax elements.
  • the decoding unit 1403 may be configured to perform S1203 in the above decoding method.
  • the restoring unit 1404 is configured to restore media content according to the multiple syntax elements.
  • the restoring unit 1404 may be configured to execute S1204 in the above decoding method.
  • the substream deinterleaving unit 1402 is specifically configured to segment the bitstream into multiple data packets according to a preset segmentation length.
  • the decoding unit 1403 is specifically configured to: determine, according to the identifiers of the multiple data packets, the substream to which each of the multiple data packets belongs; send each data packet into the decoding buffer of the substream to which it belongs; and send the data packets in each decoding buffer to the entropy decoder corresponding to that buffer for decoding to obtain the multiple syntax elements.
  • the restoring unit 1404 is specifically configured to: perform inverse quantization on the multiple syntax elements to obtain multiple residuals, and predict and reconstruct the multiple residuals to restore the media content. A minimal skeleton of how these units chain together is given after this item.
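  • as with the encoding device, the chaining of the four decoding units can be pictured with the skeleton below; the names and the callable-based wiring are illustrative assumptions, not the actual apparatus.

```python
# Skeleton of how the units of decoding device 1400 chain together (S1201-S1204).
class DecodingDevice1400:
    def __init__(self, acquisition_unit, deinterleaving_unit, decoding_unit, restoring_unit):
        self.acquisition_unit = acquisition_unit        # unit 1401: obtain the bitstream
        self.deinterleaving_unit = deinterleaving_unit  # unit 1402: bitstream -> data packets
        self.decoding_unit = decoding_unit              # unit 1403: packets -> syntax elements
        self.restoring_unit = restoring_unit            # unit 1404: dequantize and reconstruct

    def decode(self, source):
        bitstream = self.acquisition_unit(source)             # S1201
        packets = self.deinterleaving_unit(bitstream)          # S1202
        syntax_elements = self.decoding_unit(packets)          # S1203
        return self.restoring_unit(syntax_elements)            # S1204
```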
  • An embodiment of the present application also provides an encoding device, which includes at least one processor; when the at least one processor executes program code or instructions, the related method steps described above are implemented so as to implement the encoding method in the above embodiments.
  • the device may further include at least one memory, and the at least one memory is used to store the program code or instruction.
  • An embodiment of the present application also provides a decoding device, which includes at least one processor; when the at least one processor executes program code or instructions, the related method steps described above are implemented so as to implement the decoding method in the above embodiments.
  • the device may further include at least one memory, and the at least one memory is used to store the program code or instructions.
  • the embodiments of the present application also provide a computer storage medium. The computer storage medium stores computer instructions, and when the computer instructions are run on the encoding device, the encoding device executes the above related method steps to implement the encoding and decoding methods in the above embodiments.
  • An embodiment of the present application also provides a computer program product, which, when running on a computer, causes the computer to execute the above-mentioned related steps, so as to implement the encoding and decoding method in the above-mentioned embodiment.
  • the embodiment of the present application also provides a codec device, which may specifically be a chip, an integrated circuit, a component, or a module.
  • the device may include a connected processor and a memory for storing instructions, or the device may include at least one processor for fetching instructions from an external memory.
  • the processor can execute instructions, so that the chip executes the encoding and decoding methods in the above method embodiments.
  • FIG. 15 shows a schematic structural diagram of a chip 1500 .
  • the chip 1500 includes one or more processors 1501 and an interface circuit 1502 .
  • the above-mentioned chip 1500 may further include a bus 1503 .
  • the processor 1501 may be an integrated circuit chip with signal processing capability. During implementation, each step of the encoding and decoding method described above may be completed by an integrated logic circuit of hardware in the processor 1501 or instructions in the form of software.
  • the above-mentioned processor 1501 may be a general-purpose processor, a digital signal processing (DSP) device, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the interface circuit 1502 can be used for sending or receiving data, instructions or information.
  • the processor 1501 can process the data, instructions, or other information received through the interface circuit 1502, and can send out the processed information through the interface circuit 1502.
  • the chip further includes a memory, which may include a read-only memory and a random access memory, and provides operation instructions and data to the processor.
  • a portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
  • the memory stores executable software modules or data structures, and the processor can invoke the stored operation instructions (which may be stored in the operating system) to perform corresponding operations.
  • the chip may be used in the encoding device or the decoding device involved in the embodiments of the present application.
  • the interface circuit 1502 may be used to output an execution result of the processor 1501 .
  • the functions corresponding to the processor 1501 and the interface circuit 1502 can be realized through hardware design, software design, or a combination of software and hardware, which is not limited here.
  • the device, computer storage medium, computer program product, and chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which are not repeated here.
  • the sequence numbers of the above processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • the disclosed system, device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the above units is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the present application disclose encoding and decoding methods and apparatuses, relate to the field of media technologies, and can improve the decoding performance of a decoder. The method includes: first obtaining a bitstream; then obtaining multiple data packets according to the bitstream; then sending the multiple data packets into multiple entropy decoders for decoding according to the identifiers of the multiple data packets to obtain multiple syntax elements; and finally restoring media content according to the multiple syntax elements. In the decoding method provided by the embodiments of the present application, a single entropy decoder is not used during decoding; instead, the data packets are sent to multiple entropy decoders for parallel decoding according to their identifiers. Compared with decoding with a single entropy decoder, parallel decoding with multiple entropy decoders can increase the throughput of the decoder and thus improve its decoding performance.

Description

编解码方法和装置
本申请要求于2022年02月28日提交中国专利局、申请号为202210186914.6、申请名称为“编解码方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及媒体技术领域,尤其涉及编解码方法和装置。
背景技术
媒体设备在传输媒体内容时会使用到显示接口。显示接口在传输媒体内容时,可以通过编码操作对媒体内容进行压缩以降低媒体内容传输过程中的带宽量。接收端在收到压缩后的媒体内容后需要通过解码操作对压缩后的媒体内容进行解码还原媒体内容。
解码端对压缩后的媒体内容进行解码还原媒体内容的过程所需时长与解码器的解码性能相关。如何提升解码器的解码性能是本领域技术人员亟需解决的问题之一。
发明内容
本申请实施例提供了编解码方法和装置,能够提升解码器的解码性能。为达到上述目的,本申请实施例采用如下技术方案:
第一方面,本申请实施例提供了一种解码方法,该方法包括:首先获取比特流。然后根据所述比特流得到多个数据包。之后根据所述多个数据包的标识将所述多个数据包送入多个熵解码器进行解码得到多个语法元素。最后根据所述多个语法元素还原媒体内容。
可以看出,本申请实施例提供的解码方法在解码过程中,不使用单个熵解码器进行解码,而是根据数据包的标识将数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。另外,由于每个数据包都携带有标识,解码端可以利用该标识快速确定数据包对应的熵解码器,以达到用较低的复杂性实现多熵解码器并行解码的效果。
可选地,上述媒体内容可以包括图像、图像切片或视频中的至少一项。
在一种可能的实现方式中,所述根据所述多个数据包的标识将所述多个数据包送入多个熵解码器进行解码得到多个语法元素,可以包括:根据所述多个数据包的标识确定所述多个数据包中每个数据包所属的子流。将所述每个数据包送入每个数据包所属子流的解码缓冲区。将每个解码缓冲区中的数据包送入每个缓冲区对应的熵解码器进行解码得到所述多个语法元素。
可以理解的是,比特流可以由多个子流的数据包组成。而每个熵解码器对应一个子流。因此对于比特流中的每个数据包而言。每个数据包都能通过该数据包的标识确定该数据包所属的子流,然后将该数据包送往该数据包所属子流对应的熵解码器的解码缓冲区,之后熵解码器对解码缓冲区中的该数据包进行解码得到语法元素。即比特流切分得到的每个数 据包都能送入对应的熵解码器进行解码,从而实现多熵解码器并行解码。使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。
可选地,上述解码缓冲区可以为先进先出(first input first output,FIFO)缓冲区。先进入解码缓冲区的数据包,将率先离开缓冲区,同时率先进入熵解码器进行熵解码。
在一种可能的实现方式中,所述根据所述多个语法元素还原媒体内容,可以包括:对所述多个语法元素进行反量化得到多个残差;对所述多个残差进行预测重建还原媒体内容。
可以看出,一方面使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。另一方面,在对比特流中的多个数据包并行熵解码得到多个语法元素后,可以通过对着多个语法元素进行反量化和预测重建还原得到媒体内容。
其中,反量化和预测重建的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。例如,反量化可以采用均匀反量化。
在一种可能的实现方式中,所述根据所述比特流得到多个数据包,可以包括:按照预设切分长度将所述比特流切分为多个数据包。
示例性地,可以按照N比特的切分长度将比特流切分为多个数据包。其中,N为正整数。
可选地,所述数据包可以包括数据头和数据主体。其中,所述数据头用于存储所述数据包的标识。例如,长度为N比特的数据包可以包括长度为M比特的数据头和长度为N-M比特的数据主体。其中,上述M和N为编解码过程中约定好的固定值。
可选地,上述数据包的长度可以相同。
示例性地,上述数据包的长度可以均为N比特。在对长度为N比特的数据包进行切分时,可以提取出其中长度为M比特的数据头,余下N-M比特的数据作为数据主体。且可以通过对数据头中的数据进行解析,得到该数据包的标记(也可以称为子流标记值)。
第二方面,本申请实施例还提供了一种编码方法,该方法包括:首先根据媒体内容得到多个语法元素。然后将所述多个语法元素送入熵编码器进行编码得到多个子流。之后根据将所述多个子流交织为比特流,所述比特流包括多个数据包。其中,所述数据包包括用于指示所属的子流的标识。
可以看出,本申请实施例提供的编码方法中,编码得到比特流中的每个数据包均包含用于指示数据包所属子流的标识,解码端可以根据该标识将比特流中的数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。
可选地,上述媒体内容可以包括图像、图像切片或视频中的至少一项。
示例性地,可以将输入图像划分为图像块的形式,然后根据图像块得到多个语法元素。再然后将所述多个语法元素送入熵编码器进行编码得到多个子流。之后根据将所述多个子流交织为比特流。
在一种可能的实现方式中,所述将所述多个语法元素送入熵编码器进行编码得到多个子流,可以包括:将所述多个语法元素送入多个熵编码器进行编码得到多个子流。
示例性地,可以将所述多个语法元素送入3个熵编码器进行编码得到3个子流。
在一种可能的实现方式中,所述所将所述多个子流交织为比特流,包括:根据预设切分长度将所述多个子流的每个子流切分为多个数据包;根据所述多个数据包得到所述比特 流。
示例性地,可以按照以下的步骤进行子流交织:步骤一、选择一个编码子流缓冲区。步骤二、根据当前编码子流缓冲区中剩余数据的大小S,计算可构建的数据包个数K=floor(S/(N-M)),其中floor为向下取整函数。步骤三、若当前图像块为输入图像或输入图像切片的最后一个块,则令K=K+1。步骤四、从当前编码子流缓冲区中连续取K段长度为N-M比特的数据。步骤五、若步骤四取数据时,当前编码子流缓冲区中的数据小于N-M比特,则在此数据后补上若干个0,直至数据长度为N-M比特。步骤六、将步骤四或步骤五取得的数据作为数据主体,并添加数据头;数据头长度为M比特,数据头内容为当前编码子流缓冲区对应的子流标记值,构建成如图5所示的数据包。步骤七、将K个数据包依次输入比特流。步骤八、根据预置好的顺序,选择下一个编码子流缓冲区,并回到步骤二;若已处理完所有编码子流缓冲区,则结束子流交织操作。
又示例性地,也可以按照以下的步骤进行子流交织:步骤一、选择一个编码子流缓冲区。步骤二、记当前编码子流缓冲区中剩余数据的大小为S。如果S大于等于N-M比特:从当前编码子流缓冲区中取出长度为N-M比特的数据。步骤三、如果S小于N-M比特:如果当前图像块为输入图像或输入图像切片的最后一个块,则从当前编码子流缓冲区中取出所有数据,并在此数据后补上若干个0,直至数据长度为N-M比特;如果当前图像块非最后一个块,则跳到步骤六。步骤四、将步骤二或步骤三取得的数据作为数据主体,并添加数据头;数据头长度为M比特,数据头内容为当前编码子流缓冲区对应的子流标记值,构建成如图5所示的数据包。步骤五、将数据包输入比特流。步骤六、根据预置好的顺序,选择下一个编码子流缓冲区,并回到步骤二;若所有编码子流缓冲区中的数据均小于N-M比特,则结束子流交织操作。
可以看出,本申请实施例可以通过预设切分长度对每个子流切分得到多个数据包,然后根据这多个数据包得到比特流。由于编码得到比特流中的每个数据包均包含用于指示数据包所属子流的标识,解码端可以根据该标识将比特流中的数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。
可选地,上述编码缓冲区可以为FIFO缓冲区。先进入编码缓冲区的数据,将率先离开缓冲区,同时率先进入熵编码器进行熵编码。
其中,根据多个数据包得到比特流的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。
例如,可以按照子流顺序将子流的数据包编入比特流中。如先将第1个子流的数据包编入比特流,待第1个子流的数据包全部编入比特流后,再将第2个子流的数据包编入比特流,直至所有子流的数据包全部编入比特流。
又例如,可以按照轮询的顺序将子流的数据包编入比特流中。如包括3个子流,则可以先编入第1个子流的1个数据包,然后编入第2个子流的1个数据包,再编入第3个子流的1个数据包。由此往复直至所有子流的数据包全部编入比特流。
在一种可能的实现方式中,所述根据媒体内容得到多个语法元素,可以包括:对所述媒体内容进行预测得到多个预测数据;对所述多个预测数据进行量化得到所述多个语法元素。
其中,量化和预测的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。例如,量化可以采用均匀量化。
可以看出,本申请实施例可以根据媒体内容得到多个语法元素,然后通过对着多个语法元素编码得到比特流。由于比特流中的每个数据包均包含用于指示数据包所属子流的标识,解码端可以根据该标识将比特流中的数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。
可选地,所述数据包可以包括数据头和数据主体。其中,所述数据头用于存储所述数据包的标识。例如,长度为N比特的数据包可以包括长度为M比特的数据头和长度为N-M比特的数据主体。其中,上述M和N为编解码过程中约定好的固定值。
可选地,上述数据包的长度可以相同。
示例性地,上述数据包的长度可以均为N比特。在对长度为N比特的数据包进行切分时,可以提取出其中长度为M比特的数据头,余下N-M比特的数据作为数据主体。且可以通过对数据头中的数据进行解析,得到该数据包的标记(也可以称为子流标记值)。
第三方面,本申请实施例还提供了一种解码装置,该装置包括:获取单元、子流解交织单元、解码单元和还原单元。所述获取单元,用于获取比特流。所述子流解交织单元,用于根据所述比特流得到多个数据包。所述解码单元,用于根据所述多个数据包的标识将所述多个数据包送入多个熵解码器进行解码得到多个语法元素。所述还原单元,用于根据所述多个语法元素还原媒体内容。
在一种可能的实现方式中,所述解码单元具体用于:根据所述多个数据包的标识确定所述多个数据包中每个数据包所属的子流;将所述每个数据包送入每个数据包所属子流的解码缓冲区;将每个解码缓冲区中的数据包送入每个缓冲区对应的熵解码器进行解码得到所述多个语法元素。
在一种可能的实现方式中,所述还原单元具体用于:对所述多个语法元素进行反量化得到多个残差;对所述多个残差进行预测重建还原媒体内容。
在一种可能的实现方式中,所述子流解交织单元具体用于:所述子流解交织单元具体用于:按照预设切分长度将所述比特流切分为多个数据包。
可选地,所述数据包可以包括数据头和数据主体。其中,所述数据头用于存储所述数据包的标识。例如,长度为N比特的数据包可以包括长度为M比特的数据头和长度为N-M比特的数据主体。其中,上述M和N为编解码过程中约定好的固定值。
可选地,上述数据包的长度可以相同。
示例性地,上述数据包的长度可以均为N比特。在对长度为N比特的数据包进行切分时,可以提取出其中长度为M比特的数据头,余下N-M比特的数据作为数据主体。且可以通过对数据头中的数据进行解析,得到该数据包的标记(也可以称为子流标记值)。
第四方面,本申请实施例还提供了一种编码装置,该装置包括:语法单元、编码单元和子流交织单元。所述语法单元,用于根据媒体内容得到多个语法元素;所述编码单元,用于将所述多个语法元素送入熵编码器进行编码得到多个子流;所述子流交织单元,用于将所述多个子流交织为比特流,所述比特流包括多个数据包,所述数据包包括用于指示所属的子流的标识。
在一种可能的实现方式中,所述子流交织单元具体用于:根据预设切分长度将所述多个子流的每个子流切分为多个数据包;根据所述多个数据包得到所述比特流。
在一种可能的实现方式中,所述语法单元具体用于:对所述媒体内容进行预测得到多个预测数据;对所述多个预测数据进行量化得到所述多个语法元素。
可选地,所述数据包可以包括数据头和数据主体。其中,所述数据头用于存储所述数据包的标识。例如,长度为N比特的数据包可以包括长度为M比特的数据头和长度为N-M比特的数据主体。其中,上述M和N为编解码过程中约定好的固定值。
可选地,上述数据包的长度可以相同。
示例性地,上述数据包的长度可以均为N比特。在对长度为N比特的数据包进行切分时,可以提取出其中长度为M比特的数据头,余下N-M比特的数据作为数据主体。且可以通过对数据头中的数据进行解析,得到该数据包的标记(也可以称为子流标记值)。
第五方面,本申请实施例还提供一种解码装置,该装置包括:至少一个处理器,当所述至少一个处理器执行程序代码或指令时,实现上述第一方面或其任意可能的实现方式中所述的方法。
可选地,该装置还可以包括至少一个存储器,该至少一个存储器用于存储该程序代码或指令。
第六方面,本申请实施例还提供一种编码装置,该装置包括:至少一个处理器,当所述至少一个处理器执行程序代码或指令时,实现上述第二方面或其任意可能的实现方式中所述的方法。
可选地,该装置还可以包括至少一个存储器,该至少一个存储器用于存储该程序代码或指令。
第七方面,本申请实施例还提供一种芯片,包括:输入接口、输出接口、至少一个处理器。可选地,该芯片还包括存储器。该至少一个处理器用于执行该存储器中的代码,当该至少一个处理器执行该代码时,该芯片实现上述第一方面或其任意可能的实现方式中所述的方法。
可选地,上述芯片还可以为集成电路。
第八方面,本申请实施例还提供一种计算机可读存储介质,用于存储计算机程序,该计算机程序包括用于实现上述第一方面或其任意可能的实现方式中所述的方法。
第九方面,本申请实施例还提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机实现上述第一方面或其任意可能的实现方式中所述的方法。
本实施例提供的编解码装置、计算机存储介质、计算机程序产品和芯片均用于执行上文所提供的编解码方法,因此,其所能达到的有益效果可参考上文所提供的编解码方法中的有益效果,此处不再赘述。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请实施例的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1a为本申请实施例提供的译码系统的一种示例性框图;
图1b为本申请实施例提供的视频译码系统的一种示例性框图;
图2为本申请实施例提供的视频编码器的一种示例性框图;
图3为本申请实施例提供的视频解码器的一种示例性框图;
图4为本申请实施例提供的候选图像块的一种示例性的示意图;
图5为本申请实施例提供的视频译码设备的一种示例性框图;
图6为本申请实施例提供的装置的一种示例性框图;
图7为本申请实施例提供的一种编码框架的示意图;
图8为本申请实施例提供的一种编码方法的示意图;
图9为本申请实施例提供的一种子流交织的示意图;
图10为本申请实施例提供的另一种子流交织的示意图;
图11为本申请实施例提供的一种解码框架的示意图;
图12为本申请实施例提供的一种解码方法的示意图;
图13为本申请实施例提供的一种编码装置的示意图;
图14为本申请实施例提供的一种解码装置的示意图;
图15为本申请实施例提供的一种芯片的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请实施例一部分实施例,而不是全部的实施例。基于本申请实施例中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请实施例保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书以及附图中的术语“第一”和“第二”等是用于区别不同的对象,或者用于区别对同一对象的不同处理,而不是用于描述对象的特定顺序。
此外,本申请实施例的描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选的还包括其他没有列出的步骤或单元,或可选的还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
需要说明的是,本申请实施例的描述中,“示例性地”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性地”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性地”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。
首先对本申请实施例涉及的术语进行解释。
接口压缩:媒体设备在传输图像、视频时会使用到显示接口,对通过显示接口的图像视频数据进行压缩、解压缩操作,简写为接口压缩。
比特流:对媒体内容(如图像内容、视频内容等)进行编码后生成的二进制流。
语法元素:对媒体内容进行预测、变换等典型编码操作后得到的数据,熵编码的主要输入。
熵编码器:将输入语法元素转换为比特流的编码器模块。
熵解码器:将输入比特流转换为语法元素的编码器模块。
子流:语法元素的子集经过熵编码后得到的码流。
子流标记值:标记数据包所隶属子流的索引。
子流交织:将多个子流合并为比特流的操作,或称为复用(multiplexing)。
子流解交织:从比特流中拆分出不同子流的操作,或称为解复用(demultiplexing)。
数据编解码包括数据编码和数据解码两部分。数据编码在源侧(或通常称为编码器侧)执行,通常包括处理(例如,压缩)原始数据以减少表示该原始数据所需的数据量(从而更高效存储和/或传输)。数据解码在目的地侧(或通常称为解码器侧)执行,通常包括相对于编码器侧作逆处理,以重建原始数据。本申请实施例涉及的数据的“编解码”应理解为数据的“编码”或“解码”。编码部分和解码部分也合称为编解码(编码和解码,CODEC)。
在无损数据编码情况下,可以重建原始数据,即重建的原始数据与原始数据具有相同的质量(假设存储或传输期间没有传输损耗或其他数据丢失)。在有损数据编码情况下,通过量化等执行进一步压缩,来减少表示原始数据所需的数据量,而解码器侧无法完全重建原始数据,即重建的原始数据的质量比原始数据的质量低或差。
本申请实施例可以应用于对视频数据以及其他具有压缩/解压缩需求的数据等。以下以视频数据的编码(简称视频编码)为例对本申请实施例进行说明,其他类型的数据(例如图像数据、音频数据、整数型数据以及其他具有压缩/解压缩需求的数据)可以参考以下描述,本申请实施例对此不再赘述。需要说明的是,相对于视频编码,音频数据以及整数型数据等数据的编码过程中无需将数据分割为块,而是可以直接对数据进行编码。
视频编码通常是指处理形成视频或视频序列的图像序列。在视频编码领域,术语“图像(picture)”、“帧(frame)”或“图片(image)”可以用作同义词。
几个视频编码标准属于“有损混合型视频编解码”(即,将像素域中的空间和时间预测与变换域中用于应用量化的2D变换编码结合)。视频序列中的每个图像通常分割成不重叠的块集合,通常在块级上进行编码。换句话说,编码器通常在块(视频块)级处理即编码视频,例如,通过空间(帧内)预测和时间(帧间)预测来产生预测块;从当前块(当前处理/待处理的块)中减去预测块,得到残差块;在变换域中变换残差块并量化残差块,以减少待传输(压缩)的数据量,而解码器侧将相对于编码器的逆处理部分应用于编码或压缩的块,以重建用于表示的当前块。另外,编码器需要重复解码器的处理步骤,使得编码器和解码器生成相同的预测(例如,帧内预测和帧间预测)和/或重建像素,用于处理,即编码后续块。
在以下译码系统10的实施例中,编码器20和解码器30根据图1a至图3进行描述。
图1a为本申请实施例提供的译码系统10的一种示例性框图,例如可以利用本申请实施例技术的视频译码系统10(或简称为译码系统10)。视频译码系统10中的视频编码器20(或简称为编码器20)和视频解码器30(或简称为解码器30)代表可用于根据本申请实施例中描述的各种示例执行各技术的设备等。
如图1a所示,译码系统10包括源设备12,源设备12用于将编码图像等编码图像数据21提供给用于对编码图像数据21进行解码的目的设备14。
源设备12包括编码器20,另外即可选地,可包括图像源16、图像预处理器等预处理器(或预处理单元)18、通信接口(或通信单元)22。
图像源16可包括或可以为任意类型的用于捕获现实世界图像等的图像捕获设备,和/或任意类型的图像生成设备,例如用于生成计算机动画图像的计算机图形处理器或任意类型的用于获取和/或提供现实世界图像、计算机生成图像(例如,屏幕内容、虚拟现实(virtual reality,VR)图像和/或其任意组合(例如增强现实(augmented reality,AR)图像)的设备。所述图像源可以为存储上述图像中的任意图像的任意类型的内存或存储器。
为了区分预处理器(或预处理单元)18执行的处理,图像(或图像数据)17也可称为原始图像(或原始图像数据)17。
预处理器18用于接收原始图像数据17,并对原始图像数据17进行预处理,得到预处理图像(或预处理图像数据)19。例如,预处理器18执行的预处理可包括修剪、颜色格式转换(例如从RGB转换为YCbCr)、调色或去噪。可以理解的是,预处理单元18可以为可选组件。
视频编码器(或编码器)20用于接收预处理图像数据19并提供编码图像数据21(下面将根据图2等进一步描述)。
源设备12中的通信接口22可用于:接收编码图像数据21并通过通信信道13向目的设备14等另一设备或任何其他设备发送编码图像数据21(或其他任意处理后的版本),以便存储或直接重建。
目的设备14包括解码器30,另外即可选地,可包括通信接口(或通信单元)28、后处理器(或后处理单元)32和显示设备34。
目的设备14中的通信接口28用于直接从源设备12或从存储设备等任意其他源设备接收编码图像数据21(或其他任意处理后的版本),例如,存储设备为编码图像数据存储设备,并将编码图像数据21提供给解码器30。
通信接口22和通信接口28可用于通过源设备12与目的设备14之间的直连通信链路,例如直接有线或无线连接等,或者通过任意类型的网络,例如有线网络、无线网络或其任意组合、任意类型的私网和公网或其任意类型的组合,发送或接收编码图像数据(或编码数据)21。
例如,通信接口22可用于将编码图像数据21封装为报文等合适的格式,和/或使用任意类型的传输编码或处理来处理所述编码后的图像数据,以便在通信链路或通信网络上进行传输。
通信接口28与通信接口22对应,例如,可用于接收传输数据,并使用任意类型的对应传输解码或处理和/或解封装对传输数据进行处理,得到编码图像数据21。
通信接口22和通信接口28均可配置为如图1a中从源设备12指向目的设备14的对应通信信道13的箭头所指示的单向通信接口,或双向通信接口,并且可用于发送和接收消息等,以建立连接,确认并交换与通信链路和/或例如编码后的图像数据传输等数据传输相关的任何其他信息,等等。
视频解码器(或解码器)30用于接收编码图像数据21并提供解码图像数据(或解码 图像数据)31(下面将根据图3等进一步描述)。
后处理器32用于对解码后的图像等解码图像数据31(也称为重建后的图像数据)进行后处理,得到后处理后的图像等后处理图像数据33。后处理单元32执行的后处理可以包括例如颜色格式转换(例如从YCbCr转换为RGB)、调色、修剪或重采样,或者用于产生供显示设备34等显示的解码图像数据31等任何其他处理。
显示设备34用于接收后处理图像数据33,以向用户或观看者等显示图像。显示设备34可以为或包括任意类型的用于表示重建后图像的显示器,例如,集成或外部显示屏或显示器。例如,显示屏可包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微型LED显示器、硅基液晶显示器(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任意类型的其他显示屏。
译码系统10还包括训练引擎25,训练引擎25用于训练编码器20(尤其是编码器20中的熵编码单元270)或解码器30(尤其是解码器30中的熵解码单元304),以根据估计得到的估计概率分布对待编码图像块进行熵编码,训练引擎25的详细说明请参考下述方法测试施例。
尽管图1a示出了源设备12和目的设备14作为独立的设备,但设备实施例也可以同时包括源设备12和目的设备14或同时包括源设备12和目的设备14的功能,即同时包括源设备12或对应功能和目的设备14或对应功能。在这些实施例中,源设备12或对应功能和目的设备14或对应功能可以使用相同硬件和/或软件或通过单独的硬件和/或软件或其任意组合来实现。
根据描述,图1a所示的源设备12和/或目的设备14中的不同单元或功能的存在和(准确)划分可能根据实际设备和应用而有所不同,这对技术人员来说是显而易见的。
请参考图1b,图1b为本申请实施例提供的视频译码系统40的一种示例性框图,编码器20(例如视频编码器20)或解码器30(例如视频解码器30)或两者都可通过如图1b所示的视频译码系统40中的处理电路实现,例如一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件、视频编码专用处理器或其任意组合。请参考图2和图3,图2为本申请实施例提供的视频编码器的一种示例性框图,图3为本申请实施例提供的视频解码器的一种示例性框图。编码器20可以通过处理电路46实现,以包含参照图2编码器20论述的各种模块和/或本文描述的任何其他编码器系统或子系统。解码器30可以通过处理电路46实现,以包含参照图3解码器30论述的各种模块和/或本文描述的任何其他解码器系统或子系统。所述处理电路46可用于执行下文论述的各种操作。如图5所示,如果部分技术在软件中实施,则设备可以将软件的指令存储在合适的非瞬时性计算机可读存储介质中,并且使用一个或多个处理器在硬件中执行指令,从而执行本申请实施例技术。视频编码器20和视频解码器30中的其中一个可作为组合编解码器(encoder/decoder,CODEC)的一部分集成在单个设备中,如图1b所示。
源设备12和目的设备14可包括各种设备中的任一种,包括任意类型的手持设备或固定设备,例如,笔记本电脑或膝上型电脑、手机、智能手机、平板或平板电脑、相机、台 式计算机、机顶盒、电视机、显示设备、数字媒体播放器、视频游戏控制台、视频流设备(例如,内容业务服务器或内容分发服务器)、广播接收设备、广播发射设备以及监控设备等等,并可以不使用或使用任意类型的操作系统。源设备12和目的设备14也可以是云计算场景中的设备,例如云计算场景中的虚拟机等。在一些情况下,源设备12和目的设备14可配备用于无线通信的组件。因此,源设备12和目的设备14可以是无线通信设备。
源设备12和目的设备14可以安装虚拟现实(virtual reality,VR)应用、增强现实(augmented reality,AR)应用或者混合现实(mixed reality,MR)应用等虚拟场景应用程序(application,APP),并可以基于用户的操作(例如点击、触摸、滑动、抖动、声控等)运行VR应用、AR应用或者MR应用。源设备12和目的设备14可以通过摄像头和/或传感器采集环境中任意物体的图像/视频,再根据采集的图像/视频在显示设备上显示虚拟物体,该虚拟物体可以是VR场景、AR场景或MR场景中的虚拟物体(即虚拟环境中的物体)。
需要说明的是,本申请实施例中,源设备12和目的设备14中的虚拟场景应用程序可以是源设备12和目的设备14自身内置的应用程序,也可以是用户自行安装的第三方服务商提供的应用程序,对此不做具体限定。
此外,源设备12和目的设备14可以安装实时视频传输应用,例如直播应用。源设备12和目的设备14可以通过摄像头采集图像/视频,再将采集的图像/视频在显示设备上显示。
在一些情况下,图1a所示的视频译码系统10仅仅是示例性的,本申请实施例提供的技术可适用于视频编码设置(例如,视频编码或视频解码),这些设置不一定包括编码设备与解码设备之间的任何数据通信。在其他示例中,数据从本地存储器中检索,通过网络发送,等等。视频编码设备可以对数据进行编码并将数据存储到存储器中,和/或视频解码设备可以从存储器中检索数据并对数据进行解码。在一些示例中,编码和解码由相互不通信而只是编码数据到存储器和/或从存储器中检索并解码数据的设备来执行。
请参考图1b,图1b为本申请实施例提供的视频译码系统40的一种示例性框图,如图1b所示,视频译码系统40可以包含成像设备41、视频编码器20、视频解码器30(和/或藉由处理电路46实施的视频编/解码器)、天线42、一个或多个处理器43、一个或多个内存存储器44和/或显示设备45。
如图1b所示,成像设备41、天线42、处理电路46、视频编码器20、视频解码器30、处理器43、内存存储器44和/或显示设备45能够互相通信。在不同实例中,视频译码系统40可以只包含视频编码器20或只包含视频解码器30。
在一些实例中,天线42可以用于传输或接收视频数据的经编码比特流。另外,在一些实例中,显示设备45可以用于呈现视频数据。处理电路46可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。视频译码系统40也可以包含可选的处理器43,该可选处理器43类似的可以包含专用集成电路(application-specific integrated circuit,ASIC)逻辑、图形处理器、通用处理器等。另外,内存存储器44可以是任何类型的存储器,例如易失性存储器(例如,静态随机存取存储器(static random access memory,SRAM)、动态随机存储器(dynamic random access memory,DRAM)等)或非易失性存储器(例如,闪存等)等。在非限制性实例中,内存存储器 44可以由超速缓存内存实施。在其他实例中,处理电路46可以包含存储器(例如,缓存等)用于实施图像缓冲器等。
在一些实例中,通过逻辑电路实施的视频编码器20可以包含(例如,通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频编码器20,以实施参照图2和/或本文中所描述的任何其他编码器系统或子系统所论述的各种模块。逻辑电路可以用于执行本文所论述的各种操作。
在一些实例中,视频解码器30可以以类似方式通过处理电路46实施,以实施参照图3的视频解码器30和/或本文中所描述的任何其他解码器系统或子系统所论述的各种模块。在一些实例中,逻辑电路实施的视频解码器30可以包含(通过处理电路46或内存存储器44实施的)图像缓冲器和(例如,通过处理电路46实施的)图形处理单元。图形处理单元可以通信耦合至图像缓冲器。图形处理单元可以包含通过处理电路46实施的视频解码器30,以实施参照图3和/或本文中所描述的任何其他解码器系统或子系统所论述的各种模块。
在一些实例中,天线42可以用于接收视频数据的经编码比特流。如所论述,经编码比特流可以包含本文所论述的与编码视频帧相关的数据、指示符、索引值、模式选择数据等,例如与编码分割相关的数据(例如,变换系数或经量化变换系数,(如所论述的)可选指示符,和/或定义编码分割的数据)。视频译码系统40还可包含耦合至天线42并用于解码经编码比特流的视频解码器30。显示设备45用于呈现视频帧。
应理解,本申请实施例中对于参考视频编码器20所描述的实例,视频解码器30可以用于执行相反过程。关于信令语法元素,视频解码器30可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,视频编码器20可以将语法元素熵编码成经编码视频比特流。在此类实例中,视频解码器30可以解析这种语法元素,并相应地解码相关视频数据。
为便于描述,参考通用视频编码(versatile video coding,VVC)参考软件或由ITU-T视频编码专家组(video coding experts group,VCEG)和ISO/IEC运动图像专家组(motion picture experts group,MPEG)的视频编码联合工作组(joint collaboration team on video coding,JCT-VC)开发的高性能视频编码(high-efficiency video coding,HEVC)描述本申请实施例。本领域普通技术人员理解本申请实施例不限于HEVC或VVC。
编码器和编码方法
如图2所示,视频编码器20包括输入端(或输入接口)201、残差计算单元204、变换处理单元206、量化单元208、反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、模式选择单元260、熵编码单元270和输出端(或输出接口)272。模式选择单元260可包括帧间预测单元244、帧内预测单元254和分割单元262。帧间预测单元244可包括运动估计单元和运动补偿单元(未示出)。图2所示的视频编码器20也可称为混合型视频编码器或基于混合型视频编解码器的视频编码器。
参见图2,帧间预测单元为经过训练的目标模型(亦称为神经网络),该神经网络用于处理输入图像或图像区域或图像块,以生成输入图像块的预测值。例如,用于帧间预测 的神经网络用于接收输入的图像或图像区域或图像块,并且生成输入的图像或图像区域或图像块的预测值。
残差计算单元204、变换处理单元206、量化单元208和模式选择单元260组成编码器20的前向信号路径,而反量化单元210、逆变换处理单元212、重建单元214、缓冲器216、环路滤波器220、解码图像缓冲器(decoded picture buffer,DPB)230、帧间预测单元244和帧内预测单元254组成编码器的后向信号路径,其中编码器20的后向信号路径对应于解码器的信号路径(参见图3中的解码器30)。反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器230、帧间预测单元244和帧内预测单元254还组成视频编码器20的“内置解码器”。
图像和图像分割(图像和块)
编码器20可用于通过输入端201等接收图像(或图像数据)17,例如,形成视频或视频序列的图像序列中的图像。接收的图像或图像数据也可以是预处理后的图像(或预处理后的图像数据)19。为简单起见,以下描述使用图像17。图像17也可称为当前图像或待编码的图像(尤其是在视频编码中将当前图像与其他图像区分开时,其它图像例如同一视频序列,即也包括当前图像的视频序列,中的之前编码后图像和/或解码后图像)。
(数字)图像为或可以视为具有强度值的像素点组成的二维阵列或矩阵。阵列中的像素点也可以称为像素(pixel或pel)(图像元素的简称)。阵列或图像在水平方向和垂直方向(或轴线)上的像素点数量决定了图像的大小和/或分辨率。为了表示颜色,通常采用三个颜色分量,即图像可以表示为或包括三个像素点阵列。在RBG格式或颜色空间中,图像包括对应的红色、绿色和蓝色像素点阵列。但是,在视频编码中,每个像素通常以亮度/色度格式或颜色空间表示,例如YCbCr,包括Y指示的亮度分量(有时也用L表示)以及Cb、Cr表示的两个色度分量。亮度(luma)分量Y表示亮度或灰度水平强度(例如,在灰度等级图像中两者相同),而两个色度(chrominance,简写为chroma)分量Cb和Cr表示色度或颜色信息分量。相应地,YCbCr格式的图像包括亮度像素点值(Y)的亮度像素点阵列和色度值(Cb和Cr)的两个色度像素点阵列。RGB格式的图像可以转换或变换为YCbCr格式,反之亦然,该过程也称为颜色变换或转换。如果图像是黑白的,则该图像可以只包括亮度像素点阵列。相应地,图像可以为例如单色格式的亮度像素点阵列或4:2:0、4:2:2和4:4:4彩色格式的亮度像素点阵列和两个相应的色度像素点阵列。
在一个实施例中,视频编码器20的实施例可包括图像分割单元(图2中未示出),用于将图像17分割成多个(通常不重叠)图像块203。这些块在H.265/HEVC和VVC标准中也可以称为根块、宏块(H.264/AVC)或编码树块(coding tree block,CTB),或编码树单元(coding tree unit,CTU)。分割单元可用于对视频序列中的所有图像使用相同的块大小和使用限定块大小的对应网格,或在图像或图像子集或图像组之间改变块大小,并将每个图像分割成对应块。
在其他实施例中,视频编码器可用于直接接收图像17的块203,例如,组成所述图像17的一个、几个或所有块。图像块203也可以称为当前图像块或待编码图像块。
与图像17一样,图像块203同样是或可认为是具有强度值(像素点值)的像素点组成的二维阵列或矩阵,但是图像块203的比图像17的小。换句话说,块203可包括一个像素点阵列(例如,单色图像17情况下的亮度阵列或彩色图像情况下的亮度阵列或色度 阵列)或三个像素点阵列(例如,彩色图像17情况下的一个亮度阵列和两个色度阵列)或根据所采用的颜色格式的任何其他数量和/或类型的阵列。块203的水平方向和垂直方向(或轴线)上的像素点数量限定了块203的大小。相应地,块可以为M×N(M列×N行)个像素点阵列,或M×N个变换系数阵列等。
在一个实施例中,图2所示的视频编码器20用于逐块对图像17进行编码,例如,对每个块203执行编码和预测。
在一个实施例中,图2所示的视频编码器20还可以用于使用片(也称为视频片)分割和/或编码图像,其中图像可以使用一个或多个片(通常为不重叠的)进行分割或编码。每个片可包括一个或多个块(例如,编码树单元CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块(tile)和VVC标准中的砖(brick)。
在一个实施例中,图2所示的视频编码器20还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或编码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或编码,每个片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中每个编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
残差计算
残差计算单元204用于通过如下方式根据图像块(或原始块)203和预测块265来计算残差块205(后续详细介绍了预测块265):例如,逐个像素点(逐个像素)从图像块203的像素点值中减去预测块265的像素点值,得到像素域中的残差块205。
变换
变换处理单元206用于对残差块205的像素点值执行离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)等,得到变换域中的变换系数207。变换系数207也可称为变换残差系数,表示变换域中的残差块205。
变换处理单元206可用于应用DCT/DST的整数化近似,例如为H.265/HEVC指定的变换。与正交DCT变换相比,这种整数化近似通常由某一因子按比例缩放。为了维持经过正变换和逆变换处理的残差块的范数,使用其他比例缩放因子作为变换过程的一部分。比例缩放因子通常是根据某些约束条件来选择的,例如比例缩放因子是用于移位运算的2的幂、变换系数的位深度、准确性与实施成本之间的权衡等。例如,在编码器20侧通过逆变换处理单元212为逆变换(以及在解码器30侧通过例如逆变换处理单元312为对应逆变换)指定具体的比例缩放因子,以及相应地,可以在编码器20侧通过变换处理单元206为正变换指定对应比例缩放因子。
在一个实施例中,视频编码器20(对应地,变换处理单元206)可用于输出一种或多种变换的类型等变换参数,例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用变换参数进行解码。
量化
量化单元208用于通过例如标量量化或矢量量化对变换系数207进行量化,得到量化变换系数209。量化变换系数209也可称为量化残差系数209。
量化过程可减少与部分或全部变换系数207有关的位深度。例如,可在量化期间将n位变换系数向下舍入到m位变换系数,其中n大于m。可通过调整量化参数(quantization  parameter,QP)修改量化程度。例如,对于标量量化,可以应用不同程度的比例来实现较细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可通过量化参数(quantization parameter,QP)指示合适的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可对应精细量化(较小量化步长),较大的量化参数可对应粗糙量化(较大量化步长),反之亦然。量化可包括除以量化步长,而反量化单元210等执行的对应或逆解量化可包括乘以量化步长。根据例如HEVC一些标准的实施例可用于使用量化参数来确定量化步长。一般而言,可以根据量化参数使用包含除法的等式的定点近似来计算量化步长。可以引入其他比例缩放因子来进行量化和解量化,以恢复可能由于在用于量化步长和量化参数的等式的定点近似中使用的比例而修改的残差块的范数。在一种示例性实现方式中,可以合并逆变换和解量化的比例。或者,可以使用自定义量化表并在比特流中等将其从编码器向解码器指示。量化是有损操作,其中量化步长越大,损耗越大。
在一个实施例中,视频编码器20(对应地,量化单元208)可用于输出量化参数(quantization parameter,QP),例如,直接输出或由熵编码单元270进行编码或压缩后输出,例如使得视频解码器30可接收并使用量化参数进行解码。
反量化
反量化单元210用于对量化系数执行量化单元208的反量化,得到解量化系数211,例如,根据或使用与量化单元208相同的量化步长执行与量化单元208所执行的量化方案的反量化方案。解量化系数211也可称为解量化残差系数211,对应于变换系数207,但是由于量化造成损耗,反量化系数211通常与变换系数不完全相同。
逆变换
逆变换处理单元212用于执行变换处理单元206执行的变换的逆变换,例如,逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST),以在像素域中得到重建残差块213(或对应的解量化系数213)。重建残差块213也可称为变换块213。
重建
重建单元214(例如,求和器214)用于将变换块213(即重建残差块213)添加到预测块265,以在像素域中得到重建块215,例如,将重建残差块213的像素点值和预测块265的像素点值相加。
滤波
环路滤波器单元220(或简称“环路滤波器”220)用于对重建块215进行滤波,得到滤波块221,或通常用于对重建像素点进行滤波以得到滤波像素点值。例如,环路滤波器单元用于顺利进行像素转变或提高视频质量。环路滤波器单元220可包括一个或多个环路滤波器,例如去块滤波器、像素点自适应偏移(sample-adaptive offset,SAO)滤波器或一个或多个其他滤波器,例如自适应环路滤波器(adaptive loop filter,ALF)、噪声抑制滤波器(noise suppression filter,NSF)或任意组合。例如,环路滤波器单元220可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如,增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling,LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如,去 块滤波过程也可以应用于内部子块边缘,例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform,SBT)边缘和内子部分(intra sub-partition,ISP)边缘。尽管环路滤波器单元220在图2中示为环路滤波器,但在其他配置中,环路滤波器单元220可以实现为环后滤波器。滤波块221也可称为滤波重建块221。
在一个实施例中,视频编码器20(对应地,环路滤波器单元220)可用于输出环路滤波器参数(例如SAO滤波参数、ALF滤波参数或LMCS参数),例如,直接输出或由熵编码单元270进行熵编码后输出,例如使得解码器30可接收并使用相同或不同的环路滤波器参数进行解码。
解码图像缓冲器
解码图像缓冲器(decoded picture buffer,DPB)230可以是存储参考图像数据以供视频编码器20在编码视频数据时使用的参考图像存储器。DPB 230可以由多种存储器设备中的任一种形成,例如动态随机存取存储器(dynamic random access memory,DRAM),包括同步DRAM(synchronous DRAM,SDRAM)、磁阻RAM(magnetoresistive RAM,MRAM)、电阻RAM(resistive RAM,RRAM)或其他类型的存储设备。解码图像缓冲器230可用于存储一个或多个滤波块221。解码图像缓冲器230还可用于存储同一当前图像或例如之前的重建图像等不同图像的其他之前的滤波块,例如之前重建和滤波的块221,并可提供完整的之前重建即解码图像(和对应参考块和像素点)和/或部分重建的当前图像(和对应参考块和像素点),例如用于帧间预测。解码图像缓冲器230还可用于存储一个或多个未经滤波的重建块215,或一般存储未经滤波的重建像素点,例如,未被环路滤波单元220滤波的重建块215,或未进行任何其他处理的重建块或重建像素点。
模式选择(分割和预测)
模式选择单元260包括分割单元262、帧间预测单元244和帧内预测单元254,用于从解码图像缓冲器230或其他缓冲器(例如,列缓冲器,图2中未显示)接收或获得原始块203(当前图像17的当前块203)和重建图像数据等原始图像数据,例如,同一(当前)图像和/或一个或多个之前解码图像的滤波和/或未经滤波的重建像素点或重建块。重建图像数据用作帧间预测或帧内预测等预测所需的参考图像数据,以得到预测块265或预测值265。
模式选择单元260可用于为当前块(包括不分割)和预测模式(例如帧内或帧间预测模式)确定或选择一种分割,生成对应的预测块265,以对残差块205进行计算和对重建块215进行重建。
在一个实施例中,模式选择单元260可用于选择分割和预测模式(例如,从模式选择单元260支持的或可用的预测模式中),所述预测模式提供最佳匹配或者说最小残差(最小残差是指传输或存储中更好地压缩),或者提供最小信令开销(最小信令开销是指传输或存储中更好地压缩),或者同时考虑或平衡以上两者。模式选择单元260可用于根据码率失真优化(rate distortion Optimization,RDO)确定分割和预测模式,即选择提供最小码率失真优化的预测模式。本文“最佳”、“最低”、“最优”等术语不一定指总体上“最佳”、“最低”、“最优”的,但也可以指满足终止或选择标准的情况,例如,超过或低于阈值的值或其他限制可能导致“次优选择”,但会降低复杂度和处理时间。
换言之,分割单元262可用于将视频序列中的图像分割为编码树单元(coding tree unit, CTU)序列,CTU 203可进一步被分割成较小的块部分或子块(再次形成块),例如,通过迭代使用四叉树(quad-tree partitioning,QT)分割、二叉树(binary-tree partitioning,BT)分割或三叉树(triple-tree partitioning,TT)分割或其任意组合,并且用于例如对块部分或子块中的每一个执行预测,其中模式选择包括选择分割块203的树结构和选择应用于块部分或子块中的每一个的预测模式。
下文将详细地描述由视频编码器20执行的分割(例如,由分割单元262执行)和预测处理(例如,由帧间预测单元244和帧内预测单元254执行)。
分割
分割单元262可将一个图像块(或CTU)203分割(或划分)为较小的部分,例如正方形或矩形形状的小块。对于具有三个像素点阵列的图像,一个CTU由N×N个亮度像素点块和两个对应的色度像素点块组成。CTU中亮度块的最大允许大小在正在开发的通用视频编码(versatile video coding,VVC)标准中被指定为128×128,但是将来可指定为不同于128×128的值,例如256×256。图像的CTU可以集中/分组为片/编码区块组、编码区块或砖。一个编码区块覆盖着一个图像的矩形区域,一个编码区块可以分成一个或多个砖。一个砖由一个编码区块内的多个CTU行组成。没有分割为多个砖的编码区块可以称为砖。但是,砖是编码区块的真正子集,因此不称为编码区块。VVC支持两种编码区块组模式,分别为光栅扫描片/编码区块组模式和矩形片模式。在光栅扫描编码区块组模式,一个片/编码区块组包含一个图像的编码区块光栅扫描中的编码区块序列。在矩形片模式中,片包含一个图像的多个砖,这些砖共同组成图像的矩形区域。矩形片内的砖按照片的砖光栅扫描顺序排列。这些较小块(也可称为子块)可进一步分割为更小的部分。这也称为树分割或分层树分割,其中在根树级别0(层次级别0、深度0)等的根块可以递归的分割为两个或两个以上下一个较低树级别的块,例如树级别1(层次级别1、深度1)的节点。这些块可以又分割为两个或两个以上下一个较低级别的块,例如树级别2(层次级别2、深度2)等,直到分割结束(因为满足结束标准,例如达到最大树深度或最小块大小)。未进一步分割的块也称为树的叶块或叶节点。分割为两个部分的树称为二叉树(binary-tree,BT),分割为三个部分的树称为三叉树(ternary-tree,TT),分割为四个部分的树称为四叉树(quad-tree,QT)。
例如,编码树单元(CTU)可以为或包括亮度像素点的CTB、具有三个像素点阵列的图像的色度像素点的两个对应CTB、或单色图像的像素点的CTB或使用三个独立颜色平面和语法结构(用于编码像素点)编码的图像的像素点的CTB。相应地,编码树块(CTB)可以为N×N个像素点块,其中N可以设为某个值使得分量划分为CTB,这就是分割。编码单元(coding unit,CU)可以为或包括亮度像素点的编码块、具有三个像素点阵列的图像的色度像素点的两个对应编码块、或单色图像的像素点的编码块或使用三个独立颜色平面和语法结构(用于编码像素点)编码的图像的像素点的编码块。相应地,编码块(CB)可以为M×N个像素点块,其中M和N可以设为某个值使得CTB划分为编码块,这就是分割。
例如,在实施例中,根据HEVC可通过使用表示为编码树的四叉树结构将编码树单元(CTU)划分为多个CU。在叶CU级作出是否使用帧间(时间)预测或帧内(空间)预测对图像区域进行编码的决定。每个叶CU可以根据PU划分类型进一步划分为一个、两 个或四个PU。一个PU内使用相同的预测过程,并以PU为单位向解码器传输相关信息。在根据PU划分类型应用预测过程得到残差块之后,可以根据类似于用于CU的编码树的其他四叉树结构将叶CU分割为变换单元(TU)。
例如,在实施例中,根据当前正在开发的最新视频编码标准(称为通用视频编码(VVC),使用嵌套多类型树(例如二叉树和三叉树)的组合四叉树来划分用于分割编码树单元的分段结构。在编码树单元内的编码树结构中,CU可以为正方形或矩形。例如,编码树单元(CTU)首先由四叉树结构进行分割。四叉树叶节点进一步由多类型树结构分割。多类型树形结构有四种划分类型:垂直二叉树划分(SPLIT_BT_VER)、水平二叉树划分(SPLIT_BT_HOR)、垂直三叉树划分(SPLIT_TT_VER)和水平三叉树划分(SPLIT_TT_HOR)。多类型树叶节点称为编码单元(CU),除非CU对于最大变换长度而言太大,这样的分段用于预测和变换处理,无需其他任何分割。在大多数情况下,这表示CU、PU和TU在四叉树嵌套多类型树的编码块结构中的块大小相同。当最大支持变换长度小于CU的彩色分量的宽度或高度时,就会出现该异常。VVC制定了具有四叉树嵌套多类型树的编码结构中的分割划分信息的唯一信令机制。在信令机制中,编码树单元(CTU)作为四叉树的根首先被四叉树结构分割。然后每个四叉树叶节点(当足够大可以被)被进一步分割为一个多类型树结构。在多类型树结构中,通过第一标识(mtt_split_cu_flag)指示节点是否进一步分割,当对节点进一步分割时,先用第二标识(mtt_split_cu_vertical_flag)指示划分方向,再用第三标识(mtt_split_cu_binary_flag)指示划分是二叉树划分或三叉树划分。根据mtt_split_cu_vertical_flag和mtt_split_cu_binary_flag的值,解码器可以基于预定义规则或表格推导出CU的多类型树划分模式(MttSplitMode)。需要说明的是,对于某种设计,例如VVC硬件解码器中的64×64的亮度块和32×32的色度流水线设计,当亮度编码块的宽度或高度大于64时,不允许进行TT划分。当色度编码块的宽度或高度大于32时,也不允许TT划分。流水线设计将图像分为多个虚拟流水线数据单元(virtual pipeline data unit,VPDU),每个VPDU在图像中定义为互不重叠的单元。在硬件解码器中,连续的VPDU在多个流水线阶段同时处理。在大多数流水线阶段,VPDU大小与缓冲器大小大致成正比,因此需要保持较小的VPDU。在大多数硬件解码器中,VPDU大小可以设置为最大变换块(transform block,TB)大小。但是,在VVC中,三叉树(TT)和二叉树(BT)的分割可能会增加VPDU的大小。
另外,需要说明的是,当树节点块的一部分超出底部或图像右边界时,强制对该树节点块进行划分,直到每个编码CU的所有像素点都位于图像边界内。
例如,所述帧内子分割(intra sub-partitions,ISP)工具可以根据块大小将亮度帧内预测块垂直或水平的分为两个或四个子部分。
在一个示例中,视频编码器20的模式选择单元260可以用于执行上文描述的分割技术的任意组合。
如上所述,视频编码器20用于从(预定的)预测模式集合中确定或选择最好或最优的预测模式。预测模式集合可包括例如帧内预测模式和/或帧间预测模式。
帧内预测
帧内预测模式集合可包括35种不同的帧内预测模式,例如,像DC(或均值)模式和 平面模式的非方向性模式,或如HEVC定义的方向性模式,或者可包括67种不同的帧内预测模式,例如,像DC(或均值)模式和平面模式的非方向性模式,或如VVC中定义的方向性模式。例如,若干传统角度帧内预测模式自适应地替换为VVC中定义的非正方形块的广角帧内预测模式。又例如,为了避免DC预测的除法运算,仅使用较长边来计算非正方形块的平均值。并且,平面模式的帧内预测结果还可以使用位置决定的帧内预测组合(position dependent intra prediction combination,PDPC)方法修改。
帧内预测单元254用于根据帧内预测模式集合中的帧内预测模式使用同一当前图像的相邻块地重建像素点来生成帧内预测块265。
帧内预测单元254(或通常为模式选择单元260)还用于输出帧内预测参数(或通常为指示块的选定帧内预测模式的信息)以语法元素266的形式发送到熵编码单元270,以包含到编码图像数据21中,从而视频解码器30可执行操作,例如接收并使用用于解码的预测参数。
HEVC中的帧内预测模式包括直流预测模式,平面预测模式和33种角度预测模式,共计35个候选预测模式。当前块可以使用左侧和上方已重建图像块的像素作为参考进行帧内预测。当前块的周边区域中用来对当前块进行帧内预测的图像块成为参考块,参考块中的像素称为参考像素。35个候选预测模式中,直流预测模式适用于当前块中纹理平坦的区域,该区域中所有像素均使用参考块中的参考像素的平均值作为预测;平面预测模式适用于纹理平滑变化的图像块,符合该条件的当前块使用参考块中的参考像素进行双线性插值作为当前块中的所有像素的预测;角度预测模式利用当前块的纹理与相邻已重建图像块的纹理高度相关的特性,沿某一角度复制对应的参考块中的参考像素的值作为当前块中的所有像素的预测。
HEVC编码器给当前块从35个候选预测模式中选择一个最优帧内预测模式,并将该最优帧内预测模式写入视频码流。为提升帧内预测的编码效率,编码器/解码器会从周边区域中、采用帧内预测的已重建图像块各自的最优帧内预测模式中推导出3个最可能模式,如果给当前块选择的最优帧内预测模式是这3个最可能模式的其中之一,则编码一个第一索引指示所选择的最优帧内预测模式是这3个最可能模式的其中之一;如果选中的最优帧内预测模式不是这3个最可能模式,则编码一个第二索引指示所选择的最优帧内预测模式是其他32个模式(35个候选预测模式中除前述3个最可能模式外的其他模式)的其中之一。HEVC标准使用5比特的定长码作为前述第二索引。
HEVC编码器推导出3个最可能模式的方法包括:选取当前块的左相邻图像块和上相邻图像块的最优帧内预测模式放入集合,如果这两个最优帧内预测模式相同,则集合中只保留一个即可。如果这两个最优帧内预测模式相同且均为角度预测模式,则再选取与该角度方向邻近的两个角度预测模式加入集合;否则,依次选择平面预测模式、直流模式和竖直预测模式加入集合,直到集合中的模式数量达到3。
HEVC解码器对码流做熵解码后,获得当前块的模式信息,该模式信息包括指示当前块的最优帧内预测模式是否在3个最可能模式中的指示标识,以及当前块的最优帧内预测模式在3个最可能模式中的索引或者当前块的最优帧内预测模式在其他32个模式中的索引。
帧间预测
在可能的实现中,帧间预测模式集合取决于可用参考图像(即,例如前述存储在DBP230中的至少部分之前解码的图像)和其他帧间预测参数,例如取决于是否使用整个参考图像或只使用参考图像的一部分,例如当前块的区域附近的搜索窗口区域,来搜索最佳匹配参考块,和/或例如取决于是否执行半像素、四分之一像素和/或16分之一内插的像素内插。
除上述预测模式外,还可以采用跳过模式和/或直接模式。
例如,扩展合并预测,这个模式的合并候选列表由以下五个候选类型按顺序组成:来自空间相邻CU的空间MVP、来自并置CU的时间MVP、来自FIFO表的基于历史的MVP、成对平均MVP和零MV。可以使用基于双边匹配的解码器侧运动矢量修正(decoder side motion vector refinement,DMVR)来增加合并模式的MV的准确度。带有MVD的合并模式(merge mode with MVD,MMVD)来自有运动矢量差异的合并模式。在发送跳过标志和合并标志之后立即发送MMVD标志,以指定CU是否使用MMVD模式。可以使用CU级自适应运动矢量分辨率(adaptive motion vector resolution,AMVR)方案。AMVR支持CU的MVD以不同的精度进行编码。根据当前CU的预测模式,自适应地选择当前CU的MVD。当CU以合并模式进行编码时,可以将合并的帧间/帧内预测(combined inter/intra prediction,CIIP)模式应用于当前CU。对帧间和帧内预测信号进行加权平均,得到CIIP预测。对于仿射运动补偿预测,通过2个控制点(4参数)或3个控制点(6参数)运动矢量的运动信息来描述块的仿射运动场。基于子块的时间运动矢量预测(subblock-based temporal motion vector prediction,SbTMVP),与HEVC中的时间运动矢量预测(temporal motion vector prediction,TMVP)类似,但预测的是当前CU内的子CU的运动矢量。双向光流(bi-directional optical flow,BDOF)以前称为BIO,是一种减少计算的简化版本,特别是在乘法次数和乘数大小方面的计算。在三角形分割模式中,CU以对角线划分和反对角线划分两种划分方式被均匀划分为两个三角形部分。此外,双向预测模式在简单平均的基础上进行了扩展,以支持两个预测信号的加权平均。
帧间预测单元244可包括运动估计(motion estimation,ME)单元和运动补偿(motion compensation,MC)单元(两者在图2中未示出)。运动估计单元可用于接收或获取图像块203(当前图像17的当前图像块203)和解码图像231,或至少一个或多个之前重建块,例如,一个或多个其它/不同之前解码图像231的重建块,来进行运动估计。例如,视频序列可包括当前图像和之前的解码图像231,或换句话说,当前图像和之前的解码图像231可以为形成视频序列的图像序列的一部分或形成该图像序列。
例如,编码器20可用于从多个其他图像中的同一或不同图像的多个参考块中选择参考块,并将参考图像(或参考图像索引)和/或参考块的位置(x、y坐标)与当前块的位置之间的偏移(空间偏移)作为帧间预测参数提供给运动估计单元。该偏移也称为运动矢量(motion vector,MV)。
运动补偿单元用于获取,例如接收,帧间预测参数,并根据或使用该帧间预测参数执行帧间预测,得到帧间预测块246。由运动补偿单元执行的运动补偿可能包含根据通过运动估计确定的运动/块矢量来提取或生成预测块,还可能包括对子像素精度执行内插。内插滤波可从已知像素的像素点中产生其他像素的像素点,从而潜在地增加可用于对图像块进行编码的候选预测块的数量。一旦接收到当前图像块的PU对应的运动矢量时,运动补 偿单元可在其中一个参考图像列表中定位运动矢量指向的预测块。
运动补偿单元还可以生成与块和视频片相关的语法元素,以供视频解码器30在解码视频片的图像块时使用。此外,或者作为片和相应语法元素的替代,可以生成或使用编码区块组和/或编码区块以及相应语法元素。
在获取先进的运动矢量预测(advanced motion vector prediction,AMVP)模式中的候选运动矢量列表的过程中,作为备选可以加入候选运动矢量列表的运动矢量(motion vector,MV)包括当前块的空域相邻和时域相邻的图像块的MV,其中空域相邻的图像块的MV又可以包括位于当前块左侧的左方候选图像块的MV和位于当前块上方的上方候选图像块的MV。示例性的,请参考图4,图4为本申请实施例提供的候选图像块的一种示例性的示意图,如图4所示,左方候选图像块的集合包括{A0,A1},上方候选图像块的集合包括{B0,B1,B2},时域相邻的候选图像块的集合包括{C,T},这三个集合均可以作为备选被加入到候选运动矢量列表中,但是根据现有编码标准,AMVP的候选运动矢量列表的最大长度为2,因此需要根据规定的顺序从三个集合中确定在候选运动矢量列表中加入最多两个图像块的MV。该顺序可以是优先考虑当前块的左方候选图像块的集合{A0,A1}(先考虑A0,A0不可得再考虑A1),其次考虑当前块的上方候选图像块的集合{B0,B1,B2}(先考虑B0,B0不可得再考虑B1,B1不可得再考虑B2),最后考虑当前块的时域相邻的候选图像块的集合{C,T}(先考虑T,T不可得再考虑C)。
得到上述候选运动矢量列表后,通过率失真代价(rate distortion cost,RD cost)从候选运动矢量列表中确定最优的MV,将RD cost最小的候选运动矢量作为当前块的运动矢量预测值(motion vector predictor,MVP)。率失真代价由以下公式计算获得:
J=SAD+λR
其中,J表示RD cost,SAD为使用候选运动矢量进行运动估计后得到的预测块的像素值与当前块的像素值之间的绝对误差和(sum of absolute differences,SAD),R表示码率,λ表示拉格朗日乘子。
编码端将确定出的MVP在候选运动矢量列表中的索引传递到解码端。进一步地,可以在MVP为中心的邻域内进行运动搜索获得当前块实际的运动矢量,编码端计算MVP与实际的运动矢量之间的运动矢量差值(motion vector difference,MVD),并将MVD也传递到解码端。解码端解析索引,根据该索引在候选运动矢量列表中找到对应的MVP,解析MVD,将MVD与MVP相加得到当前块实际的运动矢量。
在获取融合(Merge)模式中的候选运动信息列表的过程中,作为备选可以加入候选运动信息列表的运动信息包括当前块的空域相邻或时域相邻的图像块的运动信息,其中空域相邻的图像块和时域相邻的图像块可参照图4,候选运动信息列表中对应于空域的候选运动信息来自于空间相邻的5个块(A0、A1、B0、B1和B2),若空域相邻块不可得或者为帧内预测,则其运动信息不加入候选运动信息列表。当前块的时域的候选运动信息根据参考帧和当前帧的图序计数(picture order count,POC)对参考帧中对应位置块的MV进行缩放后获得,先判断参考帧中位置为T的块是否可得,若不可得则选择位置为C的块。得到上述候选运动信息列表后,通过RD cost从候选运动信息列表中确定最优的运动信息作为当前块的运动信息。编码端将最优的运动信息在候选运动信息列表中位置的索引值(记为merge index)传递到解码端。
熵编码
熵编码单元270用于将熵编码算法或方案(例如,可变长度编码(variable length coding,VLC)方案、上下文自适应VLC方案(context adaptive VLC,CALVC)、算术编码方案、二值化算法、上下文自适应二进制算术编码(context adaptive binary arithmetic coding,CABAC)、基于语法的上下文自适应二进制算术编码(syntax-based context-adaptive binary arithmetic coding,SBAC)、概率区间分割熵(probability interval partitioning entropy,PIPE)编码或其它熵编码方法或技术)应用于量化残差系数209、帧间预测参数、帧内预测参数、环路滤波器参数和/或其他语法元素,得到可以通过输出端272以编码比特流21等形式输出的编码图像数据21,使得视频解码器30等可以接收并使用用于解码的参数。可将编码比特流21传输到视频解码器30,或将其保存在存储器中稍后由视频解码器30传输或检索。
视频编码器20的其他结构变体可用于对视频流进行编码。例如,基于非变换的编码器20可以在某些块或帧没有变换处理单元206的情况下直接量化残差信号。在另一种实现方式中,编码器20可以具有组合成单个单元的量化单元208和反量化单元210。
解码器和解码方法
如图3所示,视频解码器30用于接收例如由编码器20编码的编码图像数据21(例如编码比特流21),得到解码图像331。编码图像数据或比特流包括用于解码所述编码图像数据的信息,例如表示编码视频片(和/或编码区块组或编码区块)的图像块的数据和相关的语法元素。
在图3的示例中,解码器30包括熵解码单元304、反量化单元310、逆变换处理单元312、重建单元314(例如求和器314)、环路滤波器320、解码图像缓冲器(DBP)330、模式应用单元360、帧间预测单元344和帧内预测单元354。帧间预测单元344可以为或包括运动补偿单元。在一些示例中,视频解码器30可执行大体上与参照图2的视频编码器100描述的编码过程相反的解码过程。
如编码器20所述,反量化单元210、逆变换处理单元212、重建单元214、环路滤波器220、解码图像缓冲器DPB230、帧间预测单元344和帧内预测单元354还组成视频编码器20的“内置解码器”。相应地,反量化单元310在功能上可与反量化单元110相同,逆变换处理单元312在功能上可与逆变换处理单元122相同,重建单元314在功能上可与重建单元214相同,环路滤波器320在功能上可与环路滤波器220相同,解码图像缓冲器330在功能上可与解码图像缓冲器230相同。因此,视频编码器20的相应单元和功能的解释相应地适用于视频解码器30的相应单元和功能。
熵解码
熵解码单元304用于解析比特流21(或一般为编码图像数据21)并对编码图像数据21执行熵解码,得到量化系数309和/或解码后的编码参数(图3中未示出)等,例如帧间预测参数(例如参考图像索引和运动矢量)、帧内预测参数(例如帧内预测模式或索引)、变换参数、量化参数、环路滤波器参数和/或其他语法元素等中的任一个或全部。熵解码单元304可用于应用编码器20的熵编码单元270的编码方案对应的解码算法或方案。熵解码单元304还可用于向模式应用单元360提供帧间预测参数、帧内预测参数和/或其他语法元素,以及向解码器30的其他单元提供其他参数。视频解码器30可以接收视频片和/或视频块级的语法元素。此外,或者作为片和相应语法元素的替代,可以接收或使用编 码区块组和/或编码区块以及相应语法元素。
反量化
反量化单元310可用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收量化参数(quantization parameter,QP)(或一般为与反量化相关的信息)和量化系数,并基于所述量化参数对所述解码的量化系数309进行反量化以获得反量化系数311,所述反量化系数311也可以称为变换系数311。反量化过程可包括使用视频编码器20为视频片中的每个视频块计算的量化参数来确定量化程度,同样也确定需要执行的反量化的程度。
逆变换
逆变换处理单元312可用于接收解量化系数311,也称为变换系数311,并对解量化系数311应用变换以得到像素域中的重建残差块213。重建残差块213也可称为变换块313。变换可以为逆变换,例如逆DCT、逆DST、逆整数变换或概念上类似的逆变换过程。逆变换处理单元312还可以用于从编码图像数据21(例如通过熵解码单元304解析和/或解码)接收变换参数或相应信息,以确定应用于解量化系数311的变换。
重建
重建单元314(例如,求和器314)用于将重建残差块313添加到预测块365,以在像素域中得到重建块315,例如,将重建残差块313的像素点值和预测块365的像素点值相加。
滤波
环路滤波器单元320(在编码环路中或之后)用于对重建块315进行滤波,得到滤波块321,从而顺利进行像素转变或提高视频质量等。环路滤波器单元320可包括一个或多个环路滤波器,例如去块滤波器、像素点自适应偏移(sample-adaptive offset,SAO)滤波器或一个或多个其他滤波器,例如自适应环路滤波器(adaptive loop filter,ALF)、噪声抑制滤波器(noise suppression filter,NSF)或任意组合。例如,环路滤波器单元220可以包括去块滤波器、SAO滤波器和ALF滤波器。滤波过程的顺序可以是去块滤波器、SAO滤波器和ALF滤波器。再例如,增加一个称为具有色度缩放的亮度映射(luma mapping with chroma scaling,LMCS)(即自适应环内整形器)的过程。该过程在去块之前执行。再例如,去块滤波过程也可以应用于内部子块边缘,例如仿射子块边缘、ATMVP子块边缘、子块变换(sub-block transform,SBT)边缘和内子部分(intra sub-partition,ISP)边缘。尽管环路滤波器单元320在图3中示为环路滤波器,但在其他配置中,环路滤波器单元320可以实现为环后滤波器。
解码图像缓冲器
随后将一个图像中的解码视频块321存储在解码图像缓冲器330中,解码图像缓冲器330存储作为参考图像的解码图像331,参考图像用于其他图像和/或分别输出显示的后续运动补偿。
解码器30用于通过输出端312等输出解码图像311,向用户显示或供用户查看。
预测
帧间预测单元344在功能上可与帧间预测单元244(特别是运动补偿单元)相同,帧内预测单元354在功能上可与帧间预测单元254相同,并基于从编码图像数据21(例如 通过熵解码单元304解析和/或解码)接收的分割和/或预测参数或相应信息决定划分或分割和执行预测。模式应用单元360可用于根据重建图像、块或相应的像素点(已滤波或未滤波)执行每个块的预测(帧内或帧间预测),得到预测块365。
当将视频片编码为帧内编码(intra coded,I)片时,模式应用单元360中的帧内预测单元354用于根据指示的帧内预测模式和来自当前图像的之前解码块的数据生成用于当前视频片的图像块的预测块365。当视频图像编码为帧间编码(即,B或P)片时,模式应用单元360中的帧间预测单元344(例如运动补偿单元)用于根据运动矢量和从熵解码单元304接收的其他语法元素生成用于当前视频片的视频块的预测块365。对于帧间预测,可从其中一个参考图像列表中的其中一个参考图像产生这些预测块。视频解码器30可以根据存储在DPB 330中的参考图像,使用默认构建技术来构建参考帧列表0和列表1。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
模式应用单元360用于通过解析运动矢量和其他语法元素,确定用于当前视频片的视频块的预测信息,并使用预测信息产生用于正在解码的当前视频块的预测块。例如,模式应用单元360使用接收到的一些语法元素确定用于编码视频片的视频块的预测模式(例如帧内预测或帧间预测)、帧间预测片类型(例如B片、P片或GPB片)、用于片的一个或多个参考图像列表的构建信息、用于片的每个帧间编码视频块的运动矢量、用于片的每个帧间编码视频块的帧间预测状态、其它信息,以解码当前视频片内的视频块。除了片(例如视频片)或作为片的替代,相同或类似的过程可应用于编码区块组(例如视频编码区块组)和/或编码区块(例如视频编码区块)的实施例,例如视频可以使用I、P或B编码区块组和/或编码区块进行编码。
在一个实施例中,图3的视频编码器30还可以用于使用片(也称为视频片)分割和/或解码图像,其中图像可以使用一个或多个片(通常为不重叠的)进行分割或解码。每个片可包括一个或多个块(例如CTU)或一个或多个块组(例如H.265/HEVC/VVC标准中的编码区块和VVC标准中的砖。
在一个实施例中,图3所示的视频解码器30还可以用于使用片/编码区块组(也称为视频编码区块组)和/或编码区块(也称为视频编码区块)对图像进行分割和/或解码,其中图像可以使用一个或多个片/编码区块组(通常为不重叠的)进行分割或解码,每个片/编码区块组可包括一个或多个块(例如CTU)或一个或多个编码区块等,其中每个编码区块可以为矩形等形状,可包括一个或多个完整或部分块(例如CTU)。
视频解码器30的其他变型可用于对编码图像数据21进行解码。例如,解码器30可以在没有环路滤波器单元320的情况下产生输出视频流。例如,基于非变换的解码器30可以在某些块或帧没有逆变换处理单元312的情况下直接反量化残差信号。在另一种实现方式中,视频解码器30可以具有组合成单个单元的反量化单元310和逆变换处理单元312。
应理解,在编码器20和解码器30中,可以对当前步骤的处理结果进一步处理,然后输出到下一步骤。例如,在插值滤波、运动矢量推导或环路滤波之后,可以对插值滤波、运动矢量推导或环路滤波的处理结果进行进一步的运算,例如裁剪(clip)或移位(shift)运算。
应该注意的是,可以对当前块的推导运动矢量(包括但不限于仿射模式的控制点运动矢量、仿射、平面、ATMVP模式的子块运动矢量、时间运动矢量等)进行进一步运算。例如,根据运动矢量的表示位将运动矢量的值限制在预定义范围。如果运动矢量的表示位为bitDepth,则范围为-2^(bitDepth-1)至2^(bitDepth-1)-1,其中“^”表示幂次方。例如,如果bitDepth设置为16,则范围为-32768~32767;如果bitDepth设置为18,则范围为-131072~131071。例如,推导运动矢量的值(例如一个8×8块中的4个4×4子块的MV)被限制,使得所述4个4×4子块MV的整数部分之间的最大差值不超过N个像素,例如不超过1个像素。这里提供了两种根据bitDepth限制运动矢量的方法。
尽管上述实施例主要描述了视频编解码,但应注意的是,译码系统10、编码器20和解码器30的实施例以及本文描述的其他实施例也可以用于静止图像处理或编解码,即视频编解码中独立于任何先前或连续图像的单个图像的处理或编解码。一般情况下,如果图像处理仅限于单个图像17,帧间预测单元244(编码器)和帧间预测单元344(解码器)可能不可用。视频编码器20和视频解码器30的所有其他功能(也称为工具或技术)同样可用于静态图像处理,例如残差计算204/304、变换206、量化208、反量化210/310、(逆)变换212/312、分割262/362、帧内预测254/354和/或环路滤波220/320、熵编码270和熵解码304。
请参考图5,图5为本申请实施例提供的视频译码设备500的一种示例性框图。视频译码设备500适用于实现本文描述的公开实施例。在一个实施例中,视频译码设备500可以是解码器,例如图1a中的视频解码器30,也可以是编码器,例如图1a中的视频编码器20。
视频译码设备500包括:用于接收数据的入端口510(或输入端口510)和接收单元(receiver unit,Rx)520;用于处理数据的处理器、逻辑单元或中央处理器(central processing unit,CPU)530;例如,这里的处理器530可以是神经网络处理器530;用于传输数据的发送单元(transmitter unit,Tx)540和出端口550(或输出端口550);用于存储数据的存储器560。视频译码设备500还可包括耦合到入端口510、接收单元520、发送单元540和出端口550的光电(optical-to-electrical,OE)组件和电光(electrical-to-optical,EO)组件,用于光信号或电信号的出口或入口。
处理器530通过硬件和软件实现。处理器530可实现为一个或多个处理器芯片、核(例如,多核处理器)、FPGA、ASIC和DSP。处理器530与入端口510、接收单元520、发送单元540、出端口550和存储器560通信。处理器530包括译码模块570(例如,基于神经网络的译码模块570)。译码模块570实施上文所公开的实施例。例如,译码模块570执行、处理、准备或提供各种编码操作。因此,通过译码模块570为视频译码设备500的功能提供了实质性的改进,并且影响了视频译码设备500到不同状态的切换。或者,以存储在存储器560中并由处理器530执行的指令来实现译码模块570。
存储器560包括一个或多个磁盘、磁带机和固态硬盘,可以用作溢出数据存储设备,用于在选择执行程序时存储此类程序,并且存储在程序执行过程中读取的指令和数据。存储器560可以是易失性和/或非易失性的,可以是只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、三态内容寻址存储器(ternary content-addressable memory,TCAM)和/或静态随机存取存储器(static random-access  memory,SRAM)。
请参考图6,图6为本申请实施例提供的装置600的一种示例性框图,装置600可用作图1a中的源设备12和目的设备14中的任一个或两个。
装置600中的处理器602可以是中央处理器。或者,处理器602可以是现有的或今后将研发出的能够操控或处理信息的任何其他类型设备或多个设备。虽然可以使用如图所示的处理器602等单个处理器来实施已公开的实现方式,但使用一个以上的处理器速度更快和效率更高。
在一种实现方式中,装置600中的存储器604可以是只读存储器(ROM)设备或随机存取存储器(RAM)设备。任何其他合适类型的存储设备都可以用作存储器604。存储器604可以包括处理器602通过总线612访问的代码和数据606。存储器604还可包括操作系统608和应用程序610,应用程序610包括允许处理器602执行本文所述方法的至少一个程序。例如,应用程序610可以包括应用1至N,还包括执行本文所述方法的视频译码应用。
装置600还可以包括一个或多个输出设备,例如显示器618。在一个示例中,显示器618可以是将显示器与可用于感测触摸输入的触敏元件组合的触敏显示器。显示器618可以通过总线612耦合到处理器602。
虽然装置600中的总线612在本文中描述为单个总线,但是总线612可以包括多个总线。此外,辅助储存器可以直接耦合到装置600的其他组件或通过网络访问,并且可以包括存储卡等单个集成单元或多个存储卡等多个单元。因此,装置600可以具有各种各样的配置。
请参考图7,图7为本申请实施例提供的一种编码框架。通过图7可以看出本申请提供编码框架包括预测模块、量化模块、熵编码器、编码子流缓冲区和子流交织模块。
可以将对输入的媒体内容进行预测、量化、熵编码和子流交织生成比特流。
需要说明的是,上述编码框架仅仅是示意性的。编码框架可以包括更多或更少的模块。例如,编码框架可以包括更多的熵编码器。如5个熵编码器。
请参考图8,图8为本申请实施例提供的一种编码方法。该编码方法可以适用于图7所示的编码框架。如图8所示,该编码方法可以包括:
S801、根据媒体内容得到多个语法元素。
可选地,上述媒体内容可以包括图像、图像切片或视频中的至少一项。
在一种可能的实现方式中,可以先对媒体内容进行预测得到多个预测数据,然后对得到的多个预测数据进行量化得到多个语法元素。
示例性地,可以将输入图像划分为多个图像块,然后将多个图像块输入预测模块进行预测得到多个预测数据,然后将得到的多个预测数据输入量化模块进行量化得到多个语法元素。
需要说明的是,量化和预测的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。例如,量化可以采用均匀量化。
S802、将多个语法元素送入熵编码器进行编码得到多个子流。
在一种可能的实现方式中,可以先对多个语法元素进行分类,然后根据分类结果将不同类别的语法元素就送入不同熵编码器进行编码得到不同子流。
示例性地,可以根据多个语法元素中每个元素语法所属的通道将多个语法元素分为R通道的语法元素、G通道的语法元素和B通道的语法元素。然后将R通道的语法元素送入熵编码器1进行编码得到子流1;将G通道的语法元素送入熵编码器2进行编码得到子流2;将B通道的语法元素送入熵编码器3进行编码得到子流3。
在另一种可能的实现方式中,可以采用轮询的方法将多个语法元素送入多个熵编码器进行编码得到多个子流。
示例性地,可以先将多个语法元素中的X个语法元素送入熵编码器1进行编码,然后将剩余语法元素中的X个语法元素送入熵编码器2进行编码,再将剩余语法元素中的X个语法元素送入熵编码器3进行编码。以此往复直至所有语法元素均被送入熵编码器。
S803、将多个子流交织为比特流。
其中,比特流包括多个数据包,上述多个数据包中的每个数据包均包括用于指示所属的子流的标识。
在一种可能的实现方式中,可以先根据预设切分长度将多个子流的每个子流切分为多个数据包,然后根据切分得到的多个数据包得到比特流。
示例性地,可以按照以下的步骤进行子流交织:步骤一、选择一个编码子流缓冲区。步骤二、根据当前编码子流缓冲区中剩余数据的大小S,计算可构建的数据包个数K=floor(S/(N-M)),其中floor为向下取整函数。步骤三、若当前图像块为输入图像或输入图像切片的最后一个块,则令K=K+1。步骤四、从当前编码子流缓冲区中连续取K段长度为N-M比特的数据。步骤五、若步骤四取数据时,当前编码子流缓冲区中的数据小于N-M比特,则在此数据后补上若干个0,直至数据长度为N-M比特。步骤六、将步骤四或步骤五取得的数据作为数据主体,并添加数据头;数据头长度为M比特,数据头内容为当前编码子流缓冲区对应的子流标记值,构建成如图5所示的数据包。步骤七、将K个数据包依次输入比特流。步骤八、根据预置好的顺序,选择下一个编码子流缓冲区,并回到步骤二;若已处理完所有编码子流缓冲区,则结束子流交织操作。
又示例性地,也可以按照以下的步骤进行子流交织:步骤一、选择一个编码子流缓冲区。步骤二、记当前编码子流缓冲区中剩余数据的大小为S。如果S大于等于N-M比特:从当前编码子流缓冲区中取出长度为N-M比特的数据。步骤三、如果S小于N-M比特:如果当前图像块为输入图像或输入图像切片的最后一个块,则从当前编码子流缓冲区中取出所有数据,并在此数据后补上若干个0,直至数据长度为N-M比特;如果当前图像块非最后一个块,则跳到步骤六。步骤四、将步骤二或步骤三取得的数据作为数据主体,并添加数据头;数据头长度为M比特,数据头内容为当前编码子流缓冲区对应的子流标记值,构建成如图5所示的数据包。步骤五、将数据包输入比特流。步骤六、根据预置好的顺序,选择下一个编码子流缓冲区,并回到步骤二;若所有编码子流缓冲区中的数据均小于N-M比特,则结束子流交织操作。
可选地,上述编码缓冲区(即编码子流缓冲区)可以为FIFO缓冲区。先进入编码缓冲区的数据,将率先离开缓冲区,同时率先进入熵编码器进行熵编码。
其中,根据多个数据包得到比特流的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。
示例性地,如图9所示,编码子流缓冲区1中的子流1切分为4个数据包分别即为1 -1、1-2、1-3和1-4;编码子流缓冲区2中的子流2切分为2个数据包分别即为2-1和2-2;编码子流缓冲区3中的子流3切分为3个数据包分别即为3-1、3-2和3-3。子流交织模块可以先将编码子流缓冲区1中的数据包1-1、1-2、1-3和1-4编入比特流,然后再将编码子流缓冲区2中的数据包2-1和2-2编入比特流,最后将编码子流缓冲区3中的数据包3-1、3-2和3-3编入比特流。
又示例性地,如图10所示,编码子流缓冲区1中的子流1切分为4个数据包分别即为1-1、1-2、1-3和1-4;编码子流缓冲区2中的子流2切分为2个数据包分别即为2-1和2-2;编码子流缓冲区3中的子流3切分为3个数据包分别即为3-1、3-2和3-3。子流交织模块可以先将编码子流缓冲区1中的数据包1-1编入比特流,然后再将编码子流缓冲区2中的数据包2-1编入比特流,再将编码子流缓冲区3中的数据包3-1编入比特流。之后按照数据1-2、2-2、3-3、1-3、3-3、1-4的顺序将数据包编入比特流。
可选地,所述数据包可以包括数据头和数据主体。其中,所述数据头用于存储所述数据包的标识。
示例性地,如表1所示,长度为N比特的数据包可以包括长度为M比特的数据头和长度为N-M比特的数据主体。其中,上述M和N为编解码过程中约定好的固定值。
表1
可选地,上述数据包的长度可以相同。
示例性地,上述数据包的长度可以均为N比特。在对长度为N比特的数据包进行切分时,可以提取出其中长度为M比特的数据头,余下N-M比特的数据作为数据主体。且可以通过对数据头中的数据进行解析,得到该数据包的标记(也可以称为子流标记值)。
可以看出,本申请实施例提供的编码方法中,编码得到比特流中的每个数据包均包含用于指示数据包所属子流的标识,解码端可以根据该标识将比特流中的数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。
请参考图11,图11为本申请实施例提供的一种解码框架。通过图11可以看出本申请提供编码框架包括子流解交织模块、解码子流缓冲区、熵解码器、反量化模块和预测重建模块。
需要说明的是,上述解码框架仅仅是示意性的。解码框架可以包括更多或更少的模块。例如,解码框架可以包括更多的熵解码器。如5个熵解码器。
请参考图12,图12为本申请实施例提供的一种解码方法。该解码方法可以适用于图11所示的编码框架。如图12所示,该解码方法可以包括:
S1201、获取比特流。
示例地,可以通过显示链路接收获取比特流。
S1202、根据比特流得到多个数据包。
在一种可能的实现方式中,子流解交织模块可以按照预设切分长度将比特流切分为多个数据包。
示例性地,可以按照N比特的切分长度将比特流切分为多个数据包。其中,N均为整数。
S1203、根据多个数据包的标识将多个数据包送入多个熵解码器进行解码得到多个语法元素。
在一种可能的实现方式中,可以先根据多个数据包的标识确定这多个数据包中每个数据包所属的子流。然后将每个数据包送入每个数据包所属子流的解码缓冲区。将每个解码缓冲区中的数据包送入每个缓冲区对应的熵解码器进行解码得到所述多个语法元素。
示例性地,子流解交织模块可以先根据多个数据包的标识确定这多个数据包中每个数据包所属的子流。然后将子流1的数据包送入解码子流缓冲区1,将子流2的数据包送入解码子流缓冲区2,将子流3的数据包送入解码子流缓冲区3。之后熵解码器1对解码子流缓冲区1进行解码得到语法元素,熵解码器2对解码子流缓冲区2进行解码得到语法元素,熵解码器3对解码子流缓冲区3进行解码得到语法元素。
可选地,上述解码缓冲区可以为FIFO缓冲区。先进入解码缓冲区的数据包,将率先离开缓冲区,同时率先进入熵解码器进行熵解码。
S1204、根据多个语法元素还原媒体内容。
在一种可能的实现方式中,可以先对多个语法元素进行反量化得到多个残差,然后对多个残差进行预测重建还原媒体内容。
示例性地,反量化模块可以先对多个语法元素进行反量化得到多个残差,然后预测重建模块对多个残差进行预测重建还原媒体内容。
其中,反量化和预测重建的具体方法可以采用本领域技术人员能够想到的任何一种方法进行处理,本申请实施例对此不作具体限定。例如,反量化可以采用均匀反量化。
可选地,上述媒体内容可以包括图像、图像切片或视频中的至少一项。
可以看出,本申请实施例提供的解码方法在解码过程中,不使用单个熵解码器进行解码,而是根据数据包的标识将数据包分别送入多个熵解码器并行解码。相较于使用单个熵解码器进行解码,使用多个熵解码器并行解码,可以提升解码器的吞吐量,从而提升解码器的解码性能。另外,由于每个数据包都携带有标识,解码端可以利用该标识快速确定数据包对应的熵解码器,由此降低了多熵解码器并行解码过程的复杂性。
The encoding apparatus for performing the above encoding method is described below with reference to FIG. 13.
It can be understood that, to implement the above functions, the encoding apparatus includes corresponding hardware and/or software modules for performing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered to be beyond the scope of the embodiments of this application.
In the embodiments of this application, the encoding apparatus may be divided into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division into modules in this embodiment is illustrative and is merely a logical functional division; other division manners are possible in actual implementation.
When each functional module is divided corresponding to each function, FIG. 13 shows a possible schematic composition of the encoding apparatus involved in the above embodiments. As shown in FIG. 13, the encoding apparatus 1300 may include a syntax unit 1301, an encoding unit 1302 and a substream interleaving unit 1303.
The syntax unit 1301 is configured to obtain a plurality of syntax elements from media content.
For example, the syntax unit 1301 may be configured to perform S801 of the above encoding method.
The encoding unit 1302 is configured to feed the plurality of syntax elements into entropy encoders for encoding to obtain a plurality of substreams.
For example, the encoding unit 1302 may be configured to perform S802 of the above encoding method.
The substream interleaving unit 1303 is configured to interleave the plurality of substreams into a bitstream, where the bitstream includes a plurality of data packets and each data packet includes an identifier indicating the substream to which it belongs.
For example, the substream interleaving unit 1303 may be configured to perform S803 of the above encoding method.
In a possible implementation, the syntax unit 1301 is specifically configured to: perform prediction on the media content to obtain a plurality of pieces of prediction data; and quantize the plurality of pieces of prediction data to obtain the plurality of syntax elements.
In a possible implementation, the substream interleaving unit 1303 is specifically configured to: split each of the plurality of substreams into a plurality of data packets according to a preset split length; and obtain the bitstream from the plurality of data packets.
The decoding apparatus for performing the above decoding method is described below with reference to FIG. 14.
It can be understood that, to implement the above functions, the decoding apparatus includes corresponding hardware and/or software modules for performing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered to be beyond the scope of the embodiments of this application.
In the embodiments of this application, the decoding apparatus may be divided into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division into modules in this embodiment is illustrative and is merely a logical functional division; other division manners are possible in actual implementation.
When each functional module is divided corresponding to each function, FIG. 14 shows a possible schematic composition of the decoding apparatus involved in the above embodiments. As shown in FIG. 14, the decoding apparatus 1400 may include an obtaining unit 1401, a substream de-interleaving unit 1402, a decoding unit 1403 and a restoration unit 1404.
The obtaining unit 1401 is configured to obtain a bitstream.
For example, the obtaining unit 1401 may be configured to perform S1201 of the above decoding method.
The substream de-interleaving unit 1402 is configured to obtain a plurality of data packets from the bitstream.
For example, the substream de-interleaving unit 1402 may be configured to perform S1202 of the above decoding method.
The decoding unit 1403 is configured to feed the plurality of data packets into a plurality of entropy decoders, according to the identifiers of the plurality of data packets, for decoding to obtain a plurality of syntax elements.
For example, the decoding unit 1403 may be configured to perform S1203 of the above decoding method.
The restoration unit 1404 is configured to restore media content from the plurality of syntax elements.
For example, the restoration unit 1404 may be configured to perform S1204 of the above decoding method.
In a possible implementation, the substream de-interleaving unit 1402 is specifically configured to split the bitstream into a plurality of data packets according to a preset split length.
In a possible implementation, the decoding unit 1403 is specifically configured to: determine, according to the identifiers of the plurality of data packets, the substream to which each of the plurality of data packets belongs; place each data packet into the decoding buffer of the substream to which it belongs; and feed the data packets in each decoding buffer into the entropy decoder corresponding to that buffer for decoding, to obtain the plurality of syntax elements.
In a possible implementation, the restoration unit 1404 is specifically configured to: dequantize the plurality of syntax elements to obtain a plurality of residuals; and perform prediction reconstruction on the plurality of residuals to restore the media content.
An embodiment of this application further provides an encoding apparatus, including at least one processor; when the at least one processor executes program code or instructions, the relevant method steps above are carried out to implement the encoding method in the above embodiments.
Optionally, the apparatus may further include at least one memory for storing the program code or instructions.
An embodiment of this application further provides a decoding apparatus, including at least one processor; when the at least one processor executes program code or instructions, the relevant method steps above are carried out to implement the decoding method in the above embodiments.
Optionally, the apparatus may further include at least one memory for storing the program code or instructions.
An embodiment of this application further provides a computer storage medium storing computer instructions; when the computer instructions are run on an encoding apparatus, the encoding apparatus is caused to perform the relevant method steps above to implement the encoding and decoding methods in the above embodiments.
An embodiment of this application further provides a computer program product; when the computer program product is run on a computer, the computer is caused to perform the relevant steps above to implement the encoding and decoding methods in the above embodiments.
An embodiment of this application further provides an encoding and decoding apparatus, which may specifically be a chip, an integrated circuit, a component or a module. Specifically, the apparatus may include a processor and a memory that are connected, the memory being used to store instructions; or the apparatus may include at least one processor configured to obtain instructions from an external memory. When the apparatus runs, the processor can execute the instructions so that the chip performs the encoding and decoding methods in the above method embodiments.
FIG. 15 is a schematic structural diagram of a chip 1500. The chip 1500 includes one or more processors 1501 and an interface circuit 1502. Optionally, the chip 1500 may further include a bus 1503.
The processor 1501 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the above encoding and decoding methods may be completed by integrated logic circuits of hardware in the processor 1501 or by instructions in the form of software.
Optionally, the processor 1501 may be a general-purpose processor, a digital signal processor (digital signal processing, DSP), an application-specific integrated circuit (application-specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods and steps disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The interface circuit 1502 may be used to send or receive data, instructions or information. The processor 1501 may process the data, instructions or other information received through the interface circuit 1502, and may send the processed information out through the interface circuit 1502.
Optionally, the chip further includes a memory, which may include a read-only memory and a random access memory and provide operation instructions and data to the processor. A part of the memory may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by invoking operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Optionally, the chip may be used in the encoding apparatus or DOP according to the embodiments of this application. Optionally, the interface circuit 1502 may be used to output an execution result of the processor 1501. For the encoding and decoding methods provided in one or more embodiments of this application, reference may be made to the foregoing embodiments, and details are not repeated here.
It should be noted that the functions corresponding to the processor 1501 and the interface circuit 1502 may each be implemented by hardware design, by software design, or by a combination of hardware and software, which is not limited here.
The apparatus, computer storage medium, computer program product or chip provided in this embodiment is used to perform the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, and details are not repeated here.
It should be understood that, in the various embodiments of this application, the sequence numbers of the above processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered to be beyond the scope of the embodiments of this application.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or a part of the technical solutions may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
The foregoing is merely specific implementations of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement that can be readily conceived by a person skilled in the art within the technical scope disclosed in the embodiments of this application shall fall within the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.

Claims (26)

  1. A decoding method, characterized by comprising:
    obtaining a bitstream;
    obtaining a plurality of data packets from the bitstream;
    feeding the plurality of data packets into a plurality of entropy decoders, according to identifiers of the plurality of data packets, for decoding to obtain a plurality of syntax elements; and
    restoring media content from the plurality of syntax elements.
  2. The method according to claim 1, characterized in that the feeding the plurality of data packets into a plurality of entropy decoders, according to identifiers of the plurality of data packets, for decoding to obtain a plurality of syntax elements comprises:
    determining, according to the identifiers of the plurality of data packets, the substream to which each of the plurality of data packets belongs;
    placing each data packet into a decoding buffer of the substream to which the data packet belongs; and
    feeding the data packets in each decoding buffer into an entropy decoder corresponding to the buffer for decoding to obtain the plurality of syntax elements.
  3. The method according to claim 1 or 2, characterized in that the restoring media content from the plurality of syntax elements comprises:
    dequantizing the plurality of syntax elements to obtain a plurality of residuals; and
    performing prediction reconstruction on the plurality of residuals to restore the media content.
  4. The method according to any one of claims 1 to 3, characterized in that the obtaining a plurality of data packets from the bitstream comprises:
    splitting the bitstream into a plurality of data packets according to a preset split length.
  5. The method according to any one of claims 1 to 4, characterized in that the data packet comprises a data header and a data body, and the data header is used to store the identifier of the data packet.
  6. The method according to any one of claims 1 to 5, characterized in that the plurality of data packets have the same length.
  7. An encoding method, characterized by comprising:
    obtaining a plurality of syntax elements from media content;
    feeding the plurality of syntax elements into entropy encoders for encoding to obtain a plurality of substreams; and
    interleaving the plurality of substreams into a bitstream, wherein the bitstream comprises a plurality of data packets, and the data packet comprises an identifier indicating the substream to which the data packet belongs.
  8. The method according to claim 7, characterized in that the interleaving the plurality of substreams into a bitstream comprises:
    splitting each of the plurality of substreams into a plurality of data packets according to a preset split length; and
    obtaining the bitstream from the plurality of data packets.
  9. The method according to claim 7 or 8, characterized in that the obtaining a plurality of syntax elements from media content comprises:
    performing prediction on the media content to obtain a plurality of pieces of prediction data; and
    quantizing the plurality of pieces of prediction data to obtain the plurality of syntax elements.
  10. The method according to any one of claims 7 to 9, characterized in that the data packet comprises a data header and a data body, and the data header is used to store the identifier of the data packet.
  11. The method according to any one of claims 7 to 10, characterized in that the plurality of data packets have the same length.
  12. A decoding apparatus, characterized by comprising an obtaining unit, a substream de-interleaving unit, a decoding unit and a restoration unit, wherein
    the obtaining unit is configured to obtain a bitstream;
    the substream de-interleaving unit is configured to obtain a plurality of data packets from the bitstream;
    the decoding unit is configured to feed the plurality of data packets into a plurality of entropy decoders, according to identifiers of the plurality of data packets, for decoding to obtain a plurality of syntax elements; and
    the restoration unit is configured to restore media content from the plurality of syntax elements.
  13. The apparatus according to claim 12, characterized in that the decoding unit is specifically configured to:
    determine, according to the identifiers of the plurality of data packets, the substream to which each of the plurality of data packets belongs;
    place each data packet into a decoding buffer of the substream to which the data packet belongs; and
    feed the data packets in each decoding buffer into an entropy decoder corresponding to the buffer for decoding to obtain the plurality of syntax elements.
  14. The apparatus according to claim 12 or 13, characterized in that the restoration unit is specifically configured to:
    dequantize the plurality of syntax elements to obtain a plurality of residuals; and
    perform prediction reconstruction on the plurality of residuals to restore the media content.
  15. The apparatus according to any one of claims 12 to 14, characterized in that the substream de-interleaving unit is specifically configured to:
    split the bitstream into a plurality of data packets according to a preset split length.
  16. The apparatus according to any one of claims 12 to 15, characterized in that the data packet comprises a data header and a data body, and the data header is used to store the identifier of the data packet.
  17. The apparatus according to any one of claims 12 to 16, characterized in that the plurality of data packets have the same length.
  18. An encoding apparatus, characterized by comprising a syntax unit, an encoding unit and a substream interleaving unit, wherein
    the syntax unit is configured to obtain a plurality of syntax elements from media content;
    the encoding unit is configured to feed the plurality of syntax elements into entropy encoders for encoding to obtain a plurality of substreams; and
    the substream interleaving unit is configured to interleave the plurality of substreams into a bitstream, wherein the bitstream comprises a plurality of data packets, and the data packet comprises an identifier indicating the substream to which the data packet belongs.
  19. The apparatus according to claim 18, characterized in that the substream interleaving unit is specifically configured to:
    split each of the plurality of substreams into a plurality of data packets according to a preset split length; and
    obtain the bitstream from the plurality of data packets.
  20. The apparatus according to claim 18 or 19, characterized in that the syntax unit is specifically configured to:
    perform prediction on the media content to obtain a plurality of pieces of prediction data; and
    quantize the plurality of pieces of prediction data to obtain the plurality of syntax elements.
  21. The apparatus according to any one of claims 18 to 20, characterized in that the data packet comprises a data header and a data body, and the data header is used to store the identifier of the data packet.
  22. The apparatus according to any one of claims 18 to 21, characterized in that the plurality of data packets have the same length.
  23. A decoding apparatus, comprising at least one processor and a memory, characterized in that the at least one processor executes a program or instructions stored in the memory, so that the decoding apparatus implements the method according to any one of claims 1 to 6.
  24. An encoding apparatus, comprising at least one processor and a memory, characterized in that the at least one processor executes a program or instructions stored in the memory, so that the encoding apparatus implements the method according to any one of claims 7 to 11.
  25. A computer-readable storage medium for storing a computer program, characterized in that, when the computer program is run on a computer or a processor, the computer or the processor is caused to implement the method according to any one of claims 1 to 6 or any one of claims 7 to 11.
  26. A computer program product comprising instructions, characterized in that, when the instructions are run on a computer or a processor, the computer or the processor is caused to implement the method according to any one of claims 1 to 6 or any one of claims 7 to 11.
PCT/CN2023/076726 2022-02-28 2023-02-17 Encoding and decoding method and apparatus WO2023160470A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210186914.6 2022-02-28
CN202210186914.6A CN116708787A (zh) 2022-02-28 2023-09-05 Encoding and decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2023160470A1 true WO2023160470A1 (zh) 2023-08-31

Family

ID=87764838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/076726 WO2023160470A1 (zh) 2022-02-28 2023-02-17 编解码方法和装置

Country Status (2)

Country Link
CN (1) CN116708787A (zh)
WO (1) WO2023160470A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110248872A1 (en) * 2010-04-13 2011-10-13 Research In Motion Limited Methods and devices for load balancing in parallel entropy coding and decoding
CN110324625A (zh) * 2018-03-31 2019-10-11 Huawei Technologies Co., Ltd. Video decoding method and apparatus
WO2020153512A1 (ko) * 2019-01-23 2020-07-30 LG Electronics Inc. Method and apparatus for processing a video signal
CN111557093A (zh) * 2017-12-12 2020-08-18 Coherent Logix, Inc. Low-latency video codec and transmission with parallel processing

Also Published As

Publication number Publication date
CN116708787A (zh) 2023-09-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23759095

Country of ref document: EP

Kind code of ref document: A1