CN112135149A - Entropy coding/decoding method and apparatus for a syntax element, and codec

Entropy coding/decoding method and apparatus for a syntax element, and codec

Info

Publication number: CN112135149A (application number CN201910550626.2A)
Authority: CN (China)
Prior art keywords: motion vector, candidate motion, syntax element, value, video
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN112135149B (en)
Inventors: 陈旭 (Chen Xu), 杨海涛 (Yang Haitao), 张恋 (Zhang Lian)
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd; priority to CN201910550626.2A and PCT/CN2020/096363 (published as WO2020259353A1)
Publication of CN112135149A; application granted; publication of CN112135149B

Classifications

All of the listed classifications fall under H04N19/00 (H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television), "Methods or arrangements for coding, decoding, compressing or decompressing digital video signals":

    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/103: using adaptive coding; selection of coding mode or of prediction mode
    • H04N19/124: using adaptive coding; quantisation
    • H04N19/14: using adaptive coding characterised by incoming video signal characteristics or properties; coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/513: using predictive coding involving temporal prediction; motion estimation or motion compensation; processing of motion vectors
    • H04N19/60: using transform coding
    • H04N19/82: details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/91: entropy coding, e.g. variable length coding [VLC] or arithmetic coding

Abstract

The application discloses an entropy encoding/decoding method and apparatus for a syntax element, and an encoder/decoder. The method includes: determining whether the length of the current merge candidate motion vector list is greater than a preset value; and if the length of the merge candidate motion vector list is greater than the preset value, entropy encoding/decoding the value of a first syntax element in bypass coding mode, where the first syntax element indicates whether a first candidate motion vector or a second candidate motion vector is selected from the merge candidate motion vector list as the base motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the merge candidate motion vector list. Implementing the application can reduce the complexity of encoding/decoding.

Description

Entropy coding/decoding method and apparatus for a syntax element, and codec
Technical Field
The present application relates to the field of video encoding and decoding, and in particular, to an entropy encoding/decoding method and apparatus for a syntax element, and to an encoder/decoder.
Background
Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, electronic book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 part 10 Advanced Video Coding (AVC), the video coding standard H.265/High Efficiency Video Coding (HEVC), and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Context-based Adaptive Binary Arithmetic Coding (CABAC) is a commonly used entropy coding technique for encoding and decoding syntax element values. CABAC processing mainly includes binarization, context modeling, and binary arithmetic coding. Binarization converts a non-binary syntax element value into a unique binary sequence (i.e., a binary string). Context modeling means that, for each bin in the binary string, a probability model is determined according to context information (e.g., coded information in the reconstructed region around the block corresponding to the syntax element). Binary arithmetic coding means that each bin is encoded according to the probability value in its probability model, and the probability value in the model is then updated according to the value of that bin.
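To make the cost difference between the two bin-coding paths concrete, the following is a minimal, non-normative sketch (not the CABAC engine defined in any standard; the class and method names are illustrative assumptions). A context-coded bin looks up and updates an adaptive probability model; a bypass-coded bin always uses probability 1/2 and maintains no model:

```python
class ContextModel:
    """Adaptive probability estimate for one context-coded bin."""
    def __init__(self, p_zero=0.5, rate=0.05):
        self.p_zero = p_zero      # estimated probability that the bin equals 0
        self.rate = rate          # adaptation speed of the estimate

    def update(self, bin_val):
        target = 1.0 if bin_val == 0 else 0.0
        self.p_zero += self.rate * (target - self.p_zero)


class ToyBinaryArithmeticEncoder:
    """Floating-point interval subdivision, for illustration only."""
    def __init__(self):
        self.low, self.high = 0.0, 1.0

    def encode_bin(self, bin_val, p_zero):
        split = self.low + (self.high - self.low) * p_zero
        if bin_val == 0:
            self.high = split     # keep the sub-interval assigned to 0
        else:
            self.low = split      # keep the sub-interval assigned to 1

    def encode_context(self, bin_val, ctx):
        self.encode_bin(bin_val, ctx.p_zero)  # model lookup ...
        ctx.update(bin_val)                   # ... plus model update

    def encode_bypass(self, bin_val):
        self.encode_bin(bin_val, 0.5)         # fixed 1/2, no model to maintain
```

Bypass coding skips both the model lookup and the model update, which is the complexity saving that the embodiments below rely on.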
In the related art, the CABAC technique described above is used to entropy encode/decode the syntax element mmvd_cand_flag used in the merge mode with motion vector difference (MMVD) technique. However, the complexity of this entropy encoding/decoding is high.
Disclosure of Invention
The embodiments of the present application provide an entropy encoding/decoding method and apparatus for a syntax element, and an encoder/decoder, which reduce the complexity of encoding/decoding.
In a first aspect, an embodiment of the present application provides an entropy encoding method for a syntax element, including:
determining whether the length of the current merge candidate motion vector list is greater than a preset value; and if the length of the merge candidate motion vector list is greater than the preset value, entropy encoding the value of a first syntax element in bypass coding mode, where the first syntax element indicates whether a first candidate motion vector or a second candidate motion vector is selected from the merge candidate motion vector list as the base motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the merge candidate motion vector list.
In this way, the value of the first syntax element mmvd_cand_flag is entropy encoded in bypass coding mode, so no dedicated probability model needs to be assigned to the bin in the binary string of mmvd_cand_flag, which reduces encoding complexity.
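A minimal sketch of this encoder-side logic, reusing the toy encoder above; the concrete threshold and the behaviour when the flag is not coded are assumptions, since the text only specifies the greater-than case:

```python
PRESET_VALUE = 1  # assumed threshold; the text only calls it "a preset value"

def encode_mmvd_cand_flag(enc, merge_cand_list, mmvd_cand_flag):
    if len(merge_cand_list) > PRESET_VALUE:
        enc.encode_bypass(mmvd_cand_flag)  # no context model for this bin
    # Otherwise the flag is not written; a decoder would infer it (assumed
    # to be 0, i.e. the first candidate), since only one base is selectable.
```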
In a possible implementation, entropy encoding the value of the first syntax element in bypass coding mode if the length of the merge candidate motion vector list is greater than the preset value further includes: if the length of the merge candidate motion vector list is greater than the preset value, setting a switch flag indicating the bypass coding mode to a first value, and entropy encoding the value of the first syntax element in bypass coding mode.
Using a switch flag to indicate whether the value of the first syntax element is entropy encoded in bypass coding mode can improve encoding efficiency.
In one possible implementation, the first syntax element is mmvd_cand_flag.
In a second aspect, an embodiment of the present application provides an entropy decoding method for a syntax element, including:
determining whether the length of the current merge candidate motion vector list is greater than a preset value; and if the length of the merge candidate motion vector list is greater than the preset value, entropy decoding the value of a first syntax element in bypass coding mode, where the first syntax element indicates whether a first candidate motion vector or a second candidate motion vector is selected from the merge candidate motion vector list as the base motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the merge candidate motion vector list.
In this way, the value of the first syntax element mmvd_cand_flag is entropy decoded in bypass decoding mode, so no dedicated probability model needs to be assigned to the bin in the binary string of mmvd_cand_flag, which reduces decoding complexity.
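The decoder side mirrors the sketch above; dec.decode_bypass() is an assumed counterpart of the toy encoder's bypass path, not an API from the patent:

```python
def decode_mmvd_cand_flag(dec, merge_cand_list_len):
    if merge_cand_list_len > PRESET_VALUE:
        return dec.decode_bypass()  # bypass-decoded bin, fixed 1/2 probability
    return 0                        # assumed inference when the flag is absent
```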
In a possible implementation, entropy decoding the value of the first syntax element in bypass coding mode if the length of the merge candidate motion vector list is greater than the preset value further includes: if the length of the merge candidate motion vector list is greater than the preset value, setting a switch flag indicating the bypass coding mode to a first value, and entropy decoding the value of the first syntax element in bypass coding mode.
Using a switch flag to indicate whether the value of the first syntax element is entropy decoded in bypass decoding mode can improve decoding efficiency.
In one possible implementation, the first syntax element is mmvd_cand_flag.
In a third aspect, an embodiment of the present application provides an entropy encoding apparatus, including:
a judging module, configured to determine whether the length of the current merge candidate motion vector list is greater than a preset value; and an encoding module, configured to entropy encode the value of a first syntax element in bypass coding mode if the length of the merge candidate motion vector list is greater than the preset value, where the first syntax element indicates whether a first candidate motion vector or a second candidate motion vector is selected from the merge candidate motion vector list as the base motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the merge candidate motion vector list.
In a possible implementation, the encoding module is further configured to: if the length of the merge candidate motion vector list is greater than the preset value, set a switch flag indicating the bypass coding mode to a first value, and entropy encode the value of the first syntax element in bypass coding mode.
In one possible implementation, the first syntax element is mmvd_cand_flag.
In a fourth aspect, an embodiment of the present application provides an entropy decoding apparatus, including:
a judging module, configured to determine whether the length of the current merge candidate motion vector list is greater than a preset value; and a decoding module, configured to entropy decode the value of a first syntax element in bypass coding mode if the length of the merge candidate motion vector list is greater than the preset value, where the first syntax element indicates whether a first candidate motion vector or a second candidate motion vector is selected from the merge candidate motion vector list as the base motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the merge candidate motion vector list.
In a possible implementation, the decoding module is further configured to: if the length of the merge candidate motion vector list is greater than the preset value, set a switch flag indicating the bypass coding mode to a first value, and entropy decode the value of the first syntax element in bypass coding mode.
In one possible implementation, the first syntax element is mmvd_cand_flag.
In a fifth aspect, an embodiment of the present application provides a video encoder for encoding an image block, including:
inter-frame prediction means for predicting motion information of a currently encoded image block based on target candidate motion information, and determining a predicted pixel value of the currently encoded image block based on the motion information of the currently encoded image block;
the entropy encoding apparatus according to the third aspect or any implementation thereof, configured to encode into the bitstream an index identifier of the target candidate motion information, where the index identifier indicates the target candidate motion information for the currently encoded image block;
a reconstruction module to reconstruct the current encoded image block based on the predicted pixel values.
In a sixth aspect, an embodiment of the present application provides a video decoder, configured to decode a picture block from a bitstream, including:
the entropy decoding apparatus according to the fourth aspect or any implementation thereof, configured to decode an index identifier from the bitstream, where the index identifier is used to indicate target candidate motion information of a currently decoded image block;
inter-frame prediction means for predicting motion information of a currently decoded image block based on the target candidate motion information indicated by the index flag, and determining a predicted pixel value of the currently decoded image block based on the motion information of the currently decoded image block;
a reconstruction module to reconstruct the current decoded image block based on the predicted pixel values.
In a seventh aspect, an embodiment of the present application provides a video encoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor calling program code stored in the memory to perform a method as described in any one of the above first aspects.
In an eighth aspect, an embodiment of the present application provides a video decoding apparatus, including: a non-volatile memory and a processor coupled to each other, the processor calling program code stored in the memory to perform a method as described in any one of the above second aspects.
In a ninth aspect, the present application provides a computer-readable storage medium storing program code, wherein the program code includes instructions for performing part or all of the steps of any one of the methods of the first or second aspects.
In a tenth aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first or second aspects.
It should be understood that the third to tenth aspects of the present application are consistent with the technical solutions of the first or second aspect of the present application, and the beneficial effects achieved by the aspects and the corresponding possible embodiments are similar, and are not described in detail again.
It can be seen that the embodiments of the present application reduce encoding/decoding complexity by entropy encoding/decoding the value of the first syntax element mmvd_cand_flag in bypass mode, without assigning a dedicated probability model to the bin in the binary string of mmvd_cand_flag.
Drawings
To describe the technical solutions in the embodiments of the present application or in the background more clearly, the following briefly introduces the drawings used in describing the embodiments or the background.
FIG. 1A is a block diagram of an example of a video encoding and decoding system 10 for implementing embodiments of the present application;
FIG. 1B is a block diagram of an example of a video coding system 40 for implementing embodiments of the present application;
FIG. 2 is a block diagram of an example structure of an encoder 20 for implementing embodiments of the present application;
FIG. 3 is a block diagram of an example structure of a decoder 30 for implementing embodiments of the present application;
FIG. 4 is a block diagram of an example of a video coding apparatus 400 for implementing an embodiment of the present application;
FIG. 5 is a block diagram of another example of an encoding device or a decoding device for implementing embodiments of the present application;
FIG. 6 is a schematic flowchart of an entropy encoding method for a syntax element for implementing an embodiment of the present application;
FIG. 7 is a schematic flowchart of an entropy decoding method for a syntax element for implementing an embodiment of the present application;
FIG. 8 is a block diagram of an entropy coding apparatus 800 for implementing an embodiment of the present application;
fig. 9 is a block diagram of an entropy decoding apparatus 900 for implementing an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. In the following description, reference is made to the accompanying drawings which form a part hereof and in which is shown by way of illustration specific aspects of embodiments of the application or in which specific aspects of embodiments of the application may be employed. It should be understood that embodiments of the present application may be used in other ways and may include structural or logical changes not depicted in the drawings. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present application is defined by the appended claims. For example, it should be understood that the disclosure in connection with the described methods may equally apply to the corresponding apparatus or system for performing the methods, and vice versa. For example, if one or more particular method steps are described, the corresponding apparatus may comprise one or more units, such as functional units, to perform the described one or more method steps (e.g., a unit performs one or more steps, or multiple units, each of which performs one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the figures. On the other hand, for example, if a particular apparatus is described based on one or more units, such as functional units, the corresponding method may comprise one step to perform the functionality of the one or more units (e.g., one step performs the functionality of the one or more units, or multiple steps, each of which performs the functionality of one or more of the plurality of units), even if such one or more steps are not explicitly described or illustrated in the figures. Further, it is to be understood that features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless explicitly stated otherwise.
The technical solutions in the embodiments of the present application can be applied to existing video coding standards (such as H.264 and HEVC) and to future video coding standards (such as the H.266 standard). The terminology used in the description of the embodiments of the present application is only for the purpose of describing particular embodiments and is not intended to limit the present application. Some concepts that may be involved in the embodiments of the present application are briefly described below.
Video coding generally refers to processing a sequence of pictures that form a video or video sequence. In the field of video coding, the terms "picture", "frame" or "image" may be used as synonyms. Video coding as used herein means video encoding or video decoding. Video encoding is performed on the source side, typically including processing (e.g., by compressing) the original video picture to reduce the amount of data required to represent the video picture for more efficient storage and/or transmission. Video decoding is performed at the destination side, typically involving inverse processing with respect to the encoder, to reconstruct the video pictures. References in the embodiments to the "coding" of video pictures should be understood as referring to the "encoding" or "decoding" of a video sequence. The combination of the encoding part and the decoding part is also called a codec (encoding and decoding).
A video sequence comprises a series of images (pictures) which are further divided into slices (slices) which are further divided into blocks (blocks). Video coding performs the coding process in units of blocks, and in some new video coding standards, the concept of blocks is further extended. For example, in the h.264 standard, there is a Macroblock (MB), which may be further divided into a plurality of prediction blocks (partitions) that can be used for predictive coding. In the High Efficiency Video Coding (HEVC) standard, basic concepts such as a Coding Unit (CU), a Prediction Unit (PU), and a Transform Unit (TU) are adopted, and various block units are functionally divided, and a brand new tree-based structure is adopted for description. For example, a CU may be partitioned into smaller CUs according to a quadtree, and the smaller CUs may be further partitioned to form a quadtree structure, where the CU is a basic unit for partitioning and encoding an encoded image. There is also a similar tree structure for PU and TU, and PU may correspond to a prediction block, which is the basic unit of predictive coding. The CU is further partitioned into PUs according to a partitioning pattern. A TU may correspond to a transform block, which is a basic unit for transforming a prediction residual. However, CU, PU and TU are basically concepts of blocks (or image blocks).
For example, in HEVC, a CTU is split into multiple CUs by using a quadtree structure represented as a coding tree. A decision is made at the CU level whether to encode a picture region using inter-picture (temporal) or intra-picture (spatial) prediction. Each CU may be further split into one, two, or four PUs according to the PU split type. The same prediction process is applied within one PU, and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU split type, the CU may be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree used for the CU. In recent developments of video compression techniques, coding blocks are partitioned using a quad-tree plus binary tree (QTBT) partitioning structure, in which a CU may be square or rectangular in shape.
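As a rough illustration of the recursive quadtree partitioning described above, the sketch below enumerates the leaf CUs of a CTU; the split decision (rate-distortion driven in a real encoder) is abstracted into a caller-supplied predicate, and all names are assumptions rather than standard APIs:

```python
def split_cu(x, y, size, min_size, should_split):
    """Yield (x, y, size) leaf CUs of a quadtree rooted at one CTU."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                yield from split_cu(x + dx, y + dy, half, min_size, should_split)
    else:
        yield (x, y, size)

# Example: split a 64x64 CTU wherever blocks are larger than 32x32.
leaves = list(split_cu(0, 0, 64, 8, lambda x, y, s: s > 32))  # four 32x32 CUs
```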
Herein, for convenience of description and understanding, an image block to be encoded in a currently encoded image may be referred to as a current block, e.g., in encoding, referring to a block currently being encoded; in decoding, refers to the block currently being decoded. A decoded image block in a reference picture used for predicting the current block is referred to as a reference block, i.e. a reference block is a block that provides a reference signal for the current block, wherein the reference signal represents pixel values within the image block. A block in the reference picture that provides a prediction signal for the current block may be a prediction block, wherein the prediction signal represents pixel values or sample values or a sampled signal within the prediction block. For example, after traversing multiple reference blocks, a best reference block is found that will provide prediction for the current block, which is called a prediction block.
In the case of lossless video coding, the original video picture can be reconstructed, i.e., the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission). In the case of lossy video coding, the amount of data needed to represent the video picture is reduced by performing further compression, e.g., by quantization, while the decoder side cannot fully reconstruct the video picture, i.e., the quality of the reconstructed video picture is lower or worse than the quality of the original video picture.
Several video coding standards since H.261 belong to the group of "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding in the transform domain for applying quantization). Each picture of a video sequence is typically partitioned into non-overlapping sets of blocks, typically encoded at the block level. In other words, the encoder side typically processes, i.e., encodes, video at the block (video block) level, e.g., generates a prediction block by spatial (intra-picture) prediction and temporal (inter-picture) prediction, subtracts the prediction block from the current block (currently processed block or block to be processed) to obtain a residual block, transforms the residual block and quantizes the residual block in the transform domain to reduce the amount of data to be transmitted (compressed), while the decoder side applies the inverse processing portion relative to the encoder to the encoded or compressed block to reconstruct the current block for representation. In addition, the encoder replicates the decoder processing loop such that the encoder and decoder generate the same prediction (e.g., intra-prediction and inter-prediction) and/or reconstruction for processing, i.e., encoding, subsequent blocks.
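The block-level loop just described can be condensed into a small numeric sketch; this is an assumed, simplified model (the transform step is omitted so that quantization acts directly on the residual):

```python
import numpy as np

def encode_block(block, prediction, qstep):
    residual = block - prediction                # prediction error (sample domain)
    levels = np.round(residual / qstep)          # quantization: the lossy step
    reconstructed = prediction + levels * qstep  # decoder-style reconstruction
    return levels, reconstructed

block = np.array([[52.0, 55.0], [61.0, 59.0]])
pred  = np.array([[50.0, 50.0], [60.0, 60.0]])
levels, recon = encode_block(block, pred, qstep=4.0)
# Each reconstructed sample is within qstep/2 of the original: quantization loss.
```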
The system architecture to which the embodiments of the present application apply is described below. Referring to fig. 1A, fig. 1A schematically shows a block diagram of a video encoding and decoding system 10 to which an embodiment of the present application is applied. As shown in fig. 1A, video encoding and decoding system 10 may include a source device 12 and a destination device 14, source device 12 generating encoded video data, and thus source device 12 may be referred to as a video encoding apparatus. Destination device 14 may decode the encoded video data generated by source device 12, and thus destination device 14 may be referred to as a video decoding apparatus. Various implementations of source apparatus 12, destination apparatus 14, or both may include one or more processors and memory coupled to the one or more processors. The memory can include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures that can be accessed by a computer, as described herein. Source apparatus 12 and destination apparatus 14 may comprise a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, wireless communication devices, or the like.
Although fig. 1A depicts source apparatus 12 and destination apparatus 14 as separate apparatuses, an apparatus embodiment may also include both source apparatus 12 and destination apparatus 14 or the functionality of both, i.e., source apparatus 12 or corresponding functionality and destination apparatus 14 or corresponding functionality. In such embodiments, source device 12 or corresponding functionality and destination device 14 or corresponding functionality may be implemented using the same hardware and/or software, or using separate hardware and/or software, or any combination thereof.
A communication connection may be made between source device 12 and destination device 14 over link 13, and destination device 14 may receive encoded video data from source device 12 via link 13. Link 13 may comprise one or more media or devices capable of moving encoded video data from source apparatus 12 to destination apparatus 14. In one example, link 13 may include one or more communication media that enable source device 12 to transmit encoded video data directly to destination device 14 in real-time. In this example, source apparatus 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination apparatus 14. The one or more communication media may include wireless and/or wired communication media such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The one or more communication media may include routers, switches, base stations, or other apparatuses that facilitate communication from source apparatus 12 to destination apparatus 14.
Source device 12 includes an encoder 20, and in the alternative, source device 12 may also include a picture source 16, a picture preprocessor 18, and a communication interface 22. In one implementation, the encoder 20, the picture source 16, the picture preprocessor 18, and the communication interface 22 may be hardware components of the source device 12 or may be software programs of the source device 12. Described below, respectively:
the picture source 16, which may include or be any kind of picture capturing device, is used for capturing, for example, a real-world picture, and/or any kind of picture or comment generation device (for screen content encoding, some text on the screen is also considered as part of the picture or image to be encoded), such as a computer graphics processor for generating a computer animation picture, or any kind of device for acquiring and/or providing a real-world picture, a computer animation picture (e.g., screen content, a Virtual Reality (VR) picture), and/or any combination thereof (e.g., an Augmented Reality (AR) picture). The picture source 16 may be a camera for capturing pictures or a memory for storing pictures, and the picture source 16 may also include any kind of (internal or external) interface for storing previously captured or generated pictures and/or for obtaining or receiving pictures. When picture source 16 is a camera, picture source 16 may be, for example, an integrated camera local or integrated in the source device; when the picture source 16 is a memory, the picture source 16 may be an integrated memory local or integrated, for example, in the source device. When the picture source 16 comprises an interface, the interface may for example be an external interface receiving pictures from an external video source, for example an external picture capturing device such as a camera, an external memory or an external picture generating device, for example an external computer graphics processor, a computer or a server. The interface may be any kind of interface according to any proprietary or standardized interface protocol, e.g. a wired or wireless interface, an optical interface.
The picture can be regarded as a two-dimensional array or matrix of pixel elements (picture elements). The pixels in the array may also be referred to as sampling points. The number of sampling points of the array or picture in the horizontal and vertical directions (or axes) defines the size and/or resolution of the picture. To represent color, three color components are typically employed, i.e., a picture may be represented as or contain three sample arrays. For example, in RGB format or color space, a picture includes corresponding arrays of red, green, and blue samples. However, in video coding, each pixel is typically represented in a luminance/chrominance format or color space, e.g., for pictures in YUV format, comprising a luminance component (sometimes also indicated with L) indicated by Y and two chrominance components indicated by U and V. The luminance (luma) component Y represents luminance or gray level intensity (e.g., both are the same in a gray scale picture), while the two chrominance (chroma) components U and V represent chrominance or color information components. Accordingly, a picture in YUV format includes a luma sample array of luma sample values (Y), and two chroma sample arrays of chroma values (U and V). Pictures in RGB format can be converted or transformed into YUV format and vice versa, a process also known as color transformation or conversion. If the picture is black and white, the picture may include only an array of luminance samples. In the embodiment of the present application, the pictures transmitted from the picture source 16 to the picture processor may also be referred to as raw picture data 17.
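As an example of the color transformation just mentioned, one common conversion uses the classic BT.601 coefficients; other specifications (e.g., BT.709) use different weights, so the values below are one conventional choice rather than the only one:

```python
def rgb_to_yuv(r, g, b):
    """BT.601-style RGB-to-YUV conversion (full-range, illustrative)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted sum of R, G, B
    u = 0.492 * (b - y)                    # blue-difference chroma
    v = 0.877 * (r - y)                    # red-difference chroma
    return y, u, v
```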
Picture pre-processor 18 is configured to receive original picture data 17 and perform pre-processing on original picture data 17 to obtain pre-processed picture 19 or pre-processed picture data 19. For example, the pre-processing performed by picture pre-processor 18 may include trimming, color format conversion (e.g., from RGB format to YUV format), toning, or de-noising.
An encoder 20 (or video encoder 20) for receiving the pre-processed picture data 19, processing the pre-processed picture data 19 with a relevant prediction mode (such as the prediction mode in various embodiments herein), thereby providing encoded picture data 21 (structural details of the encoder 20 will be described further below based on fig. 2 or fig. 4 or fig. 5). In some embodiments, the encoder 20 may be configured to perform various embodiments described hereinafter to implement the application of the chroma block prediction method described herein on the encoding side.
A communication interface 22, which may be used to receive encoded picture data 21 and may transmit encoded picture data 21 over link 13 to destination device 14 or any other device (e.g., memory) for storage or direct reconstruction, which may be any device for decoding or storage. Communication interface 22 may, for example, be used to encapsulate encoded picture data 21 into a suitable format, such as a data packet, for transmission over link 13.
Destination device 14 includes a decoder 30, and optionally destination device 14 may also include a communication interface 28, a picture post-processor 32, and a display device 34. Described below, respectively:
communication interface 28 may be used to receive encoded picture data 21 from source device 12 or any other source, such as a storage device, such as an encoded picture data storage device. The communication interface 28 may be used to transmit or receive the encoded picture data 21 by way of a link 13 between the source device 12 and the destination device 14, or by way of any type of network, such as a direct wired or wireless connection, any type of network, such as a wired or wireless network or any combination thereof, or any type of private and public networks, or any combination thereof. Communication interface 28 may, for example, be used to decapsulate data packets transmitted by communication interface 22 to obtain encoded picture data 21.
Both communication interface 28 and communication interface 22 may be configured as a one-way communication interface or a two-way communication interface, and may be used, for example, to send and receive messages to establish a connection, acknowledge and exchange any other information related to a communication link and/or data transfer, such as an encoded picture data transfer.
A decoder 30 (also referred to as video decoder 30) for receiving the encoded picture data 21 and providing decoded picture data 31 or decoded pictures 31 (structural details of the decoder 30 will be described further below based on fig. 3 or fig. 4 or fig. 5). In some embodiments, the decoder 30 may be configured to perform various embodiments described hereinafter to implement the application of the chroma block prediction method described herein on the decoding side.
A picture post-processor 32 for performing post-processing on the decoded picture data 31 (also referred to as reconstructed picture data) to obtain post-processed picture data 33. Post-processing performed by picture post-processor 32 may include color format conversion (e.g., from YUV format to RGB format), toning, trimming, or resampling, or any other processing; the picture post-processor 32 may also be used to transmit the post-processed picture data 33 to the display device 34.
A display device 34 for receiving the post-processed picture data 33 for displaying pictures to, for example, a user or viewer. Display device 34 may be or may include any type of display for presenting the reconstructed picture, such as an integrated or external display or monitor. For example, the display may include a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, a plasma display, a projector, a micro LED display, a liquid crystal on silicon (LCoS), a Digital Light Processor (DLP), or any other display of any kind.
It will be apparent to those skilled in the art from this description that the existence and (exact) division of the functionality of the different elements within source device 12 and/or destination device 14 as shown in fig. 1A may vary depending on the actual device and application. Source device 12 and destination device 14 may comprise any of a variety of devices, including any type of handheld or stationary device, such as a notebook or laptop computer, a mobile phone, a smartphone, a tablet or tablet computer, a camcorder, a desktop computer, a set-top box, a television, a camera, an in-vehicle device, a display device, a digital media player, a video game console, a video streaming device (e.g., a content service server or a content distribution server), a broadcast receiver device, or a broadcast transmitter device, and may use no operating system or any type of operating system.
Both encoder 20 and decoder 30 may be implemented as any of a variety of suitable circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combinations thereof. If the techniques are implemented in part in software, an apparatus may store instructions of the software in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the foregoing, including hardware, software, a combination of hardware and software, etc., may be considered one or more processors.
In some cases, the video encoding and decoding system 10 shown in fig. 1A is merely an example, and the techniques of this application may be applicable to video encoding settings (e.g., video encoding or video decoding) that do not necessarily involve any data communication between the encoding and decoding devices. In other examples, the data may be retrieved from local storage, streamed over a network, and so on. A video encoding device may encode and store data to a memory, and/or a video decoding device may retrieve and decode data from a memory. In some examples, the encoding and decoding are performed by devices that do not communicate with each other, but merely encode data to and/or retrieve data from memory and decode data.
Referring to fig. 1B, fig. 1B is an illustrative diagram of an example of a video coding system 40 including the encoder 20 of fig. 2 and/or the decoder 30 of fig. 3, according to an example embodiment. Video coding system 40 may implement a combination of the various techniques of the embodiments of the present application. In the illustrated embodiment, video coding system 40 may include an imaging device 41, an encoder 20, a decoder 30 (and/or a video codec implemented by logic 47 of a processing unit 46), an antenna 42, one or more processors 43, one or more memories 44, and/or a display device 45.
As shown in fig. 1B, the imaging device 41, the antenna 42, the processing unit 46, the logic circuit 47, the encoder 20, the decoder 30, the processor 43, the memory 44, and/or the display device 45 can communicate with each other. As discussed, although video coding system 40 is depicted with encoder 20 and decoder 30, in different examples video coding system 40 may include only encoder 20 or only decoder 30.
In some instances, antenna 42 may be used to transmit or receive an encoded bitstream of video data. Additionally, in some instances, display device 45 may be used to present video data. In some examples, logic 47 may be implemented by processing unit 46. The processing unit 46 may comprise application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. Video coding system 40 may also include an optional processor 43, which similarly may include application-specific integrated circuit (ASIC) logic, a graphics processor, a general-purpose processor, or the like. In some examples, the logic 47 may be implemented in hardware, such as video encoding specific hardware, and the processor 43 may be implemented in general purpose software, an operating system, and so on. In addition, the memory 44 may be any type of memory, such as a volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or a nonvolatile memory (e.g., flash memory, etc.). In a non-limiting example, memory 44 may be implemented by cache memory. In some instances, logic circuitry 47 may access memory 44 (e.g., to implement an image buffer). In other examples, logic 47 and/or processing unit 46 may include memory (e.g., cache, etc.) for implementing image buffers, etc.
In some examples, encoder 20, implemented by logic circuitry, may include an image buffer (e.g., implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include an encoder 20 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 2 and/or any other encoder system or subsystem described herein. Logic circuitry may be used to perform various operations discussed herein.
In some examples, decoder 30 may be implemented by logic circuitry 47 in a similar manner to implement the various modules discussed with reference to decoder 30 of fig. 3 and/or any other decoder system or subsystem described herein. In some examples, logic circuit implemented decoder 30 may include an image buffer (implemented by processing unit 46 or memory 44) and a graphics processing unit (e.g., implemented by processing unit 46). The graphics processing unit may be communicatively coupled to the image buffer. The graphics processing unit may include a decoder 30 implemented by logic circuitry 47 to implement the various modules discussed with reference to fig. 3 and/or any other decoder system or subsystem described herein.
In some instances, antenna 42 may be used to receive an encoded bitstream of video data. As discussed, the encoded bitstream may include data related to the encoded video frame, indicators, index values, mode selection data, etc., discussed herein, such as data related to the encoding partition (e.g., transform coefficients or quantized transform coefficients, (as discussed) optional indicators, and/or data defining the encoding partition). Video coding system 40 may also include a decoder 30 coupled to antenna 42 and used to decode the encoded bitstream. The display device 45 is used to present video frames.
It should be understood that for the example described with reference to encoder 20 in the embodiments of the present application, decoder 30 may be used to perform the reverse process. With respect to signaling syntax elements, decoder 30 may be configured to receive and parse such syntax elements and decode the associated video data accordingly. In some examples, encoder 20 may entropy encode the syntax elements into an encoded video bitstream. In such instances, decoder 30 may parse such syntax elements and decode the relevant video data accordingly.
It should be noted that the entropy encoding/decoding method for syntax elements described in the embodiments of the present application is mainly used in the entropy encoding/decoding process, which exists in both the encoder 20 and the decoder 30; the encoder 20 and the decoder 30 in the embodiments of the present application may be codecs corresponding to video standard protocols such as H.263, H.264, HEVC, MPEG-2, MPEG-4, VP8, and VP9, or to next-generation video standard protocols (e.g., H.266).
Referring to fig. 2, fig. 2 shows a schematic/conceptual block diagram of an example of an encoder 20 for implementing embodiments of the present application. In the example of fig. 2, encoder 20 includes a residual calculation unit 204, a transform processing unit 206, a quantization unit 208, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214, a buffer 216, a loop filter unit 220, a Decoded Picture Buffer (DPB) 230, a prediction processing unit 260, and an entropy encoding unit 270. Prediction processing unit 260 may include inter prediction unit 244, intra prediction unit 254, and mode selection unit 262. Inter prediction unit 244 may include a motion estimation unit and a motion compensation unit (not shown). The encoder 20 shown in fig. 2 may also be referred to as a hybrid video encoder or a video encoder according to a hybrid video codec.
For example, the residual calculation unit 204, the transform processing unit 206, the quantization unit 208, the prediction processing unit 260, and the entropy encoding unit 270 form a forward signal path of the encoder 20, and, for example, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the buffer 216, the loop filter 220, the Decoded Picture Buffer (DPB) 230, the prediction processing unit 260 form a backward signal path of the encoder, wherein the backward signal path of the encoder corresponds to a signal path of a decoder (see the decoder 30 in fig. 3).
The encoder 20 receives, e.g., via an input 202, a picture 201 or an image block 203 of a picture 201, e.g., a picture in a sequence of pictures forming a video or a video sequence. Image block 203 may also be referred to as a current picture block or a picture block to be encoded, and picture 201 may be referred to as a current picture or a picture to be encoded (especially when the current picture is distinguished from other pictures in video encoding, such as previously encoded and/or decoded pictures in the same video sequence, i.e., a video sequence that also includes the current picture).
An embodiment of the encoder 20 may comprise a partitioning unit (not shown in fig. 2) for partitioning the picture 201 into a plurality of blocks, e.g. image blocks 203, typically into a plurality of non-overlapping blocks. The partitioning unit may be used to use the same block size for all pictures in a video sequence and a corresponding grid defining the block size, or to alter the block size between pictures or subsets or groups of pictures and partition each picture into corresponding blocks.
In one example, prediction processing unit 260 of encoder 20 may be used to perform any combination of the above-described segmentation techniques.
Like picture 201, image block 203 is also or can be considered as a two-dimensional array or matrix of sample points having sample values, although its size is smaller than picture 201. In other words, the image block 203 may comprise, for example, one sample array (e.g., a luma array in the case of a black and white picture 201) or three sample arrays (e.g., a luma array and two chroma arrays in the case of a color picture) or any other number and/or class of arrays depending on the color format applied. The number of sampling points in the horizontal and vertical directions (or axes) of the image block 203 defines the size of the image block 203.
The encoder 20 as shown in fig. 2 is used to encode a picture 201 block by block, e.g. performing encoding and prediction for each image block 203.
The residual calculation unit 204 is configured to calculate a residual block 205 based on the picture image block 203 and the prediction block 265 (further details of the prediction block 265 are provided below), e.g. by subtracting sample values of the prediction block 265 from sample values of the picture image block 203 sample by sample (pixel by pixel) to obtain the residual block 205 in the sample domain.
The transform processing unit 206 is configured to apply a transform, such as a Discrete Cosine Transform (DCT) or a Discrete Sine Transform (DST), on the sample values of the residual block 205 to obtain transform coefficients 207 in a transform domain. The transform coefficients 207 may also be referred to as transform residual coefficients and represent the residual block 205 in the transform domain.
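For concreteness, a floating-point sketch of the separable orthonormal 2-D DCT-II follows; as the next paragraph notes, real codecs use scaled integer approximations rather than this exact form. Names and the sample input are illustrative:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (rows, then columns)."""
    n = block.shape[0]
    j = np.arange(n)
    # DCT-II basis matrix: c[k, j] = sqrt(2/n) * cos(pi * (2j + 1) * k / (2n))
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # DC row uses the smaller normalization
    return c @ block @ c.T      # separable transform

coeffs = dct2(np.array([[2.0, 5.0], [1.0, -1.0]]))  # a tiny residual block
```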
The transform processing unit 206 may be used to apply integer approximations of DCT/DST, such as the transforms specified for HEVC/H.265. Such integer approximations are typically scaled by some factor compared to the orthogonal DCT transform. To maintain the norm of the residual block processed by the forward transform and the inverse transform, an additional scaling factor is applied as part of the transform process. The scaling factor is typically selected based on certain constraints, e.g., the scaling factor being a power of 2 for shift operations, the bit depth of the transform coefficients, and the trade-off between accuracy and implementation cost. For example, a specific scaling factor may be specified for the inverse transform (e.g., performed by inverse transform processing unit 212 on the encoder 20 side, and by the corresponding inverse transform processing unit on the decoder 30 side), and correspondingly, a corresponding scaling factor for the forward transform may be specified on the encoder 20 side by transform processing unit 206.
Quantization unit 208 is used to quantize transform coefficients 207, e.g., by applying scalar quantization or vector quantization, to obtain quantized transform coefficients 209. Quantized transform coefficients 209 may also be referred to as quantized residual coefficients 209. The quantization process may reduce the bit depth associated with some or all of transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. The quantization level may be modified by adjusting a Quantization Parameter (QP). For example, for scalar quantization, different scales may be applied to achieve finer or coarser quantization. Smaller quantization steps correspond to finer quantization and larger quantization steps correspond to coarser quantization. An appropriate quantization step size may be indicated by a Quantization Parameter (QP). For example, the quantization parameter may be an index of a predefined set of suitable quantization step sizes. For example, a smaller quantization parameter may correspond to a fine quantization (smaller quantization step size) and a larger quantization parameter may correspond to a coarse quantization (larger quantization step size), or vice versa. Quantization may comprise division by a quantization step size, while the corresponding dequantization or inverse quantization, e.g., performed by inverse quantization unit 210, may comprise multiplication by the quantization step size. Embodiments according to some standards, such as HEVC, may use a quantization parameter to determine the quantization step size. In general, the quantization step size may be calculated based on the quantization parameter using a fixed point approximation of an equation that includes division. Additional scaling factors may be introduced for quantization and dequantization to recover the norm of the residual block that may be modified due to the scale used in the fixed point approximation of the equation for the quantization step size and quantization parameter. In one example implementation, the inverse transform and inverse quantization scales may be combined. Alternatively, a custom quantization table may be used and signaled from the encoder to the decoder, e.g., in a bitstream. Quantization is a lossy operation, where the larger the quantization step size, the greater the loss.
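As a worked example of the QP-to-step-size mapping mentioned above: in HEVC the quantization step size approximately doubles for every increase of 6 in QP, i.e. Qstep is about 2^((QP - 4) / 6). A sketch (the rounding offset is an assumed illustrative value, not a normative one):

```python
def qstep_from_qp(qp):
    """HEVC-style step size: doubles every 6 QP (QP 22 -> 8, QP 28 -> 16)."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp, offset=0.5):
    """Scalar quantization: divide by the step size and round."""
    qstep = qstep_from_qp(qp)
    sign = 1 if coeff >= 0 else -1
    return sign * int(abs(coeff) / qstep + offset)
```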
The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211, e.g., to apply, based on or using the same quantization step size as the quantization unit 208, the inverse of the quantization scheme applied by the quantization unit 208. The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 and correspond to the transform coefficients 207, although they are typically not identical to the transform coefficients because of the loss introduced by quantization.
The inverse transform processing unit 212 is configured to apply an inverse transform of the transform applied by the transform processing unit 206, for example, an inverse Discrete Cosine Transform (DCT) or an inverse Discrete Sine Transform (DST), to obtain an inverse transform block 213 in the sample domain. The inverse transform block 213 may also be referred to as an inverse transform dequantized block 213 or an inverse transform residual block 213.
The reconstruction unit 214 (e.g., summer 214) is used to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, e.g., to add sample values of the reconstructed residual block 213 to sample values of the prediction block 265.
Optionally, a buffer unit 216 (or simply "buffer" 216), such as a line buffer 216, is used to buffer or store the reconstructed block 215 and corresponding sample values, for example, for intra prediction. In other embodiments, the encoder may be used to use the unfiltered reconstructed block and/or corresponding sample values stored in buffer unit 216 for any class of estimation and/or prediction, such as intra prediction.
For example, an embodiment of encoder 20 may be configured such that buffer unit 216 is used not only to store reconstructed blocks 215 for intra prediction 254, but also for loop filter unit 220 (not shown in fig. 2), and/or such that buffer unit 216 and decoded picture buffer unit 230 form one buffer, for example. Other embodiments may be used to use filtered block 221 and/or blocks or samples from decoded picture buffer 230 (neither shown in fig. 2) as input or basis for intra prediction 254.
The loop filter unit 220 (or simply "loop filter" 220) is used to filter the reconstructed block 215 to obtain a filtered block 221, so as to smooth pixel transitions or otherwise improve video quality. Loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an Adaptive Loop Filter (ALF), or a sharpening or smoothing filter, or a collaborative filter. Although loop filter unit 220 is shown in fig. 2 as an in-loop filter, in other configurations, loop filter unit 220 may be implemented as a post-loop filter. The decoded picture buffer 230 may store the reconstructed encoded block after the loop filter unit 220 performs a filtering operation on it.
Embodiments of encoder 20 (correspondingly, loop filter unit 220) may be configured to output loop filter parameters (e.g., sample adaptive offset information), e.g., directly or after entropy encoding by entropy encoding unit 270 or any other entropy encoding unit, e.g., such that decoder 30 may receive and apply the same loop filter parameters for decoding.
Decoded Picture Buffer (DPB) 230 may be a reference picture memory that stores reference picture data for use by encoder 20 in encoding video data. DPB 230 may be formed from any of a variety of memory devices, such as Dynamic Random Access Memory (DRAM) including Synchronous DRAM (SDRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), or other types of memory devices. The DPB 230 and the buffer 216 may be provided by the same memory device or separate memory devices. In a certain example, a Decoded Picture Buffer (DPB) 230 is used to store filtered blocks 221. Decoded picture buffer 230 may further be used to store other previous filtered blocks, such as previous reconstructed and filtered blocks 221, of the same current picture or of a different picture, such as a previous reconstructed picture, and may provide the complete previous reconstructed, i.e., decoded picture (and corresponding reference blocks and samples) and/or the partially reconstructed current picture (and corresponding reference blocks and samples), e.g., for inter prediction. In a certain example, if reconstructed block 215 is reconstructed without in-loop filtering, Decoded Picture Buffer (DPB) 230 is used to store reconstructed block 215.
Prediction processing unit 260, also referred to as block prediction processing unit 260, is used to receive or obtain image block 203 (current image block 203 of current picture 201) and reconstructed picture data, e.g., reference samples of the same (current) picture from buffer 216 and/or reference picture data 231 of one or more previously decoded pictures from decoded picture buffer 230, and to process such data for prediction, i.e., to provide prediction block 265, which may be inter-predicted block 245 or intra-predicted block 255.
The mode selection unit 262 may be used to select a prediction mode (e.g., an intra or inter prediction mode) and/or a corresponding prediction block 245 or 255 to be used as the prediction block 265 for calculating the residual block 205 and for reconstructing the reconstructed block 215.
Embodiments of mode selection unit 262 may be used to select prediction modes (e.g., from those supported by prediction processing unit 260) that provide the best match or the smallest residual (smallest residual means better compression in transmission or storage), or that provide the smallest signaling overhead (smallest signaling overhead means better compression in transmission or storage), or both. The mode selection unit 262 may be configured to determine a prediction mode based on Rate Distortion Optimization (RDO), i.e., to select the prediction mode that provides the minimum rate-distortion cost, or to select a prediction mode whose associated rate distortion at least satisfies the prediction mode selection criterion.
The prediction processing performed by the example of the encoder 20 (e.g., by the prediction processing unit 260) and the mode selection performed (e.g., by the mode selection unit 262) will be explained in detail below.
As described above, the encoder 20 is configured to determine or select the best or optimal prediction mode from a set of (predetermined) prediction modes. The prediction mode set may include, for example, intra prediction modes and/or inter prediction modes.
The intra prediction mode set may include 35 different intra prediction modes, for example, non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in h.265, or may include 67 different intra prediction modes, for example, non-directional modes such as DC (or mean) mode and planar mode, or directional modes as defined in h.266 under development.
In possible implementations, the set of inter Prediction modes may include, for example, an Advanced Motion Vector Prediction (AMVP) mode and a merge mode, depending on the available reference pictures (i.e., at least partially decoded pictures stored in the DPB 230, for example, as described above) and other inter prediction parameters, e.g., depending on whether the entire reference picture or only a portion of the reference picture, such as a search window region around the current block, is used to search for the best matching reference block, and/or depending on whether pixel interpolation, such as half-pixel and/or quarter-pixel interpolation, is applied. In a specific implementation, the inter prediction mode set may include an improved control point-based AMVP mode and an improved control point-based merge mode according to an embodiment of the present application. In one example, inter prediction unit 244 may be used to perform any combination of the inter-prediction techniques described below.
In addition to the above prediction mode, embodiments of the present application may also apply a skip mode and/or a direct mode.
The prediction processing unit 260 may further be configured to partition the image block 203 into smaller block partitions or sub-blocks, for example, by iteratively using quad-tree (QT) partitions, binary-tree (BT) partitions, or triple-tree (TT) partitions, or any combination thereof, and to perform prediction, for example, for each of the block partitions or sub-blocks, wherein mode selection includes selecting a tree structure of the partitioned image block 203 and selecting a prediction mode to apply to each of the block partitions or sub-blocks.
The inter prediction unit 244 may include a Motion Estimation (ME) unit (not shown in fig. 2) and a Motion Compensation (MC) unit (not shown in fig. 2). The motion estimation unit is used to receive or obtain a picture image block 203 (current picture image block 203 of current picture 201) and a decoded picture 231, or at least one or more previously reconstructed blocks, e.g., reconstructed blocks of one or more other/different previously decoded pictures 231, for motion estimation. For example, the video sequence may comprise the current picture and a previously decoded picture 231, or in other words, the current picture and the previously decoded picture 231 may be part of, or form, the sequence of pictures forming the video sequence.
For example, the encoder 20 may be configured to select a reference block from a plurality of reference blocks of the same or different one of a plurality of other pictures and provide the reference picture and/or an offset (spatial offset) between a position (X, Y coordinates) of the reference block and a position of the current block to a motion estimation unit (not shown in fig. 2) as an inter prediction parameter. This offset is also called a Motion Vector (MV).
The motion compensation unit is configured to obtain inter-prediction parameters and perform inter-prediction based on or using the inter-prediction parameters to obtain an inter-prediction block 245. The motion compensation performed by the motion compensation unit (not shown in fig. 2) may involve taking or generating a prediction block based on a motion/block vector determined by motion estimation (possibly performing interpolation to sub-pixel precision). Interpolation filtering may generate additional pixel samples from known pixel samples, potentially increasing the number of candidate prediction blocks that may be used to encode a picture block. Upon receiving the motion vector for the PU of the current picture block, motion compensation unit 246 may locate the prediction block in one reference picture list to which the motion vector points. Motion compensation unit 246 may also generate syntax elements associated with the blocks and video slices for use by decoder 30 in decoding picture blocks of the video slices.
Specifically, the inter prediction unit 244 may transmit a syntax element including an inter prediction parameter (e.g., indication information for selecting an inter prediction mode for current block prediction after traversing a plurality of inter prediction modes) to the entropy encoding unit 270. In a possible application scenario, if there is only one inter prediction mode, the inter prediction parameters may not be carried in the syntax element, and the decoding end 30 can directly use the default prediction mode for decoding. It will be appreciated that the inter prediction unit 244 may be used to perform any combination of inter prediction techniques.
The intra prediction unit 254 is used to receive or obtain, for example, a picture block 203 (current picture block) of the same picture and one or more previously reconstructed blocks, e.g., reconstructed neighboring blocks, for intra estimation. For example, the encoder 20 may be configured to select an intra-prediction mode from a plurality of (predetermined) intra-prediction modes.
Embodiments of encoder 20 may be used to select an intra prediction mode based on optimization criteria, such as based on a minimum residual (e.g., an intra prediction mode that provides a prediction block 255 that is most similar to current picture block 203) or a minimum code rate distortion.
The intra-prediction unit 254 is further configured to determine the intra-prediction block 255 based on the intra-prediction parameters as the selected intra-prediction mode. In any case, after selecting the intra-prediction mode for the block, intra-prediction unit 254 is also used to provide intra-prediction parameters, i.e., information indicating the selected intra-prediction mode for the block, to entropy encoding unit 270. In one example, intra-prediction unit 254 may be used to perform any combination of intra-prediction techniques.
Specifically, the above-described intra prediction unit 254 may transmit a syntax element including an intra prediction parameter (such as indication information of selecting an intra prediction mode for current block prediction after traversing a plurality of intra prediction modes) to the entropy encoding unit 270. In a possible application scenario, if there is only one intra-prediction mode, the intra-prediction parameters may not be carried in the syntax element, and the decoding end 30 may directly use the default prediction mode for decoding.
Entropy encoding unit 270 is configured to apply an entropy encoding algorithm or scheme (e.g., a Variable Length Coding (VLC) scheme, a Context Adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, a Context Adaptive Binary Arithmetic Coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), Probability Interval Partitioning Entropy (PIPE) coding, or other entropy encoding methods or techniques) to individual or all of quantized residual coefficients 209, inter-prediction parameters, intra-prediction parameters, and/or loop filter parameters (or not) to obtain encoded picture data 21 that may be output by output 272 in the form of, for example, encoded bitstream 21. The encoded bitstream may be transmitted to video decoder 30, or archived for later transmission or retrieval by video decoder 30. Entropy encoding unit 270 may also be used to entropy encode other syntax elements of the current video slice being encoded.
Other structural variations of video encoder 20 may be used to encode the video stream. For example, the non-transform based encoder 20 may quantize the residual signal directly without the transform processing unit 206 for certain blocks or frames. In another embodiment, encoder 20 may have quantization unit 208 and inverse quantization unit 210 combined into a single unit.
Specifically, in the embodiment of the present application, the encoder 20 may be used to implement an entropy coding method of a syntax element described in the following embodiments.
It should be understood that other structural variations of the video encoder 20 may be used to encode the video stream. For example, for some image blocks or image frames, video encoder 20 may quantize the residual signal directly without processing by transform processing unit 206 and, correspondingly, without processing by inverse transform processing unit 212; alternatively, for some image blocks or image frames, the video encoder 20 does not generate residual data and accordingly does not need to be processed by the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212; alternatively, video encoder 20 may store the reconstructed image block directly as a reference block without processing by filter 220; alternatively, the quantization unit 208 and the inverse quantization unit 210 in the video encoder 20 may be merged together. The loop filter 220 is optional, and in the case of lossless compression coding, the transform processing unit 206, the quantization unit 208, the inverse quantization unit 210, and the inverse transform processing unit 212 are optional. It should be appreciated that the inter prediction unit 244 and the intra prediction unit 254 may be selectively enabled according to different application scenarios.
Referring to fig. 3, fig. 3 shows a schematic/conceptual block diagram of an example of a decoder 30 for implementing embodiments of the present application. Video decoder 30 is operative to receive encoded picture data (e.g., an encoded bitstream) 21, e.g., encoded by encoder 20, to obtain a decoded picture 331. During the decoding process, video decoder 30 receives video data, such as an encoded video bitstream representing picture blocks of an encoded video slice and associated syntax elements, from video encoder 20.
In the example of fig. 3, decoder 30 includes entropy decoding unit 304, inverse quantization unit 310, inverse transform processing unit 312, reconstruction unit 314 (e.g., summer 314), buffer 316, loop filter 320, decoded picture buffer 330, and prediction processing unit 360. The prediction processing unit 360 may include an inter prediction unit 344, an intra prediction unit 354, and a mode selection unit 362. In some examples, video decoder 30 may perform a decoding pass that is substantially reciprocal to the encoding pass described with reference to video encoder 20 of fig. 2.
Entropy decoding unit 304 is configured to perform entropy decoding on encoded picture data 21 to obtain, for example, quantized coefficients 309 and/or decoded coding parameters (not shown in fig. 3), such as any or all of inter prediction parameters, intra prediction parameters, loop filter parameters, and/or other syntax elements (decoded). The entropy decoding unit 304 is further configured to forward the inter prediction parameters, intra prediction parameters, and/or other syntax elements to the prediction processing unit 360. Video decoder 30 may receive syntax elements at the video slice level and/or the video block level.
Inverse quantization unit 310 may be functionally identical to inverse quantization unit 210, inverse transform processing unit 312 may be functionally identical to inverse transform processing unit 212, reconstruction unit 314 may be functionally identical to reconstruction unit 214, buffer 316 may be functionally identical to buffer 216, loop filter 320 may be functionally identical to loop filter 220, and decoded picture buffer 330 may be functionally identical to decoded picture buffer 230.
Prediction processing unit 360 may include inter prediction unit 344 and intra prediction unit 354, where inter prediction unit 344 may be functionally similar to inter prediction unit 244 and intra prediction unit 354 may be functionally similar to intra prediction unit 254. The prediction processing unit 360 is typically used to perform block prediction and/or to obtain a prediction block 365 from the encoded data 21, as well as to receive or obtain (explicitly or implicitly) prediction related parameters and/or information about the selected prediction mode from, for example, the entropy decoding unit 304.
When the video slice is encoded as an intra-coded (I) slice, intra-prediction unit 354 of prediction processing unit 360 is used to generate a prediction block 365 for the picture block of the current video slice based on the signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When a video frame is encoded as an inter-coded (i.e., B or P) slice, inter prediction unit 344 (e.g., a motion compensation unit) of prediction processing unit 360 is used to generate a prediction block 365 for the video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 304. For inter prediction, a prediction block may be generated from one reference picture within one reference picture list. Video decoder 30 may construct the reference frame lists, list 0 and list 1, using default construction techniques based on the reference pictures stored in DPB 330.
Prediction processing unit 360 is used to determine prediction information for the video blocks of the current video slice by parsing the motion vectors and other syntax elements, and to generate a prediction block for the current video block being decoded using the prediction information. In an example of the present application, prediction processing unit 360 uses some of the syntax elements received to determine a prediction mode (e.g., intra or inter prediction) for encoding video blocks of a video slice, an inter prediction slice type (e.g., B-slice, P-slice, or GPB-slice), construction information for one or more of a reference picture list of the slice, a motion vector for each inter-coded video block of the slice, an inter prediction state for each inter-coded video block of the slice, and other information to decode video blocks of a current video slice. In another example of the present disclosure, the syntax elements received by video decoder 30 from the bitstream include syntax elements received in one or more of an Adaptive Parameter Set (APS), a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), or a slice header.
Inverse quantization unit 310 may be used to inverse quantize (i.e., inverse quantize) the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 304. The inverse quantization process may include using quantization parameters calculated by video encoder 20 for each video block in the video slice to determine the degree of quantization that should be applied and likewise the degree of inverse quantization that should be applied.
Inverse transform processing unit 312 is used to apply an inverse transform (e.g., an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process) to the transform coefficients in order to produce a block of residuals in the pixel domain.
The reconstruction unit 314 (e.g., summer 314) is used to add the inverse transform block 313 (i.e., reconstructed residual block 313) to the prediction block 365 to obtain the reconstructed block 315 in the sample domain, e.g., by adding sample values of the reconstructed residual block 313 to sample values of the prediction block 365.
Loop filter unit 320 (either within the decoding loop or after it) is used to filter reconstructed block 315 to obtain filtered block 321, so as to smooth pixel transitions or otherwise improve video quality. In one example, loop filter unit 320 may be used to perform any combination of the filtering techniques described below. Loop filter unit 320 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an Adaptive Loop Filter (ALF), or a sharpening or smoothing filter, or a collaborative filter. Although loop filter unit 320 is shown in fig. 3 as an in-loop filter, in other configurations, loop filter unit 320 may be implemented as a post-loop filter.
Decoded video block 321 in a given frame or picture is then stored in decoded picture buffer 330, which stores reference pictures for subsequent motion compensation.
Decoder 30 is used to output decoded picture 331, e.g., via output 332, for presentation to or viewing by a user.
Other variations of video decoder 30 may be used to decode the compressed bitstream. For example, decoder 30 may generate an output video stream without loop filter unit 320. For example, the non-transform based decoder 30 may directly inverse quantize the residual signal without the inverse transform processing unit 312 for certain blocks or frames. In another embodiment, video decoder 30 may have inverse quantization unit 310 and inverse transform processing unit 312 combined into a single unit.
Specifically, in the present embodiment, the decoder 30 is configured to implement an entropy decoding method of a syntax element described in the following embodiments.
It should be understood that other structural variations of the video decoder 30 may be used to decode the encoded video bitstream. For example, video decoder 30 may generate an output video stream without processing by filter 320; alternatively, for some image blocks or image frames, the quantized coefficients are not decoded by entropy decoding unit 304 of video decoder 30 and, accordingly, do not need to be processed by inverse quantization unit 310 and inverse transform processing unit 312. Loop filter 320 is optional; and the inverse quantization unit 310 and the inverse transform processing unit 312 are optional for the case of lossless compression. It should be understood that the inter prediction unit and the intra prediction unit may be selectively enabled according to different application scenarios.
It should be understood that, in the encoder 20 and the decoder 30 of the present application, the processing result of a certain link may be further processed and then output to the next link, for example, after the links such as interpolation filtering, motion vector derivation, or loop filtering, the processing result of the corresponding link is further subjected to operations such as Clip or shift.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a video coding apparatus 400 (e.g., a video encoding apparatus 400 or a video decoding apparatus 400) provided by an embodiment of the present application. Video coding apparatus 400 is suitable for implementing the embodiments described herein. In one embodiment, video coding device 400 may be a video decoder (e.g., decoder 30 of fig. 1A) or a video encoder (e.g., encoder 20 of fig. 1A). In another embodiment, video coding device 400 may be one or more components of decoder 30 of fig. 1A or encoder 20 of fig. 1A described above.
Video coding apparatus 400 includes: an ingress port 410 and a receiver unit (Rx) 420 for receiving data, a processor, logic unit or Central Processing Unit (CPU) 430 for processing data, a transmitter unit (Tx) 440 and an egress port 450 for transmitting data, and a memory 460 for storing data. Video coding device 400 may also include optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled with ingress port 410, receiver unit 420, transmitter unit 440, and egress port 450 for the egress or ingress of optical or electrical signals.
The processor 430 is implemented by hardware and software. Processor 430 may be implemented as one or more CPU chips, cores (e.g., multi-core processors), FPGAs, ASICs, and DSPs. Processor 430 is in communication with inlet port 410, receiver unit 420, transmitter unit 440, outlet port 450, and memory 460. Processor 430 includes a coding module 470 (e.g., encoding module 470 or decoding module 470). The encoding/decoding module 470 implements embodiments disclosed herein to implement the chroma block prediction methods provided by embodiments of the present application. For example, the encoding/decoding module 470 implements, processes, or provides various encoding operations. Accordingly, substantial improvements are provided to the functionality of the video coding apparatus 400 by the encoding/decoding module 470 and affect the transition of the video coding apparatus 400 to different states. Alternatively, the encode/decode module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
The memory 460, which may include one or more disks, tape drives, and solid state drives, may be used as an overflow data storage device for storing programs when such programs are selectively executed, and for storing instructions and data that are read during program execution. The memory 460 may be volatile and/or nonvolatile, and may be read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random access memory (SRAM).
Referring to fig. 5, fig. 5 is a simplified block diagram of an apparatus 500 that may be used as either or both of source device 12 and destination device 14 in fig. 1A according to an example embodiment. Apparatus 500 may implement the techniques of this application. In other words, fig. 5 is a schematic block diagram of an implementation manner of an encoding apparatus or a decoding apparatus (simply referred to as a coding device 500) of the embodiment of the present application. The coding device 500 may include a processor 510, a memory 530, and a bus system 550. The processor is connected to the memory through the bus system; the memory is used to store instructions; and the processor is used to execute the instructions stored in the memory. The memory of the coding device stores program code, and the processor may invoke the program code stored in the memory to perform the various video encoding or decoding methods described herein, in particular the various new entropy encoding/decoding methods of syntax elements. To avoid repetition, details are not described here again.
In the embodiment of the present application, the processor 510 may be a Central Processing Unit (CPU), and the processor 510 may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 530 may include a Read Only Memory (ROM) device or a Random Access Memory (RAM) device. Any other suitable type of memory device may also be used for memory 530. Memory 530 may include code and data 531 to be accessed by processor 510 using bus 550. Memory 530 may further include an operating system 533 and application programs 535, the application programs 535 including at least one program that allows processor 510 to perform the video encoding or decoding methods described herein, and in particular the entropy encoding/decoding methods of syntax elements described herein.
The bus system 550 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For clarity of illustration, however, the various buses are designated in the figure as bus system 550.
Optionally, the coding device 500 may also include one or more output devices, such as a display 570. In one example, the display 570 may be a touch-sensitive display that incorporates a display with a touch-sensing unit operable to sense touch input. The display 570 may be connected to the processor 510 via the bus 550.
The scheme of the embodiment of the application is explained in detail as follows:
Fig. 6 is a flowchart of an entropy encoding method for a syntax element according to an embodiment of the present application. This process 600 may be performed by video encoder 20. Process 600 is described as a series of steps or operations; it should be understood that process 600 may be performed in various orders and/or concurrently and is not limited to the order of execution shown in fig. 6. As shown in fig. 6, the entropy encoding method of the syntax element includes:
Step 601: determine whether the length of the current fusion candidate motion vector list is greater than a preset value.
The length of the fusion candidate motion vector list refers to the number of candidate Motion Vectors (MVs) included in the fusion candidate motion vector list (this number is denoted, for example, by MaxNumMergeCand). The merge mode with motion vector difference (MMVD) technique uses the fusion candidate motion vector list: it first selects a candidate MV from the list as a base MV and then performs extended MV expression based on that base MV, where the extended expression comprises three elements, namely the MV starting point, the motion step (Distance), and the motion direction (Direction).
(1) Selecting candidate MVs
The selected candidate MV is the MV starting point; that is, the selected candidate MV is used to determine the initial position of the MV, using the existing fusion candidate motion vector list. Referring to Table 1, the base candidate index (Base candidate IDX) indicates the index of a candidate MV in the fusion candidate motion vector list, and the Nth MVP denotes the Nth MV in the list. Base candidate IDX = 0 corresponds to the first MV in the candidate motion vector list, Base candidate IDX = 1 to the second MV, Base candidate IDX = 2 to the third MV, and Base candidate IDX = 3 to the fourth MV. The selected candidate MV is thus represented by Base candidate IDX; if the number of candidate MVs available for selection in the fusion candidate motion vector list is 1, Base candidate IDX need not be signalled.
TABLE 1
Base candidate IDX    0          1          2          3
Nth MVP               1st MVP    2nd MVP    3rd MVP    4th MVP
(2) Determining step size
The step size identifier (Distance IDX) represents the offset distance of the MV. Referring to Table 2, Distance IDX indicates the index of the pixel distance from the initial position (the MV starting point). Distance IDX = 0 corresponds to a pixel distance of 1/4-pel (one quarter pixel), 1 to 1/2-pel (one half pixel), 2 to 1-pel (one pixel), 3 to 2-pel (two pixels), 4 to 4-pel (four pixels), 5 to 8-pel (eight pixels), 6 to 16-pel (sixteen pixels), and 7 to 32-pel (thirty-two pixels). The corresponding pixel distance is thus represented by Distance IDX.
TABLE 2
Distance IDX      0         1         2       3       4       5       6        7
Pixel distance    1/4-pel   1/2-pel   1-pel   2-pel   4-pel   8-pel   16-pel   32-pel
(3) Determining direction
The direction identifier (Direction IDX) indicates the offset direction of the MV. Referring to Table 3, Direction IDX indicates the direction of the MV offset relative to the initial position (the MV starting point); x-axis denotes the component of the offset direction on the x-axis, and y-axis denotes the component on the y-axis. Direction IDX = 00 indicates an offset in the positive x direction with no y-axis component; 01 indicates an offset in the negative x direction with no y-axis component; 10 indicates an offset in the positive y direction with no x-axis component; and 11 indicates an offset in the negative y direction with no x-axis component. The corresponding offset direction is thus represented by Direction IDX.
TABLE 3
Direction IDX    00     01     10     11
x-axis           +      -      N/A    N/A
y-axis           N/A    N/A    +      -
The process of determining the predicted pixel value of the current block using the MMVD technique is as follows: first, the MV starting point is determined according to Base candidate IDX; then the offset direction relative to the MV starting point is determined according to Direction IDX; finally, the pixel distance by which the MV starting point is offset in the direction indicated by Direction IDX is determined according to Distance IDX. For example, Base candidate IDX = 0, Direction IDX = 00, and Distance IDX = 2 indicate that the MV of the current block is the motion vector obtained by taking the first MV in the fusion candidate motion vector list as the MV starting point and offsetting it by one pixel in the positive x direction; the predicted pixel value of the current block is then predicted or obtained from this MV, as sketched below.
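As an illustration of the derivation just described (not a normative implementation), the following Python sketch combines Tables 1 to 3; MVs are represented here as (x, y) pairs in pixel units, and the merge list contents and the function name are assumptions made for clarity.

    # Illustrative MMVD motion vector derivation.
    PIXEL_DISTANCE = [0.25, 0.5, 1, 2, 4, 8, 16, 32]    # Table 2
    DIRECTION = {0b00: (+1, 0), 0b01: (-1, 0),          # Table 3
                 0b10: (0, +1), 0b11: (0, -1)}

    def mmvd_motion_vector(merge_list, base_idx, distance_idx, direction_idx):
        # Step 1: the MV starting point is the candidate selected by
        # Base candidate IDX (Table 1).
        base_x, base_y = merge_list[base_idx]
        # Steps 2 and 3: offset the starting point by the signalled pixel
        # distance in the signalled direction.
        dx, dy = DIRECTION[direction_idx]
        dist = PIXEL_DISTANCE[distance_idx]
        return (base_x + dx * dist, base_y + dy * dist)

    # The example from the text: Base candidate IDX = 0, Direction IDX = 00,
    # Distance IDX = 2 -> one pixel in the positive x direction.
    mv = mmvd_motion_vector([(3, -1), (0, 2)], 0, 2, 0b00)
    assert mv == (4, -1)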
In the existing VVC draft, the syntax element mmvd_cand_flag[x0][y0] is used to indicate which candidate MV is selected as the base MV from any two candidate MVs in the fusion candidate motion vector list (e.g., the first and second candidate MVs, the first and third candidate MVs, or the third and sixth candidate MVs in the list). In general, mmvd_cand_flag[x0][y0] = 0 indicates that the first of the two candidate MVs is selected, and mmvd_cand_flag[x0][y0] = 1 indicates that the second is selected. If the value of the syntax element does not appear in the bitstream, mmvd_cand_flag[x0][y0] defaults to 0. (x0, y0) denotes the position of the current block in the current image, i.e., the coordinate position of the top-left pixel of the current block relative to the top-left pixel of the current image.
As described above, the first syntax element mmvd_cand_flag is used to indicate that the first candidate MV or the second candidate MV is selected from the fusion candidate motion vector list as the base MV of motion vector expansion, where the first candidate MV and the second candidate MV are any two candidate MVs in the fusion candidate motion vector list. It can be seen that the first syntax element mmvd_cand_flag is meaningful only when the number of candidate MVs available for selection in the fusion candidate motion vector list is greater than a preset value (e.g., 1); the present application therefore first determines whether the length of the current fusion candidate motion vector list is greater than the preset value.
It should be noted that the first syntax element mmvd_cand_flag may also be used to indicate that any candidate MV is selected as the base MV of motion vector expansion from any N candidate MVs in the fusion candidate motion vector list, where N > 1 and N <= MaxNumMergeCand. For example, if the fusion candidate motion vector list includes 6 candidate MVs, i.e., MaxNumMergeCand = 6, the indices of the six candidate MVs are 0, 1, 2, ..., 5. When N = 6, the value of mmvd_cand_flag may be any one of 0 to 5. When N = 3 and the indices of the N candidate MVs are 0, 1, and 2, the value of mmvd_cand_flag may be any one of 0 to 2.
Step 602: if the length of the fusion candidate motion vector list is greater than the preset value, entropy encode the value of the first syntax element using a bypass coding mode.
CABAC is a commonly used entropy coding technique for encoding and decoding syntax element values. CABAC processing mainly includes binarization, context modeling, and binary arithmetic coding. Binarization converts an input non-binary syntax element value into a unique binary sequence (i.e., a bin string). Context modeling means that, for each bin in the bin string, a probability model is determined according to context information (e.g., coding information in the reconstructed region around the block corresponding to the syntax element). Binary arithmetic coding means that each bin is encoded according to the probability value in its probability model, and the probability value in the probability model is then updated according to the value of the bin.
The basic principle of arithmetic coding is as follows: the interval [0, 1) is divided into non-overlapping subintervals according to the occurrence probabilities of the different values (i.e., 0 or 1) of the bins in the bin string, where the width of each subinterval is exactly the probability of the corresponding value, so that the different bin values correspond one-to-one to the subintervals; the subintervals are then mapped recursively until a final small interval is obtained, from which a representative value is selected, converted to binary, and output as the actual code. Statistically, the closer the probability of a bin being 1 is to 0.5, the more bits are needed to encode that bin; the closer the probability is to 0 or 1, the fewer bits are needed. The sketch below illustrates this recursive interval subdivision.
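The following floating-point sketch illustrates the recursive subdivision for a binary source with a fixed probability of the value 1; it is purely conceptual, since actual CABAC operates on fixed-point registers rather than real numbers, and the function name is an assumption.

    # Conceptual binary arithmetic encoder: each bin narrows [low, high)
    # to the subinterval whose width equals the bin's probability.

    def arithmetic_encode(bins, p1):
        # p1 is the (fixed) probability of a bin being 1.
        low, high = 0.0, 1.0
        for b in bins:
            split = low + (high - low) * (1.0 - p1)
            if b == 0:
                high = split   # 0 takes the lower subinterval of width 1-p1
            else:
                low = split    # 1 takes the upper subinterval of width p1
        # Any number inside the final interval identifies the whole bin
        # string; a narrower final interval needs more bits to describe.
        return (low + high) / 2

    code_value = arithmetic_encode([1, 0, 1, 1], p1=0.5)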
CABAC provides two different entropy coding methods: the regular coding mode (regular coding mode), which performs context modeling (context coding) followed by binary arithmetic coding (binary arithmetic coding), and the bypass coding mode (Bypass coding mode), which skips context modeling.
The bypass coding mode assumes that the binary symbols 0 and 1 each occur with a fixed probability of 0.5. Compared with the regular coding mode: first, the probability estimation and update process is omitted, i.e., no context modeling and no context model update are needed; second, the subdivision of the probability interval is simplified, since the interval only needs to be divided equally, whereas the regular coding mode must subdivide the current probability interval according to the estimated probability. The bypass coding mode can thus be seen as a special case of the regular coding mode.
The process of entropy encoding the value of the first syntax element (mmvd_cand_flag) using the bypass coding mode includes:
(1) binarization
Binarizing the value of the first syntax element mmvd_cand_flag yields a bin string comprising one or more bins. For example, the value of mmvd_cand_flag is 0 or 1, so its binarization may yield a single bin, where bin 0 indicates that the value of mmvd_cand_flag is 0 and bin 1 indicates that the value is 1. Alternatively, binarizing mmvd_cand_flag may yield two bins, where 00 indicates the value 0 and 01 indicates the value 1. Binarization of mmvd_cand_flag may also yield a variable number of bins, with the bin string 0 indicating the value 0 and the bin string 10 indicating the value 1. It should be noted that other methods may also be used in the present application to binarize the value of mmvd_cand_flag, which is not specifically limited; two of these options are sketched below.
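For illustration only, the sketch below implements two of the binarization options just mentioned, a fixed-length code and a truncated unary code; the helper names and the c_max parameter are hypothetical.

    def binarize_fixed_length(value: int, num_bits: int) -> str:
        # Fixed-length binarization: value 1 with num_bits = 2 gives "01".
        return format(value, "0{}b".format(num_bits))

    def binarize_truncated_unary(value: int, c_max: int) -> str:
        # Truncated unary: 'value' ones followed by a terminating zero,
        # except that the terminating zero is dropped when value == c_max.
        if value == c_max:
            return "1" * value
        return "1" * value + "0"

    assert binarize_fixed_length(1, 1) == "1"     # single-bin variant
    assert binarize_fixed_length(1, 2) == "01"    # two-bin variant
    assert binarize_truncated_unary(0, 2) == "0"  # bin string 0 -> value 0
    assert binarize_truncated_unary(1, 2) == "10" # bin string 10 -> value 1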
In another embodiment, the binarization process may be omitted.
(2) Entropy coding using a bypass coding mode
a. Determining probability values of bits
In the present application, since the bypass coding mode is adopted, there is no need to assign a specific probability model to each bin in the bin string obtained in the previous step; instead, the probabilities of each bin being 0 and being 1 are considered equal, i.e., 0.5 each.
b. Partitioning probability intervals according to probability
Illustratively, the initial probability interval is [0, 1), and it can be divided into two sub-probability intervals according to the probability 0.5: the first probability interval [0, 0.5) and the second probability interval [0.5, 1), where mmvd_cand_flag = 0 corresponds to the first probability interval and mmvd_cand_flag = 1 corresponds to the second probability interval.
Illustratively, the probability interval can be extended by extending the length of the initial probability interval [0, 1) to 510, represented by a 9-bit binary number, i.e., [0, 2^9). Thus, the first probability interval corresponding to mmvd_cand_flag = 0 is [0, 2^8), and the second probability interval corresponding to mmvd_cand_flag = 1 is [2^8, 2^9).
c. Encoding process
The coding string of the bin string is obtained from the first and second probability intervals. For example, when the length of the probability interval becomes smaller than one half of the full interval length, a renormalization operation may be performed: the probability interval length is shifted left by one bit until it again exceeds one half of the full interval length, the left boundary of the probability interval is likewise shifted left by one bit, and the most significant bit is output, thereby obtaining the coding string of mmvd_cand_flag.
For example, the bin string for mmvd_cand_flag = 0 is 0; shifting the left boundary 000000000 of the first probability interval left by one bit and outputting the most significant bit 0 yields the coding string for the value 0 of mmvd_cand_flag. The bin string for mmvd_cand_flag = 1 is 1; the largest value in the second probability interval, 111111111 (i.e., 2^9 - 1), is determined by its left boundary 100000000 and its length 2^8, and shifting left by one bit and outputting the most significant bit 1 yields the coding string for the value 1 of mmvd_cand_flag. A simplified sketch of this single-bin bypass encoding follows.
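The single-bin example above can be condensed as follows; this is a deliberately simplified sketch in which the running low/range registers and the renormalization machinery of a real CABAC engine are collapsed into one direct bit emission, so it is conceptual rather than normative.

    # Simplified bypass encoding of one bin over the extended interval
    # [0, 2**9): the interval is split equally, and the most significant
    # bit of the selected subinterval's left boundary is the output bit.
    HALF = 1 << 8    # 2**8, boundary between the two equal subintervals

    def bypass_encode_bin(bin_val: int) -> int:
        # bin 0 -> subinterval [0, 2**8), left boundary 000000000, MSB 0.
        # bin 1 -> subinterval [2**8, 2**9), left boundary 100000000, MSB 1.
        left_boundary = 0 if bin_val == 0 else HALF
        return (left_boundary >> 8) & 1   # the output bit equals the bin

    assert bypass_encode_bin(0) == 0
    assert bypass_encode_bin(1) == 1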
It should be noted that, in the present application, entropy encoding the first syntax element mmvd_cand_flag is understood as encoding the bins in the bin string of the value of mmvd_cand_flag, specifically, encoding each bin according to its probability value.
By entropy encoding the value of the first syntax element mmvd_cand_flag in the bypass coding mode, the present application avoids assigning a specific probability model to the bins in the bin string of mmvd_cand_flag, thereby reducing encoding complexity.
In one possible implementation, if the length of the fusion candidate motion vector list is greater than the preset value, a switch flag (byPassFlag) indicating the bypass coding mode is set to a first value, and the value of the first syntax element is entropy encoded using the bypass coding mode.
In the encoder, the bypass coding mode is started for entropy encoding the value of the first syntax element when the encoder reads that the switch flag of the bypass coding mode is set to the first value. byPassFlag acts as a switch controlling whether entropy encoding uses the bypass coding mode: for example, byPassFlag takes the value 0 or 1, where byPassFlag = 0 indicates that entropy encoding is not performed in the bypass coding mode and byPassFlag = 1 indicates that it is. The value of mmvd_cand_flag is therefore entropy encoded in the bypass coding mode only when byPassFlag is 1; when byPassFlag is 0, the value of mmvd_cand_flag is not entropy encoded in the bypass coding mode and may, for example, be entropy encoded in the regular coding mode. A sketch of this encoder-side flow is given below.
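A minimal sketch of the encoder-side flow, assuming a preset value of 1; the encoder object, its by_pass_flag attribute, and its encode_bypass method are hypothetical names, not the API of any real codec library.

    PRESET_VALUE = 1   # assumed preset value from the text

    def write_mmvd_cand_flag(encoder, mmvd_cand_flag: int,
                             max_num_merge_cand: int) -> None:
        # Step 601: check the length of the current fusion candidate MV list.
        if max_num_merge_cand > PRESET_VALUE:
            # Step 602: set the bypass switch flag to the first value and
            # entropy-encode the syntax element in the bypass coding mode.
            encoder.by_pass_flag = 1
            encoder.encode_bypass(mmvd_cand_flag)
        # Otherwise mmvd_cand_flag is not signalled in the bitstream and
        # the decoder infers the default value 0.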
Fig. 7 is a flowchart of an entropy decoding method for a syntax element according to an embodiment of the present application. This process 700 may be performed by video decoder 30. Process 700 is described as a series of steps or operations; it should be understood that process 700 may be performed in various orders and/or concurrently and is not limited to the order of execution shown in fig. 7. As shown in fig. 7, the entropy decoding method of a syntax element includes:
Step 701: determine whether the length of the current fusion candidate motion vector list is greater than a preset value.
The length of the merge candidate motion vector list refers to the number of candidate MVs included in the merge candidate motion vector list (this number is indicated by MaxNumMergeCand, for example).
Step 702: if the length of the fusion candidate motion vector list is greater than the preset value, entropy decode the coded string of the first syntax element using a bypass decoding mode.
The first syntax element is used for indicating that a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list to be used as a basic motion vector of motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
Similar to the method embodiment shown in fig. 6, for the first syntax element mmvd_cand_flag this embodiment uses the bypass decoding mode for entropy decoding, so that no specific probability model needs to be assigned to the bins in the bin string of mmvd_cand_flag, thereby reducing decoding complexity.
The difference from the method embodiment shown in fig. 6 is that this embodiment entropy decodes the coded string of mmvd_cand_flag in the bitstream to obtain the value of mmvd_cand_flag.
Illustratively, the process of entropy decoding the coded string of the first syntax element (mmvd_cand_flag) using the bypass decoding mode includes:
(1) obtaining a first engine parameter and a second engine parameter
The first engine parameter indicates the length of the probability interval, and the second engine parameter indicates the left boundary of the probability interval. Illustratively, the first engine parameter is ivlCurrRange and the second engine parameter is ivlOffset. As described above, the probability interval [0, 1) is extended to a length of 510, represented by a 9-bit binary number, i.e., [0, 2^9). ivlCurrRange is initialized to 510, and ivlOffset is initialized to the unsigned integer obtained from read_bits(9), i.e., a 9-bit binary number (e.g., 000000000) read from the bitstream.
(2) Update the value of ivlOffset and obtain the value of binVal (i.e., mmvd_cand_flag)
a. The value of ivlOffset is multiplied by 2, i.e., shifted left by one bit; then a 1-bit binary number is obtained using read_bits(1), and this bit is bitwise ORed into the shifted value to obtain the new value of ivlOffset.
b. The new value of ivlOffset is compared with the value of ivlCurrRange:
if the new ivlOffset value is greater than or equal to ivlCurrRange, the value of binVal is set to 1 and the value of ivlCurrRange is subtracted from ivlOffset;
otherwise, the value of binVal is set to 0. A sketch of this procedure is given after these steps.
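The following is a sketch of steps a and b above (cf. the DecodeBypass process of CABAC); the reader object and its read_bits method are assumed to exist and are not taken from a specific library.

    def decode_bypass(reader, ivl_curr_range: int, ivl_offset: int):
        # Step a: shift ivlOffset left by one bit and OR in the next
        # bitstream bit to obtain the new value of ivlOffset.
        ivl_offset = (ivl_offset << 1) | reader.read_bits(1)
        # Step b: compare the new ivlOffset with ivlCurrRange.
        if ivl_offset >= ivl_curr_range:
            bin_val = 1
            ivl_offset -= ivl_curr_range  # keep the offset in the interval
        else:
            bin_val = 0
        return bin_val, ivl_offset

After initialization (ivlCurrRange = 510 and ivlOffset obtained from read_bits(9)), the value of mmvd_cand_flag is recovered by decoding a single bypass bin in this way whenever the length of the fusion candidate motion vector list exceeds the preset value.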
By entropy decoding the coding string of the first syntax element mmvd_cand_flag in the bypass decoding mode, the present application avoids assigning a specific probability model to the bins in the bin string of mmvd_cand_flag, thereby reducing decoding complexity.
In one possible implementation, if the length of the fusion candidate motion vector list is greater than the preset value, a switch flag (byPassFlag) indicating the bypass decoding mode is set to a first value, and the value of the first syntax element is entropy decoded using the bypass decoding mode.
In the decoder, the bypass decoding mode is started for entropy decoding the value of the first syntax element when the decoder reads that the switch flag of the bypass decoding mode is set to the first value. byPassFlag acts as a switch controlling whether entropy decoding uses the bypass decoding mode: for example, byPassFlag takes the value 0 or 1, where byPassFlag = 0 indicates that entropy decoding is not performed in the bypass decoding mode and byPassFlag = 1 indicates that it is. The value of mmvd_cand_flag is therefore entropy decoded in the bypass decoding mode only when byPassFlag is 1; when byPassFlag is 0, the value of mmvd_cand_flag is not entropy decoded in the bypass decoding mode and may, for example, be entropy decoded in the regular decoding mode.
Illustratively, simulation experiments for the solution of the present application were performed on the VTM-5.0 reference software, with the results shown in Tables 4 and 5. The experiments compare the bypass coding/decoding mode with the regular coding/decoding mode in the process of entropy encoding/decoding the value of mmvd_cand_flag. Under the Random Access configuration of the video sequences, where Y, U, and V denote the three components of the video data format (Y the luma component, U and V the chroma components), the performance gain of coding/decoding in the bypass mode relative to the regular mode can be simulated for different test sequence classes (e.g., Class A1, Class A2, Class B, Class C, Class E), and the average simulation result (Overall) shows that performance is essentially unchanged. The encoding time (EncT) and decoding time (DecT) can likewise be simulated for the different test sequence classes to compare the time required for coding/decoding in the bypass mode with that of the regular mode, and the average simulation result shows that the processing time is reduced.
Under the Low Delay configuration of the video sequences, where Y, U, and V again denote the three components of the video signal, the performance gain of coding/decoding in the bypass mode relative to the regular mode can be simulated for the different test sequence classes, and the average simulation result (Overall) shows a certain performance improvement. The encoding time (EncT) and decoding time (DecT) can also be simulated for the different test sequence classes to compare the time required for coding/decoding in the bypass mode with that of the regular mode, and the average simulation result again shows that the processing time is reduced.
TABLE 4 (Random Access configuration; the table is reproduced as an image in the original publication)
TABLE 5 (Low Delay configuration; the table is reproduced as an image in the original publication)
It should be noted that the simulation results are an exemplary illustration of entropy encoding/decoding a video sequence after the bypass coding/decoding mode is adopted for the first syntax element mmvd_cand_flag; they cannot serve as the sole proof of the implementation effect of the technical solution of the present application.
Based on the same inventive concept as the method, the embodiment of the application also provides an entropy coding device. Fig. 8 is a block diagram of an entropy encoding apparatus 800 for implementing an embodiment of the present application, where the entropy encoding apparatus 800 includes a determining module 801 and an encoding module 802, where:
a judging module 801, configured to judge whether a length of the current fusion candidate motion vector list is greater than a preset value; an encoding module 802, configured to perform entropy encoding on a value of a first syntax element in a bypass encoding mode if the length of the fusion candidate motion vector list is greater than the preset value, where the first syntax element is used to indicate that a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
In a possible implementation manner, the encoding module 802 is further configured to set a switch identifier indicating the bypass encoding mode to a first value if the length of the fusion candidate motion vector list is greater than the preset value, and perform entropy encoding on the value of the first syntax element by using the bypass encoding mode.
In one possible implementation, the first syntax element is mmvd_cand_flag.
It should be noted that the above-mentioned determining module 801 and the encoding module 802 can be applied to an entropy encoding process at an encoding end. Specifically, at the encoding side, these modules may be applied to the entropy encoding unit 270 of the aforementioned encoder 20.
It should be further noted that, for the specific implementation process of the determining module 801 and the encoding module 802, reference may be made to the detailed description of the embodiment in fig. 6, and for simplicity of the description, details are not repeated here.
Based on the same inventive concept as the method, an embodiment of the present application further provides an entropy decoding apparatus. Fig. 9 is a block diagram of an entropy decoding apparatus 900 for implementing an embodiment of the present application, where the entropy decoding apparatus 900 includes a judging module 901 and a decoding module 902:
A judging module 901, configured to judge whether the length of the current fusion candidate motion vector list is greater than a preset value; and a decoding module 902, configured to perform entropy decoding on the value of a first syntax element by adopting a bypass coding mode if the length of the fusion candidate motion vector list is greater than the preset value, where the first syntax element is used to indicate whether a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
In a possible implementation manner, the decoding module 902 is further configured to set a switch identifier indicating the bypass coding mode to a first value if the length of the fusion candidate motion vector list is greater than the preset value, and perform entropy decoding on the value of the first syntax element by using the bypass coding mode.
In one possible implementation, the first syntax element is mmvd_cand_flag.
It should be noted that the judging module 901 and the decoding module 902 described above can be applied to the entropy decoding process at the decoding end. Specifically, at the decoding end, these modules may be applied to the entropy decoding unit 304 of the aforementioned decoder 30.
It should be further noted that, for the specific implementation process of the judging module 901 and the decoding module 902, reference may be made to the detailed description of the embodiment of Fig. 7; for brevity, details are not repeated here.
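A corresponding decoder-side sketch for the judging module 901 and the decoding module 902 is given below. The BinDecoder interface, the function name, and the inference of the flag as 0 when it is not signalled are illustrative assumptions, not text of this application or an actual decoder API.

```cpp
// Hypothetical decoder-side counterpart for the apparatus of Fig. 9.
struct BinDecoder {
    virtual int decodeBypass() = 0;  // read one equiprobable (bypass) bin
    virtual ~BinDecoder() = default;
};

int parseMmvdCandFlag(BinDecoder& dec, int fusionCandListLength) {
    const int kPresetValue = 1;  // same assumed threshold as the encoder side
    if (fusionCandListLength > kPresetValue) {
        // Judging module 901 passed: the flag is present in the bitstream,
        // and decoding module 902 entropy-decodes it in bypass mode.
        return dec.decodeBypass();
    }
    return 0;  // assumed inference when the flag is absent
}
```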
Those of skill in the art will appreciate that the functions described in connection with the various illustrative logical blocks, modules, and algorithm steps disclosed herein may be implemented as hardware, software, firmware, or any combination thereof. If implemented in software, the functions described in the various illustrative logical blocks, modules, and steps may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code and executed by a hardware-based processing unit. The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium, such as a data storage medium, or any communication medium including a medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium, such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementing the techniques described herein. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory tangible storage media. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Additionally, in some aspects, the functions described by the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this application may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an Integrated Circuit (IC), or a set of ICs (e.g., a chipset). Various components, modules, or units are described in this application to emphasize functional aspects of means for performing the disclosed techniques, but do not necessarily require realization by different hardware units. Indeed, as described above, the various units may be combined in a codec hardware unit, in conjunction with suitable software and/or firmware, or provided by an interoperating hardware unit (including one or more processors as described above).
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above description is only an exemplary embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A method for entropy coding of a syntax element, comprising:
judging whether the length of the current fusion candidate motion vector list is greater than a preset value;
if the length of the fusion candidate motion vector list is greater than the preset value, performing entropy coding on the value of a first syntax element by adopting a bypass coding mode, wherein the first syntax element is used to indicate whether a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
2. The method of claim 1, wherein entropy coding the value of the first syntax element by adopting the bypass coding mode if the length of the fusion candidate motion vector list is greater than the preset value further comprises:
if the length of the fusion candidate motion vector list is greater than the preset value, setting a switch identifier indicating the bypass coding mode to a first value, and performing entropy coding on the value of the first syntax element by adopting the bypass coding mode.
3. The method of claim 1 or 2, wherein the first syntax element is mmvd_cand_flag.
4. A method of entropy decoding of a syntax element, comprising:
judging whether the length of the current fusion candidate motion vector list is greater than a preset value;
and if the length of the fusion candidate motion vector list is greater than the preset value, performing entropy decoding on the value of a first syntax element by adopting a bypass coding mode, wherein the first syntax element is used to indicate whether a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
5. The method of claim 4, wherein entropy decoding the value of the first syntax element by adopting the bypass coding mode if the length of the fusion candidate motion vector list is greater than the preset value further comprises:
if the length of the fusion candidate motion vector list is greater than the preset value, setting a switch identifier indicating the bypass coding mode to a first value, and performing entropy decoding on the value of the first syntax element by adopting the bypass coding mode.
6. The method of claim 4 or 5, wherein the first syntax element is mmvd_cand_flag.
7. An entropy encoding apparatus, characterized by comprising:
a judging module, configured to judge whether the length of the current fusion candidate motion vector list is greater than a preset value;
and an encoding module, configured to perform entropy encoding on the value of a first syntax element by adopting a bypass coding mode if the length of the fusion candidate motion vector list is greater than the preset value, wherein the first syntax element is used to indicate whether a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
8. The apparatus of claim 7, wherein the encoding module is further configured to set a switch identifier indicating the bypass coding mode to a first value if the length of the fusion candidate motion vector list is greater than the preset value, and to perform entropy encoding on the value of the first syntax element by adopting the bypass coding mode.
9. The apparatus of claim 7 or 8, wherein the first syntax element is mmvd_cand_flag.
10. An entropy decoding apparatus, comprising:
a judging module, configured to judge whether the length of the current fusion candidate motion vector list is greater than a preset value;
and a decoding module, configured to perform entropy decoding on the value of a first syntax element by adopting a bypass coding mode if the length of the fusion candidate motion vector list is greater than the preset value, wherein the first syntax element is used to indicate whether a first candidate motion vector or a second candidate motion vector is selected from the fusion candidate motion vector list as a basic motion vector for motion vector expansion, and the first candidate motion vector and the second candidate motion vector are any two candidate motion vectors in the fusion candidate motion vector list.
11. The apparatus of claim 10, wherein the decoding module is further configured to set a switch identifier indicating the bypass coding mode to a first value if the length of the fusion candidate motion vector list is greater than the preset value, and to perform entropy decoding on the value of the first syntax element by adopting the bypass coding mode.
12. The apparatus of claim 10 or 11, wherein the first syntax element is mmvd_cand_flag.
13. A video encoder for encoding an image block, comprising:
inter-frame prediction means for predicting motion information of a currently encoded image block based on target candidate motion information, and determining a predicted pixel value of the currently encoded image block based on the motion information of the currently encoded image block;
the entropy encoding apparatus of any one of claims 7 to 9, configured to encode an index identifier of the target candidate motion information into a code stream, where the index identifier is used to indicate the target candidate motion information for the currently encoded image block;
a reconstruction module, configured to reconstruct the currently encoded image block based on the predicted pixel value.
14. A video decoder for decoding a picture block from a bitstream, comprising:
the entropy decoding apparatus of any one of claims 10 to 12, configured to decode an index identifier from a code stream, where the index identifier is used to indicate target candidate motion information of a currently decoded image block;
inter-frame prediction means for predicting motion information of a currently decoded image block based on the target candidate motion information indicated by the index flag, and determining a predicted pixel value of the currently decoded image block based on the motion information of the currently decoded image block;
a reconstruction module, configured to reconstruct the currently decoded image block based on the predicted pixel value.
15. A video encoding device, comprising: a non-volatile memory and a processor coupled to each other, wherein the processor calls program code stored in the memory to perform the method according to any one of claims 1 to 3.
16. A video decoding device, comprising: a non-volatile memory and a processor coupled to each other, wherein the processor calls program code stored in the memory to perform the method according to any one of claims 4 to 6.
CN201910550626.2A 2019-06-24 2019-06-24 Entropy encoding/decoding method and device of syntax element and codec Active CN112135149B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910550626.2A CN112135149B (en) 2019-06-24 2019-06-24 Entropy encoding/decoding method and device of syntax element and codec
PCT/CN2020/096363 WO2020259353A1 (en) 2019-06-24 2020-06-16 Entropy coding/decoding method for syntactic element, device, and codec

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910550626.2A CN112135149B (en) 2019-06-24 2019-06-24 Entropy encoding/decoding method and device of syntax element and codec

Publications (2)

Publication Number Publication Date
CN112135149A true CN112135149A (en) 2020-12-25
CN112135149B CN112135149B (en) 2023-07-18

Family

ID=73849818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910550626.2A Active CN112135149B (en) 2019-06-24 2019-06-24 Entropy encoding/decoding method and device of syntax element and codec

Country Status (2)

Country Link
CN (1) CN112135149B (en)
WO (1) WO2020259353A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086678A (en) * 2022-08-22 2022-09-20 北京达佳互联信息技术有限公司 Video encoding method and device, and video decoding method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2486731B1 (en) * 2009-10-05 2018-11-07 InterDigital Madison Patent Holdings Methods and apparatus for adaptive filtering of prediction pixels for chroma components in video encoding and decoding
US9538180B2 (en) * 2012-12-17 2017-01-03 Qualcomm Incorporated Motion vector prediction in video coding
US9554150B2 (en) * 2013-09-20 2017-01-24 Qualcomm Incorporated Combined bi-predictive merging candidates for 3D video coding
US10440399B2 (en) * 2015-11-13 2019-10-08 Qualcomm Incorporated Coding sign information of video data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071460A (en) * 2010-12-14 2017-08-18 M&K控股株式会社 Equipment for encoding motion pictures
CN103931194A (en) * 2011-06-16 2014-07-16 弗兰霍菲尔运输应用研究公司 Entropy coding of motion vector differences
CN103858430A (en) * 2011-09-29 2014-06-11 夏普株式会社 Image decoding apparatus, image decoding method and image encoding apparatus
WO2013106987A1 (en) * 2012-01-16 2013-07-25 Mediatek Singapore Pte. Ltd. Methods and apparatuses of bypass coding and reducing contexts for some syntax elements
CN103931188A (en) * 2012-01-16 2014-07-16 联发科技(新加坡)私人有限公司 Method and apparatus for context-adaptive binary arithmetic coding of syntax elements
CN104641640A (en) * 2012-07-16 2015-05-20 三星电子株式会社 Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus for signaling SAO parameter
CN108235030A (en) * 2012-07-16 2018-06-29 三星电子株式会社 SAO coding methods and equipment and SAO coding/decoding methods and equipment
US20150091921A1 (en) * 2013-09-27 2015-04-02 Apple Inc. Wavefront encoding with parallel bit stream encoding
GB201820902D0 (en) * 2018-12-20 2019-02-06 Canon Kk Video coding and decoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BENJAMIN BROSS et al., "Versatile Video Coding (Draft 4)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, 9–18 Jan. 2019, JVET-M1001-v7 *
GUICHUN LI et al., "CE4-related: Fix of MMVD signalling", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, 19–27 March 2019, JVET-N0380 *

Also Published As

Publication number Publication date
WO2020259353A1 (en) 2020-12-30
CN112135149B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN111277828B (en) Video encoding and decoding method, video encoder and video decoder
CN115243048B (en) Video image decoding and encoding method and device
CN112055200A (en) MPM list construction method, and chroma block intra-frame prediction mode acquisition method and device
CN111788833B (en) Inter-frame prediction method and device and corresponding encoder and decoder
CN111355959A (en) Image block division method and device
CN112118447B (en) Construction method, device and coder-decoder for fusion candidate motion information list
CN111263166B (en) Video image prediction method and device
CN111327899A (en) Video decoder and corresponding method
WO2020259353A1 (en) Entropy coding/decoding method for syntactic element, device, and codec
CN113366850B (en) Video encoder, video decoder and corresponding methods
CN111277840B (en) Transform method, inverse transform method, video encoder and video decoder
CN111726617B (en) Optimization method, device and coder-decoder for fusing motion vector difference technology
CN112637590A (en) Video encoder, video decoder and corresponding methods
CN113316939A (en) Context modeling method and device for zone bit
CN111901593A (en) Image dividing method, device and equipment
CN112135128A (en) Image prediction method, coding tree node division method and device thereof
CN111327894A (en) Block division method, video encoding and decoding method and video encoder and decoder
CN113170147B (en) Video encoder, video decoder, and corresponding methods
CN112135148B (en) Non-separable transformation method and device
CN111294603B (en) Video encoding and decoding method and device
CN111726630A (en) Processing method and device based on triangular prediction unit mode
CN112135129A (en) Inter-frame prediction method and device
CN113615191A (en) Method and device for determining image display sequence and video coding and decoding equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant