CN112565747A - Decoding and encoding method, device and equipment - Google Patents
- Publication number
- CN112565747A (application CN201910919775.1A)
- Authority
- CN
- China
- Prior art keywords
- motion information
- block
- peripheral
- prediction mode
- peripheral matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present application provides a decoding and encoding method, a corresponding device, and equipment. The method comprises: constructing a motion information prediction mode candidate list of the current block; selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling in motion information that is unavailable in the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block. This scheme can improve coding performance.
Description
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to a decoding and encoding method, device, and apparatus.
Background
In order to save space, video images are encoded before being transmitted. A complete video encoding method may comprise processes such as prediction, transformation, quantization, entropy coding, and filtering. Predictive coding comprises intra-frame coding and inter-frame coding. Inter-frame coding exploits the temporal correlation of video: the pixels of the current image are predicted from the pixels of neighboring encoded images, so that temporal redundancy is effectively removed. In inter-frame coding, a motion vector represents the relative displacement between the current image block of the current frame and a reference image block of a reference frame.
In the prior art, the current block need not be divided into sub-blocks; instead, a single piece of motion information is determined for the current block directly by signaling a motion information index or difference information index. Since all sub-blocks inside the current block then share that one piece of motion information, the best motion information for small moving objects can be obtained only after the current block is divided into sub-blocks. However, dividing the current block into a plurality of sub-blocks incurs additional bit overhead.
Disclosure of Invention
The present application provides a decoding and encoding method, device, and equipment, which can improve coding performance.
The application provides a decoding method, which is applied to a decoding end and comprises the following steps:
constructing a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
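The availability-and-difference check used in the list-construction step above can be sketched in Python as follows. This is a minimal illustration only: the representation of motion information as (x, y) vectors, the block identifiers, and the helper name are assumptions, not details of the claimed method.

```python
def build_angle_mode_candidates(angle_modes, peripheral_motion):
    """For each motion information angle prediction mode, inspect the first
    and second peripheral matching blocks its preconfigured angle points to.
    The mode is added to the candidate list only when both blocks have
    available motion information and that information differs."""
    candidates = []
    for mode, (first_block, second_block) in angle_modes.items():
        mv1 = peripheral_motion.get(first_block)   # None models "unavailable"
        mv2 = peripheral_motion.get(second_block)
        if mv1 is not None and mv2 is not None and mv1 != mv2:
            candidates.append(mode)
    return candidates

# Example: "vertical" is excluded (equal motion information), "diagonal" is
# excluded (block D is unavailable), so only "horizontal" qualifies.
peripheral = {"A": (1, 0), "B": (2, 0), "C": (1, 0)}
modes = {"horizontal": ("A", "B"), "vertical": ("A", "C"), "diagonal": ("A", "D")}
print(build_angle_mode_candidates(modes, peripheral))  # → ['horizontal']
```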
The application provides a coding method, which is applied to a coding end, and the method comprises the following steps:
constructing a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
If the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
The present application provides a decoding device, applied to a decoding end, the device comprising:
a construction module, configured to construct a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
A selection module for selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
a filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode;
a determining module, configured to determine motion information of a current block according to motion information of a plurality of neighboring matching blocks pointed by preconfigured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The present application provides an encoding device, applied to an encoding end, the device comprising:
a construction module, configured to construct a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
A filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of a current block according to motion information of a plurality of neighboring matching blocks pointed by a preconfigured angle of the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The application provides a decoding side device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding end device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of the current block, wherein, when the candidate list is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both have available motion information and their motion information differs, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
If the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
According to the above technical solution, the current block does not need to be divided, so the bit overhead caused by sub-block division is avoided. For example, motion information is provided for each sub-region of the current block without dividing the current block into sub-blocks, and different sub-regions of the current block may correspond to the same or different motion information. Coding performance is thereby improved, the need to transmit a large amount of motion information is avoided, and a large number of bits can be saved.
Drawings
FIG. 1 is a schematic diagram of a video coding framework in one embodiment of the present application;
FIGS. 2A-2B are schematic diagrams illustrating the partitioning of a current block according to an embodiment of the present application;
FIG. 3 is a schematic view of several sub-regions in one embodiment of the present application;
FIG. 4 is a flow chart of a decoding method in one embodiment of the present application;
fig. 5A and 5B are schematic diagrams of a motion information angle prediction mode in an embodiment of the present application;
FIG. 6 is a flow chart of an encoding method in one embodiment of the present application;
FIG. 7 is a flow chart of a decoding method in one embodiment of the present application;
FIG. 8 is a flow chart of an encoding method in one embodiment of the present application;
FIGS. 9A-9E are schematic diagrams of peripheral blocks of a current block in one embodiment of the present application;
FIGS. 10A-10N are schematic diagrams of motion compensation in one embodiment of the present application;
fig. 11A is a block diagram of a decoding apparatus according to an embodiment of the present application;
fig. 11B is a block diagram of an encoding device according to an embodiment of the present application;
fig. 11C is a hardware configuration diagram of a decoding-side device according to an embodiment of the present application;
fig. 11D is a hardware configuration diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items. It will be understood that, although the terms first, second, etc. may be used herein to describe various information, the information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The embodiment of the application provides a decoding and encoding method, a decoding and encoding device and equipment thereof, and can relate to the following concepts:
Motion Vector (MV): in inter coding, a motion vector represents the relative displacement between the current block of the current frame image and a reference block of a reference frame image. For example, if there is strong temporal correlation between image A of the current frame and image B of the reference frame, then when image block A1 (the current block) of image A is transmitted, a motion search can be performed in image B to find the image block B1 (the reference block) that best matches A1; the relative displacement between image block A1 and image block B1 is the motion vector of A1. Each divided image block has a corresponding motion vector to be transmitted to the decoding side, and if the motion vector of each image block were encoded and transmitted independently, especially when the image is divided into a large number of small blocks, a considerable number of bits would be consumed. To reduce the number of bits used to encode motion vectors, the spatial correlation between neighboring image blocks can be exploited: the motion vector of the current block to be encoded is predicted from the motion vectors of neighboring encoded blocks, and only the prediction difference is encoded, which effectively reduces the number of bits representing the motion vector. That is, when encoding the motion vector of the current block, the Motion Vector Prediction (MVP) derived from neighboring encoded blocks is used, and only the Motion Vector Difference (MVD) between the MVP and the actual motion vector is encoded.
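The MVP/MVD relationship described above can be illustrated with a minimal sketch; the coordinate values and function names here are assumptions for illustration only:

```python
def encode_mvd(mv, mvp):
    """Encoder: transmit only the difference between the actual motion
    vector and the prediction derived from neighboring encoded blocks."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def decode_mv(mvp, mvd):
    """Decoder: recover the motion vector by adding the received MVD
    back onto the locally derived predictor."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mv, mvp = (5, -3), (4, -1)
mvd = encode_mvd(mv, mvp)        # (1, -2): small differences cost fewer bits
assert decode_mv(mvp, mvd) == mv  # decoder reconstructs the original vector
```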
Motion Information (Motion Information): in order to accurately acquire information of the reference block, index information of the reference frame image is required to indicate which reference frame image is used, in addition to the motion vector. In video coding technology, for a current frame picture, a reference frame picture list can be generally established, and the reference frame picture index information indicates that the current block adopts a few reference frame pictures in the reference frame picture list. Many coding techniques also support multiple reference picture lists, and therefore, an index value, which may be referred to as a reference direction, may also be used to indicate which reference picture list is used. In the video encoding technology, motion-related information such as a motion vector, a reference frame index, and a reference direction may be collectively referred to as motion information.
Rate-Distortion Optimization (RDO): there are two major indicators for evaluating coding efficiency: bit rate and Peak Signal to Noise Ratio (PSNR). The smaller the bit stream, the larger the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a combined evaluation of the two. For example, the cost of a mode is J(mode) = D + λR, where D denotes distortion, usually measured by SSE, the sum of squared differences between the reconstructed image block and the source image block; λ is the Lagrange multiplier; and R is the actual number of bits required for encoding the image block in that mode, including the bits required for mode information, motion information, residuals, and so on.
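The cost formula J(mode) = D + λR can be sketched as follows; the pixel values, bit counts, and λ below are made-up inputs for illustration only:

```python
def sse(reconstructed, source):
    """Distortion D: sum of squared differences between the reconstructed
    block and the source block."""
    return sum((r - s) ** 2 for r, s in zip(reconstructed, source))

def rd_cost(reconstructed, source, bits, lam):
    """J(mode) = D + lambda * R."""
    return sse(reconstructed, source) + lam * bits

source = [10, 12, 14, 16]
# Mode 1: closer reconstruction but more bits; Mode 2: coarser but cheaper.
j1 = rd_cost([10, 12, 15, 16], source, bits=30, lam=0.5)  # 1 + 15 = 16.0
j2 = rd_cost([11, 13, 15, 17], source, bits=10, lam=0.5)  # 4 +  5 =  9.0
best = min((j1, "mode1"), (j2, "mode2"))[1]
print(best)  # → mode2: the cheaper mode wins the rate-distortion trade-off
```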
Intra and inter prediction techniques: intra-frame prediction performs predictive coding using the reconstructed pixel values of spatial neighboring blocks of the current block (i.e., blocks in the same frame as the current block). Inter-frame prediction performs predictive coding using the reconstructed pixel values of temporal neighboring blocks of the current block (i.e., blocks in frames other than the current frame); it exploits temporal correlation: because a video sequence contains strong temporal correlation, the pixels of the current image are predicted from the pixels of neighboring encoded images, effectively removing temporal redundancy.
Skip mode and direct mode: in inter-frame prediction, because video has strong temporal correlation (two temporally adjacent frames contain many similar blocks), a block of the current frame is motion-searched in a neighboring reference image to find the block that best matches the current block as the reference block. Since the similarity between the reference block and the current block is high and their difference is small, the bit-rate cost of encoding the difference is far less than the cost of directly encoding the pixel values of the current block. However, to indicate the position of the best matching block, a large amount of motion information must be encoded and transmitted to the decoding side. The motion information, especially the motion vector information, consumes a large bit rate. To save this overhead, video coding standards define two modes that largely avoid encoding motion vector information: skip mode and direct mode.
In skip mode or direct mode, the motion information of the current block directly reuses the motion information of some temporal or spatial neighboring block; for example, one piece of motion information is selected from the motion information set of several surrounding blocks as the motion information of the current block. Thus, in either mode, only an index value needs to be encoded to indicate which motion information in the set the current block uses. The difference between the two is that skip mode does not encode a residual, while direct mode does. Either mode greatly reduces the coding overhead of motion information.
Skip mode: the encoding end transmits neither residual information nor an MVD, only the index of the motion information. The decoding end derives the motion information of the current block by parsing this index, uses the motion information to determine the predicted value, and uses the predicted value directly as the reconstruction value.
Direct mode: in this inter-frame prediction mode, the encoding end transmits residual information but no MVD, plus the index of the motion information. The decoding end derives the motion information of the current block by parsing the index, determines the predicted value from the motion information, and adds the residual value to the predicted value to obtain the reconstruction value.
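The decoder-side difference between the two modes can be sketched as below; representing a block as a flat list of pixel values is an assumption for illustration:

```python
def reconstruct(prediction, residual=None):
    """Skip mode: no residual is transmitted, so the prediction itself is
    the reconstruction. Direct mode: a decoded residual is added to the
    prediction to obtain the reconstruction."""
    if residual is None:          # skip mode
        return list(prediction)
    return [p + r for p, r in zip(prediction, residual)]  # direct mode

pred = [100, 102, 104]
print(reconstruct(pred))              # skip:   [100, 102, 104]
print(reconstruct(pred, [2, -1, 0]))  # direct: [102, 101, 104]
```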
HMVP (History-based Motion Vector Prediction) mode: HMVP is a technique adopted in new-generation video coding standards. Its principle is to predict the motion information of the current block using the motion information of previously reconstructed blocks. The motion information of previously reconstructed blocks is preserved in an HMVP list, which is updated whenever a block is decoded and its motion information differs. Therefore, motion information in the HMVP list is always available for the current block, and using it can improve prediction accuracy.
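The list maintenance described above can be sketched as a bounded history table. The remove-then-append pruning rule and the table size below are assumptions modelled on how HMVP tables typically behave, not details of this patent:

```python
def update_hmvp(table, motion_info, max_size=8):
    """After a block is reconstructed, refresh the history table: drop any
    identical entry, append the newest motion information, and evict the
    oldest entry if the table overflows."""
    table = [m for m in table if m != motion_info]
    table.append(motion_info)
    if len(table) > max_size:
        table.pop(0)
    return table

t = []
for mv in [(1, 0), (2, 0), (1, 0)]:
    t = update_hmvp(t, mv, max_size=2)
# (1, 0) was pruned and re-inserted as the newest entry.
print(t)  # → [(2, 0), (1, 0)]
```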
MHBSKIP mode: the MHBSKIP mode is a prediction mode of a skip mode or a direct mode, and predicts motion information of a current block using motion information of spatial neighboring blocks of the current block. For example, the MHBSKIP mode constructs three motion information of bi-directional, backward and forward directions to predict the current block through the motion information of the spatial neighboring blocks of the current block.
When predicting the current block in skip mode or direct mode, a motion information prediction mode candidate list is constructed for the current block. The candidate list sequentially contains temporal candidate motion information, spatial candidate motion information (i.e., the candidate motion information of MHBSKIP mode), and HMVP candidate motion information (i.e., the candidate motion information of HMVP mode). The number of temporal candidates is 1, the number of spatial candidates is 3, and the number of HMVP candidates is 8; of course, each of these numbers may also take other values.
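The composition of the candidate list (one temporal, three spatial, up to eight HMVP candidates, in that order) can be sketched as below; the string labels are placeholders for actual motion information:

```python
def build_candidate_list(temporal, spatial, hmvp,
                         n_temporal=1, n_spatial=3, n_hmvp=8):
    """Concatenate candidates in the order stated above; the counts are the
    defaults given in the text and may take other values."""
    return temporal[:n_temporal] + spatial[:n_spatial] + hmvp[:n_hmvp]

lst = build_candidate_list(["T0"], ["S0", "S1", "S2"],
                           [f"H{i}" for i in range(10)])
# Temporal first, then spatial, then at most eight HMVP candidates.
print(len(lst), lst[0], lst[4])  # → 12 T0 H0
```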
Intra Block Copy (IBC) allows referencing the same frame: the reference data of the current block comes from the frame that contains it. The intra block copy mode uses a block vector to obtain the predicted value of the current block. Because screen content contains a large amount of repeated texture within the same frame, obtaining the predicted value of the current block via a block vector can improve the compression efficiency of screen content sequences.
The intra block copy mode differs from the inter prediction mode in that inter prediction provides a motion vector for the current block and obtains the predicted value from a reference frame according to that motion vector, whereas intra block copy provides a block displacement vector (block vector for short) for the current block and obtains the predicted value within the current frame according to that block vector.
Block Vector (BV): the block vector is used in the intra block copy technique for motion compensation, i.e., to obtain the predicted value of the current block. Unlike a motion vector, a block vector represents the relative displacement between the current block and the best matching block among the encoded blocks of the current frame. Because a large amount of repeated texture exists within the same frame, using a block vector to obtain the predicted value of the current block can significantly improve compression efficiency.
Motion information angle prediction mode: the angle prediction mode is used for predicting motion information, namely, the angle prediction mode is used for inter-frame coding and is not used for intra-frame coding, and the angle prediction mode of the motion information selects a matching block but not a matched pixel point.
The motion information angle prediction mode indicates a preconfigured angle. According to this angle, a peripheral matching block is selected for each sub-region of the current block from the peripheral blocks of the current block, and one or more pieces of motion information of the current block are determined from the motion information of the peripheral matching blocks; that is, for each sub-region of the current block, the motion information of that sub-region is determined from the motion information of its peripheral matching block. The peripheral matching block is the block at the specified position, among the peripheral blocks of the current block, that the preconfigured angle points to.
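One way to picture the selection of a peripheral matching block is to walk from a sub-region along the mode's preconfigured angle until the walk leaves the current block. Representing angles as unit steps and sub-regions as grid coordinates is an illustrative assumption; the actual angles and block geometry are defined by the embodiments:

```python
def peripheral_matching_block(sub_x, sub_y, block_w, block_h, step):
    """Step from the sub-region along the preconfigured angle, given here
    as a unit step, e.g. (0, -1) for 'up', (-1, 0) for 'left', (-1, -1)
    for 'up-left'. The first position outside the current block is the
    peripheral matching block whose motion information the sub-region
    inherits."""
    dx, dy = step
    x, y = sub_x, sub_y
    while 0 <= x < block_w and 0 <= y < block_h:
        x += dx
        y += dy
    return (x, y)

# In a 4x4 block, sub-region (2, 1) maps to different peripheral positions
# depending on the mode's angle:
print(peripheral_matching_block(2, 1, 4, 4, (0, -1)))   # → (2, -1)  above
print(peripheral_matching_block(2, 1, 4, 4, (-1, 0)))   # → (-1, 1)  left
print(peripheral_matching_block(2, 1, 4, 4, (-1, -1)))  # → (0, -1)  up-left
```

Because different sub-regions walk to different peripheral positions, they can inherit the same or different motion information without the current block being divided into sub-blocks, which is the effect the solution aims at.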
Video coding framework: referring to fig. 1, a schematic diagram of a video encoding framework is shown; the video encoding framework is used to implement the processing flow at the encoding end in the embodiments of the present application. The schematic diagram of the video decoding framework is similar to fig. 1 and is not repeated here; the video decoding framework may be used to implement the processing flow at the decoding end in the embodiments of the present application. Illustratively, the video encoding framework and the video decoding framework may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, and entropy coding. At the encoding end, the processing flow of the encoding end can be realized through cooperation among these modules, and at the decoding end, the processing flow of the decoding end can likewise be realized through cooperation among these modules.
In the conventional manner, the current block has only one piece of motion information, i.e., all sub-blocks inside the current block share one piece of motion information. For a scene with a small moving target, optimal motion information can be obtained only after the current block is divided; if the current block is not divided, it has only one piece of motion information, so the prediction accuracy is not high. Referring to fig. 2A, the region C, the region G, and the region H are regions within the current block, not sub-blocks into which the current block is divided. Assuming that the current block uses the motion information of the block F, each region within the current block uses the motion information of the block F. Since the distance between the region H and the block F is large, if the region H also uses the motion information of the block F, the prediction accuracy of the motion information of the region H is not high. Moreover, the motion information of a sub-block inside the current block cannot utilize the encoded motion information around the current block, which reduces the available motion information and lowers its accuracy. For example, the sub-block I of the current block can only use the motion information of the sub-blocks C, G, and H, but cannot use the motion information of the image blocks A, B, F, D, and E.
In view of the above findings, embodiments of the present application provide a decoding method and an encoding method, which enable the current block to correspond to a plurality of pieces of motion information without dividing the current block, that is, without the overhead caused by sub-block division, so as to improve the prediction accuracy of the motion information of the current block. Because the current block is not divided, no extra bits are consumed for transmitting a division mode, which saves bit overhead. For each region of the current block (here, any region in the current block, whose size is smaller than or equal to the size of the current block and which is not a sub-block obtained by dividing the current block), the motion information may be obtained using the encoded motion information around the current block. Referring to fig. 2B, C is a sub-region inside the current block, and A, B, D, E, and F are encoded blocks around the current block. The motion information of the sub-region C can be obtained directly using an angular prediction method, and the other sub-regions inside the current block are handled in the same way. Therefore, different motion information can be obtained for the current block without performing block division on it, saving the bit overhead that block division would incur.
Referring to fig. 3, the current block includes 9 regions (hereinafter referred to as sub-regions within the current block), namely sub-region f1 to sub-region f9, which are sub-regions within the current block, not sub-blocks into which the current block is divided. Different sub-regions among sub-region f1 to sub-region f9 may correspond to the same or different motion information; therefore, without dividing the current block, the current block can still correspond to a plurality of pieces of motion information, for example, sub-region f1 corresponds to motion information 1, sub-region f2 corresponds to motion information 2, and so on. For example, when determining the motion information of the sub-region f5, the motion information of the block A1, the block A2, the block A3, the block E, the block B1, the block B2, and the block B3, i.e., the motion information of the encoded blocks around the current block, may be utilized to provide more motion information for the sub-region f5. Of course, the motion information of the block A1, the block A2, the block A3, etc. may also be utilized for other sub-regions of the current block.
The embodiments of the present application involve a process of constructing a motion information prediction mode candidate list: for example, for any motion information angle prediction mode, it is determined whether to add the motion information angle prediction mode to the motion information prediction mode candidate list or to prohibit adding it. They also involve a motion information filling process: for example, filling in motion information for peripheral blocks of the current block that have no available motion information. For example, the motion information of the current block is determined using the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode, and the prediction value of the current block is determined according to the motion information of the current block.
When implementing the construction of the motion information prediction mode candidate list and the filling of motion information, the duplication check on the motion information angle prediction modes is performed first, and the unavailable motion information in the peripheral blocks is filled afterwards, which reduces the complexity of the decoding end and improves decoding performance. For example, for the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward motion information angle prediction modes, the duplication check is performed first. If the horizontal downward and vertical rightward motion information angle prediction modes are not duplicates, these non-duplicate modes may be added to the motion information prediction mode candidate list of the current block, so that the candidate list is obtained first. Then, after the decoding end selects the target motion information prediction mode of the current block from the candidate list, if the target motion information prediction mode is a motion information angle prediction mode, the decoding end can fill in the motion information that is unavailable in the peripheral blocks.
After the decoding end selects the target motion information prediction mode of the current block from the motion information prediction mode candidate list, if the target motion information prediction mode is not a motion information angle prediction mode, the decoding end does not need to fill the unavailable motion information in the peripheral blocks. This reduces the filling operations at the decoding end, lowers its complexity, and improves decoding performance.
The decoding method and the encoding method in the embodiments of the present application will be described below with reference to several specific embodiments.
Example 1: referring to fig. 4, a flowchart of a decoding method is shown. The method can be applied to a decoding end and includes:
In step 401, a motion information prediction mode candidate list of the current block is constructed.
Illustratively, when constructing the motion information prediction mode candidate list of the current block, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block, the plurality of peripheral matching blocks including at least a first peripheral matching block and a second peripheral matching block to be traversed. For the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For example, the process of constructing the motion information prediction mode candidate list of the current block may include:
Step a1: for any motion information angle prediction mode of the current block, based on the preconfigured angle of the motion information angle prediction mode, select a plurality of peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block.
The motion information angle prediction mode indicates a preconfigured angle. According to the preconfigured angle, a peripheral matching block is selected for each sub-region of the current block from the peripheral blocks of the current block, and one or more pieces of motion information of the current block are determined according to the motion information of the peripheral matching blocks; that is, for each sub-region of the current block, the motion information of the sub-region is determined according to the motion information of its peripheral matching block. A peripheral matching block is a block at a specified position, determined from the peripheral blocks of the current block by the preconfigured angle.
For example, the peripheral blocks may include blocks adjacent to the current block; alternatively, the peripheral blocks may include blocks adjacent to the current block as well as non-adjacent blocks. Of course, the peripheral blocks may also include other blocks; this is not limited here.
For example, the motion information angle prediction mode may include, but is not limited to, one or any combination of the following: a horizontal motion information angle prediction mode, a vertical motion information angle prediction mode, a horizontal upward motion information angle prediction mode, a horizontal downward motion information angle prediction mode, and a vertical rightward motion information angle prediction mode. Of course, these are just a few examples; other motion information angle prediction modes are possible, since a motion information angle prediction mode is determined by its preconfigured angle, which could also be, for example, 10 degrees or 20 degrees. Referring to fig. 5A, a schematic diagram of the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward motion information angle prediction modes is shown, where different motion information angle prediction modes correspond to different preconfigured angles.
In summary, a plurality of peripheral matching blocks pointed to by the preconfigured angle may be selected from the peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode. For example, fig. 5A shows the plurality of peripheral matching blocks pointed to by the preconfigured angle of each of the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward motion information angle prediction modes.
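The selection of peripheral matching blocks by a preconfigured angle can be sketched as follows. This is a hypothetical illustration only: the 4x4 unit granularity, the mode names, the coordinate convention, and the way the diagonal and extended modes map onto the left column and top row are all assumptions, not the patent's normative layout.

```python
UNIT = 4  # assumed size of one peripheral matching unit, in samples

def peripheral_matching_blocks(mode, x, y, w, h):
    """Return top-left coordinates of the peripheral matching blocks
    pointed to by `mode` for a current block at (x, y) of size w x h."""
    rows, cols = h // UNIT, w // UNIT
    if mode == "horizontal":        # angle points left: one block per row
        return [(x - UNIT, y + i * UNIT) for i in range(rows)]
    if mode == "vertical":          # angle points up: one block per column
        return [(x + j * UNIT, y - UNIT) for j in range(cols)]
    if mode == "horizontal_down":   # left column, extended downward
        return [(x - UNIT, y + i * UNIT) for i in range(rows + cols)]
    if mode == "vertical_right":    # top row, extended rightward
        return [(x + j * UNIT, y - UNIT) for j in range(rows + cols)]
    if mode == "horizontal_up":     # 45-degree up-left: corner plus the
        left = [(x - UNIT, y + i * UNIT) for i in range(rows)]   # left column
        top = [(x + j * UNIT, y - UNIT) for j in range(cols)]    # and top row
        return [(x - UNIT, y - UNIT)] + left + top
    raise ValueError(f"unknown mode {mode}")
```

For a 16x16 block at (16, 16), the horizontal mode would return the four 4x4 units of the left column, while the extended modes return twice as many candidates.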
Step a2: if the plurality of peripheral matching blocks include at least a first peripheral matching block and a second peripheral matching block to be traversed, then for those two blocks, if available motion information exists in both and their motion information is different, add the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, if at least one of them does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In another possible embodiment, under the same condition, i.e., if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode is instead prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, if available motion information exists in both and their motion information is different, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, if available motion information exists in both and their motion information is the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, if either of them is an intra block, an unencoded block, or a block whose prediction mode is the intra block copy mode, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block; alternatively, under the same condition, the motion information angle prediction mode is prohibited from being added to the candidate list. Similarly, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image of the current block or outside the image slice of the current block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block; alternatively, under the same condition, it is prohibited from being added.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same traversal, if both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, it is further determined whether both the second peripheral matching block and the third peripheral matching block have available motion information. If both do, and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same traversal, if both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, it is further determined whether both the second peripheral matching block and the third peripheral matching block have available motion information. If both do, and their motion information is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of them does not have available motion information, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same condition, i.e., if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block may instead be prohibited.
In one possible implementation, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it may be further determined whether both the second peripheral matching block and the third peripheral matching block have available motion information. If both do, and their motion information is different, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same fallback, if both the second peripheral matching block and the third peripheral matching block have available motion information and their motion information is the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same fallback, if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible implementation, under the same fallback, if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angle prediction mode is instead prohibited from being added to the motion information prediction mode candidate list of the current block.
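The duplication check over the first, second, and third peripheral matching blocks can be sketched as follows. This is one consistent combination of the alternative policies described in the embodiments above (add on a difference, fall back to the second pair on a tie or unavailability, prohibit otherwise); the function name and the representation of motion information as tuples, with None marking an unavailable block, are illustrative assumptions.

```python
def should_add_angle_mode(mv1, mv2, mv3):
    """Return True to add the motion information angle prediction mode
    to the candidate list, False to prohibit adding it. Each argument is
    the motion information of one peripheral matching block, in the
    traversal order, or None when no motion information is available."""
    if mv1 is not None and mv2 is not None:
        if mv1 != mv2:
            return True          # first pair differs: add the mode
        # first pair identical: fall through to the second pair
    # a tie or an unavailable block: examine the second and third blocks
    if mv2 is not None and mv3 is not None:
        return mv2 != mv3        # add only if the second pair differs
    return False                 # still unavailable: prohibit adding
```

Under this policy, identical motion information across all traversed blocks keeps the mode out of the list, so the angle prediction would add no new candidate anyway.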
In the above embodiments, determining whether any peripheral matching block has available motion information may include, but is not limited to: if the peripheral matching block is an inter-coded block, determining that available motion information exists in the peripheral matching block; otherwise, determining that the peripheral matching block has no available motion information.
In the above embodiments, determining whether any peripheral matching block has available motion information may also include, but is not limited to: if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block. For example, the intra block copy mode provides a block displacement vector (block vector for short) for the current block and then obtains a prediction value in the current frame according to the block displacement vector. Unlike a motion vector, a block vector represents the relative displacement between the current block and the best matching block among the encoded blocks of the current frame.
In the above embodiments, determining whether any peripheral matching block has available motion information may further include, but is not limited to: if the peripheral matching block is located outside the image of the current block, or outside the image slice of the current block, determining that the peripheral matching block has no available motion information; if the peripheral matching block is an unencoded block, determining that no available motion information exists in the peripheral matching block; and if the peripheral matching block is an intra-frame block, determining that no available motion information exists in the peripheral matching block.
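The availability rules above can be collected into a single predicate. The block representation here (a dict with `coded` and `mode` fields, and None for a block outside the current picture or slice) is an assumption for illustration, not the patent's data model.

```python
def has_available_motion_info(block):
    """A peripheral matching block has available motion information only
    when it is an inter-coded block inside the current picture/slice."""
    if block is None:                          # outside the picture or slice
        return False
    if not block.get("coded", False):          # unencoded block
        return False
    if block.get("mode") in ("intra", "ibc"):  # intra or intra block copy
        return False
    return block.get("mode") == "inter"        # inter-coded block
```

An intra block copy block is excluded even though it carries a block vector, because a block vector is a displacement within the current frame rather than inter-frame motion information.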
In one possible embodiment, the motion information prediction mode candidate list includes candidate motion information of a skip mode or a direct mode, and that candidate motion information includes, but is not limited to: temporal candidate motion information, spatial candidate motion information, and HMVP candidate motion information. On this basis, when a motion information angle prediction mode is added to the motion information prediction mode candidate list, it is placed between the spatial candidate motion information and the HMVP candidate motion information. Of course, this is only an example, and the motion information prediction mode candidate list may be ordered in other ways.
For example, the motion information prediction mode candidate list may sequentially include: temporal candidate motion information, spatial candidate motion information, motion information angle prediction modes, HMVP candidate motion information; or temporal candidate motion information, spatial candidate motion information, HMVP candidate motion information, motion information angle prediction modes; or spatial candidate motion information, temporal candidate motion information, motion information angle prediction modes, HMVP candidate motion information; or spatial candidate motion information, temporal candidate motion information, HMVP candidate motion information, motion information angle prediction modes; or temporal candidate motion information, motion information angle prediction modes, spatial candidate motion information, HMVP candidate motion information; or motion information angle prediction modes, temporal candidate motion information, spatial candidate motion information, HMVP candidate motion information.
The number of pieces of temporal candidate motion information, spatial candidate motion information, and HMVP candidate motion information, as well as the number of motion information angle prediction modes, may be configured arbitrarily; this is not limited here.
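Assembling the list for one of the orders above (temporal, spatial, angle prediction modes, then HMVP) is a simple concatenation. The function name, the maximum list length, and the per-category inputs are illustrative assumptions.

```python
def build_candidate_list(temporal, spatial, angle_modes, hmvp, max_len=12):
    """Concatenate the candidate categories in order and truncate,
    placing the angle prediction modes between the spatial and HMVP
    candidates as in the first ordering listed above."""
    merged = list(temporal) + list(spatial) + list(angle_modes) + list(hmvp)
    return merged[:max_len]
```

The other orderings differ only in the concatenation order; the insertion position of the angle prediction modes is what the embodiment above constrains.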
In step 402, a target motion information prediction mode of the current block is selected from the motion information prediction mode candidate list.
In one possible implementation, the peripheral blocks of the current block may be traversed in order from the left peripheral blocks to the upper peripheral blocks of the current block until the first peripheral block with available motion information is reached. If any first peripheral blocks without available motion information precede it, the motion information of that peripheral block is filled into each such first peripheral block. The traversal then continues past that peripheral block; if a second peripheral block without available motion information is encountered, the motion information of the last peripheral block traversed before it is filled into the second peripheral block.
For example, traversing in order from the left peripheral blocks to the upper peripheral blocks of the current block may include: if the current block has no left peripheral block, traversing the upper peripheral blocks of the current block; and if the current block has no upper peripheral block, traversing the left peripheral blocks of the current block. The left peripheral blocks may include blocks adjacent to the left side of the current block as well as non-adjacent blocks; the upper peripheral blocks may include blocks adjacent to the upper side of the current block as well as non-adjacent blocks. The number of first peripheral blocks may be one or more, namely all peripheral blocks before the first traversed peripheral block with available motion information. A first peripheral block may be an unencoded block, an intra block, or a peripheral block whose prediction mode is the intra block copy mode; the same applies to a second peripheral block.
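The filling pass above can be sketched as follows, under the assumption that the peripheral blocks are flattened into a single list in the left-column-then-top-row traversal order, with None marking a block that has no available motion information. The function name is an illustrative assumption.

```python
def fill_peripheral_motion_info(peripheral):
    """Fill unavailable entries: blocks before the first available one
    copy its motion information; each later gap copies the motion
    information of the previously traversed block."""
    first = next((i for i, mv in enumerate(peripheral) if mv is not None),
                 None)
    if first is None:
        return peripheral      # nothing available; default filling applies
    out = list(peripheral)
    for i in range(first):     # leading gap: copy from the first
        out[i] = out[first]    # available block
    for i in range(first + 1, len(out)):
        if out[i] is None:     # later gap: copy the previous block
            out[i] = out[i - 1]
    return out
```

After this pass every peripheral block carries motion information, so every peripheral matching block pointed to by a preconfigured angle can supply a value.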
In another possible implementation, after the target motion information prediction mode of the current block is selected from the motion information prediction mode candidate list, if the target motion information prediction mode is a target motion information angle prediction mode and the plurality of peripheral matching blocks pointed to by its preconfigured angle include peripheral blocks without available motion information, the unavailable motion information in the peripheral blocks of the current block is filled.
For example, for a peripheral block for which there is no available motion information, the available motion information of a neighboring block of the peripheral block is filled as the motion information of the peripheral block; or, filling the available motion information of the reference block at the corresponding position of the peripheral block in the time domain reference frame as the motion information of the peripheral block; or, filling default motion information into the motion information of the peripheral block.
In a possible embodiment, the current block may be divided into at least one sub-region; and aiming at each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode. And aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
In another possible embodiment, the current block may be divided into at least one sub-region; and aiming at each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode. For each sub-region of the current block, determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region; and determining the predicted value of the current block according to the target predicted value of each sub-area.
Exemplarily, after determining the motion compensation value of the sub-region according to the motion information of the sub-region, if the sub-region does not satisfy the condition of using the bidirectional optical flow, determining the target prediction value of the sub-region according to the motion compensation value of the sub-region; if the motion information of the sub-area is unidirectional motion information, the sub-area does not meet the condition of using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between two reference frames in the time sequence, the sub-region does not satisfy the condition of using the bidirectional optical flow.
For example, if the motion information of the sub-region is bidirectional motion information and the current frame containing the sub-region lies between the two reference frames in temporal order, the sub-region satisfies the condition for using bidirectional optical flow.
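The bidirectional-optical-flow condition stated in the two paragraphs above can be expressed with picture order counts (POC). A minimal sketch under assumed inputs — the 'uni'/'bi' encoding of the motion information and the POC parameters are hypothetical, not the patent's interface:

```python
def may_use_bdof(motion_info, cur_poc, fwd_poc=None, bwd_poc=None):
    """Check whether a sub-region qualifies for bidirectional optical flow.
    motion_info is 'uni' or 'bi' (hypothetical encoding); the POC values
    give the display order of the current frame and the two references."""
    if motion_info != 'bi':
        return False  # unidirectional motion information never qualifies
    # The current frame must lie between the two reference frames in display order.
    return fwd_poc < cur_poc < bwd_poc or bwd_poc < cur_poc < fwd_poc
```

When both references lie on the same temporal side of the current frame, the function returns False, matching the exclusion stated above.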
Illustratively, obtaining a bi-directional optical flow offset value for a sub-region comprises: determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value is a forward motion compensation value and a forward extension value of the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; determining a forward reference frame and a backward reference frame according to the motion information of the sub-regions; a bi-directional optical flow offset value for the sub-region is determined from the first pixel value and the second pixel value.
In a possible implementation, before constructing the motion information prediction mode candidate list of the current block, first indication information may be obtained, where the first indication information is located in the sequence parameter set level; when the value of the first indication information is the first value, the first indication information is used for indicating the starting of the motion information angle prediction technology; and when the value of the first indication information is the second value, the first indication information is used for indicating to close the motion information angle prediction technology.
In a possible implementation, if the current block uses the motion information angle prediction mode, the current block closes the motion vector adjustment technique at the decoding end; the decoding-end motion vector adjusting technology adjusts a motion vector according to the matching criterion of the forward reference pixel value and the backward reference pixel value.
In one possible implementation, if the current block uses the motion information angular prediction mode, the current block initiates a bi-directional optical flow technique.
As can be seen from the above technical solutions, in the embodiments of the present application the current block does not need to be divided into sub-blocks: the motion information of each sub-region of the current block can be determined based on the motion information angle prediction mode, so the bit overhead caused by sub-block division is effectively avoided. By adding only those motion information angle prediction modes whose motion information is not entirely identical to the motion information prediction mode candidate list, the motion information angle prediction modes that yield only a single piece of motion information are removed, the number of motion information angle prediction modes in the candidate list is reduced, the number of bits for coding the mode indices is reduced, and the coding performance is thereby improved.
Fig. 5B is a schematic diagram of a horizontal motion information angle prediction mode, a vertical motion information angle prediction mode, a horizontal upward motion information angle prediction mode, a horizontal downward motion information angle prediction mode, and a vertical rightward motion information angle prediction mode. As can be seen from fig. 5B, some motion information angle prediction modes may make the motion information of each sub-region inside the current block the same, for example, a horizontal motion information angle prediction mode, a vertical motion information angle prediction mode, and a horizontal upward motion information angle prediction mode, and such motion information angle prediction modes need to be eliminated. Some motion information angle prediction modes may make the motion information of each sub-region inside the current block different, for example, a horizontal downward motion information angle prediction mode and a vertical rightward motion information angle prediction mode, and such motion information angle prediction modes need to be reserved, i.e., may be added to the motion information prediction mode candidate list. 
Obviously, if the horizontal, vertical, horizontal-upward, horizontal-downward, and vertical-rightward motion information angle prediction modes were all added to the motion information prediction mode candidate list, then when the index of the horizontal-downward motion information angle prediction mode is coded, 0001 might need to be coded to represent that mode, because the horizontal, vertical, and horizontal-upward modes precede it in the list (the order of the motion information angle prediction modes is not fixed; this is only an example). In the embodiment of the present application, however, only the horizontal-downward and vertical-rightward motion information angle prediction modes are added to the candidate list, and adding the horizontal, vertical, and horizontal-upward modes is prohibited. Since no motion information angle prediction modes precede the horizontal-downward mode, its index can be represented by coding only 0.
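The bit saving described above can be made concrete with a simple cost model inferred from the '0001' versus '0' example. The patent does not name the actual binarization, so treating the index at 0-based list position p as costing p + 1 bits is an assumption for illustration only:

```python
def index_bits(position):
    """Illustrative cost model: coding the index of the mode at 0-based
    'position' in the candidate list takes position + 1 bits, matching
    the '0001' (position 3) versus '0' (position 0) example in the text."""
    return position + 1

# With all five angle prediction modes kept, horizontal-downward sits at
# position 3 and costs 4 bits; after pruning the three single-motion
# modes it moves to position 0 and costs only 1 bit.
saving = index_bits(3) - index_bits(0)
```

Under this model, pruning the three single-motion modes saves 3 bits every time the horizontal-downward mode is signaled.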
In summary, the above manner can reduce bit overhead caused by encoding index information of the motion information angle prediction mode, reduce hardware complexity while saving bit overhead, avoid the problem of low performance gain caused by the motion information angle prediction mode of a single motion information, and reduce the number of bits for encoding a plurality of motion information angle prediction modes.
In the embodiment of the application, the motion information angle prediction mode is subjected to duplicate checking processing, and then unavailable motion information in peripheral blocks is filled, so that the complexity of a decoding end is reduced, and the decoding performance is improved. For example, for a horizontal motion information angle prediction mode, a vertical motion information angle prediction mode, a horizontal upward motion information angle prediction mode, a horizontal downward motion information angle prediction mode, a vertical rightward motion information angle prediction mode, and the like, a duplication checking process is performed first, and the non-repetitive horizontal downward motion information angle prediction mode and vertical rightward motion information angle prediction mode are added to a motion information prediction mode candidate list, so that the motion information prediction mode candidate list can be obtained first, and at this time, motion information that is not available in the peripheral blocks is not filled. After the decoding end selects the target motion information prediction mode of the current block from the motion information prediction mode candidate list, if the target motion information prediction mode is other modes except the horizontal downward motion information angle prediction mode and the vertical rightward motion information angle prediction mode, the unavailable motion information in the peripheral blocks does not need to be filled, and therefore the decoding end can reduce the filling operation of the motion information.
Example 2: referring to fig. 6, a flow chart of an encoding method is schematically shown, which can be applied to an encoding end, and the method includes:
Illustratively, when constructing the motion information prediction mode candidate list of the current block, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode are selected from the peripheral blocks of the current block based on the preconfigured angle, where the plurality of peripheral matching blocks include at least a first peripheral matching block and a second peripheral matching block to be traversed. For the first and second peripheral matching blocks to be traversed, if available motion information exists in both, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block when the motion information of the first peripheral matching block differs from that of the second peripheral matching block.
In one possible approach, if there is no motion information available for at least one of the first and second peripheral matched blocks, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible approach, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible approach, if there is available motion information for both the first and second peripheral matching blocks, when the motion information of the first and second peripheral matching blocks is the same, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
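The add/prohibit decisions of the preceding paragraphs can be combined into one sketch. Because the embodiments differ on whether an unavailable matching block allows or prohibits the mode, that choice is left as a parameter; the function name and the None-for-unavailable convention are assumptions, not the patent's interface:

```python
def should_add_angle_mode(mv_first, mv_second, add_if_unavailable=True):
    """Decide whether a motion information angle prediction mode enters
    the candidate list. mv_first / mv_second hold the motion information
    of the first and second peripheral matching blocks to traverse, or
    None when no available motion information exists for that block."""
    if mv_first is None or mv_second is None:
        # The embodiments diverge here: one adds the mode, another prohibits it.
        return add_if_unavailable
    # Both available: add the mode only when the motion information differs.
    return mv_first != mv_second
```

The same-motion case is always pruned, which is exactly the deduplication that keeps single-motion angle modes out of the list.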
In the above embodiment, the process of determining whether there is available motion information in any one peripheral matching block may include: if the peripheral matching block is an interframe coded block, determining that available motion information exists in the peripheral matching block; otherwise, determining that the peripheral matching block has no available motion information.
In the above embodiment, the process of determining whether there is available motion information in any one peripheral matching block may include: and if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block.
In the above embodiment, the process of determining whether there is available motion information in any one peripheral matching block may include: if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information; if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information; and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
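The availability checks listed in the above embodiments can be gathered into one illustrative predicate. The Block attributes here are hypothetical stand-ins for the coder's internal state, not an interface defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class Block:
    picture: int   # identifier of the picture containing the block
    slice: int     # identifier of the slice containing the block
    coded: bool    # whether the block has been coded yet
    mode: str      # 'inter', 'intra', or 'ibc' (intra block copy)

def has_available_motion_info(block, cur_picture, cur_slice):
    """A peripheral matching block has no available motion information when
    it lies outside the current picture or slice, is uncoded, is an intra
    block, or uses the intra block copy mode; an inter-coded block does."""
    if block is None:
        return False
    if block.picture != cur_picture or block.slice != cur_slice:
        return False  # outside the picture or slice of the current block
    if not block.coded:
        return False  # uncoded block
    if block.mode in ('intra', 'ibc'):
        return False  # intra block or intra block copy mode
    return True       # inter-coded block: motion information is available
```

Only the inter-coded, in-slice, already-coded case survives, matching the first determination process above.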
The process of constructing the motion information prediction mode candidate list at the encoding end can be referred to as embodiment 1, and is not described herein again.
In one possible embodiment, the motion information prediction mode candidate list includes candidate motion information of a skip mode or a direct mode, and the candidate motion information of the skip mode or the direct mode includes but is not limited to: temporal candidate motion information, spatial candidate motion information, HMVP candidate motion information. Based on this, when the motion information angle prediction mode is added to the motion information prediction mode candidate list, the motion information angle prediction mode is located between the spatial domain candidate motion information and the HMVP candidate motion information. Of course, the above is only an example, and the order of the motion information prediction mode candidate list may be arbitrarily set.
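One possible list ordering per the paragraph above places the surviving angle prediction modes between the spatial candidates and the HMVP candidates; the patent notes the order may be set arbitrarily, so this sketch shows only that one example ordering:

```python
def build_candidate_list(temporal, spatial, angle_modes, hmvp):
    """Assemble a motion information prediction mode candidate list in the
    example order: temporal candidates, spatial candidates, the angle
    prediction modes that passed deduplication, then HMVP candidates."""
    return list(temporal) + list(spatial) + list(angle_modes) + list(hmvp)
```

With this ordering, every angle prediction mode index falls after the last spatial candidate and before the first HMVP candidate.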
In a possible implementation, the peripheral blocks of the current block may be traversed in a traversal order running from the left peripheral blocks to the top peripheral blocks, until the first peripheral block with available motion information is reached. If any first peripheral blocks without available motion information precede that peripheral block, the motion information of that peripheral block is filled into each such first peripheral block. Traversal then continues over the peripheral blocks after that peripheral block, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
For example, traversing in the traversal order from the left peripheral blocks to the top peripheral blocks of the current block may include: if the current block has no left peripheral block, traversing the top peripheral blocks of the current block; and if the current block has no top peripheral block, traversing the left peripheral blocks of the current block. The left peripheral blocks may include blocks adjacent to the left side of the current block as well as non-adjacent blocks; likewise, the top peripheral blocks may include blocks adjacent to the top side of the current block as well as non-adjacent blocks. There may be one or more first peripheral blocks, namely all peripheral blocks that precede the first traversed peripheral block with available motion information. A first peripheral block may be an uncoded block, an intra block, or a peripheral block whose prediction mode is the intra block copy mode; the same holds for a second peripheral block.
In another possible implementation, for each motion information angle prediction mode in the motion information prediction mode candidate list, if the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode include peripheral blocks without available motion information, motion information is filled for those peripheral blocks of the current block.
For example, for a peripheral block for which there is no available motion information, the available motion information of a neighboring block of the peripheral block is filled as the motion information of the peripheral block; or, filling the available motion information of the reference block at the corresponding position of the peripheral block in the time domain reference frame as the motion information of the peripheral block; or, filling default motion information into the motion information of the peripheral block.
In a possible embodiment, the current block may be divided into at least one sub-region; and determining the motion information of each sub-region of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the motion information angle prediction mode. And aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
In another possible embodiment, the current block may be divided into at least one sub-region; and aiming at each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode. For each sub-region of the current block, determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region; and determining the predicted value of the current block according to the target predicted value of each sub-area.
Exemplarily, after determining the motion compensation value of the sub-region according to the motion information of the sub-region, if the sub-region does not satisfy the condition of using the bidirectional optical flow, determining the target prediction value of the sub-region according to the motion compensation value of the sub-region; if the motion information of the sub-area is unidirectional motion information, the sub-area does not meet the condition of using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between two reference frames in the time sequence, the sub-region does not satisfy the condition of using the bidirectional optical flow.
For example, if the motion information of the sub-region is bidirectional motion information and the current frame containing the sub-region lies between the two reference frames in temporal order, the sub-region satisfies the condition for using bidirectional optical flow.
Illustratively, obtaining a bi-directional optical flow offset value for a sub-region comprises: determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value is a forward motion compensation value and a forward extension value of the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; determining a forward reference frame and a backward reference frame according to the motion information of the sub-regions; a bi-directional optical flow offset value for the sub-region is determined from the first pixel value and the second pixel value.
In a possible implementation, before constructing the motion information prediction mode candidate list of the current block, first indication information may be obtained, where the first indication information is located in the sequence parameter set level; when the value of the first indication information is the first value, the first indication information is used for indicating the starting of the motion information angle prediction technology; and when the value of the first indication information is the second value, the first indication information is used for indicating to close the motion information angle prediction technology.
In a possible implementation, if the current block uses the motion information angle prediction mode, the current block closes the motion vector adjustment technique at the decoding end; the decoding-end motion vector adjusting technology adjusts a motion vector according to the matching criterion of the forward reference pixel value and the backward reference pixel value.
According to the above technical solution, the current block does not need to be divided into sub-blocks, so the bit overhead caused by sub-block division is effectively avoided. For example, motion information is provided for each sub-region of the current block without dividing the current block into sub-blocks, and different sub-regions of the current block may correspond to the same or different motion information. This improves the coding performance, avoids transmitting a large amount of motion information, and saves a large number of bits.
Example 3: based on the same application concept as the above method, referring to fig. 7, a schematic flow chart of an encoding method proposed in the embodiment of the present application is shown, where the method may be applied to an encoding end, and the method may include:
In step 701, an encoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e., not motion information angle prediction modes), which is not limited herein.
For example, a motion information prediction mode candidate list may be constructed for the current block, that is, all sub-regions in the current block may correspond to the same motion information prediction mode candidate list; alternatively, multiple motion information prediction mode candidate lists may be constructed for the current block, i.e., all sub-regions within the current block may correspond to the same or different motion information prediction mode candidate lists. For convenience of description, an example of constructing a motion information prediction mode candidate list for a current block will be described.
The motion information angle prediction mode may be an angle prediction mode for predicting motion information, i.e., used for inter-frame coding, rather than intra-frame coding, and the motion information angle prediction mode selects a matching block rather than a matching pixel point.
The motion information prediction mode candidate list may be constructed in a conventional manner, or in the motion information prediction mode candidate list construction manner in embodiment 1, which is not limited to this construction manner.
In step 702, if a motion information angle prediction mode exists in the motion information prediction mode candidate list, the encoding end fills motion information for the peripheral blocks of the current block that lack available motion information. For example, the peripheral blocks of the current block are traversed in the traversal order from the left peripheral blocks to the top peripheral blocks, until the first peripheral block with available motion information is reached; if any first peripheral blocks without available motion information precede that peripheral block, the motion information of that peripheral block is filled into each of them; traversal then continues over the peripheral blocks after that peripheral block, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before it is filled into the second peripheral block.
In step 703, the encoding end sequentially traverses each motion information angle prediction mode in the motion information prediction mode candidate list. The current block is divided into at least one sub-region according to the currently traversed motion information angle prediction mode, and for each sub-region, the motion information of the sub-region is determined according to the motion information of the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode. For example, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the preconfigured angle, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since step 702 fills the peripheral blocks of the current block that lack available motion information, every peripheral matching block pointed to by the preconfigured angle of the motion information angle prediction mode in step 703 has available motion information, and the motion information of each sub-region can be determined from it.
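The mapping from a sub-region to the peripheral matching block its angle points at might look as follows. The direction names match the modes discussed in the text, but the exact offsets and indexing are illustrative assumptions rather than the patent's geometry:

```python
def subregion_motion(sub_x, sub_y, direction, left_blocks, top_blocks):
    """Return the motion information of the peripheral matching block that
    the preconfigured angle points at for the sub-region at grid position
    (sub_x, sub_y) inside the current block.
      'horizontal'      -> left peripheral block in the same row
      'horizontal_down' -> left peripheral block shifted down by the column
      'vertical'        -> top peripheral block in the same column
    (Offsets are illustrative; real geometry depends on the angle.)"""
    if direction == 'horizontal':
        return left_blocks[sub_y]
    if direction == 'horizontal_down':
        return left_blocks[sub_y + sub_x + 1]
    if direction == 'vertical':
        return top_blocks[sub_x]
    raise ValueError(f'unknown direction: {direction}')
```

Note how 'horizontal' gives every sub-region in a row the same motion information (the single-motion case pruned earlier), while 'horizontal_down' can give each sub-region different motion information.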
In step 706, the encoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode is a motion information angle prediction mode or other types of motion information prediction modes.
For example, after steps 703-705 are performed for each motion information angle prediction mode (e.g., horizontal downward motion information angle prediction mode, etc.) in the motion information prediction mode candidate list, the target prediction value of the current block can be obtained. Based on the target predicted value of the current block, the encoding end determines the rate distortion cost value of the motion information angle prediction mode by adopting a rate distortion principle, and the determination mode is not limited. For other types of motion information prediction modes R (obtained in a traditional manner) in the motion information prediction mode candidate list, determining the motion information of the current block according to the motion information prediction mode R, determining a target prediction value of the current block according to the motion information of the current block, and then determining the rate-distortion cost value of the motion information prediction mode R, without limitation.
Then, the motion information prediction mode corresponding to the minimum rate distortion cost is determined as a target motion information prediction mode, which may be a motion information angle prediction mode or another type of motion information prediction mode R.
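The selection described above reduces to picking the candidate with the minimum rate-distortion cost. A sketch, with the cost function left abstract since the patent does not constrain how the rate-distortion cost is computed:

```python
def select_target_mode(candidate_list, rd_cost):
    """Return the target motion information prediction mode: the entry of
    the candidate list with the minimum rate-distortion cost. rd_cost is
    a callable mapping a mode to its cost value."""
    return min(candidate_list, key=rd_cost)
```

For example, with costs {'HD': 12.5, 'VR': 9.0, 'R': 10.0}, the vertical-rightward mode 'VR' would be chosen as the target mode.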
Example 4: based on the same application concept as the above method, referring to fig. 8, a flowchart of a decoding method provided in an embodiment of the present application is shown, where the method may be applied to a decoding end, and the method may include:
In step 801, a decoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e., not motion information angle prediction modes), which is not limited herein.
In step 802, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode is a target motion information angle prediction mode or other types of motion information prediction modes.
The process of selecting the target motion information prediction mode at the decoding end may include: upon receiving the coded bitstream, obtaining indication information from the coded bitstream, the indication information indicating index information of the target motion information prediction mode in the motion information prediction mode candidate list. For example, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream carries indication information, where the indication information is used to indicate index information of the target motion information prediction mode in the motion information prediction mode candidate list. It is assumed that the motion information prediction mode candidate list sequentially includes: the motion information prediction mode comprises a horizontal downward motion information angle prediction mode, a vertical rightward motion information angle prediction mode and a motion information prediction mode R, wherein the indication information is used for indicating index information 1, and the index information 1 represents the first motion information prediction mode in the motion information prediction mode candidate list. Based on this, the decoding side acquires index information 1 from the coded bit stream.
The decoding end selects the motion information prediction mode corresponding to the index information from the motion information prediction mode candidate list, and determines the selected motion information prediction mode as the target motion information prediction mode of the current block. For example, when the indication information is used to indicate index information 1, the decoding end may determine the 1 st motion information prediction mode in the motion information prediction mode candidate list as the target motion information prediction mode of the current block.
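As an illustration, the index-based selection described above can be sketched as follows; the list contents and function name are hypothetical, not part of the patent:

```python
# Hypothetical sketch: the decoding end selects the target motion information
# prediction mode by the index carried in the indication information.
def select_target_mode(candidate_list, index_info):
    """Return the mode at the given 1-based index of the candidate list."""
    return candidate_list[index_info - 1]

# Candidate list as in the example above; index information 1 selects the
# first entry, the horizontal downward motion information angle prediction mode.
candidates = [
    "horizontal_down_angle_mode",
    "vertical_right_angle_mode",
    "motion_info_prediction_mode_R",
]
target = select_target_mode(candidates, 1)  # -> "horizontal_down_angle_mode"
```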
In a possible implementation manner, if the target motion information prediction mode is not a motion information angle prediction mode, the unavailable motion information in the peripheral blocks of the current block does not need to be filled, so that the decoding end can reduce filling operations on the motion information.
In step 804, if available motion information exists in all the peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode, the motion information of each sub-region can be determined according to the available motion information.
Example 5: in the above-described embodiments, the process of constructing the motion information prediction mode candidate list, that is, determining for any one motion information angle prediction mode whether to add that mode to the candidate list or to prohibit it from being added, includes:
step b1, obtaining at least one motion information angle prediction mode of the current block.
For example, the following motion information angle prediction modes may be obtained: the horizontal motion information angle prediction mode, the vertical motion information angle prediction mode, the horizontal upward motion information angle prediction mode, the horizontal downward motion information angle prediction mode, and the vertical rightward motion information angle prediction mode. Of course, the above is only an example; the preconfigured angle may be any angle between 0 and 360 degrees. The horizontal rightward direction from the center point of the sub-region may be defined as 0 degrees, so that any angle rotated counterclockwise from 0 degrees may serve as a preconfigured angle, or another direction from the center point of the sub-region may be defined as 0 degrees. In practical applications, the preconfigured angle may also be a fractional angle, such as 22.5 degrees.
Step b2, for any motion information angle prediction mode of the current block, based on the preconfigured angle of that mode, select a plurality of peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block.
Step b3, based on characteristics such as whether motion information is available in the plurality of peripheral matching blocks and whether the available motion information in the plurality of peripheral matching blocks is the same, add the motion information angle prediction mode to the motion information prediction mode candidate list, or prohibit it from being added to the candidate list.
The determination process of step b3 will be described below with reference to several specific cases.
Case one: a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, and if at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if the first peripheral matching block and the second peripheral matching block include an intra block, an unencoded block, and/or a peripheral block whose prediction mode is the intra block copy mode, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, if the first peripheral matching block and the second peripheral matching block include an intra block, an unencoded block, and/or a peripheral block whose prediction mode is the intra block copy mode, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Similarly, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image of the current block or outside the image slice of the current block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image of the current block or outside the image slice of the current block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
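The availability judgement of case one can be sketched as below; the dict-based block model and the choice of the "prohibit when unavailable" variant are illustrative assumptions:

```python
# Hypothetical sketch of case one. A peripheral block is modelled as a dict
# with a "mode" field; None models a block outside the image or image slice.
def has_available_motion_info(block):
    # Intra blocks, unencoded blocks, intra-block-copy blocks, and blocks
    # outside the image/slice all count as having no available motion info.
    return block is not None and block.get("mode") == "inter"

def case_one_allows_mode(first_block, second_block):
    """'Prohibit' variant: the angular mode enters the candidate list only
    when both traversed peripheral matching blocks have available motion info."""
    return has_available_motion_info(first_block) and \
           has_available_motion_info(second_block)
```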
Case two: select a first peripheral matching block and a second peripheral matching block to be traversed from the plurality of peripheral matching blocks. If available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the two blocks is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Likewise, select a first peripheral matching block and a second peripheral matching block to be traversed from the plurality of peripheral matching blocks. If available motion information exists in both blocks and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Case three: select a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the two blocks is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Likewise, select a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, continue to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Similarly, if both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, continue to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
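The chained comparison of case three might be sketched as follows, with motion information simplified to (mv_x, mv_y, ref_idx) tuples and None standing for "no available motion information"; the "prohibit" variant is assumed:

```python
# Hypothetical sketch of case three: compare the first/second pair, and only
# when both are available and identical fall through to the second/third pair.
def case_three_allows_mode(mi_first, mi_second, mi_third):
    if mi_first is not None and mi_second is not None:
        if mi_first != mi_second:
            return True  # differing first pair: add the angular mode
        # First pair identical: continue with the second/third pair.
        if mi_third is not None:
            return mi_second != mi_third
    return False         # treated as "prohibit" in this sketch
```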
Case four: select a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, select a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Or, select a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, continue to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information; if both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Or, if at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, continue to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information; if both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Or, if at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, continue to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information; if at least one of the second peripheral matching block and the third peripheral matching block has no available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block, or, alternatively, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
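Under the same simplifications as before (motion information as tuples, None for "unavailable"), the availability fall-through of case four might look like this; it is a sketch of one variant, not a normative implementation:

```python
# Hypothetical sketch of case four: when at least one of the first pair has no
# available motion information, fall through to the second/third pair and add
# the mode only if that pair is available and its motion information differs.
def case_four_allows_mode(mi_first, mi_second, mi_third):
    if mi_first is None or mi_second is None:
        # Fall through: judge the availability of the second/third pair.
        if mi_second is not None and mi_third is not None:
            return mi_second != mi_third
        return False  # at least one of the second pair unavailable: prohibit
    # Both of the first pair available: decided by comparison, as in case three.
    if mi_first != mi_second:
        return True
    return mi_third is not None and mi_second != mi_third
```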
Case five: if all of the plurality of peripheral matching blocks have available motion information and the motion information of the plurality of peripheral matching blocks is not completely the same, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Conversely, if all of the peripheral matching blocks have available motion information and the motion information of the peripheral matching blocks is completely the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
Case six: if at least one of the plurality of peripheral matching blocks has no available motion information, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
Alternatively, if at least one of the plurality of peripheral matching blocks has no available motion information, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
Or, if at least one of the plurality of peripheral matching blocks has no available motion information and the motion information of the plurality of peripheral matching blocks is not completely the same, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Or, if at least one of the plurality of peripheral matching blocks has no available motion information and the motion information of the plurality of peripheral matching blocks is completely the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
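A hedged sketch of the "prohibit" variants of cases five and six, again modelling unavailable motion information as None:

```python
# Hypothetical sketch of cases five and six ("prohibit" variants): the angular
# mode enters the candidate list only when every peripheral matching block has
# available motion information and the blocks' motion information is not all
# identical.
def cases_five_six_allow_mode(motion_infos):
    if any(mi is None for mi in motion_infos):
        return False                   # case six: some block is unavailable
    return len(set(motion_infos)) > 1  # case five: prohibit if all identical
```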
For cases five and six, the manner of determining whether the motion information of the plurality of peripheral matching blocks is completely the same or not may include, but is not limited to: selecting at least one first peripheral matching block (e.g., all or part of the peripheral matching blocks) from the plurality of peripheral matching blocks; and, for each first peripheral matching block, selecting a second peripheral matching block corresponding to that first peripheral matching block from the plurality of peripheral matching blocks. If the motion information of the first peripheral matching block differs from that of the second peripheral matching block, it is determined that the motion information of the two blocks is different; if it is the same, it is determined that the motion information of the two blocks is the same. Based on this, if the motion information of any pair of peripheral matching blocks to be compared is different, it is determined that the motion information of the plurality of peripheral matching blocks is not completely the same; if the motion information of all pairs of peripheral matching blocks to be compared is the same, it is determined that the motion information of the plurality of peripheral matching blocks is completely the same.
For cases five and six, the determination that there is no available motion information in at least one of the plurality of peripheral matching blocks may include, but is not limited to: selecting at least one first peripheral matching block from the plurality of peripheral matching blocks; for each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. If at least one of any pair of peripheral matching blocks to be compared (i.e. the first peripheral matching block and the second peripheral matching block) does not have available motion information, determining that at least one of the plurality of peripheral matching blocks does not have available motion information. And if all the peripheral matching blocks to be compared have available motion information, determining that the plurality of peripheral matching blocks have available motion information.
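The pairwise judgements of cases five and six can be sketched as below; the representation of the pairs as a list of tuples is an assumption:

```python
# Hypothetical sketch: each first peripheral matching block is paired with its
# corresponding second block, and the overall decisions follow from the pairs.
def pairs_not_all_same(pairs):
    """pairs: list of (mi_first, mi_second); any differing pair means the
    motion information of the peripheral matching blocks is not all the same."""
    return any(a != b for a, b in pairs)

def some_pair_unavailable(pairs):
    """None models 'no available motion information' in either pair member."""
    return any(a is None or b is None for a, b in pairs)
```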
In each of the above cases, selecting a first peripheral matching block from the plurality of peripheral matching blocks may include: taking any one of the plurality of peripheral matching blocks as the first peripheral matching block; alternatively, taking a specified one of the plurality of peripheral matching blocks as the first peripheral matching block. Selecting a second peripheral matching block from the plurality of peripheral matching blocks may include: selecting, according to the traversal step and the position of the first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks; the traversal step may be the block spacing between the first peripheral matching block and the second peripheral matching block.
For cases three and four, selecting a third peripheral matching block from the plurality of peripheral matching blocks may include: selecting, according to the traversal step and the position of the second peripheral matching block, a third peripheral matching block corresponding to the second peripheral matching block from the plurality of peripheral matching blocks; the traversal step may be the block spacing between the second peripheral matching block and the third peripheral matching block.
For example, for peripheral matching blocks A1, A2, A3, A4, and A5 arranged in this order, examples of the peripheral matching blocks selected in the different cases are as follows:
For cases one and two, assuming that peripheral matching block A1 is the first peripheral matching block and the traversal step is 2, the second peripheral matching block corresponding to A1 is peripheral matching block A3. For cases three and four, under the same assumption, the second peripheral matching block corresponding to A1 is peripheral matching block A3, and the third peripheral matching block corresponding to A3 is peripheral matching block A5.
For cases five and six, assuming that peripheral matching blocks A1 and A3 are both used as first peripheral matching blocks and the traversal step is 2: when A1 is the first peripheral matching block, the second peripheral matching block is A3; when A3 is the first peripheral matching block, the second peripheral matching block is A5.
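The A1 through A5 example above can be reproduced with a small sketch; the list representation of the matching blocks and the function name are assumptions:

```python
# Hypothetical sketch: pick the first/second/third peripheral matching blocks
# from the row of blocks pointed to by the preconfigured angle, where the
# traversal step is the block spacing between consecutively traversed blocks.
def select_traversed_blocks(matching_blocks, first_index, step, count=3):
    """Return up to `count` blocks starting at first_index, spaced by `step`."""
    picked = []
    idx = first_index
    while len(picked) < count and idx < len(matching_blocks):
        picked.append(matching_blocks[idx])
        idx += step
    return picked

# Blocks A1..A5 with A1 as the first block and traversal step 2:
row = ["A1", "A2", "A3", "A4", "A5"]
traversed = select_traversed_blocks(row, 0, 2)  # -> ["A1", "A3", "A5"]
```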
For example, before selecting peripheral matching blocks from the plurality of peripheral matching blocks, the traversal step may be determined based on the size of the current block, and the number of motion information comparisons is controlled through the traversal step. For example, assuming that the size of a peripheral matching block is 4 x 4 and the size of the current block is 16 x 16, the current block corresponds to 4 peripheral matching blocks for the horizontal motion information angle prediction mode. To limit the number of motion information comparisons to 1, the traversal step may be 2 or 3. If the traversal step is 2, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 3rd peripheral matching block; or the first peripheral matching block is the 2nd peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. If the traversal step is 3, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. For another example, to limit the number of motion information comparisons to 2, the traversal step may be 1: the first peripheral matching blocks are the 1st and 3rd peripheral matching blocks, the second peripheral matching block corresponding to the 1st peripheral matching block is the 2nd peripheral matching block, and the second peripheral matching block corresponding to the 3rd peripheral matching block is the 4th peripheral matching block. Of course, the above is only an example for the horizontal motion information angle prediction mode, and the traversal step may also be determined in other ways, which is not limited herein.
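A minimal sketch of deriving the traversal step from the block size, following the 16 x 16 example above; the function names and the specific step formula are hypothetical:

```python
# Hypothetical sketch: with 4x4 peripheral matching blocks, a block edge of
# W pixels corresponds to W/4 matching blocks for the horizontal angular mode.
# Choosing a step of n_blocks // 2 leaves room for exactly one first/second
# pair inside the row (e.g. the 1st and 3rd block for a 16x16 current block).
def num_matching_blocks(block_edge, matching_block_edge=4):
    return block_edge // matching_block_edge

def step_for_one_comparison(n_blocks):
    return n_blocks // 2

n = num_matching_blocks(16)        # 4 matching blocks for a 16x16 block
step = step_for_one_comparison(n)  # step 2: compare the 1st and 3rd block
```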
Moreover, for motion information angle prediction modes other than the horizontal motion information angle prediction mode, the traversal step may be determined in a manner similar to that of the horizontal motion information angle prediction mode, which is not repeated herein.
In each of the above cases, the process of determining whether any one peripheral matching block has available motion information may include, but is not limited to: if the peripheral matching block is an inter-coded block, it is determined that the peripheral matching block has available motion information; otherwise, it is determined that the peripheral matching block has no available motion information. Specifically, if the prediction mode of the peripheral matching block is the intra block copy mode, if the peripheral matching block is located outside the image or image slice of the current block, if the peripheral matching block is an unencoded block, or if the peripheral matching block is an intra block, it is determined that the peripheral matching block has no available motion information.
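The availability rules listed above might be collected into one predicate as follows; the dict-based block model is an assumption for illustration:

```python
# Hypothetical sketch of the availability check for a peripheral matching
# block. None models a block outside the image or image slice of the current
# block; "coded" and "mode" are assumed fields of the block model.
def motion_info_available(block):
    if block is None:
        return False                     # outside the image or image slice
    if not block.get("coded", False):
        return False                     # unencoded block
    if block.get("mode") in ("intra", "ibc"):
        return False                     # intra block / intra block copy mode
    return block.get("mode") == "inter"  # inter-coded block: available
```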
Example 6: in the above embodiments, the process of constructing the motion information prediction mode candidate list is described below with reference to several specific application scenarios.
Application scenario 1: a peripheral block of the current block "exists" if it is located within the image of the current block, and "does not exist" if it is located outside that image. If a peripheral block does not exist, or has not yet been decoded (i.e., it is an unencoded block), or is an intra block, or its prediction mode is the intra block copy mode, the peripheral block has no available motion information. If a peripheral block exists, is not an unencoded block, is not an intra block, and its prediction mode is not the intra block copy mode, the peripheral block has available motion information.
Referring to FIG. 9A, the width of the current block is W and the height is H. Let m = W/4 and n = H/4, and let the pixel at the upper-left corner of the current block be (x, y). The peripheral block containing pixel (x-1, y+H+W-1) is denoted A(0), and the size of A(0) is 4 x 4. The peripheral blocks are traversed in the clockwise direction, and each 4 x 4 peripheral block is denoted A(1), A(2), ..., A(2m+2n), where A(2m+2n) is the peripheral block containing pixel (x+W+H-1, y-1). For each motion information angle prediction mode, based on the preconfigured angle of that mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks, and the peripheral matching blocks to be traversed are selected from them (for example, a first and a second peripheral matching block, or a first, a second, and a third peripheral matching block traversed sequentially). If available motion information exists in both the first peripheral matching block and the second peripheral matching block and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
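Assuming the left column is traversed bottom-to-top and the top row left-to-right, which matches the two anchor pixels given for A(0) and A(2m+2n), the block layout of FIG. 9A can be sketched as:

```python
# Hypothetical sketch of the FIG. 9A layout: 4x4 peripheral blocks
# A(0) .. A(2m+2n) traversed clockwise, where m = W/4, n = H/4 and (x, y) is
# the top-left pixel of the current block. A(0) contains pixel
# (x-1, y+H+W-1) and A(2m+2n) contains pixel (x+W+H-1, y-1).
def peripheral_block_anchor(i, x, y, w, h):
    """Return the anchor pixel of peripheral block A(i) (assumed traversal)."""
    m, n = w // 4, h // 4
    if i <= m + n:
        # Left column, traversed bottom-to-top; i == m+n is the corner (x-1, y-1).
        return (x - 1, y + h + w - 1 - 4 * i)
    # Top row, traversed left-to-right.
    return (x - 1 + 4 * (i - m - n), y - 1)

# 16x16 block whose top-left pixel is (64, 64):
a0 = peripheral_block_anchor(0, 64, 64, 16, 16)      # -> (63, 95)
corner = peripheral_block_anchor(8, 64, 64, 16, 16)  # -> (63, 63)
```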
If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
If the available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the second peripheral matching block and the third peripheral matching block are continuously compared.
If the available motion information exists in the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the motion information of the third peripheral matching block are different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or if both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
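The comparison rule used throughout this scenario might be sketched as a single helper, with None again modelling "no available motion information":

```python
# Hypothetical sketch: the comparison result of two peripheral matching blocks
# is "different" only when both have available motion information and that
# information differs; in every other situation the result is "same".
def compare_result(mi_a, mi_b):
    if mi_a is not None and mi_b is not None and mi_a != mi_b:
        return "different"
    return "same"
```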
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal motion information angle prediction mode, A(m-1+H/8) is taken as the first peripheral matching block and A(m+n-1) as the second peripheral matching block. Of course, A(m-1+H/8) and A(m+n-1) are just one example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal motion information angle prediction mode may also be used as the first or second peripheral matching block, and the implementation manner is similar and is not described in detail later. Using the above comparison method, it is judged whether the comparison result of A(m-1+H/8) and A(m+n-1) is the same. If the comparison result is the same, the horizontal motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. If the comparison result is different, the horizontal motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the horizontal downward motion information angle prediction mode, A(W/8-1) is taken as the first peripheral matching block, A(m-1) as the second peripheral matching block, and A(m-1+H/8) as the third peripheral matching block. Of course, the above is only an example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal downward motion information angle prediction mode may be used as the first, second, or third peripheral matching block, and the implementation manner is similar and is not described in detail later. Using the above comparison method, it is judged whether the comparison result of A(W/8-1) and A(m-1) is the same. If the comparison result is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result is the same, the above comparison method is used to judge whether the comparison result of A(m-1) and A(m-1+H/8) is the same. If the comparison result of A(m-1) and A(m-1+H/8) is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A(m-1) and A(m-1+H/8) is the same, the horizontal downward motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block and A_{m+n+1} as the second peripheral matching block. Of course, the above is only an example, and the first peripheral matching block and the second peripheral matching block are not limited to these. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{m+n+1} is the same. If it is the same, adding the vertical motion information angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited. If it is different, the vertical motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the vertical rightward motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block, A_{2m+n+1} as the second peripheral matching block, and A_{2m+n+1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited to these. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{2m+n+1} is the same. If it is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If it is the same, the comparison method is further used to judge whether the motion information of A_{2m+n+1} and A_{2m+n+1+H/8} is the same. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward motion information angle prediction mode, A_{W/8-1} is taken as the first peripheral matching block, A_{m-1} as the second peripheral matching block, and A_{m-1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited to these. The above comparison method is used to judge whether the motion information of A_{W/8-1} and A_{m-1} is the same. If it is different, the horizontal downward motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block. If it is the same, the comparison method is used to judge whether the motion information of A_{m-1} and A_{m-1+H/8} is the same. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A_{m-1} and A_{m-1+H/8} is the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited.
For the horizontal motion information angle prediction mode, A_{m-1+H/8} is taken as the first peripheral matching block and A_{m+n-1} as the second peripheral matching block. Of course, the above is only an example, and the first peripheral matching block and the second peripheral matching block are not limited to these. The above comparison method is used to judge whether the motion information of A_{m-1+H/8} and A_{m+n-1} is the same. If it is the same, the horizontal motion information angle prediction mode is not added to the motion information prediction mode candidate list. If it is different, the horizontal motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal upward motion information angle prediction mode, A_{m+n-1} is taken as the first peripheral matching block, A_{m+n} as the second peripheral matching block, and A_{m+n+1} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited to these. The above comparison method is used to judge whether the motion information of A_{m+n-1} and A_{m+n} is the same. If it is different, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If it is the same, the comparison method is used to judge whether the motion information of A_{m+n} and A_{m+n+1} is the same. If the comparison result of A_{m+n} and A_{m+n+1} is different, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{m+n} and A_{m+n+1} is the same, adding the horizontal upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
For the vertical motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block and A_{m+n+1} as the second peripheral matching block. Of course, the above is only an example, and the first peripheral matching block and the second peripheral matching block are not limited to these. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{m+n+1} is the same. If it is the same, adding the vertical motion information angle prediction mode to the motion information prediction mode candidate list of the current block may be prohibited. If it is different, the vertical motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
For the vertical rightward motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block, A_{2m+n+1} as the second peripheral matching block, and A_{2m+n+1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited to these. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{2m+n+1} is the same. If it is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If it is the same, the comparison method is used to judge whether the motion information of A_{2m+n+1} and A_{2m+n+1+H/8} is the same. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
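The decision flow above can be sketched as a short routine. This is a minimal illustration, not the patent's normative text: the peripheral matching blocks pointed to by a mode's preconfigured angle are compared pairwise (first vs. second, then second vs. third), the mode enters the candidate list as soon as one pair compares as different, and it is pruned if every pair compares as the same. The name `motion_info_equal` is a hypothetical stand-in for the comparison method defined earlier.

```python
def maybe_add_mode(mode, matching_blocks, candidate_list, motion_info_equal):
    """Return True if `mode` was added to `candidate_list`.

    `matching_blocks` holds the first, second (and optionally third)
    peripheral matching blocks for this mode's preconfigured angle.
    """
    for first, second in zip(matching_blocks, matching_blocks[1:]):
        if not motion_info_equal(first, second):
            candidate_list.append(mode)   # some pair differs: keep the mode
            return True
    return False                          # all pairs the same: prune the mode
```

For the horizontal downward mode, for instance, `matching_blocks` would be the motion information of A_{W/8-1}, A_{m-1}, and A_{m-1+H/8}, in that order.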
Application scenario 2: similar to the implementation of application scenario 1, except that in application scenario 2 it is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. That is, the above-described processing is performed regardless of whether the left neighboring block and the upper neighboring block of the current block exist.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral blocks of the current block do not exist means that the peripheral blocks are located outside the image where the current block is located, or the peripheral blocks are located inside the image where the current block is located, but the peripheral blocks are located outside the image slice where the current block is located. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information.
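The availability rule just stated can be expressed as a small predicate. This is a hedged sketch over a hypothetical block record; the attribute names (`exists`, `decoded`, `is_intra`, `pred_mode`) are illustrative, not taken from any standard.

```python
from dataclasses import dataclass

@dataclass
class PeripheralBlock:
    exists: bool = True          # inside the picture (and the slice)
    decoded: bool = True         # False means the block is still uncoded
    is_intra: bool = False
    pred_mode: str = "inter"

def has_available_motion_info(b: PeripheralBlock) -> bool:
    """Available iff the block exists, is already coded, is not an
    intra block, and is not predicted in intra block copy mode."""
    return bool(b.exists and b.decoded and not b.is_intra
                and b.pred_mode != "intra_block_copy")
```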
Referring to FIG. 9A, the width of the current block is W and the height of the current block is H. Let m = W/4 and n = H/4. The pixel at the upper left corner of the current block is (x, y), and the peripheral block containing the pixel (x-1, y+H+W-1) is A_0; the size of A_0 is 4×4. Traversing the peripheral blocks in the clockwise direction, each 4×4 peripheral block is denoted A_1, A_2, ..., A_{2m+2n} in turn, where A_{2m+2n} is the peripheral block containing the pixel (x+W+H-1, y-1).
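The indexing of Fig. 9A can be sketched as follows. This is a reconstruction under stated assumptions: A_0 sits at the bottom of the left column, A_{m+n} is assumed to be the top-left corner block at (x-1, y-1), and the top row runs from there to A_{2m+2n}; the returned pixel is one representative pixel inside each 4×4 block.

```python
def peripheral_block_count(W, H):
    """Number of 4x4 peripheral blocks A_0 .. A_{2m+2n}."""
    m, n = W // 4, H // 4
    return 2 * m + 2 * n + 1

def peripheral_block_pixel(index, x, y, W, H):
    """A representative pixel contained in peripheral block A_index
    (geometry assumed as described in the lead-in)."""
    m, n = W // 4, H // 4
    if index < m + n:                    # left column, A_0 at the bottom
        return (x - 1, y + H + W - 1 - 4 * index)
    if index == m + n:                   # assumed top-left corner block
        return (x - 1, y - 1)
    j = index - (m + n)                  # top row, counted from the corner
    return (x + 4 * (j - 1), y - 1)
```

With W = 16 and H = 8 (m = 4, n = 2) there are 13 peripheral blocks, and A_{2m+2n} covers the pixel (x+W+H-1, y-1) named in the text.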
For each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode are selected from the peripheral blocks, and the peripheral matching blocks to be traversed are selected from the plurality of peripheral matching blocks. Unlike application scenario 1, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both have available motion information but the motion information of the two blocks is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the two blocks is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list; alternatively, the comparison continues with the second peripheral matching block and the third peripheral matching block.
If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. Or, if the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
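The scenario-5 comparison rule can be condensed into one predicate. This is a sketch under the assumption that a block is modelled as an `(available, motion_info)` tuple: a pair is the same only when both blocks have available motion information and that information is equal; any unavailable block makes the pair different, which admits the mode into the candidate list.

```python
def pair_is_same_scenario5(a, b):
    """a and b are (available, motion_info) tuples; returns True when
    the pair compares as 'the same' under the scenario-5 rule."""
    a_avail, a_mi = a
    b_avail, b_mi = b
    return bool(a_avail and b_avail and a_mi == b_mi)
```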
Based on the above comparison method, the corresponding processing flow refers to application scenario 1. For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal motion information angle prediction mode, the comparison method is used to judge whether the motion information of A_{m-1+H/8} and A_{m+n-1} is the same. If it is the same, adding the horizontal motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If it is different, the horizontal motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal downward motion information angle prediction mode, the comparison method is used to judge whether the motion information of A_{W/8-1} and A_{m-1} is the same. If it is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list. If it is the same, the comparison method is used to judge whether the motion information of A_{m-1} and A_{m-1+H/8} is the same. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward motion information angle prediction mode may be added to the motion information prediction mode candidate list. If the comparison result of A_{m-1} and A_{m-1+H/8} is the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list may be prohibited.
Application scenario 6: similar to the implementation of application scenario 5, except that it is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. That is, regardless of whether the left neighboring block and the upper neighboring block of the current block exist, the processing is performed in the manner of application scenario 5.
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. In the application scenario 8, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 9: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information. For each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and selecting at least one first peripheral matching block (such as one or more) from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks.
Each combination of a first peripheral matching block and a second peripheral matching block is taken as a matching block group. For example, A1, A3, and A5 are selected from the plurality of peripheral matching blocks as first peripheral matching blocks; A2 is selected as the second peripheral matching block corresponding to A1, A4 as the second peripheral matching block corresponding to A3, and A6 as the second peripheral matching block corresponding to A5. Then matching block group 1 includes A1 and A2, matching block group 2 includes A3 and A4, and matching block group 3 includes A5 and A6. A1, A2, A3, A4, A5, and A6 may be any peripheral matching blocks among the plurality of peripheral matching blocks, and their selection may be configured empirically, without limitation.
For each matching block group, if available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If at least one of the two peripheral matching blocks in the matching block group does not have available motion information, or both of the two peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
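The scenario-9 rule over matching block groups can be sketched with the same `(available, motion_info)` tuples used above; the tuple modelling is an assumption for illustration. Note that the polarity is the opposite of scenario 5: here a group containing an unavailable block counts as the same, so missing blocks push toward pruning the mode.

```python
def group_is_same_scenario9(a, b):
    """a and b are (available, motion_info) tuples for one group."""
    (a_avail, a_mi), (b_avail, b_mi) = a, b
    if not (a_avail and b_avail):
        return True                      # an unavailable block => "same"
    return a_mi == b_mi

def mode_allowed_scenario9(groups):
    """The mode is added iff at least one group compares as different;
    if all groups compare as the same, the mode is prohibited."""
    return any(not group_is_same_scenario9(a, b) for a, b in groups)
```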
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal motion information angle prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m, m+n-1] and are not equal; i and j may be selected arbitrarily within this range). If the comparison results of all matching block groups are the same, adding the horizontal motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal downward motion information angle prediction mode, the comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [0, m+n-2] and are not equal). If the comparison results of all matching block groups are the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list. For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical motion information angle prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m+n+1, 2m+n] and are not equal). If the comparison results of all matching block groups are the same, adding the vertical motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
Otherwise, the vertical motion information angle prediction mode is added to the motion information prediction mode candidate list. For the vertical rightward motion information angle prediction mode, the comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m+n+2, 2m+2n] and are not equal). If the comparison results of all matching block groups are the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward motion information angle prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [0, m+n-2] and are not equal). If the comparison results of all matching block groups are the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list. For the horizontal motion information angle prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m, m+n-1] and are not equal). If the comparison results of all matching block groups are the same, adding the horizontal motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal upward motion information angle prediction mode, the comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m+1, 2m+n-1] and are not equal). If the comparison results of all matching block groups are the same, adding the horizontal upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical motion information angle prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m+n+1, 2m+n] and are not equal). If the comparison results of all matching block groups are the same, adding the vertical motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical motion information angle prediction mode is added to the motion information prediction mode candidate list. For the vertical rightward motion information angle prediction mode, the comparison method is used to judge at least one matching block group A_i and A_j (i and j take values in the range [m+n+2, 2m+2n] and are not equal). If the comparison results of all matching block groups are the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
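The i/j index ranges quoted in the preceding paragraphs (the case where both the left and upper neighboring blocks exist) can be gathered into one lookup. The helper is hypothetical; the ranges themselves are copied from the text, with m = W/4 and n = H/4 as in Fig. 9A.

```python
def group_index_range(mode, m, n):
    """Inclusive [low, high] range from which i and j (i != j) are
    drawn when forming matching block groups for `mode`."""
    ranges = {
        "horizontal_down": (0, m + n - 2),
        "horizontal":      (m, m + n - 1),
        "horizontal_up":   (m + 1, 2 * m + n - 1),
        "vertical":        (m + n + 1, 2 * m + n),
        "vertical_right":  (m + n + 2, 2 * m + 2 * n),
    }
    return ranges[mode]
```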
Application scenario 10: similar to the implementation of application scenario 9, except that: in the application scenario 10, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not.
Application scenario 11: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. The other processes are similar to the application scenario 9 and will not be described in detail here.
Application scenario 12: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. The other processes are similar to the application scenario 9 and will not be described in detail here.
Application scenario 13: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Unlike application scenario 9, the comparison may be:
for each matching block group, if at least one of two peripheral matching blocks in the matching block group does not have available motion information, or both the two peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If the available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 14: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. In contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 15: similar to the implementation of application scenario 9, except that: in contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Other processes are similar to the application scenario 9, and are not repeated here.
Application scenario 16: similar to the implementation of application scenario 9, except that: it is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. In contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Other processes are similar to the application scenario 9, and are not repeated here.
Application scenario 17: referring to fig. 9B, in order to reduce the complexity of hardware implementation, a downsampling method is used for performing duplicate checking.
Example 7: in the above embodiments, peripheral blocks without available motion information may be filled with motion information. The following describes the filling process in connection with several specific application scenarios.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded yet (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If a peripheral block exists, the peripheral block is not an unencoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information.
Referring to FIG. 9A, the width of the current block is W and the height of the current block is H. Let m be W/4 and n be H/4, and let the pixel at the upper left corner of the current block be (x, y). The peripheral block containing pixel (x-1, y+H+W-1) is denoted A0, and A0 has size 4 × 4. Traversing the peripheral blocks in the clockwise direction, each 4 × 4 peripheral block is denoted A1, A2, …, A2m+2n in turn, where A2m+2n is the peripheral block containing pixel (x+W+H-1, y-1).
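For concreteness, the clockwise numbering above can be sketched as follows. This is a non-normative sketch that assumes the current block is aligned to the 4 × 4 grid; the function name is illustrative and not part of the embodiment:

```python
def peripheral_block_anchor_pixels(x, y, w, h):
    # Returns one pixel inside each 4x4 peripheral block A0 .. A(2m+2n),
    # following the clockwise numbering described above: A0 contains
    # (x-1, y+H+W-1), the numbering runs up the left column, through the
    # corner block A(m+n) containing (x-1, y-1), then along the top row
    # to A(2m+2n), which contains (x+W+H-1, y-1).
    m, n = w // 4, h // 4
    pixels = []
    for i in range(m + n):                 # left column, bottom to top
        pixels.append((x - 1, y + h + w - 1 - 4 * i))
    pixels.append((x - 1, y - 1))          # A(m+n): the corner block
    for j in range(m + n):                 # top row, left to right
        pixels.append((x + 4 * j, y - 1))
    return pixels
```

For a 8 × 8 current block at (64, 64), this yields 2m+2n+1 = 9 peripheral blocks, consistent with the index ranges A0 to Am+n-1 (left) and Am+n+1 to A2m+2n (top) used below.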
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, the filling process is as follows: traverse sequentially from A0 to Am+n-1 to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the motion information of all peripheral blocks traversed before Ai is filled with the motion information of Ai. Then judge whether i is equal to m+n-1; if so, the filling is finished and the filling process exits. Otherwise, traverse from Ai+1 to Am+n-1; whenever the motion information of a traversed peripheral block is unavailable, fill it with the motion information of its nearest preceding peripheral block, until the traversal finishes.
Referring to FIG. 9A, assume Ai is A4. Then the motion information of the peripheral blocks traversed before A4 (i.e. A0, A1, A2, A3) is all filled with the motion information of A4. Suppose that when the traversal reaches A5, A5 has no available motion information; then A5 is filled with the motion information of its nearest preceding peripheral block A4. Suppose that when the traversal reaches A6, A6 has no available motion information; then A6 is filled with the motion information of its nearest preceding peripheral block A5, and so on.
If the left neighboring block of the current block does not exist and the upper neighboring block exists, the filling process is as follows: traverse sequentially from Am+n+1 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than m+n+1, the motion information of all peripheral blocks traversed before Ai is filled with the motion information of Ai. Then judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with the motion information of its nearest preceding peripheral block, until the traversal finishes.
If both the left neighboring block and the upper neighboring block of the current block exist, the filling process is as follows: traverse sequentially from A0 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the motion information of all peripheral blocks traversed before Ai is filled with the motion information of Ai. Then judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with the motion information of its nearest preceding peripheral block, until the traversal finishes.
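The nearest-preceding-neighbor filling procedure described above can be sketched as follows. This is a non-normative sketch; the function name and the use of None to mark unavailable motion information are illustrative:

```python
def fill_unavailable(motion, lo, hi):
    # motion[lo..hi] holds the motion information of peripheral blocks
    # A_lo .. A_hi; None marks an entry with no available motion information.
    # Step 1: find the first available entry A_i in the traversal range.
    i = lo
    while i <= hi and motion[i] is None:
        i += 1
    if i > hi:
        return False  # no peripheral block in the range has motion info
    # Step 2: fill every block traversed before A_i with A_i's motion info.
    for k in range(lo, i):
        motion[k] = motion[i]
    # Step 3: from A_{i+1} on, fill each unavailable block with the motion
    # information of its nearest preceding peripheral block.
    for k in range(i + 1, hi + 1):
        if motion[k] is None:
            motion[k] = motion[k - 1]
    return True
```

For instance, filling the range 0..4 of [None, 'mv1', None, 'mv3', None] yields ['mv1', 'mv1', 'mv1', 'mv3', 'mv3'], matching the behavior described above for A0 to A2m+2n.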
In the above embodiment, the peripheral block for which there is no available motion information may be an unencoded block, an intra block, or a peripheral block whose prediction mode is an intra block copy mode.
Application scenario 2: similar to the implementation of application scenario 1, except that: whether the left neighboring block of the current block exists and whether the upper neighboring block of the current block exists are not distinguished. That is, regardless of whether the left or upper neighboring block exists, the processing is as follows: traverse sequentially from A0 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the motion information of all peripheral blocks traversed before Ai is filled with the motion information of Ai. Then judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with the motion information of its nearest preceding peripheral block, until the traversal finishes.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other implementation processes are referred to as application scenario 1, and are not described in detail herein.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other implementation processes are referred to as application scenario 1, and are not described in detail herein.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded yet (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If a peripheral block exists, the peripheral block is not an unencoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information.
If the left neighboring block of the current block exists and the upper neighboring block does not exist, the filling process is as follows: traverse sequentially from A0 to Am+n-1; whenever the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally co-located position of the peripheral block. If the left neighboring block of the current block does not exist and the upper neighboring block exists, the filling process is as follows: traverse sequentially from Am+n+1 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally co-located position of the peripheral block. If both the left neighboring block and the upper neighboring block of the current block exist, the filling process is as follows: traverse sequentially from A0 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally co-located position of the peripheral block.
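A corresponding sketch of this variant, which fills each unavailable entry with zero motion information, or with the motion information of the temporally co-located position when one is supplied. The names and the (0, 0) motion-vector representation are illustrative:

```python
ZERO_MV = (0, 0)  # illustrative zero motion information

def fill_zero_or_temporal(motion, lo, hi, temporal=None):
    # motion[lo..hi]: motion information of peripheral blocks A_lo .. A_hi,
    # None marking unavailable entries. temporal, if given, supplies the
    # motion information of each block's temporally co-located position.
    for i in range(lo, hi + 1):
        if motion[i] is None:
            if temporal is not None and temporal[i] is not None:
                motion[i] = temporal[i]   # temporally co-located position
            else:
                motion[i] = ZERO_MV       # zero motion information
    return motion
```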
Application scenario 6: similar to the implementation of application scenario 5, except that: whether the left neighboring block and the upper neighboring block of the current block exist is not distinguished. Regardless of whether they exist, traverse sequentially from A0 to A2m+2n; whenever the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally co-located position of the peripheral block.
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenario 9 to application scenario 16: similar to the implementation of application scenarios 1 to 8, except that: with the width of the current block being W and the height being H, m is W/8 and n is H/8, the peripheral block A0 has size 8 × 8, and each 8 × 8 peripheral block is denoted A1, A2, …, A2m+2n in turn. That is, the size of each peripheral block is changed from 4 × 4 to 8 × 8. For the other implementation processes, refer to the application scenarios above; they are not repeated here.
Application scenario 17: referring to fig. 9C, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4 × 4. Suppose A14, A15, A16 and A17 are uncoded blocks; they need to be filled, and the filling method may be any one of the following: filling with the available motion information of neighboring blocks; filling with default motion information; filling with the available motion information of the co-located block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, filling can likewise be performed in the above manners, which is not detailed here.
Application scenario 18: referring to fig. 9D, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4 × 4. Suppose A7 is an intra block; it needs to be filled, and the filling method may be any one of the following: filling with the available motion information of neighboring blocks; filling with default motion information; filling with the available motion information of the co-located block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, filling can likewise be performed in the above manners, which is not repeated here.
Application scenario 19: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded yet (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If a peripheral block exists, the peripheral block is not an unencoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information.
Referring to FIG. 9A, the width of the current block is W and the height of the current block is H. Let m be W/4 and n be H/4, and let the pixel at the upper left corner of the current block be (x, y). The peripheral block containing pixel (x-1, y+H+W-1) is denoted A0, and A0 has size 4 × 4. Traversing the peripheral blocks in the clockwise direction, each 4 × 4 peripheral block is denoted A1, A2, …, A2m+2n in turn, where A2m+2n is the peripheral block containing pixel (x+W+H-1, y-1).
If the motion information angle prediction mode is the horizontal downward motion information angle prediction mode, the traversal range is A0 to Am+n-2. Traverse sequentially from A0 to Am+n-2 to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the motion information of all peripheral blocks traversed before Ai is filled with the motion information of Ai. Then judge whether i is equal to m+n-2; if so, the filling is finished and the filling process exits. Otherwise, traverse from Ai+1 to Am+n-2; whenever the motion information of a traversed peripheral block is unavailable, fill it with the motion information of its nearest preceding peripheral block, until the traversal finishes.
If the motion information angle prediction mode is the horizontal motion information angle prediction mode, the traversal range is Am to Am+n-1, traversed sequentially from Am to Am+n-1. For the specific filling process, refer to the above embodiment; it is not repeated here.
If the motion information angle prediction mode is the horizontal upward motion information angle prediction mode, the traversal range is Am+1 to A2m+n-1, traversed sequentially from Am+1 to A2m+n-1. For the specific filling process, refer to the above embodiment; it is not repeated here.
If the motion information angle prediction mode is the vertical motion information angle prediction mode, the traversal range is Am+n+1 to A2m+n, traversed sequentially from Am+n+1 to A2m+n. For the specific filling process, refer to the above embodiment; it is not repeated here.
If the motion information angle prediction mode is the vertical rightward motion information angle prediction mode, the traversal range is Am+n+2 to A2m+2n, traversed sequentially from Am+n+2 to A2m+2n. For the specific filling process, refer to the above embodiment; it is not repeated here.
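The five mode-dependent traversal ranges above can be collected into a small lookup. This is a sketch; the mode names are illustrative labels for the five motion information angle prediction modes:

```python
def traversal_range(mode, m, n):
    # Returns the inclusive index range (lo, hi) of peripheral blocks to
    # fill for each motion information angle prediction mode, where
    # m = W/4 and n = H/4 as defined above.
    ranges = {
        "horizontal_down": (0,         m + n - 2),
        "horizontal":      (m,         m + n - 1),
        "horizontal_up":   (m + 1,     2 * m + n - 1),
        "vertical":        (m + n + 1, 2 * m + n),
        "vertical_right":  (m + n + 2, 2 * m + 2 * n),
    }
    return ranges[mode]
```

For a 16 × 16 current block (m = n = 4), the horizontal mode traverses A4 to A7 and the vertical rightward mode traverses A10 to A16.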
Application scenario 20: referring to fig. 9E, motion information needs to be filled for the peripheral blocks that are intra blocks or uncoded blocks, or whose prediction mode is the intra block copy mode; the filling method is similar to the filling method for intra reference pixels.
Example 8: in the above embodiments, the motion compensation is performed by using a motion information angle prediction mode, for example, motion information of each sub-region of a current block is determined according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode, and for each sub-region, a target prediction value of the sub-region is determined according to the motion information of the sub-region. The following describes a motion compensation process for determining each sub-region in conjunction with a specific application scenario.
Application scenario 1: and selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region, and the dividing manner is not limited. And aiming at each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks, and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block. Then, for each sub-region, the target prediction value of the sub-region is determined according to the motion information of the sub-region, and the determination process is not limited.
For example, the motion information of the selected peripheral matching block may be used as the motion information of the sub-region. If the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information can be used as the motion information of the sub-region; assuming that the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information may be used as the motion information of the sub-region, or one of the bidirectional motion information may be used as the motion information of the sub-region, or the other of the bidirectional motion information may be used as the motion information of the sub-region.
For example, the sub-region partition information may be independent of the motion information angle prediction mode; for instance, the sub-region partition information of the current block, according to which the current block is partitioned into at least one sub-region, may be determined according to the size of the current block. For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (configured empirically, such as 8) and the height is greater than or equal to the preset size parameter, then the size of the sub-region is 8 × 8, i.e. the current block is partitioned into at least one sub-region in an 8 × 8 manner.
For example, when the motion information angle prediction mode is the horizontal upward, horizontal downward, or vertical rightward motion information angle prediction mode: if the width of the current block is greater than or equal to a preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4. When the motion information angle prediction mode is the horizontal motion information angle prediction mode: if the width of the current block is greater than the preset size parameter, the size of the sub-region is the width of the current block × 4, or the size of the sub-region is 4 × 4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4. When the motion information angle prediction mode is the vertical motion information angle prediction mode: if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 × the height of the current block, or the size of the sub-region is 4 × 4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
When the motion information angle prediction mode is the horizontal motion information angle prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4 × 4. When the motion information angle prediction mode is the vertical motion information angle prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4 × 4. Of course, the above are only examples; the preset size parameter may be 8, or may be greater than 8.
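The size rules above can be sketched as a selection function. This is a non-normative sketch with a preset size parameter of 8; where the text permits alternatives (e.g. W × 4 versus 4 × 4 for the horizontal mode), one option is chosen, and the mode names are illustrative:

```python
def subregion_size(mode, w, h, s=8):
    # Returns the sub-region size (width, height) for a W x H current
    # block under preset size parameter s, following the rules above.
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if w >= s and h >= s else (4, 4)
    if mode == "horizontal":
        if w > s:
            return (w, 4)   # whole-row sub-regions (4x4 is also permitted)
        if w == s and h >= s:
            return (8, 8)
        return (4, 4)
    if mode == "vertical":
        if h > s:
            return (4, h)   # whole-column sub-regions (4x4 is also permitted)
        if h == s and w >= s:
            return (8, 8)
        return (4, 4)
    raise ValueError(mode)
```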
Application scenario 2: and selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region in a 8 x 8 manner (i.e., the size of the sub-region is 8 x 8). And aiming at each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks, and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block. And for each sub-area, determining a target predicted value of the sub-area according to the motion information of the sub-area, and not limiting the determination process.
Application scenario 3: referring to fig. 10A, motion compensation is performed at an angle for each 4 × 4 sub-region in the current block. And if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-region, or determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region.
According to fig. 10A, the size of the current block is 4 × 8. When the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into two sub-regions of the same size: one 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; the other 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. When the target motion information prediction mode of the current block is the vertical mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; the other 4 × 4 sub-region also corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1. When the target motion information prediction mode of the current block is the horizontal upward mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; the other 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1. When the target motion information prediction mode of the current block is the horizontal downward mode, two sub-regions of the same size are divided, where one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2.
The other 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. When the target motion information prediction mode of the current block is the vertical rightward mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; the other 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3.
Application scenario 4: referring to fig. 10B, if the width W of the current block is less than 8 and the height H of the current block is greater than 8, motion compensation can be performed on each sub-region in the current block as follows: if the motion information angle prediction mode is the vertical motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle. If the angle prediction mode is another angle prediction mode (such as the horizontal, horizontal upward, horizontal downward, or vertical rightward motion information angle prediction mode), motion compensation is performed according to the corresponding angle for each 4 × 4 sub-region in the current block.
According to fig. 10B, the size of the current block is 4 × 16. When the target motion information prediction mode of the current block is the horizontal mode, four sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one corresponds to A2, with its motion information determined according to the motion information of A2; one corresponds to A3, with its motion information determined according to the motion information of A3; and one corresponds to A4, with its motion information determined according to the motion information of A4. When the target motion information prediction mode of the current block is the vertical mode, four sub-regions of size 4 × 4 may be divided, each corresponding to the peripheral matching block B1, and the motion information of each 4 × 4 sub-region is determined according to the motion information of B1. Since the motion information of the four sub-regions is the same, in this embodiment the current block itself may not be divided into sub-regions; the current block itself serves as one sub-region corresponding to the peripheral matching block B1, and the motion information of the current block is determined according to the motion information of B1.
When the target motion information prediction mode of the current block is the horizontal upward mode, four sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one corresponds to A1, with its motion information determined according to the motion information of A1; one corresponds to A2, with its motion information determined according to the motion information of A2; and one corresponds to A3, with its motion information determined according to the motion information of A3. When the target motion information prediction mode of the current block is the horizontal downward mode, four sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one corresponds to A3, with its motion information determined according to the motion information of A3; one corresponds to A4, with its motion information determined according to the motion information of A4; and one corresponds to A5, with its motion information determined according to the motion information of A5.
When the target motion information prediction mode of the current block is the vertical rightward mode, four sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one corresponds to B3, with its motion information determined according to the motion information of B3; one corresponds to B4, with its motion information determined according to the motion information of B4; and one corresponds to B5, with its motion information determined according to the motion information of B5.
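The correspondences read off fig. 10B for a 4-wide current block can be sketched as follows. This is a non-normative sketch: the mode names are illustrative labels, the block naming follows the figure (A blocks down the left edge with A1 at the top, B blocks along the top edge with B1 at the left, E the corner block), and the B2–B5 case is assumed here to correspond to the vertical rightward mode:

```python
def matching_block_fig10b(mode, row):
    # Peripheral matching block for the 4x4 sub-region in row `row`
    # (0 = top) of a 4-wide current block, per fig. 10B.
    if mode == "horizontal":
        return "A%d" % (row + 1)            # A1, A2, A3, ...
    if mode == "vertical":
        return "B1"                         # every row shares B1
    if mode == "horizontal_up":
        return "E" if row == 0 else "A%d" % row   # E, A1, A2, ...
    if mode == "horizontal_down":
        return "A%d" % (row + 2)            # A2, A3, A4, ...
    if mode == "vertical_right":
        return "B%d" % (row + 2)            # B2, B3, B4, ...
    raise ValueError(mode)
```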
Application scenario 5: referring to fig. 10C, if the width W of the current block is greater than 8 and the height H of the current block is less than 8, each sub-region in the current block may be motion compensated as follows: if the angle prediction mode is the horizontal motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angle prediction mode is another angle prediction mode, motion compensation may be performed according to the corresponding angle for each 4 × 4 sub-region in the current block.
According to fig. 10C, the size of the current block is 16 × 4, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 4 × 4 may be divided; each 4 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of each 4 × 4 sub-region is determined according to the motion information of A1. The motion information of the four sub-regions is the same, so in this embodiment the current block itself may not be divided into sub-regions; the current block itself serves as one sub-region corresponding to the peripheral matching block A1, and the motion information of the current block is determined according to the motion information of A1. When the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block B1, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4.
When the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with the size of 4 × 4 are divided; one 4 × 4 sub-region corresponds to the peripheral matching block E, and the motion information of the 4 × 4 sub-region is determined according to the motion information of E. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B1, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. When the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A5.
When the target motion information prediction mode of the current block is a vertical right mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B5.
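By way of illustration only, the per-sub-region mappings for the 16 × 4 block walked through above can be collected into a small lookup table. This is a minimal sketch; the mode names and the block labels (A1–A5, B1–B5, E) are assumptions drawn from the figures, not a normative implementation.

```python
# Hypothetical sketch: for a 16x4 current block split into four 4x4
# sub-regions (left to right), return the peripheral matching block whose
# motion information each sub-region inherits, per angle prediction mode.
def subregion_sources_16x4(mode):
    tables = {
        "horizontal":      ["A1", "A1", "A1", "A1"],  # all four copy A1
        "vertical":        ["B1", "B2", "B3", "B4"],
        "horizontal_up":   ["E",  "B1", "B2", "B3"],
        "horizontal_down": ["A2", "A3", "A4", "A5"],
        "vertical_right":  ["B2", "B3", "B4", "B5"],
    }
    return tables[mode]
```

Since all four horizontal-mode sub-regions inherit the same motion information, the division can be skipped entirely in that case, as the text notes.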
Application scenario 6: the width W of the current block is equal to 8 and the height H of the current block is equal to 8; motion compensation is then performed on each 8 × 8 sub-region (i.e., the sub-region is the current block itself) in the current block according to the corresponding angle. If the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the peripheral matching blocks may be selected, according to the corresponding angle, as the motion information of the sub-region. For example, referring to fig. 10D, for the horizontal motion information angle prediction mode, the motion information of the peripheral matching block A1 may be selected, and the motion information of the peripheral matching block A2 may also be selected. Referring to fig. 10E, for the vertical motion information angle prediction mode, the motion information of the peripheral matching block B1 may be selected, and the motion information of the peripheral matching block B2 may also be selected. Referring to fig. 10F, for the horizontal up motion information angle prediction mode, the motion information of the peripheral matching block E may be selected, the motion information of the peripheral matching block B1 may be selected, and the motion information of the peripheral matching block A1 may be selected. Referring to fig. 10G, for the horizontal down motion information angle prediction mode, the motion information of the peripheral matching block A2 may be selected, the motion information of the peripheral matching block A3 may be selected, and the motion information of the peripheral matching block A4 may be selected. Referring to fig. 10H, for the vertical right motion information angle prediction mode, the motion information of the peripheral matching block B2, the motion information of the peripheral matching block B3, or the motion information of the peripheral matching block B4 may be selected.
According to fig. 10D, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into a sub-region with the size of 8 × 8; the sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1. Or, the sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2. According to fig. 10E, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the vertical mode, a sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1. Or, the sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2. According to fig. 10F, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal up mode, the current block is divided into a sub-region with the size of 8 × 8; the sub-region corresponds to the peripheral matching block E, and the motion information of the sub-region is determined according to the motion information of E. Or, the sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1. Or, the sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1.
According to fig. 10G, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal down mode, the current block is divided into a sub-region with the size of 8 × 8; the sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2. Or, the sub-region corresponds to the peripheral matching block A3, and the motion information of the sub-region is determined according to the motion information of A3. Or, the sub-region corresponds to the peripheral matching block A4, and the motion information of the sub-region is determined according to the motion information of A4. According to fig. 10H, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the vertical right mode, the current block is divided into a sub-region with the size of 8 × 8; the sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2. Or, the sub-region corresponds to the peripheral matching block B3, and the motion information of the sub-region is determined according to the motion information of B3. Or, the sub-region corresponds to the peripheral matching block B4, and the motion information of the sub-region is determined according to the motion information of B4.
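The candidate sets above for an 8 × 8 block admit more than one valid choice per mode; any admissible peripheral matching block may be picked. A minimal sketch of that selection, with the candidate lists taken from the figures as an assumption:

```python
# Hypothetical sketch: admissible peripheral matching blocks for an 8x8
# current block, per angle prediction mode; any one candidate is a valid
# source of the sub-region's motion information.
CANDIDATES_8X8 = {
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B2"],
    "horizontal_up":   ["E", "B1", "A1"],
    "horizontal_down": ["A2", "A3", "A4"],
    "vertical_right":  ["B2", "B3", "B4"],
}

def pick_matching_block(mode, index=0):
    # index selects which admissible candidate to use; any in-range
    # choice yields a conformant selection under this sketch
    return CANDIDATES_8X8[mode][index]
```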
Application scenario 7: the width W of the current block is greater than or equal to 16 and the height H of the current block is equal to 8, on the basis of which each sub-region within the current block can be motion compensated in the following way: if the angle prediction mode is the horizontal motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angle prediction mode is another angle prediction mode, motion compensation is performed according to the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 10I, for the horizontal motion information angle prediction mode, the motion information of the peripheral matching block A1 may be selected for the first W × 4 sub-region, and the motion information of the peripheral matching block A2 may be selected for the second W × 4 sub-region. Referring to fig. 10J, for the vertical motion information angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block B2 may be selected. For the second 8 × 8 sub-region, the motion information of the peripheral matching block B3 may be selected, or the motion information of the peripheral matching block B4 may be selected. Other angle prediction modes are similar and will not be described herein.
According to fig. 10I, the size of the current block is 16 × 8, and when the target motion information prediction mode of the current block is the horizontal mode, 2 sub-regions with the size of 16 × 4 are divided, wherein one 16 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A1. The other 16 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A2.
According to fig. 10J, the size of the current block is 16 × 8, and when the target motion information prediction mode is the vertical mode, 2 sub-regions with the size of 8 × 8 are divided, wherein one sub-region with 8 × 8 corresponds to the peripheral matching block B1 or B2, and the motion information of the sub-region with 8 × 8 is determined according to the motion information of B1 or B2. The other 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B3 or B4.
Application scenario 8: the width W of the current block is equal to 8 and the height H of the current block is greater than or equal to 16, on the basis of which each sub-region within the current block can be motion compensated in the following way: if the angle prediction mode is the vertical motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle; if the angle prediction mode is another angle prediction mode, motion compensation is performed according to the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 10K, for the vertical motion information angle prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region. Referring to fig. 10L, for the horizontal motion information angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. For the second 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. Other angle prediction modes are similar and will not be described herein.
According to fig. 10K, the size of the current block is 8 × 16, and when the target motion information prediction mode of the current block is the vertical mode, 2 sub-regions with the size of 4 × 16 are divided, wherein one 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B1. The other 4 × 16 sub-region corresponds to the peripheral matching block B2, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B2.
According to fig. 10L, the size of the current block is 8 × 16, and when the target motion information prediction mode is the horizontal mode, 2 sub-regions with the size of 8 × 8 are divided. One 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block. The other 8 × 8 sub-region also corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block.
Application scenario 9: the width W of the current block may be greater than or equal to 16, and the height H of the current block may be greater than or equal to 16, based on which each sub-region within the current block may be motion compensated in the following manner: if the angle prediction mode is the vertical motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle; if the angle prediction mode is the horizontal motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angle prediction mode is another angle prediction mode, motion compensation is performed according to the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the peripheral matching blocks is selected as the motion information of the sub-region.
Referring to fig. 10M, for the vertical motion information angle prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region, the motion information of the peripheral matching block B3 may be selected for the third 4 × H sub-region, and the motion information of the peripheral matching block B4 may be selected for the fourth 4 × H sub-region. For the horizontal motion information angle prediction mode, the motion information of the peripheral matching block A1 is selected for the first W × 4 sub-region, the motion information of the peripheral matching block A2 is selected for the second W × 4 sub-region, the motion information of the peripheral matching block A3 is selected for the third W × 4 sub-region, and the motion information of the peripheral matching block A4 is selected for the fourth W × 4 sub-region. Other angle prediction modes are similar and will not be described herein.
According to fig. 10M, the size of the current block is 16 × 16, and when the target motion information prediction mode is the vertical mode, 4 sub-regions with the size of 4 × 16 are divided. One 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B1. One 4 × 16 sub-region corresponds to the peripheral matching block B2, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B2. One 4 × 16 sub-region corresponds to the peripheral matching block B3, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B3. One 4 × 16 sub-region corresponds to the peripheral matching block B4, and the motion information of the 4 × 16 sub-region is determined according to the motion information of B4.
According to fig. 10M, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 16 × 4 are divided. One 16 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A1. One 16 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A2. One 16 × 4 sub-region corresponds to the peripheral matching block A3, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A3. One 16 × 4 sub-region corresponds to the peripheral matching block A4, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A4.
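The division rule of application scenario 9 can be summarized in a short sketch: vertical mode splits the block into 4 × H columns, horizontal mode into W × 4 rows, and any other angle prediction mode into 8 × 8 sub-regions. This is an illustration under the stated W ≥ 16, H ≥ 16 constraint; the mode names are assumptions.

```python
# Hypothetical sketch of the sub-region division rule for W >= 16, H >= 16.
def subregion_size(w, h, mode):
    assert w >= 16 and h >= 16
    if mode == "vertical":
        return (4, h)   # 4xH columns, compensated at the vertical angle
    if mode == "horizontal":
        return (w, 4)   # Wx4 rows, compensated at the horizontal angle
    return (8, 8)       # other angle prediction modes

def subregion_count(w, h, mode):
    sw, sh = subregion_size(w, h, mode)
    return (w // sw) * (h // sh)
```

For a 16 × 16 block this yields four 4 × 16 columns in vertical mode and four 16 × 4 rows in horizontal mode, matching fig. 10M.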
Application scenario 10: the width W of the current block may be greater than or equal to 8, and the height H of the current block may be greater than or equal to 8, and then motion compensation is performed on each 8 × 8 sub-region within the current block. Referring to fig. 10N, for each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of the peripheral matching blocks is selected as the motion information of the sub-region. The sub-region division size is independent of the motion information angle prediction mode: as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, the sub-region division size may be 8 × 8 in any motion information angle prediction mode.
According to fig. 10N, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of A1 or A2. One of the 8 × 8 sub-regions corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of A1 or A2. One of the 8 × 8 sub-regions corresponds to the peripheral matching block A3 or A4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of A3 or A4. One of the 8 × 8 sub-regions corresponds to the peripheral matching block A3 or A4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of A3 or A4. When the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block B1 or B2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B1 or B2. One of the 8 × 8 sub-regions corresponds to the peripheral matching block B1 or B2, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B1 or B2. One of the 8 × 8 sub-regions corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B3 or B4. One of the 8 × 8 sub-regions corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B3 or B4.
When the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with the size of 8 × 8 may be divided. Then, for each 8 × 8 sub-region, a peripheral matching block (E, B2, or A2) corresponding to the 8 × 8 sub-region may be determined, which is not limited herein, and the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block. When the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with the size of 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (A3, A5, or A7) corresponding to the 8 × 8 sub-region may be determined, which is not limited herein, and the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block. When the target motion information prediction mode of the current block is the vertical right mode, 4 sub-regions with the size of 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (B3, B5, or B7) corresponding to the 8 × 8 sub-region may be determined, which is not limited herein, and the motion information of the 8 × 8 sub-region is determined based on the motion information of the peripheral matching block.
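Under application scenario 10, the division itself no longer depends on the mode, so enumerating the 8 × 8 sub-regions reduces to walking a fixed grid. A minimal sketch, assuming raster order and (x, y, width, height) tuples:

```python
# Hypothetical sketch: enumerate the mode-independent 8x8 sub-region grid
# of application scenario 10 (W >= 8 and H >= 8), in raster order.
def subregion_grid_8x8(w, h):
    assert w >= 8 and h >= 8
    return [(x, y, 8, 8)
            for y in range(0, h, 8)     # rows of 8x8 sub-regions
            for x in range(0, w, 8)]    # columns within each row
```

Each enumerated sub-region is then assigned the motion information of one admissible peripheral matching block for the current angle prediction mode.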
Application scenario 11: the width W of the current block is greater than or equal to 8, and the height H of the current block is greater than or equal to 8; motion compensation is then performed on each 8 × 8 sub-region in the current block, and for each sub-region, any one of the pieces of motion information of the peripheral matching blocks is selected according to the corresponding angle, as shown in fig. 10N, which is not described herein again.
Based on the application scenes, for each sub-region of the current block, the motion information of the sub-region can be determined according to the motion information of the peripheral matching block, and the target prediction value of the sub-region is determined according to the motion information of the sub-region.
Example 9: in the above embodiment, the target prediction value of the sub-region may be determined according to the motion information of the sub-region, and in a possible implementation, the target prediction value of the sub-region may be determined directly according to the motion information of the sub-region.
In another possible embodiment, the motion compensation value of the sub-region may be determined based on the motion information of the sub-region. If the sub-region satisfies the condition for using bidirectional optical flow, the bidirectional optical flow offset value of the sub-region is acquired, and the target prediction value of the sub-region is determined according to the forward motion compensation value among the motion compensation values of the sub-region, the backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information, and the current frame in which the sub-region is located is between the two reference frames in temporal order, the sub-region satisfies the condition for using bidirectional optical flow. One piece of motion information in the bidirectional motion information is forward motion information, and the reference frame corresponding to the forward motion information is a forward reference frame; the other piece of motion information in the bidirectional motion information is backward motion information, and the reference frame corresponding to the backward motion information is a backward reference frame. The current frame in which the sub-region is located is between the forward reference frame and the backward reference frame in temporal order.
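The eligibility test just described can be sketched as a small predicate. This is a minimal illustration assuming picture order counts (POC) encode the temporal order of frames, which the text does not specify; the `ref_poc` field name is likewise an assumption.

```python
# Hypothetical sketch of the "use bidirectional optical flow" condition:
# the motion information must be bidirectional, and the current frame must
# lie between the two reference frames in temporal (display) order.
def bio_condition(motion_info, cur_poc):
    if len(motion_info) != 2:        # unidirectional: BIO never applies
        return False
    poc0, poc1 = (mv["ref_poc"] for mv in motion_info)
    # negative product means one reference precedes and the other follows
    # the current frame, i.e. the current frame lies between them
    return (poc0 - cur_poc) * (poc1 - cur_poc) < 0
```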
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the forward motion compensation value of the sub-region may be determined based on a forward reference frame corresponding to the forward motion information in the bidirectional motion information, the backward motion compensation value of the sub-region may be determined based on a backward reference frame corresponding to the backward motion information in the bidirectional motion information, and the forward motion compensation value and the backward motion compensation value of the sub-region constitute the motion compensation value of the sub-region.
When determining the target prediction value of the sub-region, the target prediction value of the sub-region may be determined according to the forward motion compensation value of the sub-region, the backward motion compensation value of the sub-region, and the bi-directional optical flow offset value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition for using bidirectional optical flow. If the sub-region does not satisfy the condition for using bidirectional optical flow, the target prediction value of the sub-region is determined according to the motion compensation value of the sub-region without referring to the bidirectional optical flow offset value. When the target prediction value of the sub-region is determined, the motion compensation value of the sub-region is determined as the target prediction value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information, and the current frame in which the sub-region is located is not between the two reference frames in temporal order, the sub-region does not satisfy the condition for using bidirectional optical flow. If the sub-region does not satisfy the condition for using bidirectional optical flow, the target prediction value of the sub-region can be determined according to the motion compensation value of the sub-region without referring to the bidirectional optical flow offset value. For convenience of distinction, one piece of motion information in the bidirectional motion information is recorded as first motion information, and the reference frame corresponding to the first motion information is recorded as a first reference frame; the other piece of motion information in the bidirectional motion information is recorded as second motion information, and the reference frame corresponding to the second motion information is recorded as a second reference frame. Since the current frame in which the sub-region is located is not between the two reference frames in temporal order, the first reference frame and the second reference frame are both forward reference frames of the sub-region, or the first reference frame and the second reference frame are both backward reference frames of the sub-region.
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the first motion compensation value of the sub-region may be determined based on a first reference frame corresponding to first motion information in the bidirectional motion information, the second motion compensation value of the sub-region may be determined based on a second reference frame corresponding to second motion information in the bidirectional motion information, and the first motion compensation value and the second motion compensation value of the sub-region constitute the motion compensation value of the sub-region.
In determining the target prediction value for the sub-region, the target prediction value for the sub-region may be determined based on the first motion compensation value for the sub-region and the second motion compensation value for the sub-region without referring to the bi-directional optical flow offset value.
Illustratively, obtaining the bidirectional optical flow offset value of the sub-region may include, but is not limited to: determining a first pixel value and a second pixel value according to the motion information of the sub-region. The first pixel value comprises the forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from reference pixel positions of a forward reference frame; the second pixel value comprises the backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from reference pixel positions of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region. Then, the bidirectional optical flow offset value of the sub-region is determined from the first pixel value and the second pixel value.
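One way to realize the "copied from the motion compensation value" option above is border replication: the W × H compensation block is enlarged by one pixel on each side by repeating its edge samples. This is a sketch of that single option only (the text equally allows fetching the ring from reference pixel positions); the one-pixel ring width is an assumption for illustration.

```python
# Hypothetical sketch: extend a motion compensation block by a one-pixel
# ring on every side, filling the ring by replicating the border samples.
def extend_block(block):
    h = len(block)
    w = len(block[0])
    out = []
    for y in range(-1, h + 1):
        yy = min(max(y, 0), h - 1)  # clamp row index into the block
        out.append([block[yy][min(max(x, 0), w - 1)]  # clamp column index
                    for x in range(-1, w + 1)])
    return out
```

Applying this to the forward and backward compensation values yields the first and second pixel values from which the offset is computed.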
In summary, the motion compensation value of a sub-region may be determined according to its motion information. If the motion information of the sub-region is bidirectional motion information and the current frame in which the sub-region is located lies between the two reference frames in temporal order, the motion compensation value of the sub-region includes a forward motion compensation value and a backward motion compensation value. If the motion information of the sub-region is unidirectional motion information, the sub-region corresponds to a single motion compensation value. If the motion information of the sub-region is bidirectional motion information and the current frame in which the sub-region is located does not lie between the two reference frames in temporal order, the motion compensation value of the sub-region includes a first motion compensation value and a second motion compensation value.
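The case analysis above can be sketched as follows; the dictionary shape, the field names (`bidirectional`, `ref_pocs`) and the use of picture order counts to stand for temporal order are illustrative assumptions, not part of the scheme.

```python
def compensation_values(motion_info, cur_poc):
    """Return the motion-compensation value 'slots' a sub-region needs:
    two for bidirectional motion info (forward/backward when the current
    frame lies between the two references in temporal order, otherwise
    first/second), one for unidirectional motion info."""
    if not motion_info["bidirectional"]:
        return ["single"]
    ref0, ref1 = motion_info["ref_pocs"]
    if min(ref0, ref1) < cur_poc < max(ref0, ref1):
        return ["forward", "backward"]   # current frame between the refs
    return ["first", "second"]           # both refs on the same side
```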
Example 10: in the above embodiments, bidirectional optical flow processing is performed on the sub-region of the current block, that is, after obtaining the motion compensation value of each sub-region, for each sub-region inside the current block that satisfies the condition of using bidirectional optical flow, a bidirectional optical flow offset value is superimposed on the motion compensation value of the sub-region using bidirectional optical flow technology (BIO). For example, for each sub-region of the current block, if the sub-region satisfies the condition of using bidirectional optical flow, a forward motion compensation value and a backward motion compensation value of the sub-region are determined, and a target prediction value of the sub-region is determined according to the forward motion compensation value, the backward motion compensation value and a bidirectional optical flow offset value of the sub-region. And if the sub-area does not meet the condition of using the bidirectional optical flow, determining a motion compensation value of the sub-area, and then determining a target prediction value of the sub-area according to the motion compensation value.
Illustratively, obtaining the bi-directional optical flow offset values for the sub-regions may include, but is not limited to: determining a first pixel value and a second pixel value according to the motion information of the sub-area; the first pixel value is a forward motion compensation value and a forward extension value for the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel location of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-regions. Then, a bi-directional optical-flow offset value for the sub-area is determined based on the first pixel value and the second pixel value.
Illustratively, obtaining the bi-directional optical flow offset value of the sub-area may be achieved by:
Step c1: determine the first pixel value and the second pixel value according to the motion information of the sub-region.
Step c2: determine, according to the first pixel value and the second pixel value, the autocorrelation coefficient S1 of the horizontal gradient sum, the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the cross-correlation coefficient S3 of the temporal prediction difference and the horizontal gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal prediction difference and the vertical gradient sum.
For example, S1, S2, S3, S5 and S6 can be calculated using the following formulas, where each sum runs over the pixel positions (i, j) of the sub-region and its extension:

S1 = Σ ψx(i,j) · ψx(i,j)
S2 = Σ ψx(i,j) · ψy(i,j)
S3 = Σ θ(i,j) · ψx(i,j)
S5 = Σ ψy(i,j) · ψy(i,j)
S6 = Σ θ(i,j) · ψy(i,j)
Illustratively, ψx(i,j), ψy(i,j) and θ(i,j) may be calculated as follows:

ψx(i,j) = ∂I^(0)(i,j)/∂x + ∂I^(1)(i,j)/∂x
ψy(i,j) = ∂I^(0)(i,j)/∂y + ∂I^(1)(i,j)/∂y
θ(i,j) = I^(1)(i,j) − I^(0)(i,j)
I^(0)(x, y) is the first pixel value, i.e. the forward motion compensation value of the sub-region and its forward extension value; I^(1)(x, y) is the second pixel value, i.e. the backward motion compensation value of the sub-region and its backward extension value. Illustratively, the forward extension value may be copied from the forward motion compensation value or may be obtained from a reference pixel position of the forward reference frame. The backward extension value may be copied from the backward motion compensation value or may be obtained from a reference pixel position of the backward reference frame. The forward reference frame and the backward reference frame are determined according to the motion information of the sub-region.
ψx(i, j) denotes the horizontal gradient sum, i.e. the sum of the horizontal rates of change of the pixel values in the forward and backward reference frames; ψy(i, j) denotes the vertical gradient sum, i.e. the sum of the vertical rates of change of the pixel values in the forward and backward reference frames; θ(i, j) denotes the pixel difference between corresponding positions of the forward reference frame and the backward reference frame, i.e. θ(i, j) is the temporal prediction difference.
Step c3: determine the horizontal velocity vx (also called the refined motion vector vx) according to the autocorrelation coefficient S1 and the cross-correlation coefficient S3; determine the vertical velocity vy (also called the refined motion vector vy) according to the cross-correlation coefficient S2, the autocorrelation coefficient S5 and the cross-correlation coefficient S6.
For example, the horizontal velocity vx and the vertical velocity vy may be calculated using the following formulas:

vx = (S1 + r) > m ? clip3(−th_BIO, th_BIO, (S3 << 5) / (S1 + r)) : 0
vy = (S5 + r) > m ? clip3(−th_BIO, th_BIO, ((S6 << 6) − vx·S2) / ((S5 + r) << 1)) : 0
In the above formulas, m and th_BIO are threshold values that can be configured empirically, and r is a regularization term that avoids division by 0. clip3 ensures that vx is kept between −th_BIO and th_BIO, and that vy is kept between −th_BIO and th_BIO.
Illustratively, if (S1 + r) > m holds, then vx = clip3(−th_BIO, th_BIO, (S3 << 5) / (S1 + r)); if (S1 + r) > m does not hold, vx = 0. th_BIO limits vx to the range [−th_BIO, th_BIO], i.e. vx is greater than or equal to −th_BIO and less than or equal to th_BIO. For vx, Clip3(a, b, x) means: if x is smaller than a, x = a; if x is greater than b, x = b; otherwise x is unchanged. In the formula above, −th_BIO is a, th_BIO is b, and (S3 << 5) / (S1 + r) is x. In summary, if (S3 << 5) / (S1 + r) is greater than −th_BIO and less than th_BIO, then vx is (S3 << 5) / (S1 + r).
If (S5 + r) > m holds, then vy = clip3(−th_BIO, th_BIO, ((S6 << 6) − vx·S2) / ((S5 + r) << 1)); if (S5 + r) > m does not hold, vy = 0. th_BIO limits vy to the range [−th_BIO, th_BIO], i.e. vy is greater than or equal to −th_BIO and less than or equal to th_BIO. For vy, Clip3(a, b, x) is applied as above: −th_BIO is a, th_BIO is b, and ((S6 << 6) − vx·S2) / ((S5 + r) << 1) is x. In summary, if ((S6 << 6) − vx·S2) / ((S5 + r) << 1) is greater than −th_BIO and less than th_BIO, then vy is ((S6 << 6) − vx·S2) / ((S5 + r) << 1).
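The two velocity formulas above can be sketched in code as follows; the integer division and the defaults for m, r and th_BIO are illustrative assumptions (as stated above, the normative values are configured empirically).

```python
def clip3(a, b, x):
    """Clamp x to [a, b]: returns a if x < a, b if x > b, otherwise x."""
    return a if x < a else b if x > b else x

def bio_velocities(s1, s2, s3, s5, s6, m=1, r=1, th_bio=31):
    """Compute the horizontal/vertical velocities (vx, vy) from the
    (cross-)correlation coefficients S1, S2, S3, S5, S6, following the
    ternary formulas above. Parameter defaults are placeholders."""
    vx = clip3(-th_bio, th_bio, (s3 << 5) // (s1 + r)) if (s1 + r) > m else 0
    vy = (clip3(-th_bio, th_bio, ((s6 << 6) - vx * s2) // ((s5 + r) << 1))
          if (s5 + r) > m else 0)
    return vx, vy
```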
Of course, the above is only one way of calculating vx and vy; other ways of calculating vx and vy may also be used, and this is not limited.
Step c4: obtain the bidirectional optical flow offset value b of the sub-region according to the horizontal velocity and the vertical velocity.
For example, the bidirectional optical flow offset value b of the sub-region may be calculated from the horizontal velocity, the vertical velocity, the first pixel value and the second pixel value, e.g. using a formula of the following form (the exact rounding is implementation-dependent):

b = vx · (∂I^(1)(x,y)/∂x − ∂I^(0)(x,y)/∂x) / 2 + vy · (∂I^(1)(x,y)/∂y − ∂I^(0)(x,y)/∂y) / 2

In the above formula, (x, y) are the coordinates of each pixel inside the current block. Of course, this formula is only an example of obtaining the bidirectional optical flow offset value b, and b may also be calculated in other ways, which is not limited. I^(0)(x, y) is the first pixel value, i.e. the forward motion compensation value and its forward extension value; I^(1)(x, y) is the second pixel value, i.e. the backward motion compensation value and its backward extension value.
Step c5: determine the target prediction value of the sub-region according to the motion compensation value and the bidirectional optical flow offset value of the sub-region.
For example, after the forward motion compensation value, the backward motion compensation value and the bidirectional optical flow offset value of the sub-region are determined, the target prediction value of the sub-region may be determined based on these three values. For example, the target prediction value pred_BIO(x, y) of a pixel (x, y) in the sub-region is determined by the following formula: pred_BIO(x, y) = (I^(0)(x, y) + I^(1)(x, y) + b + 1) >> 1. In this formula, I^(0)(x, y) is the forward motion compensation value of pixel (x, y), and I^(1)(x, y) is the backward motion compensation value of pixel (x, y).
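Steps c1 to c5 can be pulled together in a rough sketch, assuming the forward/backward motion compensation values and their horizontal/vertical gradients are already available as 2-D lists; the parameter defaults and the per-pixel offset rounding are illustrative assumptions, not normative.

```python
def clip3(a, b, x):
    return a if x < a else b if x > b else x

def bio_predict(i0, i1, gx0, gy0, gx1, gy1, m=1, r=1, th_bio=31):
    """Sketch of steps c2-c5 for one sub-region: accumulate S1, S2, S3,
    S5, S6 from the gradient sums and temporal difference, derive
    (vx, vy), then apply the per-pixel offset b and final averaging."""
    h, w = len(i0), len(i0[0])
    s1 = s2 = s3 = s5 = s6 = 0
    for y in range(h):
        for x in range(w):
            psi_x = gx0[y][x] + gx1[y][x]   # horizontal gradient sum
            psi_y = gy0[y][x] + gy1[y][x]   # vertical gradient sum
            theta = i1[y][x] - i0[y][x]     # temporal prediction difference
            s1 += psi_x * psi_x
            s2 += psi_x * psi_y
            s3 += theta * psi_x
            s5 += psi_y * psi_y
            s6 += theta * psi_y
    vx = clip3(-th_bio, th_bio, (s3 << 5) // (s1 + r)) if (s1 + r) > m else 0
    vy = (clip3(-th_bio, th_bio, ((s6 << 6) - vx * s2) // ((s5 + r) << 1))
          if (s5 + r) > m else 0)
    pred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # per-pixel offset b, then pred = (I0 + I1 + b + 1) >> 1
            b = (vx * (gx1[y][x] - gx0[y][x])
                 + vy * (gy1[y][x] - gy0[y][x])) >> 1
            pred[y][x] = (i0[y][x] + i1[y][x] + b + 1) >> 1
    return pred
```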
Example 11: on the basis of any of the foregoing embodiments or combinations thereof, it may also be determined whether to enable the Motion Vector Angle Prediction (MVAP) technique, which may also be referred to as the motion information angle prediction technique; the latter name is used in the description below. When the motion information angle prediction technique is enabled, the technical solutions of the embodiments of the present application, i.e. the implementation processes of embodiments 1 to 10 above, may be adopted.
The following describes a process of determining whether to start a motion information angle prediction technique in conjunction with a specific application scenario.
Application scenario 1: the motion information angle prediction technique may be turned on or off using a Sequence Parameter Set (SPS) level syntax, for example, the SPS level syntax is added to control the turning on or off of the motion information angle prediction technique.
Illustratively, first indication information is obtained, the first indication information being located at the SPS level. When the value of the first indication information is a first value, the first indication information is used for indicating that the motion information angle prediction technique is enabled; when the value of the first indication information is a second value, the first indication information is used for indicating that the motion information angle prediction technique is disabled.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is started, when an encoding end sends an encoding bit stream to a decoding end, the encoding bit stream can carry first indication information, the value of the first indication information is a first value, the decoding end obtains the first indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to start the motion information angle prediction technology because the value of the first indication information is the first value.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is closed, when an encoding end sends an encoding bit stream to a decoding end, the encoding bit stream can carry first indication information, the value of the first indication information is a second value, the decoding end obtains the first indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to close the motion information angle prediction technology because the value of the first indication information is the second value.
For example, the first indication information is located in the SPS level, and when the motion information angle prediction technique is turned on, the motion information angle prediction technique may be turned on for each image corresponding to the SPS level, and when the motion information angle prediction technique is turned off, the motion information angle prediction technique may be turned off for each image corresponding to the SPS level.
Illustratively, when the encoding end encodes the first indication information into the bitstream, u(n), u(v), ue(n), ue(v) or the like may be used. u(n) or u(v) means reading n consecutive bits, which are interpreted as an unsigned integer after decoding; ue(n) or ue(v) denotes unsigned exponential-Golomb entropy coding. For u(n) and ue(n), the parameter in parentheses is n, indicating that the syntax element is fixed-length coded; for u(v) and ue(v), the parameter in parentheses is v, indicating that the syntax element is variable-length coded. The application scenario does not limit the encoding mode; if u(1) is adopted, only one bit is needed to indicate whether the motion information angle prediction technique is enabled.
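As a rough illustration of the exponential-Golomb descriptors mentioned above, the following is a minimal sketch of unsigned exponential-Golomb (ue) encoding and decoding over bit strings; it is not the normative entropy coder.

```python
def ue_encode(value):
    """Unsigned exponential-Golomb: (len-1) leading zeros followed by
    the binary representation of value + 1. E.g. 0 -> '1', 3 -> '00100'."""
    code = bin(value + 1)[2:]
    return '0' * (len(code) - 1) + code

def ue_decode(bits):
    """Decode one ue codeword from the front of a bit string: count the
    leading zeros, then read that many more bits after the first '1'."""
    zeros = 0
    while bits[zeros] == '0':
        zeros += 1
    return int(bits[zeros:zeros + zeros + 1], 2) - 1
```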
Application scenario 2: the SPS-level syntax may be used to control the maximum size at which the motion information angular prediction technique may be used, for example, the SPS-level syntax may be added to control the maximum size at which the motion information angular prediction technique may be used, e.g., 32 x 32.
Illustratively, second indication information is obtained, the second indication information being located at the SPS level and used for indicating the maximum size. If the size of the current block is not larger than the maximum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technique is disabled for the current block.
When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry second indication information, where the second indication information is used to indicate a maximum size, such as 32 × 32, that the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires second indication information from the coded bit stream, and because the second indication information is used for indicating the maximum size, when the decoding end decodes the current block, if the size of the current block is not larger than the maximum size, the motion information angle prediction technology is started for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technology is closed for the current block.
For example, the second indication information is located in the SPS level, and when each current block in the image corresponding to the SPS level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the second indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
Application scenario 3: the SPS-level syntax may be used to control the minimum size at which the motion information angle prediction technique may be used, for example, the SPS-level syntax may be added to control the minimum size at which the motion information angle prediction technique may be used, e.g., the minimum size is 8 x 8.
Illustratively, third indication information is obtained, the third indication information being located at the SPS level and used for indicating the minimum size. If the size of the current block is not smaller than the minimum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technique is disabled for the current block.
When the encoding end sends the coded bit stream to the decoding end, the coded bit stream may carry third indication information, where the third indication information is used to indicate a minimum size, such as 8 × 8, where the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires third indication information from the coded bit stream, and because the third indication information is used for indicating the minimum size, when the decoding end decodes the current block, if the size of the current block is not smaller than the minimum size, the motion information angle prediction technology is started for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technology is closed for the current block.
For example, the third indication information is located in the SPS level, and when each current block in the image corresponding to the SPS level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the third indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
For example, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may only carry the second indication information, may also only carry the third indication information, and may also simultaneously carry the second indication information and the third indication information.
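The size gating of application scenarios 2 and 3 can be sketched as follows; the function name and the 8×8 / 32×32 defaults are illustrative assumptions standing in for the SPS-signalled values.

```python
def mvap_enabled_for_block(width, height, min_size=8, max_size=32):
    """Enable MVAP only when the block is within [min_size, max_size]
    in both dimensions, per the signalled maximum/minimum sizes."""
    if width > max_size or height > max_size:
        return False   # larger than the signalled maximum size
    if width < min_size or height < min_size:
        return False   # smaller than the signalled minimum size
    return True
```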
Application scenario 4: slice (Slice) -level syntax may be used to turn on or off the motion information angle prediction technique, for example, Slice-level syntax is added to control the turning on or off of the motion information angle prediction technique.
Illustratively, fourth indication information is obtained, and the fourth indication information is located in Slice level. When the value of the fourth indication information is the first value, the fourth indication information is used for indicating to start the motion information angle prediction technology; and when the value of the fourth indication information is the second value, the fourth indication information is used for indicating to close the motion information angle prediction technology.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is started, when the encoding end sends an encoding bit stream to the decoding end, the encoding bit stream can carry fourth indication information, the value of the fourth indication information is a first value, the decoding end obtains the fourth indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to start the motion information angle prediction technology because the value of the fourth indication information is the first value.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is closed, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream can carry fourth indication information, the value of the fourth indication information is a second value, the decoding end obtains the fourth indication information from the encoded bit stream after receiving the encoded bit stream, and the decoding end determines to close the motion information angle prediction technology because the value of the fourth indication information is the second value.
For example, the fourth indication information is located in Slice level, when the motion information angle prediction technology is turned on, the motion information angle prediction technology may be turned on for an image corresponding to Slice level, and when the motion information angle prediction technology is turned off, the motion information angle prediction technology may be turned off for an image corresponding to Slice level.
For example, when the encoding end encodes the fourth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, u (1) may be used for encoding.
Application scenario 5: the Slice-level syntax may be used to control the maximum size of the motion information angle prediction technique, for example, adding Slice-level syntax to control the maximum size of the motion information angle prediction technique, such as 32 × 32.
Illustratively, fifth indication information is obtained, the fifth indication information being located at the Slice level and used for indicating the maximum size. If the size of the current block is not larger than the maximum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technique is disabled for the current block.
When the encoding end sends the coded bitstream to the decoding end, the coded bitstream may carry fifth indication information, where the fifth indication information is used to indicate a maximum size, such as 32 × 32, that the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires fifth indication information from the coded bit stream, and because the fifth indication information is used for indicating the maximum size, when the decoding end decodes the current block, if the size of the current block is not larger than the maximum size, the motion information angle prediction technology is started for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technology is closed for the current block.
For example, the fifth indication information is located in the Slice level, and when each current block in the image corresponding to the Slice level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the fifth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
Application scenario 6: slice-level syntax may be used to control the minimum size at which motion information angle prediction techniques may be used, for example, adding Slice-level syntax to control the minimum size at which motion information angle prediction techniques may be used, e.g., a minimum size of 8 x 8.
Illustratively, sixth indication information is obtained, the sixth indication information being located at the Slice level and used for indicating the minimum size. If the size of the current block is not smaller than the minimum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technique is disabled for the current block.
When the encoding end sends the coded bit stream to the decoding end, the coded bit stream may carry sixth indication information, where the sixth indication information is used to indicate a minimum size, such as 8 × 8, where the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires sixth indication information from the coded bit stream, and because the sixth indication information is used for indicating the minimum size, when the decoding end decodes the current block, if the size of the current block is not smaller than the minimum size, the motion information angle prediction technology is started for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technology is closed for the current block.
For example, the sixth indication information is located in the Slice level, and when each current block in the image corresponding to the Slice level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the sixth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
For example, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may only carry the fifth indication information, may also only carry the sixth indication information, and may also simultaneously carry the fifth indication information and the sixth indication information.
Example 12: the current block may use the motion information angular prediction mode, that is, the motion compensation value of each sub-region of the current block is determined based on the motion information angular prediction mode, as specifically described in the above embodiments. If the current block uses the motion information angular prediction mode, the current block may disable the decoding-end motion vector adjustment (DMVR) technique; alternatively, if the current block uses the motion information angular prediction mode, the current block may enable the decoding-end motion vector adjustment technique. If the current block uses the motion information angular prediction mode, the current block may disable the bidirectional optical flow (BIO) technique; alternatively, if the current block uses the motion information angular prediction mode, the current block may enable the bidirectional optical flow technique. Illustratively, the bidirectional optical flow technique superimposes an optical flow compensation value on the current block using gradient information of pixel values in the forward and backward reference frames. The principle of the decoding-end motion vector adjustment technique is to adjust a motion vector using a matching criterion between forward and backward reference pixel values. The combination of the motion information angular prediction mode, the decoding-end motion vector adjustment technique and the bidirectional optical flow technique is described below with reference to specific application scenarios.
Application scenario 1: if the current block uses the motion information angle prediction mode, the current block may start the bi-directional optical flow technique, and the current block may close the motion vector adjustment technique at the decoding end. In this application scenario, a motion compensation value for each sub-region of the current block is determined based on the motion information angular prediction mode. Then, based on the bi-directional optical flow technique, the target prediction value of each sub-area of the current block is determined according to the motion compensation value of the sub-area, for example, if the sub-area satisfies the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area and the bi-directional optical flow offset value, and if the sub-area does not satisfy the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 2: if the current block uses the motion information angle prediction mode, the current block may enable the bidirectional optical flow technique, and the current block may enable the decoding-end motion vector adjustment technique. In this application scenario, the original motion information of each sub-region of the current block is determined based on the motion information angular prediction mode (for ease of distinction, the motion information determined based on the motion information angular prediction mode is referred to as original motion information). Then, based on the decoding-end motion vector adjustment technique, the target motion information of each sub-region of the current block is determined according to the original motion information of the sub-region: if the sub-region satisfies the condition of using decoding-end motion vector adjustment, the original motion information of the sub-region is adjusted to obtain the adjusted target motion information; if the sub-region does not satisfy the condition, the original motion information of the sub-region is used as the target motion information. Then, the motion compensation value of each sub-region is determined according to the target motion information of the sub-region. Then, based on the bidirectional optical flow technique, the target prediction value of each sub-region of the current block is determined according to the motion compensation value of the sub-region: if the sub-region satisfies the condition of using bidirectional optical flow, the target prediction value of the sub-region is determined according to the motion compensation value of the sub-region and the bidirectional optical flow offset value; if the sub-region does not satisfy the condition of using bidirectional optical flow, the target prediction value of the sub-region is determined according to the motion compensation value of the sub-region.
Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 3: if the current block uses the motion information angle prediction mode, the current block may turn off the bi-directional optical flow technique, and the current block may start the motion vector adjustment technique at the decoding end. In this application scenario, the original motion information of each sub-region of the current block is determined based on the motion information angular prediction mode. Then, based on the decoding-end motion vector adjustment technology, determining target motion information of each sub-region of the current block according to the original motion information of the sub-region, for example, if the sub-region meets the condition of using the decoding-end motion vector adjustment, adjusting the original motion information of the sub-region to obtain the adjusted target motion information, and if the sub-region does not meet the condition of using the decoding-end motion vector adjustment, using the original motion information of the sub-region as the target motion information. Then, a target prediction value of each sub-area is determined according to the target motion information of the sub-area, and the process does not need to consider a bidirectional optical flow technology. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 4: if the current block uses the motion information angle prediction mode, the current block may disable the bi-directional optical flow technique and may disable the decoding-side motion vector adjustment technique. In this application scenario, the motion information of each sub-region of the current block is determined based on the motion information angle prediction mode, the target prediction value of each sub-region is determined according to the motion information of that sub-region, and the prediction value of the current block is determined according to the target prediction value of each sub-region. Neither the decoding-side motion vector adjustment technique nor the bi-directional optical flow technique needs to be considered in this process.
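Scenarios 2 to 4 differ only in whether the decoding-side motion vector adjustment and bi-directional optical flow steps are applied. The per-sub-region decision logic can be sketched as follows (a minimal sketch; the function and argument names are illustrative, not from the text):

```python
def target_motion_info(original_mv, dmvr_enabled, dmvr_condition_met, refine):
    """Decoding-side MV adjustment step (enabled in scenarios 2 and 3):
    refine the original motion information only when the technique is on
    and the sub-region satisfies its usage condition."""
    if dmvr_enabled and dmvr_condition_met:
        return refine(original_mv)
    return original_mv

def target_prediction(comp_value, bio_enabled, bio_condition_met, bio_offset):
    """Bi-directional optical flow step (enabled in scenario 2 only):
    add the optical flow offset only when the technique is on and the
    sub-region satisfies its usage condition."""
    if bio_enabled and bio_condition_met:
        return comp_value + bio_offset
    return comp_value
```

Scenario 2 runs both steps, scenario 3 only the first, and scenario 4 neither, so a disabled or ineligible step simply passes its input through unchanged.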
In application scenarios 2 and 3, for each sub-region of the current block, DMVR may be used for the sub-regions that satisfy the condition of using the decoding-side motion vector adjustment technique. For example, if the motion information of the sub-region is bidirectional motion information, the current frame where the sub-region is located lies between the two reference frames (i.e., a forward reference frame and a backward reference frame) in the temporal order, and the distance between the current frame and the forward reference frame is the same as the distance between the backward reference frame and the current frame, the sub-region satisfies the condition of using the decoding-side motion vector adjustment. If the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition. If the motion information of the sub-region is bidirectional motion information but the current frame is not located between the two reference frames in the temporal order, the sub-region does not satisfy the condition. If the motion information of the sub-region is bidirectional motion information and the current frame lies between the two reference frames, but the distance between the current frame and the forward reference frame differs from the distance between the backward reference frame and the current frame, the sub-region does not satisfy the condition.
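The four eligibility cases above reduce to a single predicate. A minimal sketch, assuming picture order count (POC) as the measure of temporal position and distance (the text speaks of the temporal sequence without naming POC):

```python
def meets_dmvr_condition(is_bidirectional, poc_cur, poc_fwd, poc_bwd):
    """True only for bidirectional motion information where the current
    frame lies between its forward and backward reference frames and is
    equidistant from both."""
    if not is_bidirectional:
        return False                      # unidirectional: never eligible
    if not poc_fwd < poc_cur < poc_bwd:
        return False                      # not between the two references
    return poc_cur - poc_fwd == poc_bwd - poc_cur   # equal distances required
```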
In application scenarios 2 and 3, the decoding-side motion vector adjustment technique is enabled. This technique adjusts a motion vector according to a matching criterion between forward and backward reference pixel values, and can be applied to the direct mode or the skip mode. An implementation of the technique can be as follows:
a) The reference pixels needed by the prediction block and the search area are acquired using the initial motion vector.
b) The optimal integer pixel position is obtained. Illustratively, the luminance block of the current block is divided into non-overlapping, adjacently located sub-regions, and the initial motion vectors of all sub-regions are MV0 and MV1. For each sub-region, taking the positions corresponding to the initial MV0 and the initial MV1 as centers, the position with the minimum template matching distortion within a certain range nearby is searched. The template matching distortion is calculated as the SAD value between a block of size (sub-region width × sub-region height) starting at the center position in the forward search area and the corresponding block starting at the center position in the backward search area.
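The integer search in step b) can be sketched as follows. A minimal sketch: pairing a forward offset (+dx, +dy) with the mirrored backward offset (-dx, -dy) is an assumption about the search pattern, not stated explicitly above.

```python
def sad(block_a, block_b):
    """Template matching distortion: sum of absolute differences between
    two equally sized blocks, each given as a list of rows."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def extract(area, x, y, w, h):
    """Cut a w x h block whose top-left corner is (x, y) out of a search
    area given as a list of rows."""
    return [row[x:x + w] for row in area[y:y + h]]

def best_integer_position(fwd_area, bwd_area, w, h, search_range=2):
    """Exhaustively search the offset with minimum template matching
    distortion around the center positions given by MV0 and MV1."""
    best, best_cost = (0, 0), float("inf")
    c = search_range  # the center block starts at (c, c) in the padded area
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cost = sad(extract(fwd_area, c + dx, c + dy, w, h),
                       extract(bwd_area, c - dx, c - dy, w, h))
            if cost < best_cost:
                best, best_cost = (dx, dy), cost
    return best, best_cost
```

For a 3×3 sub-region with a search range of 2, each search area is a 7×7 patch of reference pixels, and the function returns the offset that best aligns the forward and backward blocks.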
c) The optimal sub-pixel position is obtained. The sub-pixel position is determined using the template matching distortion values at five positions: the optimal integer position and the positions to its left, right, top, and bottom. A quadratic distortion surface is estimated near the optimal integer position, and the position with the minimum distortion on this surface is calculated as the sub-pixel position. For example, the horizontal and vertical sub-pixel positions are calculated from these five template matching distortion values; the following formulas are one example:
horizontal sub-pixel position = (sad_left - sad_right) * N / ((sad_right + sad_left - 2 * sad_mid) * 2)
vertical sub-pixel position = (sad_btm - sad_top) * N / ((sad_top + sad_btm - 2 * sad_mid) * 2)
Illustratively, sad_mid, sad_left, sad_right, sad_top, and sad_btm are the template matching distortion values at the five positions (the optimal integer position and the positions to its left, right, top, and bottom, respectively), and N is the precision.
Of course, the above is only one example of calculating the horizontal and vertical sub-pixel positions. They may also be calculated in other manners based on the template matching distortion values at the same five positions; this is not limited here, as long as the horizontal and vertical sub-pixel positions are calculated with reference to these parameters.
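The sub-pixel formulas above can be sketched directly. A minimal sketch, assuming integer SAD inputs and N = 16 (i.e., 1/16-pel precision; the value of N is an assumption), with a guard against a zero denominator:

```python
def sub_pixel_offset(sad_mid, sad_left, sad_right, sad_top, sad_btm, n=16):
    """Estimate the horizontal and vertical sub-pixel positions from the
    five template matching distortion values via the quadratic-surface
    formulas above."""
    def axis(sad_lo, sad_hi):
        denom = (sad_hi + sad_lo - 2 * sad_mid) * 2
        return 0 if denom == 0 else (sad_lo - sad_hi) * n // denom
    # horizontal uses (sad_left - sad_right), vertical uses (sad_btm - sad_top)
    return axis(sad_left, sad_right), axis(sad_btm, sad_top)
```

For distortions sampled from a parabola with its minimum a quarter pixel to the right of the integer optimum, the horizontal result is +4 in 1/16-pel units.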
d) The final prediction value is obtained by calculation according to the optimal MV.
Based on the same application concept as the method, an embodiment of the present application provides a decoding apparatus applied to a decoding end, as shown in fig. 11A, which is a structural diagram of the apparatus, and the apparatus includes:
A construction module 1111 for constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle of the motion information angle prediction mode, wherein the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of a current block; a selecting module 1112 for selecting a target motion information prediction mode for the current block from the motion information prediction mode candidate list; a filling module 1113, configured to fill motion information that is unavailable in the neighboring blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode; a determining module 1114, configured to determine motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by preconfigured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The building module 1111 is specifically configured to: adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
The building module 1111 is specifically configured to: if there is no available motion information in at least one of the first and second peripheral matching blocks, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module 1111 is specifically configured to: and if the available motion information exists in the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block;
If there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; alternatively, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The building module 1111 is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is no motion information available for at least one of the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block, or prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block.
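One of the variants described above (skip a pair when either block lacks available motion information or when the pair is identical, and continue with the next pair in traversal order) can be sketched as follows; None marks a peripheral matching block without available motion information:

```python
def should_add_mode(blocks):
    """Examine consecutive pairs of peripheral matching blocks in
    traversal order; add the motion information angular prediction mode
    to the candidate list as soon as a pair with available but different
    motion information is found."""
    for first, second in zip(blocks, blocks[1:]):
        if first is not None and second is not None and first != second:
            return True
    return False
```

The other variants differ only in whether an unavailable or identical pair adds the mode, prohibits it, or defers to the next pair.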
The building module 1111 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
if the peripheral matching block is an interframe coded block, determining that available motion information exists in the peripheral matching block;
otherwise, determining that the peripheral matching block has no available motion information.
The building module 1111 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
and if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block.
The building module 1111 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
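The availability rules in the three passages above can be collected into a single predicate. A minimal sketch; the dictionary keys are illustrative, not from the text, and None stands for a block outside the current picture or slice:

```python
def has_available_motion_info(block):
    """A peripheral matching block has available motion information only
    if it is a coded inter block inside the current picture and slice."""
    if block is None:
        return False                      # outside the picture or slice
    if not block.get("coded", False):
        return False                      # not yet coded
    mode = block.get("mode")
    if mode in ("intra", "ibc"):
        return False                      # intra block or intra block copy
    return mode == "inter"
```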
The filling module 1113 is specifically configured to: traverse the peripheral blocks of the current block in a traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there is a first peripheral block without available motion information before this peripheral block, fill the motion information of this peripheral block into that first peripheral block;
and continue traversing the peripheral blocks after this peripheral block; if a second peripheral block without available motion information is encountered among them, fill the motion information of the peripheral block traversed immediately before the second peripheral block into the second peripheral block.
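The filling rule above can be sketched as a single pass over the traversal order (left peripheral blocks first, then upper); None marks an entry without available motion information:

```python
def fill_unavailable(motion_infos):
    """Entries before the first available one are filled with it; later
    holes are filled with the nearest preceding available entry."""
    first = next((m for m in motion_infos if m is not None), None)
    if first is None:
        return motion_infos       # nothing available to fill with
    filled, last = [], first
    for m in motion_infos:
        if m is None:
            filled.append(last)   # copy from the nearest available block
        else:
            filled.append(m)
            last = m
    return filled
```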
The determining module 1114 is specifically configured to: dividing the current block into at least one sub-region;
for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the target motion information angle prediction mode;
the determining the prediction value of the current block according to the motion information of the current block includes:
and aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
The determining module 1114 is specifically configured to: for each sub-region of the current block, determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region; and determining the predicted value of the current block according to the target predicted value of each sub-area.
The determining module 1114 is further configured to: if the sub-area does not meet the condition of using the bidirectional optical flow, determining a target prediction value of the sub-area according to the motion compensation value of the sub-area; if the motion information of the sub-area is unidirectional motion information, the sub-area does not meet the condition of using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between two reference frames in the time sequence, the sub-region does not satisfy the condition of using the bidirectional optical flow.
If the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located lies between the two reference frames in the temporal order, the sub-region satisfies the condition of using the bidirectional optical flow.
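The bidirectional optical flow condition above requires only bidirectional motion with the current frame between its two reference frames (unlike the decoding-side motion vector adjustment condition, there is no equal-distance requirement). A minimal sketch, again assuming picture order count (POC) as the measure of temporal order:

```python
def meets_bio_condition(is_bidirectional, poc_cur, poc_fwd, poc_bwd):
    """True only for bidirectional motion information where the current
    frame lies between its two reference frames in temporal order."""
    return bool(is_bidirectional and poc_fwd < poc_cur < poc_bwd)
```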
The determining module 1114, when obtaining the bidirectional optical flow offset value of the sub-region, is specifically configured to:
determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value is a forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region;
Determining a bi-directional optical flow offset value for the sub-region from the first pixel value and the second pixel value.
Based on the same application concept as the method, an embodiment of the present application provides an encoding apparatus applied to an encoding end, as shown in fig. 11B, which is a structural diagram of the apparatus, and the apparatus includes:
a constructing module 1121 configured to construct a motion information prediction mode candidate list of a current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle of the motion information angle prediction mode, wherein the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of a current block;
A padding module 1122, configured to pad motion information that is unavailable in the neighboring blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module 1123, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed to by a preconfigured angle of the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The building module 1121 is specifically configured to: adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
The building module 1121 is specifically configured to: if there is no available motion information in at least one of the first and second peripheral matching blocks, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module 1121 is specifically configured to: and if the available motion information exists in the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module 1121 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
if the peripheral matching block is an interframe coded block, determining that available motion information exists in the peripheral matching block;
otherwise, determining that the peripheral matching block has no available motion information.
The building module 1121 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
and if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block.
The building module 1121 is specifically configured to: judging whether any peripheral matching block has available motion information, specifically:
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
If the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
The filling module 1122 is specifically configured to: traverse the peripheral blocks of the current block in a traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there is a first peripheral block without available motion information before this peripheral block, fill the motion information of this peripheral block into that first peripheral block;
and continue traversing the peripheral blocks after this peripheral block; if a second peripheral block without available motion information is encountered among them, fill the motion information of the peripheral block traversed immediately before the second peripheral block into the second peripheral block.
The determining module 1123 is specifically configured to: dividing the current block into at least one sub-region;
for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
The determining the prediction value of the current block according to the motion information of the current block includes:
and aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
In terms of hardware, the hardware architecture diagram of the decoding-side device provided in the embodiment of the present application may specifically refer to fig. 11C. The device comprises: a processor 1131 and a machine-readable storage medium 1132, the machine-readable storage medium 1132 storing machine-executable instructions executable by the processor 1131; the processor 1131 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 1131 is configured to execute machine executable instructions to implement the following steps:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle of the motion information angle prediction mode, wherein the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of a current block;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
In terms of hardware, the hardware architecture diagram of the encoding-side device provided in the embodiment of the present application may specifically refer to fig. 11D. The device comprises: a processor 1141 and a machine-readable storage medium 1142, the machine-readable storage medium 1142 storing machine-executable instructions executable by the processor 1141; the processor 1141 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 1141 is configured to execute machine-executable instructions to perform the following steps:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle of the motion information angle prediction mode, wherein the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of a current block;
if a motion information angle prediction mode exists in the motion information prediction mode candidate list, filling motion information that is unavailable in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
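The candidate-list construction rule in the steps above can be sketched as follows. This is purely an illustrative sketch, not the patent's implementation; every name in it (`MotionInfo`, `build_candidate_list`, the mode identifiers) is an assumption introduced here, and the angle-to-block mapping is supplied as a plain dictionary rather than derived geometrically:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionInfo:
    """Minimal stand-in for a block's motion information (hypothetical fields)."""
    mv_x: int
    mv_y: int
    ref_idx: int

def build_candidate_list(angle_modes, matched_blocks_for_mode):
    """For each angle prediction mode, look at the first and second peripheral
    matching blocks it points to; add the mode to the candidate list only when
    both blocks have available motion information (not None) and that motion
    information differs, as the step above describes."""
    candidates = []
    for mode in angle_modes:
        first, second = matched_blocks_for_mode[mode][:2]
        if first is not None and second is not None and first != second:
            candidates.append(mode)
    return candidates
```

For example, a horizontal mode whose two matched blocks carry different motion vectors would be added, while a vertical mode whose matched blocks carry identical motion information would be skipped.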
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium; when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices. For convenience of description, the above devices are described as being divided into various units by function, and the units are described separately. Of course, when the present application is implemented, the functionality of the units may be implemented in one or more pieces of software and/or hardware. As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (38)
1. A decoding method applied to a decoding end, the method comprising:
constructing a motion information prediction mode candidate list of the current block; wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, selecting, from peripheral blocks of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode, the plurality of peripheral matching blocks at least comprising a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information that is unavailable in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
2. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
3. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if there is no available motion information in at least one of the first and second peripheral matching blocks, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
4. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes: if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
5. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block;
if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
6. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
7. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
8. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; alternatively, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
9. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
10. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
11. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is no motion information available for at least one of the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block, or prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block.
12. The method according to any one of claims 1 to 11,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is an inter-frame coded block, determining that available motion information exists in the peripheral matching block;
otherwise, determining that the peripheral matching block has no available motion information.
13. The method according to any one of claims 1 to 11,
the process of judging whether any peripheral matching block has available motion information includes:
if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block.
14. The method according to any one of claims 1 to 11,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra-frame coded block, determining that no available motion information exists in the peripheral matching block.
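The availability test of claims 12-14 can be collected into one small predicate. The sketch below is illustrative only; the `Block` record and its field names are assumptions made for this example, not structures defined in the patent:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """Hypothetical description of a peripheral matching block's state."""
    inside_image: bool   # inside the image containing the current block
    inside_slice: bool   # inside the image slice containing the current block
    coded: bool          # already encoded/decoded
    mode: str            # "inter", "intra", or "ibc" (intra block copy)

def has_available_motion_info(b: Block) -> bool:
    """Available motion information exists only for an inter-frame coded block
    that lies inside the current image and slice and has already been coded;
    intra blocks and intra-block-copy blocks are treated as unavailable."""
    if not b.inside_image or not b.inside_slice:
        return False
    if not b.coded:
        return False
    return b.mode == "inter"
```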
15. The method of claim 1,
the filling of motion information that is unavailable in the peripheral blocks of the current block comprises:
traversing the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block, until a peripheral block having available motion information is traversed; if the peripheral blocks before that peripheral block include a first peripheral block without available motion information, filling the motion information of that peripheral block into the first peripheral block;
and continuing to traverse the peripheral blocks after that peripheral block; if the peripheral blocks after it include a second peripheral block without available motion information, filling the motion information of the peripheral block traversed immediately before the second peripheral block into the second peripheral block.
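The filling step of claim 15 can be sketched as follows, under the assumption that the peripheral blocks are held in a single list in left-to-top traversal order, with `None` marking a block without available motion information. The function name and list representation are assumptions made for illustration:

```python
def fill_unavailable(peripheral):
    """peripheral: list of motion information (or None) in left-to-top order.
    Blocks before the first available block receive its motion information;
    every later unavailable block is filled from the block traversed
    immediately before it."""
    first_avail = next(
        (i for i, m in enumerate(peripheral) if m is not None), None)
    if first_avail is None:
        return peripheral  # no available motion information to copy from
    # fill the earlier unavailable blocks with the first available information
    for i in range(first_avail):
        peripheral[i] = peripheral[first_avail]
    # fill each later unavailable block from its predecessor
    for i in range(first_avail + 1, len(peripheral)):
        if peripheral[i] is None:
            peripheral[i] = peripheral[i - 1]
    return peripheral
```

After this pass every peripheral block carries some motion information, so any angle prediction mode can read the blocks its angle points to.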
16. The method of claim 1, wherein determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angular prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the target motion information angle prediction mode;
the determining the prediction value of the current block according to the motion information of the current block includes:
for each sub-region of the current block, determining a target predicted value of the sub-region according to the motion information of the sub-region, and determining the predicted value of the current block according to the target predicted value of each sub-region.
17. The method according to any of claims 1-16, wherein the motion information prediction mode candidate list comprises candidate motion information of a skip mode or a direct mode, and the candidate motion information of the skip mode or the direct mode comprises: temporal candidate motion information, spatial candidate motion information, and HMVP candidate motion information;
wherein, when the motion information angular prediction mode is added to the motion information prediction mode candidate list, the motion information angular prediction mode is located between the spatial candidate motion information and the HMVP candidate motion information.
18. The method according to claim 1 or 16,
the determining the prediction value of the current block according to the motion information of the current block includes:
for each sub-region of the current block, determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-region satisfies a condition for using bi-directional optical flow, acquiring a bi-directional optical flow offset value of the sub-region; determining a target predicted value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bi-directional optical flow offset value of the sub-region;
and determining the predicted value of the current block according to the target predicted value of each sub-region.
19. The method of claim 18, wherein after determining the motion compensation value for the sub-region according to the motion information of the sub-region, the method further comprises:
if the sub-region does not satisfy the condition for using bi-directional optical flow, determining the target predicted value of the sub-region according to the motion compensation value of the sub-region; wherein, if the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition for using bi-directional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame in which the sub-region is located is not located between the two reference frames in temporal order, the sub-region does not satisfy the condition for using bi-directional optical flow.
20. The method of claim 18,
and if the motion information of the sub-region is bidirectional motion information and the current frame in which the sub-region is located is located between the two reference frames in temporal order, the sub-region satisfies the condition for using bi-directional optical flow.
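The gating of claims 19-20 can be sketched as one predicate. The use of picture order counts (POC) to express "between the two reference frames in temporal order" is an assumption made for this illustration, as is every name below:

```python
def bdof_allowed(is_bidirectional, poc_current, poc_ref0, poc_ref1):
    """A sub-region qualifies for bi-directional optical flow only if its
    motion information is bidirectional and the current frame lies strictly
    between its two reference frames in temporal (display) order."""
    if not is_bidirectional:
        return False  # unidirectional motion information never qualifies
    lo, hi = sorted((poc_ref0, poc_ref1))
    return lo < poc_current < hi
```

Under this sketch, a sub-region with one reference frame before the current frame and one after it qualifies, while a sub-region whose two reference frames both precede (or both follow) the current frame does not.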
21. The method of claim 18,
the acquiring of the bi-directional optical flow offset value of the sub-region comprises:
determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value comprises the forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value comprises the backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; and the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region;
determining a bi-directional optical flow offset value for the sub-region from the first pixel value and the second pixel value.
22. The method of claim 18,
before the constructing the motion information prediction mode candidate list of the current block, the method further includes:
acquiring first indication information, wherein the first indication information is located at the sequence parameter set level; when the value of the first indication information is a first value, the first indication information is used for indicating that the motion information angle prediction technique is enabled; and when the value of the first indication information is a second value, the first indication information is used for indicating that the motion information angle prediction technique is disabled.
23. The method of any one of claims 1-22, wherein, if the current block uses a motion information angle prediction mode, the current block disables the decoding-end motion vector adjustment technique; the decoding-end motion vector adjustment technique adjusts a motion vector according to a matching criterion between forward reference pixel values and backward reference pixel values.
24. The method of any one of claims 1 to 22,
if the current block uses a motion information angle prediction mode, the current block enables the bi-directional optical flow technique.
25. An encoding method applied to an encoding end, the method comprising:
constructing a motion information prediction mode candidate list of the current block; wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, selecting, from peripheral blocks of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode, the plurality of peripheral matching blocks at least comprising a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block;
if a motion information angle prediction mode exists in the motion information prediction mode candidate list, filling motion information that is unavailable in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
26. The method of claim 25,
the constructing of the motion information prediction mode candidate list of the current block includes:
adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
27. The method of claim 25,
the constructing of the motion information prediction mode candidate list of the current block includes:
if there is no available motion information in at least one of the first and second peripheral matching blocks, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
28. The method of claim 25,
the constructing of the motion information prediction mode candidate list of the current block includes: if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
29. The method of any one of claims 25-28,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is an inter-frame coded block, determining that available motion information exists in the peripheral matching block;
otherwise, determining that the peripheral matching block has no available motion information.
30. The method of any one of claims 25-28,
the process of judging whether any peripheral matching block has available motion information includes:
if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block.
31. The method of any one of claims 25-28,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra-frame coded block, determining that no available motion information exists in the peripheral matching block.
32. The method of claim 25,
the filling of motion information that is unavailable in the peripheral blocks of the current block comprises:
traversing the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block, until a peripheral block having available motion information is traversed; if the peripheral blocks before that peripheral block include a first peripheral block without available motion information, filling the motion information of that peripheral block into the first peripheral block;
and continuing to traverse the peripheral blocks after that peripheral block; if the peripheral blocks after it include a second peripheral block without available motion information, filling the motion information of the peripheral block traversed immediately before the second peripheral block into the second peripheral block.
33. The method of claim 25, wherein determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angular prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
the determining the prediction value of the current block according to the motion information of the current block includes:
for each sub-region of the current block, determining a target predicted value of the sub-region according to the motion information of the sub-region, and determining the predicted value of the current block according to the target predicted value of each sub-region.
34. The method according to any of claims 25-33, wherein the motion information prediction mode candidate list comprises candidate motion information of a skip mode or a direct mode, and the candidate motion information of the skip mode or the direct mode comprises: temporal candidate motion information, spatial candidate motion information, and HMVP candidate motion information;
wherein, when the motion information angular prediction mode is added to the motion information prediction mode candidate list, the motion information angular prediction mode is located between the spatial candidate motion information and the HMVP candidate motion information.
35. A decoding apparatus, applied to a decoding side, the apparatus comprising:
a construction module for constructing a motion information prediction mode candidate list of a current block; wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, selecting, from peripheral blocks of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode, the plurality of peripheral matching blocks at least comprising a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block;
a selection module, configured to select a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
a filling module, configured to fill unavailable motion information in peripheral blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode;
a determining module, configured to determine motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode; and to determine the predicted value of the current block according to the motion information of the current block.
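The admissibility test that the construction module applies to an angle prediction mode can be sketched as follows. A hedged sketch only: motion information is reduced to a tuple or `None`, and `angle_mode_admissible` is a hypothetical name, not the patent's.

```python
# Per the claims, an angle prediction mode enters the candidate list only
# if both peripheral matching blocks to be traversed have available motion
# information AND that motion information differs between the two blocks.

def angle_mode_admissible(first_block, second_block):
    """first_block / second_block: motion information of the first and
    second peripheral matching blocks, or None if unavailable."""
    if first_block is None or second_block is None:
        return False  # at least one block lacks available motion information
    return first_block != second_block  # identical motion info -> not added

assert angle_mode_admissible((1, 0), (0, 1)) is True   # available and different
assert angle_mode_admissible((1, 0), (1, 0)) is False  # available but identical
assert angle_mode_admissible(None, (1, 0)) is False    # one side unavailable
```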
36. An encoding apparatus applied to an encoding side, the apparatus comprising:
a construction module, configured to construct a motion information prediction mode candidate list of a current block; wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode are selected from peripheral blocks of the current block based on that pre-configured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block differs from that of the second peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block;
a filling module, configured to fill unavailable motion information in peripheral blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of that motion information angle prediction mode; and to determine the predicted value of the current block according to the motion information of the current block.
37. A decoding-side apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of a current block, wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode are selected from peripheral blocks of the current block based on that pre-configured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block differs from that of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
38. An encoding side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of a current block, wherein, when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode are selected from peripheral blocks of the current block based on that pre-configured angle, the plurality of peripheral matching blocks comprising at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first peripheral matching block and the second peripheral matching block to be traversed, if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block differs from that of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block;
if a motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of that motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
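The "filling" step in the claims above can be sketched as below. This shows only one plausible fill strategy, assumed for illustration (copying the nearest previously available peripheral motion information, with a default for leading gaps); the actual fill rule is defined by the patent, and `fill_peripheral` is a hypothetical name.

```python
# Hedged sketch: when the candidate list contains a motion information
# angle prediction mode, peripheral blocks whose motion information is
# unavailable (None) are filled so that every peripheral matching block
# the pre-configured angle points to has usable motion information.

def fill_peripheral(motion_infos, default=(0, 0)):
    """motion_infos: list of motion info tuples, None = unavailable.
    Returns the list with every None replaced by the last available
    motion information (or `default` if none has appeared yet)."""
    filled, last = [], default
    for mi in motion_infos:
        if mi is not None:
            last = mi  # remember the most recent available motion info
        filled.append(last)
    return filled

assert fill_peripheral([None, (2, 1), None, (0, 3)]) == [(0, 0), (2, 1), (2, 1), (0, 3)]
```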
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111153097.6A CN113709457B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
CN201910919775.1A CN112565747B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910919775.1A CN112565747B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111153097.6A Division CN113709457B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112565747A true CN112565747A (en) | 2021-03-26 |
CN112565747B CN112565747B (en) | 2022-12-23 |
Family
ID=75030154
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111153097.6A Active CN113709457B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
CN201910919775.1A Active CN112565747B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111153097.6A Active CN113709457B (en) | 2019-09-26 | 2019-09-26 | Decoding and encoding method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113709457B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023273802A1 (en) * | 2021-06-30 | 2023-01-05 | 杭州海康威视数字技术股份有限公司 | Decoding method and apparatus, coding method and apparatus, device, and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118540496A (en) * | 2024-07-24 | 2024-08-23 | 浙江大华技术股份有限公司 | Image decoding method, image encoding device, and computer storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248895A (en) * | 2013-05-14 | 2013-08-14 | 芯原微电子(北京)有限公司 | Quick mode estimation method used for HEVC intra-frame coding |
WO2013155299A2 (en) * | 2012-04-12 | 2013-10-17 | Qualcomm Incorporated | Scalable video coding prediction with non-causal information |
CN107534780A (en) * | 2015-02-25 | 2018-01-02 | 瑞典爱立信有限公司 | The coding and decoding of inter picture in video |
CN109104609A (en) * | 2018-09-12 | 2018-12-28 | 浙江工业大学 | A kind of lens boundary detection method merging HEVC compression domain and pixel domain |
Also Published As
Publication number | Publication date |
---|---|
CN112565747B (en) | 2022-12-23 |
CN113709457A (en) | 2021-11-26 |
CN113709457B (en) | 2022-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110933426B (en) | Decoding and encoding method and device thereof | |
CN111698500B (en) | Encoding and decoding method, device and equipment | |
CN113873249B (en) | Encoding and decoding method, device and equipment | |
CN113709457B (en) | Decoding and encoding method, device and equipment | |
CN110662033B (en) | Decoding and encoding method and device thereof | |
CN110662074B (en) | Motion vector determination method and device | |
CN113747166B (en) | Encoding and decoding method, device and equipment | |
CN113709499B (en) | Encoding and decoding method, device and equipment | |
CN112449180B (en) | Encoding and decoding method, device and equipment | |
CN112468817B (en) | Encoding and decoding method, device and equipment | |
CN112449181B (en) | Encoding and decoding method, device and equipment | |
CN113766234B (en) | Decoding and encoding method, device and equipment | |
CN112055220B (en) | Encoding and decoding method, device and equipment | |
CN111669592B (en) | Encoding and decoding method, device and equipment | |
CN114710665B (en) | Decoding and encoding method, device and equipment | |
CN114710663B (en) | Decoding and encoding method, device and equipment | |
CN110691247B (en) | Decoding and encoding method and device | |
US20160366434A1 (en) | Motion estimation apparatus and method | |
WO2012114561A1 (en) | Moving image coding device and moving image coding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||