CN113766234B - Decoding and encoding method, device and equipment - Google Patents

Decoding and encoding method, device and equipment

Info

Publication number
CN113766234B
CN113766234B (application CN202010508179.7A)
Authority
CN
China
Prior art keywords
motion information
block
prediction mode
peripheral
peripheral matching
Prior art date
Legal status
Active
Application number
CN202010508179.7A
Other languages
Chinese (zh)
Other versions
CN113766234A (en)
Inventor
方树清
曹小强
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010508179.7A
Publication of CN113766234A
Application granted
Publication of CN113766234B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides decoding and encoding methods, apparatuses, and devices. The method includes: constructing a motion information prediction mode candidate list of the current block; selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling the unavailable motion information in the peripheral blocks of the current block; and determining the motion information of the current block according to the motion information of the peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block. This scheme can improve coding performance.

Description

Decoding and encoding method, device and equipment
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to decoding and encoding methods, apparatuses, and devices.
Background
To save space, video images are encoded before transmission. A complete video coding method may include prediction, transform, quantization, entropy coding, filtering, and other processes. The prediction process may include intra-frame prediction and inter-frame prediction. Inter-frame prediction exploits the temporal correlation of video: the pixels of the current image are predicted from the pixels of adjacent encoded images, thereby effectively removing temporal redundancy. In the related art, one piece of motion information is determined for the current block directly by indicating a motion information index or a difference information index, without dividing the current block into sub-blocks.
However, in the above manner all sub-blocks within the current block share one piece of motion information. For sub-blocks covering smaller moving objects, sharing a single piece of motion information cannot achieve good encoding performance; on the other hand, if the current block is divided into multiple sub-blocks, additional bit overhead is incurred.
Disclosure of Invention
The present application provides decoding and encoding methods, apparatuses, and devices, which can improve encoding performance.
The present application provides a decoding method applied to a decoding end, the method comprising the following steps:
constructing a motion information prediction mode candidate list of the current block; when the motion information prediction mode candidate list of the current block is constructed, for any motion information angle prediction mode of the current block, selecting, based on the preconfigured angle of the motion information angle prediction mode, at least two peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if available motion information exists in both the first and second peripheral matching blocks and the motion information of the first and second peripheral matching blocks is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling the unavailable motion information in the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of the peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block; wherein the motion information angle prediction mode of the current block comprises at least one of: a horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal rightward motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
The present application provides an encoding method applied to an encoding end, the method comprising the following steps:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
If the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block; for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of the current block according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode; determining a prediction value of the current block according to the motion information of the current block;
wherein the motion information angle prediction mode of the current block comprises at least one of: a horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal rightward motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
The present application provides a decoding apparatus applied to a decoding end, the apparatus comprising:
a construction module for constructing a motion information prediction mode candidate list of a current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is different, adding the motion information angular prediction mode to the motion information prediction mode candidate list; a selection module for selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; a filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode; the determining module is used for determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block;
Wherein the motion information angle prediction mode of the current block comprises at least one of: a horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal rightward motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
The application provides a coding device, is applied to the code end, the device includes: a construction module for constructing a motion information prediction mode candidate list of a current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list; a filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list; a determining module, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a peripheral matching block pointed by a preconfigured angle of the motion information angle prediction mode; determining a prediction value of the current block according to the motion information of the current block; wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical right motion information angle prediction mode; a horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; and (4) a diagonal down-right motion information angle prediction mode.
The application provides a decoding side device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block; wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; and (4) a diagonal down-right motion information angle prediction mode.
The application provides a coding end device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is different, adding the motion information angular prediction mode to the motion information prediction mode candidate list;
if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in peripheral blocks of the current block; for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining motion information of the current block according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode; determining a prediction value of the current block according to the motion information of the current block;
Wherein the motion information angle prediction mode of the current block comprises at least one of: a horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal rightward motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
According to the above technical solution, the current block does not need to be divided, so the bit overhead caused by sub-block division is effectively avoided. For example, on the basis of not dividing the current block into sub-blocks, motion information can still be provided for each sub-region of the current block, and different sub-regions of the current block can correspond to the same or different motion information. This improves coding performance, avoids transmitting a large amount of motion information, and saves a large amount of bit overhead.
Drawings
FIG. 1 is a schematic diagram of a video coding framework in one embodiment of the present application;
FIGS. 2A-2D are schematic diagrams illustrating the partitioning of a current block according to an embodiment of the present application;
FIG. 3A is a flow chart of a decoding method in one embodiment of the present application;
FIGS. 3B and 3C are schematic diagrams of the motion information angle prediction mode;
FIG. 3D is a flow chart of an encoding method in one embodiment of the present application;
FIGS. 4A-4K are schematic diagrams of peripheral blocks of a current block;
FIGS. 5A-5P are diagrams illustrating selection of motion information for a sub-region of a current block;
FIG. 6A is a block diagram of a decoding device according to an embodiment of the present application;
FIG. 6B is a block diagram of an encoding device according to an embodiment of the present application;
FIG. 6C is a hardware structure diagram of a decoding-side device according to an embodiment of the present application;
FIG. 6D is a hardware structure diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items. It should be understood that although the terms first, second, etc. may be used herein to describe various information, this information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The embodiment of the application provides a decoding and encoding method, a decoding and encoding device and equipment thereof, and can relate to the following concepts:
Intra and inter prediction techniques: intra-frame prediction refers to predictive coding using reconstructed pixel values of spatially neighboring blocks of the current block (i.e., blocks in the same frame as the current block). Inter-frame prediction refers to predictive coding using reconstructed pixel values of temporally neighboring blocks of the current block (i.e., blocks in frames other than the frame containing the current block). Inter-frame prediction exploits the temporal correlation of video: because a video sequence contains strong temporal correlation, the pixels of the current image are predicted from the pixels of adjacent encoded images, thereby effectively removing temporal redundancy.
Motion Vector (MV): in inter-frame coding, a motion vector represents the relative displacement between the current block of the current frame and a reference block of the reference frame. For example, there is a strong temporal correlation between the current frame A and the reference frame B; when transmitting the current block A1 of the current frame A, a motion search can be performed in the reference frame B to find the reference block B1 that best matches the current block A1, and the relative displacement between the current block A1 and the reference block B1, that is, the motion vector, is determined.
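For illustration only, the following is a minimal full-search sketch of the motion search just described, using a sum of absolute differences (SAD) criterion; the frame layout, search range, and all function and parameter names are assumptions for this sketch, not part of this application:

```cpp
#include <cstdint>
#include <cstdlib>
#include <limits>

struct MotionVector { int x = 0; int y = 0; };

// Sum of absolute differences between the current block and a candidate
// reference block (both frames stored row-major; 'stride' is the frame width).
static uint64_t Sad(const uint8_t* cur, const uint8_t* ref, int stride,
                    int blockW, int blockH) {
    uint64_t sad = 0;
    for (int y = 0; y < blockH; ++y)
        for (int x = 0; x < blockW; ++x)
            sad += std::abs(int(cur[y * stride + x]) - int(ref[y * stride + x]));
    return sad;
}

// Full search inside a +/-range window: returns the displacement between the
// current block A1 and the best matching reference block B1, i.e. the motion vector.
MotionVector FullSearch(const uint8_t* curFrame, const uint8_t* refFrame,
                        int stride, int height, int blockX, int blockY,
                        int blockW, int blockH, int range) {
    MotionVector best;
    uint64_t bestCost = std::numeric_limits<uint64_t>::max();
    for (int dy = -range; dy <= range; ++dy) {
        for (int dx = -range; dx <= range; ++dx) {
            int rx = blockX + dx, ry = blockY + dy;
            if (rx < 0 || ry < 0 || rx + blockW > stride || ry + blockH > height)
                continue;  // candidate block must lie entirely inside the reference frame
            uint64_t cost = Sad(curFrame + blockY * stride + blockX,
                                refFrame + ry * stride + rx, stride, blockW, blockH);
            if (cost < bestCost) { bestCost = cost; best = {dx, dy}; }
        }
    }
    return best;
}
```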
Each divided block has a corresponding motion vector to be transmitted to the decoding side. If the motion vector of each block were encoded and transmitted independently, especially when the picture is divided into a large number of small blocks, a considerable number of bits would be consumed. To reduce the number of bits used to encode motion vectors, the spatial correlation between adjacent blocks can be exploited: the motion vector of the current block to be encoded is predicted from the motion vectors of adjacent encoded blocks, and then the prediction difference is encoded, which effectively reduces the number of bits representing the motion vector. In the process of encoding the motion vector of the current block, the motion vector of the current block is first predicted using the motion vectors of adjacent encoded blocks, and then the difference (Motion Vector Difference, MVD) between the motion vector prediction (MVP) and the actual motion vector is encoded, thereby effectively reducing the number of bits used to encode the motion vector.
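The MVP/MVD relationship described above can be summarized in a tiny sketch (the type and function names are illustrative):

```cpp
struct Mv { int x = 0; int y = 0; };

// Encoder side: only the difference MVD = MV - MVP is written to the bitstream.
Mv ComputeMvd(const Mv& mv, const Mv& mvp) { return {mv.x - mvp.x, mv.y - mvp.y}; }

// Decoder side: the motion vector is recovered as MV = MVP + MVD.
Mv RecoverMv(const Mv& mvp, const Mv& mvd) { return {mvp.x + mvd.x, mvp.y + mvd.y}; }
```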
Motion Information: in order to accurately obtain the reference block, in addition to the motion vector, index information of the reference frame image is required to indicate which reference frame image is used. In video coding, a reference frame picture list can generally be established for the current frame, and the reference frame index information indicates which reference frame in the reference frame picture list is used by the current block (for example, an index indicating the second reference frame in the list). Many coding techniques also support multiple reference picture lists, so another index value, which may be referred to as the reference direction, may be used to indicate which reference picture list is used. In video coding, motion-related information such as the motion vector, the reference frame index, and the reference direction may be collectively referred to as motion information.
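As a data-structure sketch, the motion information just described (motion vector, reference frame index, and reference direction) could be grouped as follows; the field names are assumptions for illustration:

```cpp
// Which reference picture list the motion information refers to ("reference direction").
enum class RefDirection { kList0, kList1, kBidirectional };

struct MotionInfo {
    int mvX = 0, mvY = 0;                    // motion vector components
    int refIdx = -1;                         // index into the reference frame picture list
    RefDirection dir = RefDirection::kList0; // which reference picture list is used
};
```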
Rate-Distortion Optimization (RDO) principle: there are two major indicators for evaluating coding efficiency: bit rate and Peak Signal to Noise Ratio (PSNR). The smaller the bit stream, the higher the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a combined evaluation of the two. For example, the cost corresponding to a mode is J(mode) = D + λ * R, where D denotes distortion, which can generally be measured using the SSE metric, i.e., the sum of squared differences between the reconstructed image block and the source image block; λ is the Lagrange multiplier; and R is the actual number of bits required for encoding the image block in this mode, including the bits required for encoding the mode information, motion information, residual, and so on. When selecting a mode, if the RDO principle is used to compare the coding modes, the best coding performance can generally be ensured.
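A minimal sketch of the cost comparison J(mode) = D + λR described above, with SSE as the distortion measure; the function signatures are illustrative assumptions:

```cpp
#include <cstdint>
#include <cstddef>

// D: sum of squared differences between the source block and the reconstructed block.
double ComputeSse(const uint8_t* src, const uint8_t* rec, std::size_t numSamples) {
    double sse = 0.0;
    for (std::size_t i = 0; i < numSamples; ++i) {
        double d = double(src[i]) - double(rec[i]);
        sse += d * d;
    }
    return sse;
}

// J(mode) = D + lambda * R, where R is the number of bits needed to encode the
// mode information, motion information, residual, etc. for this mode.
double RdCost(double distortion, double lambda, double rateBits) {
    return distortion + lambda * rateBits;
}

// Mode decision: among the candidate modes, the one with the smallest J(mode) is kept.
```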
Skip mode and direct mode: in the skip mode or the direct mode, the motion information of the current block completely reuses the motion information of some temporally or spatially neighboring block; for example, one piece of motion information is selected from the motion information set of the surrounding blocks as the motion information of the current block, so that only an index value needs to be encoded to indicate which motion information in the set is used by the current block. The difference between the two modes is that the skip mode does not require the residual to be coded, whereas the direct mode does. Obviously, the skip mode or the direct mode can greatly reduce the coding overhead of motion information.
HMVP (History-based Motion Vector Prediction) mode: the motion information of the current block is predicted using the motion information of previously reconstructed blocks. An HMVP list is established to store the motion information of previously reconstructed blocks, and the HMVP list is updated each time a block is decoded and the motion information changes. For the current block, the motion information in the HMVP list is always available, and prediction accuracy is improved by using the motion information in the HMVP list.
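A minimal sketch of how such an HMVP list could be maintained (a first-in-first-out list that is updated after each decoded block, with duplicate entries removed); the container, the size cap, and the names are assumptions for illustration:

```cpp
#include <cstddef>
#include <deque>

struct HmvpEntry {
    int mvX = 0, mvY = 0;
    int refIdx = -1;
    bool operator==(const HmvpEntry& o) const {
        return mvX == o.mvX && mvY == o.mvY && refIdx == o.refIdx;
    }
};

class HmvpList {
public:
    explicit HmvpList(std::size_t maxSize = 8) : maxSize_(maxSize) {}

    // Called after a block is decoded: store its motion information, removing an
    // identical older entry first and keeping the list bounded (oldest entry dropped).
    void Update(const HmvpEntry& mi) {
        for (auto it = list_.begin(); it != list_.end(); ++it) {
            if (*it == mi) { list_.erase(it); break; }
        }
        if (list_.size() == maxSize_) list_.pop_front();
        list_.push_back(mi);  // most recent motion information sits at the back
    }

    const std::deque<HmvpEntry>& Entries() const { return list_; }

private:
    std::deque<HmvpEntry> list_;
    std::size_t maxSize_;
};
```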
MHBSKIP mode: the MHBSKIP mode is a prediction mode within the skip mode or the direct mode, and it predicts the motion information of the current block using the motion information of the spatially neighboring blocks of the current block. The MHBSKIP mode constructs three pieces of motion information (bidirectional, backward, and forward) from the motion information of the spatially neighboring blocks of the current block in order to predict the current block.
When predicting the current block in the skip mode or the direct mode, a motion information prediction mode candidate list needs to be created for the current block. When creating the motion information prediction mode candidate list, the list may sequentially include temporal candidate motion information, spatial candidate motion information (i.e., candidate motion information of the MHBSKIP mode), and HMVP candidate motion information (i.e., candidate motion information of the HMVP mode). For example, the number of temporal candidate motion information entries is 1, the number of spatial candidate motion information entries is 3, and the number of HMVP candidate motion information entries is 8. Of course, each of these numbers may also take other values.
Intra Block Copy (IBC): intra block copy allows references within the same frame; the reference data of the current block comes from the same frame, and the prediction value of the current block is obtained using a block vector. The block vector represents the relative displacement between the current block and the best matching block among the encoded blocks of the current frame.
Motion Vector Angle Prediction (MVAP) mode: the MVAP mode partitions the current block into sub-regions (i.e., sub-blocks inside the current block) and copies the motion information of each sub-region from the spatial blocks according to a preset prediction angle. In this way, more motion information is provided for the interior of the current block without actually dividing it, which improves coding performance.
The motion vector angle prediction mode may also be referred to as the motion information angle prediction mode. It is an angle prediction mode for predicting motion information, is used for inter-frame coding rather than intra-frame coding, and selects a matching block rather than a matching pixel. The motion information angle prediction mode indicates a preconfigured angle; according to this preconfigured angle, a peripheral matching block is selected for each sub-region of the current block from the peripheral blocks of the current block, and one or more pieces of motion information of the current block are determined according to the motion information of the peripheral matching blocks, i.e., for each sub-region of the current block, the motion information of the sub-region is determined according to the motion information of its peripheral matching block. The peripheral matching block is a block at a specified position determined from the peripheral blocks of the current block according to the preconfigured angle.
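The following sketch illustrates the basic idea of copying motion information for each sub-region from a peripheral matching block pointed to by the preconfigured angle; only the horizontal and vertical angles are shown, and the sub-region grid, array layout, and names are assumptions rather than the normative derivation:

```cpp
#include <vector>

struct SubRegionMotion { int mvX = 0, mvY = 0; int refIdx = -1; };

enum class AngleMode { kHorizontalLeft, kVerticalUp /* further angles omitted */ };

// left[i]:  motion information of the peripheral block to the left of sub-region row i.
// above[j]: motion information of the peripheral block above sub-region column j.
// Requires left.size() >= rows and above.size() >= cols.
// Returns one piece of motion information per sub-region (row-major), copied along the angle.
std::vector<SubRegionMotion> PredictSubRegions(AngleMode mode,
                                               const std::vector<SubRegionMotion>& left,
                                               const std::vector<SubRegionMotion>& above,
                                               int rows, int cols) {
    std::vector<SubRegionMotion> out(rows * cols);
    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            // Each sub-region copies from the peripheral block that the preconfigured
            // angle points to; different sub-regions may therefore get different motion
            // information without the current block being split into coded sub-blocks.
            out[i * cols + j] = (mode == AngleMode::kHorizontalLeft) ? left[i] : above[j];
        }
    }
    return out;
}
```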
Video coding framework: referring to FIG. 1, a schematic diagram of a video encoding framework is shown; this video encoding framework may be used to implement the encoding-end processing flow of the embodiments of the present application. The schematic diagram of the video decoding framework is similar to FIG. 1 and is not repeated here; the video decoding framework may be used to implement the decoding-end processing flow of the embodiments of the present application. Illustratively, the video encoding and decoding frameworks may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and entropy coding. At the encoding end, the encoding-end processing flow is realized through the cooperation of these modules; at the decoding end, the decoding-end processing flow is realized through the cooperation of these modules.
In the related art, the current block has only one piece of motion information, that is, all sub-blocks inside the current block share one piece of motion information, and for scenes with small moving objects the prediction accuracy is not high. Referring to FIG. 2A, sub-region C, sub-region G, and sub-region H are sub-regions within the current block; assuming that the current block uses the motion information of block F, each sub-region within the current block uses the motion information of block F. Since sub-region H is far from block F, if sub-region H also uses the motion information of block F, the prediction accuracy of the motion information of sub-region H is not high. If, instead, the current block is divided into a plurality of sub-blocks and motion information is selected for each sub-block, the motion information of the sub-blocks inside the current block cannot utilize the motion information of the encoded blocks around the current block, so the available motion information is reduced and its accuracy is not high. For example, referring to FIG. 2B, the sub-block I of the current block can only use the motion information of the sub-blocks C, G, and H, and cannot use the motion information of A, B, F, D, and E.
In view of the above findings, embodiments of the present application provide a decoding method and an encoding method that allow the current block to correspond to multiple pieces of motion information without dividing the current block into sub-blocks, thereby improving the prediction accuracy of the motion information of the current block. Because the current block is not divided into sub-blocks, no extra bits need to be consumed to transmit a sub-block division mode, which saves bit overhead. For each sub-region of the current block (here, any sub-region inside the current block whose size is smaller than or equal to the size of the current block, rather than a sub-block obtained by dividing the current block), the motion information of the sub-region may be obtained using the motion information of the encoded blocks around the current block. In other words, different sub-regions of the current block may correspond to the same or different motion information, which provides more motion information for the current block and improves the accuracy of the motion information.
For example, referring to FIG. 2B, C is a sub-region inside the current block, and A, B, D, E, and F are encoded blocks around the current block. The motion information of the sub-region C may adopt the motion information of the encoded blocks around the current block; that is, the motion information of each sub-region of the current block may be obtained from the motion information of the encoded blocks around the current block.
For example, referring to FIG. 2C, C is a sub-region inside the current block, and A, B, D, E, F, G, H, and I are surrounding blocks of the current block; the motion information of the sub-region C may adopt the motion information of these surrounding blocks. Compared with FIG. 2B, the motion information of the sub-region C may use not only the motion information of the encoded blocks (A, B, D, E, and F) around the current block but also the motion information of the unencoded blocks (G, H, and I) around the current block, which may be temporal motion information. Since the motion information of the unencoded blocks around the current block can also serve as the motion information of the sub-region C, more motion information candidates are available for the sub-region C, and the coding performance can be improved.
Referring to FIG. 2D, the current block includes 9 sub-regions, e.g., f1-f9, which are not sub-blocks into which the current block is divided. For different sub-regions, the same or different motion information may be associated, so that on the basis of not dividing the current block into sub-blocks, the current block may also be associated with multiple pieces of motion information, for example, the sub-region f1 is associated with the motion information 1, the sub-region f2 is associated with the motion information 2, and so on. In determining the motion information of each sub-region, the motion information of A1-A14, the motion information of CN-C1 and the motion information of D1-DN can be utilized, so as to provide more motion information for the sub-regions.
Embodiments of the present application may involve the construction process of the motion information prediction mode candidate list: for example, for any motion information angle prediction mode, the motion information angle prediction mode is either added to the motion information prediction mode candidate list or prohibited from being added to it. They may involve the motion information padding process, for example, filling the motion information that is unavailable in the peripheral blocks of the current block. They may also involve the motion compensation process of the current block, for example, determining the motion information of the current block using the motion information of the peripheral matching block pointed to by the preconfigured angle of the motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block.
When the construction process of the motion information prediction mode candidate list and the motion information padding process are implemented, duplicate checking is first performed on the motion information angle prediction modes, and the unavailable motion information in the peripheral blocks is filled afterwards; this reduces the complexity of the decoding end and improves decoding performance. For example, for the horizontal leftward motion information angle prediction mode, the vertical upward motion information angle prediction mode, the horizontal upward motion information angle prediction mode, and so on, duplicate checking is performed first, and a non-duplicated mode (e.g., a non-duplicated horizontal leftward motion information angle prediction mode) is added to the motion information prediction mode candidate list of the current block, thereby obtaining the candidate list. After the decoding end selects the target motion information prediction mode of the current block from the candidate list, if the target motion information prediction mode is a motion information angle prediction mode, the decoding end fills the unavailable motion information in the peripheral blocks; if the target motion information prediction mode is not a motion information angle prediction mode, the decoding end does not need to fill the unavailable motion information in the peripheral blocks. In this way the decoding end reduces motion information padding operations, which lowers decoding-end complexity and improves decoding performance.
The decoding method and the encoding method in the embodiments of the present application will be described below with reference to several specific embodiments.
Example 1: referring to fig. 3A, a flowchart of a decoding method is shown, which can be applied to a decoding end, and the method includes:
step 311, construct a motion information prediction mode candidate list of the current block.
Illustratively, when constructing a motion information prediction mode candidate list of a current block, for any motion information angle prediction mode of the current block, based on a preconfigured angle of the motion information angle prediction mode, at least two peripheral matching blocks pointed by the preconfigured angle are selected from peripheral blocks of the current block, and the at least two peripheral matching blocks at least include a first peripheral matching block and a second peripheral matching block to be traversed. And for the first and second peripheral matching blocks to be traversed, if available motion information exists in both the first and second peripheral matching blocks and the motion information of the first and second peripheral matching blocks is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
For example, the process of constructing the motion information prediction mode candidate list of the current block may include:
Step a1: for any motion information angle prediction mode of the current block, select, based on the preconfigured angle of the motion information angle prediction mode, at least two peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block.
The motion information angle prediction mode is used for indicating a preconfigured angle, selecting a peripheral matching block for the sub-region of the current block from peripheral blocks of the current block according to the preconfigured angle, and determining one or more motion information of the current block according to the motion information of the peripheral matching block, that is, for each sub-region of the current block, determining the motion information of the sub-region according to the motion information of the peripheral matching block.
The peripheral matching block is a block at a specified position determined from peripheral blocks of the current block according to a pre-configured angle.
Illustratively, the peripheral blocks may include blocks (encoded blocks and unencoded blocks) adjacent to the current block; alternatively, the peripheral blocks may include a block adjacent to the current block and a non-adjacent block. Of course, the peripheral block may also include other blocks, which is not limited in this regard.
For example, the motion information angle prediction mode may include, but is not limited to, at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical right motion information angle prediction mode; horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; diagonal down-right motion information angle prediction mode. Of course, the above are only a few examples of the motion information angle prediction mode, and there may be other types of motion information angle prediction modes, where the motion information angle prediction mode is related to the preconfigured angle, for example, the preconfigured angle may also be 10 degrees or 20 degrees, and this is not limited thereto.
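For reference, the eight preconfigured angles listed above could be represented as a simple enumeration; the identifiers are illustrative only:

```cpp
// The eight motion information angle prediction modes enumerated above.
enum class MotionInfoAngleMode {
    kHorizontalLeft,
    kVerticalUp,
    kHorizontalUp,
    kHorizontalDown,
    kVerticalRight,
    kHorizontalRight,
    kVerticalDown,
    kDiagonalDownRight
};
```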
Referring to fig. 3B, a diagram of a horizontal left motion information angle prediction mode, a vertical up motion information angle prediction mode, a horizontal down motion information angle prediction mode, and a vertical right motion information angle prediction mode is shown. Fig. 3C is a schematic diagram of the horizontal rightward motion information angle prediction mode, the vertical downward motion information angle prediction mode, and the diagonal downward and rightward motion information angle prediction mode. In summary, based on the preconfigured angle of the motion information angular prediction mode, the peripheral matching block pointed by the preconfigured angle is selected from the peripheral blocks of the current block. For example, referring to fig. 3B and 3C, the peripheral matching blocks pointed to by the preconfigured angles for each motion information angular prediction mode are shown.
Step a2: the at least two peripheral matching blocks at least include a first peripheral matching block and a second peripheral matching block to be traversed. If, for the first and second peripheral matching blocks to be traversed, available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, if there is no available motion information in at least one of the first and second peripheral matched blocks, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, if there is no motion information available for at least one of the first and second peripheral matched blocks, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, for the first and second peripheral matching blocks to be traversed, if there is available motion information for both the first and second peripheral matching blocks, and the motion information for the first and second peripheral matching blocks is different, the motion information angular prediction mode is added to the motion information prediction mode candidate list for the current block.
In one possible embodiment, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, for the first and second peripheral matching blocks to be traversed, if an intra block and/or a peripheral block whose prediction mode is the intra block copy mode exists among the first and second peripheral matching blocks, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. Or, if an intra block and/or a peripheral block whose prediction mode is the intra block copy mode exists among the first and second peripheral matching blocks, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. Or, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image where the current block is located or outside the image slice where the current block is located, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. Or, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image where the current block is located or outside the image slice where the current block is located, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, if the at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, then for the first and second peripheral matching blocks to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, if the at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, it is continuously determined whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block are different.
In a possible embodiment, if the at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, it is continuously determined whether both the second peripheral matching block and the third peripheral matching block have available motion information. If the available motion information exists in the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
In a possible embodiment, if the at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In a possible implementation, if at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block may be prohibited.
In a possible embodiment, if at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is available motion information for both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is different.
In a possible embodiment, if at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is available motion information in both the second peripheral matching block and the third peripheral matching block, when the motion information of the second peripheral matching block and the third peripheral matching block is the same, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, if at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, if at least two peripheral matching blocks include at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
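A sketch of one possible combination of the alternatives described above: the peripheral matching blocks pointed to by the preconfigured angle are traversed pairwise in order; as soon as a pair with two different pieces of available motion information is found the mode is added, pairs with identical or unavailable motion information cause the traversal to continue, and if no pair qualifies the mode is not added. The types and names are illustrative assumptions, not the normative check:

```cpp
#include <cstddef>
#include <vector>

struct PeripheralMotion {
    bool available = false;  // whether usable motion information exists for this block
    int mvX = 0, mvY = 0;
    int refIdx = -1;
};

static bool SameMotion(const PeripheralMotion& a, const PeripheralMotion& b) {
    return a.mvX == b.mvX && a.mvY == b.mvY && a.refIdx == b.refIdx;
}

// Duplicate check for one motion information angle prediction mode.
// 'blocks' holds the peripheral matching blocks pointed to by the preconfigured angle,
// in traversal order (first, second, third, ...). Returns true if the mode should be
// added to the motion information prediction mode candidate list.
bool ShouldAddAngleMode(const std::vector<PeripheralMotion>& blocks) {
    for (std::size_t i = 0; i + 1 < blocks.size(); ++i) {
        const PeripheralMotion& a = blocks[i];
        const PeripheralMotion& b = blocks[i + 1];
        if (a.available && b.available) {
            if (!SameMotion(a, b))
                return true;  // two different pieces of available motion information: add the mode
            // identical motion information: keep traversing with the next pair
        }
        // if either block has no available motion information, keep traversing as well
    }
    return false;             // no qualifying pair found: do not add the mode
}
```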
In the above embodiment, the process of determining whether there is available motion information in any peripheral matching block may include, but is not limited to: if the peripheral matching block is the inter-frame coded block, determining that the available motion information exists in the peripheral matching block.
In the above embodiment, the process of determining whether there is available motion information in any peripheral matching block may include, but is not limited to: and if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block. Illustratively, the intra block copy mode requires providing a block vector for the current block, and obtaining a prediction value in the current frame according to the block vector, where the block vector represents a relative displacement between the current block and a best matching block in the current frame encoded block.
In the above embodiment, the process of determining whether there is available motion information in any peripheral matching block may include, but is not limited to: and if the peripheral matching block is positioned outside the image of the current block, determining that the peripheral matching block has no available motion information. Or, if the peripheral matching block is located outside the image slice where the current block is located, determining that the peripheral matching block has no available motion information.
In the above embodiment, the process of determining whether there is available motion information in any peripheral matching block may include, but is not limited to: and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
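Illustratively, the availability rules above can be summarized in the following minimal Python sketch; the structure and field names (is_inter, is_intra, is_ibc, inside_picture, inside_slice) are assumptions made for illustration and are not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PeripheralBlock:
    is_inter: bool                 # True if the block was inter coded
    is_intra: bool                 # True if the block was intra coded
    is_ibc: bool                   # True if the prediction mode is intra block copy
    inside_picture: bool           # False if the block lies outside the current picture
    inside_slice: bool             # False if the block lies outside the current slice
    motion_info: Optional[Tuple[int, int, int]] = None  # e.g. (mv_x, mv_y, ref_idx)

def has_available_motion_info(block: PeripheralBlock) -> bool:
    """Availability rules as summarized above (illustrative simplification)."""
    if not block.inside_picture or not block.inside_slice:
        return False               # outside the image or image slice of the current block
    if block.is_intra or block.is_ibc:
        return False               # intra and intra-block-copy blocks carry no usable motion info
    return block.is_inter          # inter-coded blocks have available motion information
```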
In one possible implementation, the motion information prediction mode candidate list includes candidate motion information of a skip mode or a direct mode, and the candidate motion information of the skip mode or the direct mode includes but is not limited to: temporal candidate motion information, spatial candidate motion information, HMVP candidate motion information. Based on this, when the motion information angle prediction mode is added to the motion information prediction mode candidate list, the motion information angle prediction mode may be located between the spatial domain candidate motion information and the HMVP candidate motion information. Of course, the above is only an example, and the order of the motion information prediction mode candidate list may be set in other orders.
For example, the motion information prediction mode candidate list sequentially includes: time domain candidate motion information, space domain candidate motion information, motion information angle prediction mode and HMVP candidate motion information; or, the time domain candidate motion information, the spatial domain candidate motion information, the HMVP candidate motion information, the motion information angle prediction mode; or, spatial domain candidate motion information, temporal domain candidate motion information, motion information angle prediction mode, HMVP candidate motion information; or, spatial domain candidate motion information, temporal domain candidate motion information, HMVP candidate motion information, motion information angle prediction mode; or, time domain candidate motion information, motion information angle prediction mode, spatial domain candidate motion information, HMVP candidate motion information; or the motion information angle prediction mode, the time domain candidate motion information, the spatial domain candidate motion information and the HMVP candidate motion information.
The number of temporal candidate motion information, the number of spatial candidate motion information, the number of motion information angular prediction modes, and the number of HMVP candidate motion information may be arbitrarily configured, but is not limited thereto.
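Illustratively, the alternative orderings above can be sketched as follows; the helper name and candidate placeholders are hypothetical, and only the relative order of the four groups reflects the examples listed above.

```python
def build_candidate_list(temporal, spatial, angle_modes, hmvp,
                         order=("temporal", "spatial", "angle", "hmvp")):
    """Assemble the motion information prediction mode candidate list in a chosen order.

    The default order places the motion information angle prediction modes between
    the spatial candidates and the HMVP candidates; any of the orders listed above
    can be obtained by permuting the `order` tuple.
    """
    groups = {"temporal": temporal, "spatial": spatial,
              "angle": angle_modes, "hmvp": hmvp}
    candidate_list = []
    for key in order:
        candidate_list.extend(groups[key])
    return candidate_list

# Example order: temporal, spatial, angle prediction modes, HMVP
lst = build_candidate_list(["T0"], ["S0", "S1"], ["ANGLE_HORIZONTAL_LEFT"], ["H0"])
```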
In step 312, a target motion information prediction mode of the current block is selected from the motion information prediction mode candidate list.
Step 313, if the target motion information prediction mode is the target motion information angle prediction mode (i.e. a certain motion information angle prediction mode is used as the target motion information prediction mode), filling the unavailable motion information in the peripheral blocks of the current block.
In a possible implementation, the motion information that is not available in the peripheral blocks of the current block can be directly padded. In another possible embodiment, if the peripheral matching block pointed to by the pre-configured angle of the angular prediction mode of the target motion information includes a peripheral block without available motion information, the motion information that is not available in the peripheral block of the current block is filled.
Step 314, determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block.
In a possible embodiment, the current block may be divided into at least one sub-region; and aiming at each sub-region of the current block, determining the motion information of the sub-region according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode. Then, aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
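Illustratively, the sub-region processing just described can be sketched as follows for a horizontal-leftward-style mode; the row-to-peripheral-block mapping, the helper names and the stub motion-compensation function are assumptions for illustration only, not the embodiment's actual mapping.

```python
def predict_block_horizontal_left(width, height, sub_size, left_column_mv, motion_compensate):
    """Per-sub-region prediction for a horizontal-leftward-style angle mode.

    left_column_mv[i] is assumed to hold the (already filled) motion information of
    the 4x4 peripheral block to the left of the i-th 4-sample row; the real mapping
    from a sub-region to its peripheral matching block depends on the preconfigured
    angle and is only approximated here.
    """
    prediction = {}
    for sy in range(0, height, sub_size):
        for sx in range(0, width, sub_size):
            mv = left_column_mv[sy // 4]   # peripheral matching block pointed to by the angle
            prediction[(sx, sy)] = motion_compensate(sx, sy, sub_size, mv)
    return prediction

# Toy usage with a stub motion-compensation function
mc = lambda x, y, s, mv: {"origin": (x, y), "size": s, "mv": mv}
pred = predict_block_horizontal_left(16, 16, 8, [(1, 0)] * 4, mc)
```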
In another possible embodiment, the current block is divided into at least one sub-region; and aiming at each sub-region of the current block, determining the motion information of the sub-region according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode. Determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region; and determining the predicted value of the current block according to the target predicted value of each sub-area.
For example, after determining the motion compensation value of the sub-region according to the motion information of the sub-region, if the sub-region does not satisfy the condition of using the bi-directional optical flow, the target prediction value of the sub-region is determined according to the motion compensation value of the sub-region.
For example, if the motion information of the sub-area is unidirectional motion information, the sub-area does not satisfy the condition of using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information, and the current frame where the sub-region is located is not located between two reference frames in the time sequence, the sub-region does not satisfy the condition of using bidirectional optical flow.
For example, if the motion information of the sub-region is bidirectional motion information, and the current frame where the sub-region is located between two reference frames in the temporal sequence, the sub-region satisfies the condition of using bidirectional optical flow.
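Illustratively, the bidirectional optical flow condition above can be sketched as follows; representing motion information as ('uni', mv) or ('bi', mv0, mv1) and expressing "located between two reference frames in the time sequence" through picture order counts (POC) are assumptions for illustration.

```python
def bdof_allowed(motion_info, poc_current, poc_ref0, poc_ref1):
    """Check the bidirectional optical flow condition described above (sketch)."""
    if motion_info[0] != 'bi':
        return False               # unidirectional motion information: BDOF is not used
    # the current frame must lie strictly between the forward and backward references
    return poc_ref0 < poc_current < poc_ref1 or poc_ref1 < poc_current < poc_ref0

# Example: bidirectional motion information, current POC 8 between references at 4 and 12
assert bdof_allowed(('bi', (1, 0), (-1, 0)), 8, 4, 12)
assert not bdof_allowed(('uni', (1, 0)), 8, 4, 12)
```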
Illustratively, obtaining the bi-directional optical flow offset values for the sub-regions may include, but is not limited to: determining a first pixel value and a second pixel value according to the motion information of the sub-area; the first pixel value is a forward motion compensation value and a forward extension value for the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel location of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; determining a forward reference frame and a backward reference frame according to the motion information of the subarea; a bi-directional optical flow offset value for the sub-region is determined based on the first pixel value and the second pixel value.
As can be seen from the above technical solutions, in the embodiments of the present application the current block does not need to be divided into sub-blocks: the division of the current block into sub-regions can be derived from the motion information angle prediction mode, so the bit overhead caused by signalling a sub-block division is effectively avoided. By adding to the motion information prediction mode candidate list only those motion information angle prediction modes whose associated motion information is not completely identical, the modes that would yield only a single piece of motion information are removed, the number of motion information angle prediction modes in the candidate list is reduced, the number of bits needed to encode the multiple pieces of motion information is reduced, and the coding performance is thereby improved.
For example, the candidate-list construction process of step 311 achieves the following: some motion information angle prediction modes would make the motion information of every sub-region inside the current block identical, and such modes need to be removed; other motion information angle prediction modes make the motion information of the sub-regions inside the current block differ, and such modes need to be retained, that is, added to the motion information prediction mode candidate list.
The reason for removing such a motion information angle prediction mode is as follows: if motion information angle prediction mode 1, mode 2 and mode 3 are all added to the motion information prediction mode candidate list, then when the index of mode 3 is encoded, two modes precede it and its index may need, for example, the three bits "001". In the embodiment of the present application, however, only motion information angle prediction mode 3 is added to the candidate list, so when its index is encoded a single bit such as "0" may suffice. In summary, this manner reduces the bit overhead of encoding the index information of the motion information angle prediction mode, reduces hardware complexity while saving bits, avoids the low performance gain of a motion information angle prediction mode that yields only a single piece of motion information, and reduces the number of bits needed to encode the multiple motion information angle prediction modes.
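Illustratively, the bit-saving argument can be made concrete with a plain unary index code; the actual binarization of the candidate index is not specified by the embodiment, so the following sketch only shows how the index cost grows with the position of the mode in the list.

```python
def unary_index_cost(position_in_list: int) -> int:
    """Bits needed to signal a candidate at a given position with a plain unary code."""
    return position_in_list + 1

# Mode 3 behind two other angle modes: 3 bits (the "001" in the example above).
print(unary_index_cost(2))   # 3
# Mode 3 kept as the only angle mode: 1 bit (the "0" in the example above).
print(unary_index_cost(0))   # 1
```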
In the embodiment of the application, the motion information angle prediction mode is subjected to duplicate checking, and then unavailable motion information in peripheral blocks is filled, so that the complexity of a decoding end can be reduced, and the decoding performance can be improved. For example, motion information that is not available in the peripheral blocks has not been filled in when the motion information prediction mode candidate list is obtained. After the decoding end selects the target motion information prediction mode from the motion information prediction mode candidate list, if the target motion information prediction mode is not the motion information angle prediction mode, the unavailable motion information in the peripheral blocks does not need to be filled, so that the filling operation of the motion information is reduced by the decoding end.
Example 2: referring to fig. 3D, a flowchart of the encoding method is shown, which can be applied to an encoding end, and the method includes:
step 321, construct a motion information prediction mode candidate list of the current block.
Illustratively, when constructing a motion information prediction mode candidate list of a current block, for any motion information angle prediction mode of the current block, based on a preconfigured angle of the motion information angle prediction mode, selecting at least two peripheral matching blocks pointed by the preconfigured angle from peripheral blocks of the current block, where the at least two peripheral matching blocks include at least a first peripheral matching block and a second peripheral matching block to be traversed; and adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block if available motion information exists in the first peripheral matching block and the second peripheral matching block which are to be traversed and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different.
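Illustratively, the admission rule of step 321 can be sketched as follows; representing motion information as an optional tuple, with None standing for "no available motion information", is a simplification of the availability rules described earlier.

```python
def maybe_add_angle_mode(candidate_list, angle_mode, first_mv, second_mv):
    """Add an angle prediction mode only when both traversed peripheral matching
    blocks have available motion information and that information differs."""
    if first_mv is not None and second_mv is not None and first_mv != second_mv:
        candidate_list.append(angle_mode)
    return candidate_list

modes = []
maybe_add_angle_mode(modes, "HORIZONTAL_LEFT", (1, 0, 0), (3, -2, 0))   # added: motion info differs
maybe_add_angle_mode(modes, "VERTICAL_UP", (1, 0, 0), (1, 0, 0))        # skipped: motion info identical
print(modes)   # ['HORIZONTAL_LEFT']
```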
The process of constructing the motion information prediction mode candidate list at the encoding end is the same as at the decoding end, except that the executing entity is the encoding end; for the specific process, refer to step 311, which is not repeated here.
In step 322, if there is a motion information angle prediction mode (i.e. there is at least one motion information angle prediction mode) in the motion information prediction mode candidate list, filling the motion information that is not available in the peripheral blocks of the current block.
In a possible implementation, the motion information that is not available in the peripheral blocks of the current block can be directly padded. In another possible embodiment, for each motion information angular prediction mode in the motion information prediction mode candidate list, if a peripheral matching block pointed by a pre-configured angle of the motion information angular prediction mode includes a peripheral block without available motion information, the unavailable motion information in the peripheral block of the current block is filled.
Step 323, for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block.
For example, the process of determining the prediction value of the current block at the encoding end is similar to that at the decoding end, except that the executing entity is the encoding end and the target motion information angle prediction mode is replaced by each motion information angle prediction mode in the motion information prediction mode candidate list; for the specific process, refer to step 314, which is not repeated here.
As can be seen from the above technical solutions, in the embodiments of the present application the current block does not need to be divided into sub-blocks: the division of the current block into sub-regions can be derived from the motion information angle prediction mode, so the bit overhead caused by signalling a sub-block division is effectively avoided. By adding to the motion information prediction mode candidate list only those motion information angle prediction modes whose associated motion information is not completely identical, the modes that would yield only a single piece of motion information are removed, the number of motion information angle prediction modes in the candidate list is reduced, the number of bits needed to encode the multiple pieces of motion information is reduced, and the coding performance is thereby improved.
Example 3: the embodiment of the present application provides another encoding method, which may be applied to an encoding end, and the method may include:
step b1, the encoding end constructs a motion information prediction mode candidate list of the current block, wherein the motion information prediction mode candidate list can comprise at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e. not motion information angle prediction modes), which is not limited herein.
For example, a motion information prediction mode candidate list may be constructed for the current block, that is, all sub-regions in the current block may correspond to the same motion information prediction mode candidate list; alternatively, multiple motion information prediction mode candidate lists may be constructed for the current block, i.e., all sub-regions within the current block may correspond to the same or different motion information prediction mode candidate lists. For convenience of description, construction of a motion information prediction mode candidate list for a current block is taken as an example.
The motion information angle prediction mode may be an angle prediction mode for predicting motion information, i.e., used for inter-frame coding, rather than intra-frame coding, and the motion information angle prediction mode selects a matching block rather than a matching pixel point.
For an exemplary configuration of the motion information prediction mode candidate list, refer to embodiment i, which is not described herein again.
And b2, if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in peripheral blocks of the current block by the encoding end, wherein the specific filling mode refers to the following embodiment.
And b3, the coding end sequentially traverses each motion information angle prediction mode in the motion information prediction mode candidate list, divides the current block into at least one sub-region aiming at the currently traversed motion information angle prediction mode, and determines the motion information of the sub-region according to the motion information of the peripheral matching block pointed by the pre-configured angle of the motion information angle prediction mode aiming at each sub-region. For example, a peripheral matching block corresponding to the sub-region is selected from a plurality of peripheral matching blocks pointed by the preconfigured angle of the motion information angle prediction mode, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since the peripheral blocks of the current block, for which no available motion information exists, have been filled, all of the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angular prediction mode have available motion information, and the motion information of the sub-region can be determined by using the available motion information of the peripheral matching blocks.
And b4, the coding terminal determines the target predicted value of the sub-area according to the motion information of the sub-area.
And step b5, the encoding end determines the predicted value of the current block according to the target predicted value of each sub-area.
And b6, the encoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, wherein the target motion information prediction mode is a motion information angle prediction mode or other types of motion information prediction modes.
For example, for each motion information angle prediction mode in the motion information prediction mode candidate list, steps b 3-b 5 are performed, and the target prediction value of the current block can be obtained. Based on the target predicted value of the current block, the encoding end determines the rate distortion cost value of the motion information angle prediction mode by adopting a rate distortion principle, and the determination mode is not limited.
For other types of motion information prediction modes R (such as time domain candidate motion information, space domain candidate motion information and the like, which are not limited) in the motion information prediction mode candidate list, a target prediction value of the current block is determined according to the motion information prediction mode R, and then the rate distortion cost value of the motion information prediction mode R is determined, which is not limited.
Then, the motion information prediction mode corresponding to the minimum rate distortion cost is determined as a target motion information prediction mode, which may be a motion information angle prediction mode or another type of motion information prediction mode R.
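Illustratively, the selection of step b6 can be sketched as follows; the cost table in the usage lines is fabricated purely for illustration and stands in for the rate distortion computation, which the embodiment does not limit.

```python
def select_target_mode(candidate_list, rd_cost):
    """Pick the candidate mode with the minimum rate distortion cost (step b6)."""
    return min(candidate_list, key=rd_cost)

# Fabricated cost table standing in for the real rate distortion computation
costs = {"HORIZONTAL_LEFT": 12.5, "TEMPORAL": 10.0, "HMVP_0": 11.2}
target = select_target_mode(list(costs), costs.get)   # -> "TEMPORAL"
```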
Example 4: another decoding method provided in the embodiment of the present application may be applied to a decoding end, and the method may include:
step c1, the decoding end constructs a motion information prediction mode candidate list of the current block, wherein the motion information prediction mode candidate list can comprise at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e. not motion information angle prediction modes), which is not limited herein.
And c2, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, wherein the target motion information prediction mode is a target motion information angle prediction mode or other types of motion information prediction modes.
For example, after receiving the coded bitstream, the decoding end obtains indication information from the coded bitstream, where the indication information indicates the index of the target motion information prediction mode in the motion information prediction mode candidate list; the decoding end selects the motion information prediction mode corresponding to that index from the candidate list and determines the selected mode as the target motion information prediction mode of the current block. For example, when the encoding end sends the coded bitstream to the decoding end, the bitstream may carry indication information indicating the index of the target motion information prediction mode in the candidate list, such as index 0, which denotes the first motion information prediction mode in the candidate list. Based on this, the decoding end may use the first motion information prediction mode in the motion information prediction mode candidate list as the target motion information prediction mode of the current block.
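Illustratively, the decoder-side selection can be sketched as follows; the candidate names are placeholders, and parsing of the indication information from the bitstream is omitted.

```python
def select_mode_from_index(candidate_list, index_from_bitstream):
    """Decoder-side selection (step c2): the parsed index addresses the candidate
    list, which the decoding end constructs in the same way as the encoding end."""
    return candidate_list[index_from_bitstream]

candidate_list = ["TEMPORAL", "SPATIAL_0", "HORIZONTAL_LEFT", "HMVP_0"]
target_mode = select_mode_from_index(candidate_list, 0)   # index 0 -> first candidate
```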
And c3, if the target motion information prediction mode is the target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block by the decoding end, wherein the specific filling mode refers to the following embodiment.
In a possible implementation manner, if the target motion information prediction mode is not the motion information angle prediction mode, the decoding end does not need to fill the unavailable motion information in the peripheral blocks of the current block, so as to reduce the filling operation of the motion information.
And c4, the decoding end divides the current block into at least one sub-area, and for each sub-area, the motion information of the sub-area is determined according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode.
For example, a peripheral matching block corresponding to the sub-region is selected from a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since the peripheral blocks of the current block, for which no available motion information exists, have been filled, all of the plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angular prediction mode have available motion information, and the motion information of the sub-region can be determined by using the available motion information of the peripheral matching blocks.
And c5, the decoding terminal determines the target predicted value of the sub-area according to the motion information of the sub-area.
And c6, the decoding end determines the predicted value of the current block according to the target predicted value of each sub-area.
Example 5: in the above-described embodiment, a process of constructing a motion information prediction mode candidate list, that is, adding, for any one motion information angle prediction mode, the motion information angle prediction mode to the motion information prediction mode candidate list, or prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list, includes:
and d1, acquiring at least one motion information angle prediction mode of the current block.
For example, at least one of the following motion information angle prediction modes is acquired: a horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal rightward motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode. Of course, the above is only an example, and the preconfigured angle of the motion information angle prediction mode is not limited to these: it may be any angle between 0 and 360 degrees. For instance, with the horizontal direction pointing rightward from the center point of the sub-region taken as 0 degrees, any angle rotated counterclockwise from 0 degrees may serve as a preconfigured angle; the 0-degree direction at the center point of the sub-region may also be defined differently. In practice, the preconfigured angle may be a fractional angle, such as 22.5 degrees.
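Illustratively, the eight example modes listed above can be collected in an enumeration; the names are illustrative only and the exact angle values are deliberately left open, as stated above.

```python
from enum import Enum, auto

class MotionInfoAnglePredictionMode(Enum):
    """The eight example modes listed above; exact angle values (possibly fractional,
    e.g. 22.5 degrees) are configuration choices and are not fixed here."""
    HORIZONTAL_LEFT = auto()
    VERTICAL_UP = auto()
    HORIZONTAL_UP = auto()
    HORIZONTAL_DOWN = auto()
    VERTICAL_RIGHT = auto()
    HORIZONTAL_RIGHT = auto()
    VERTICAL_DOWN = auto()
    DIAGONAL_DOWN_RIGHT = auto()
```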
And d2, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block on the basis of the preset angle of the motion information angle prediction mode.
And d3, adding the motion information angle prediction mode to the motion information prediction mode candidate list, or prohibiting it from being added, based on characteristics such as whether available motion information exists in the at least two peripheral matching blocks and whether the available motion information of the at least two peripheral matching blocks is the same.
The following describes the determination process of step d3 with reference to several specific cases.
In case one, a first peripheral matching block and a second peripheral matching block to be traversed are selected from at least two peripheral matching blocks, and if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode may be added to a motion information prediction mode candidate list of the current block.
Alternatively, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it may be further prohibited to add the motion information angular prediction mode to the motion information prediction mode candidate list of the current block.
For example, if at least one of the first and second peripheral matching blocks is an intra block and/or a block whose prediction mode is the intra block copy mode, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. Alternatively, if at least one of the first and second peripheral matching blocks is an intra block and/or a block whose prediction mode is the intra block copy mode, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
And if at least one of the first peripheral matching block and the second peripheral matching block is positioned outside the image of the current block or outside the image slice of the current block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block. Or if at least one of the first peripheral matching block and the second peripheral matching block is positioned outside the image of the current block or outside the image slice of the current block, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
In case two, a first peripheral matching block and a second peripheral matching block to be traversed are selected from the at least two peripheral matching blocks. If available motion information exists in both the first and second peripheral matching blocks and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If available motion information exists in both the first and second peripheral matching blocks and their motion information is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
And in case three, selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from at least two peripheral matching blocks. On the basis, if the available motion information exists in the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
If the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if the available motion information exists in the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block is different from that of the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; and if the available motion information exists in the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the motion information of the third peripheral matching block are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list.
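Illustratively, the case-three rule can be sketched as follows; motion information is again represented as an optional tuple, with None standing for "no available motion information".

```python
def case_three_decision(first_mv, second_mv, third_mv):
    """Case three: compare the first pair, and fall back to the second/third pair
    when the first pair is available but identical (None means no available motion info)."""
    if first_mv is None or second_mv is None:
        return "undetermined"      # not covered by case three (see case four below)
    if first_mv != second_mv:
        return "add"               # first pair differs: add the angle prediction mode
    if third_mv is None:
        return "undetermined"      # fallback pair not fully available: outside case three
    return "add" if second_mv != third_mv else "skip"

print(case_three_decision((1, 0), (1, 0), (2, 0)))   # "add": the fallback pair differs
```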
And in case four, selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from at least two peripheral matching blocks. On this basis, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information or not; if the available motion information exists in the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information or not; if there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. Or if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information; if there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
And in case five, if available motion information exists in at least two peripheral matching blocks and the motion information of the at least two peripheral matching blocks is not completely the same, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block. And if the available motion information exists in at least two peripheral matching blocks and the motion information of the at least two peripheral matching blocks is completely the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list.
Case six, if at least one of the at least two peripheral matching blocks does not have available motion information, adding the motion information angular prediction mode to the motion information prediction mode candidate list. If at least one of the at least two peripheral matching blocks does not have available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list. If at least one of the at least two peripheral matching blocks does not have available motion information and the motion information of the at least two peripheral matching blocks is not exactly the same, adding the motion information angular prediction mode to the motion information prediction mode candidate list. If at least one of the at least two peripheral matching blocks does not have available motion information and the motion information of the at least two peripheral matching blocks is identical, then the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For case five and case six, the manner of determining whether the motion information of the at least two peripheral matching blocks is or is not completely the same may include, but is not limited to: selecting at least one first peripheral matching block (e.g., all, or a portion, of the peripheral matching blocks) from the at least two peripheral matching blocks; and, for each first peripheral matching block, selecting a second peripheral matching block corresponding to that first peripheral matching block from the at least two peripheral matching blocks. If the motion information of the first peripheral matching block differs from the motion information of the second peripheral matching block, the motion information of this pair is determined to be different; if the motion information of the first peripheral matching block is the same as that of the second peripheral matching block, the motion information of this pair is determined to be the same. Based on this, if the motion information of any pair of peripheral matching blocks to be compared is different, the motion information of the at least two peripheral matching blocks is determined to be not completely the same; if the motion information of every pair of peripheral matching blocks to be compared is the same, the motion information of the at least two peripheral matching blocks is determined to be completely the same.
For cases five and six, the determination that there is no available motion information in at least one of the at least two peripheral matching blocks may include, but is not limited to: selecting at least one first peripheral matching block from the at least two peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from at least two peripheral matching blocks. If there is no available motion information in at least one of any pair of peripheral matching blocks to be compared (first peripheral matching block and second peripheral matching block), it is determined that there is no available motion information in at least one of the at least two peripheral matching blocks. And if all the peripheral matching blocks to be compared have available motion information, determining that the at least two peripheral matching blocks have available motion information.
In each of the above cases, selecting the first peripheral matching block from the at least two peripheral matching blocks may include: taking any one of the at least two peripheral matching blocks as the first peripheral matching block; or taking a specified one of the at least two peripheral matching blocks as the first peripheral matching block. Selecting the second peripheral matching block from the at least two peripheral matching blocks may include: selecting, according to the traversal step and the position of the first peripheral matching block, the second peripheral matching block corresponding to the first peripheral matching block from the at least two peripheral matching blocks; the traversal step is the block spacing between the first peripheral matching block and the second peripheral matching block.

For case three and case four, selecting the third peripheral matching block from the at least two peripheral matching blocks may include: selecting, according to the traversal step and the position of the second peripheral matching block, the third peripheral matching block corresponding to the second peripheral matching block from the at least two peripheral matching blocks; the traversal step may be the block spacing between the second peripheral matching block and the third peripheral matching block.
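Illustratively, the pairwise determination used in cases five and six, together with the traversal-step pairing just described, can be sketched as follows; the list indices and step value in the usage lines are arbitrary.

```python
def pairwise_check(mvs, first_positions, step):
    """Compare each selected first peripheral matching block with the block `step`
    positions later; return whether all compared pairs are identical and whether
    any compared block lacks available motion information (None)."""
    all_same, any_unavailable = True, False
    for i in first_positions:
        a, b = mvs[i], mvs[i + step]
        if a is None or b is None:
            any_unavailable = True
        elif a != b:
            all_same = False
    return all_same, any_unavailable

# Peripheral matching blocks A1..A5, first blocks A1 and A3, traversal step 2
mvs = [(1, 0), (1, 0), (1, 0), (1, 0), (2, 0)]
print(pairwise_check(mvs, first_positions=[0, 2], step=2))   # (False, False): not completely the same
```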
For example, for a peripheral matching block A1, a peripheral matching block A2, a peripheral matching block A3, a peripheral matching block A4, and a peripheral matching block A5 arranged in this order, examples of the respective peripheral matching blocks for different cases are as follows:
for cases one and two, assuming that peripheral matching block A1 is taken as the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to peripheral matching block A1 is peripheral matching block A3. For cases three and four, assuming that the peripheral matching block A1 is taken as the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to the peripheral matching block A1 is the peripheral matching block A3. The third peripheral matching block corresponding to the peripheral matching block A3 is the peripheral matching block A5.
For the fifth case and the sixth case, it is assumed that the peripheral matching block A1 and the peripheral matching block A3 are both regarded as the first peripheral matching block, and the traversal step size is 2, and when the peripheral matching block A1 is regarded as the first peripheral matching block, the second peripheral matching block is the peripheral matching block A3. When the peripheral matching block A3 is the first peripheral matching block, then the second peripheral matching block is the peripheral matching block A5.
Illustratively, before selecting the peripheral matching blocks from the at least two peripheral matching blocks, the traversal step may be determined based on the size of the current block, and the number of motion information comparisons is controlled through the traversal step. For example, assuming the size of a peripheral matching block is 4×4 and the size of the current block is 16×16, the current block corresponds to 4 peripheral matching blocks for the horizontal leftward motion information angle prediction mode. To limit the number of motion information comparisons to 1, the traversal step may be 2 or 3: if the traversal step is 2, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 3rd; or the first peripheral matching block is the 2nd and the second peripheral matching block is the 4th. If the traversal step is 3, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 4th. As another example, to set the number of motion information comparisons to 2, the traversal step may be 1: the first peripheral matching blocks are the 1st and 3rd peripheral matching blocks, the second peripheral matching block corresponding to the 1st is the 2nd, and the second peripheral matching block corresponding to the 3rd is the 4th. Of course, the above is only an example for the horizontal leftward motion information angle prediction mode, and the traversal step may also be determined in other ways, which is not limited.
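Illustratively, one possible derivation of the traversal step from the number of peripheral matching blocks and the desired number of comparisons is sketched below; the embodiment does not fix this derivation, and the 16×16 example above is reproduced in the usage lines.

```python
def traversal_step(num_matching_blocks, target_comparisons):
    """One possible derivation: pairs span (step + 1) blocks, so roughly
    num_matching_blocks // (step + 1) disjoint comparisons are performed."""
    return max(1, num_matching_blocks // target_comparisons - 1)

# 16x16 block, horizontal leftward mode: 4 peripheral matching blocks of size 4x4
print(traversal_step(4, 1))   # 3 (a step of 2 would also give a single comparison)
print(traversal_step(4, 2))   # 1 (pairs: 1st with 2nd, 3rd with 4th)
```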
In each of the above cases, the process of determining whether there is available motion information in any one of the peripheral matching blocks may include: if the peripheral matching block is the inter-frame coded block, determining that the available motion information exists in the peripheral matching block. And if the prediction mode of the peripheral matching block is the intra block copy mode, determining that no available motion information exists in the peripheral matching block. And if the peripheral matching block is positioned outside the image of the current block or the peripheral matching block is positioned outside the image slice of the current block, determining that the peripheral matching block has no available motion information. And if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
Example 6: in the above embodiment, when motion information angle prediction modes such as a horizontal left motion information angle prediction mode, a vertical upward motion information angle prediction mode, a horizontal downward motion information angle prediction mode, and a vertical right motion information angle prediction mode are added to the motion information prediction mode candidate list, the order of each motion information angle prediction mode in the motion information prediction mode candidate list is set as needed, which is not limited, and the following description is given with reference to several specific application scenarios.
Application scenario 1: a peripheral block of the current block is said to exist when it lies within the image in which the current block is located, and not to exist when it lies outside that image. If the peripheral block does not exist, or the peripheral block has not yet been coded (i.e. it is an uncoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is the intra block copy mode, the peripheral block has no available motion information. If the peripheral block exists, is not an uncoded block, is not an intra block, and its prediction mode is not the intra block copy mode, the peripheral block has available motion information.
Referring to FIG. 4A, the width of the current block is W and the height is H. Let m = W/4 and n = H/4, and let the pixel at the upper left corner inside the current block be (x, y). The peripheral block in which the pixel (x-1, y+H+W-1) is located is denoted A_0, and the size of A_0 is 4×4. The peripheral blocks are traversed in order from A_0 to A_{2m+2n}, and each successive 4×4 peripheral block is denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block in which the pixel (x+W+H-1, y-1) is located. For each motion information angle prediction mode, based on the preconfigured angle of that mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks, and the peripheral matching blocks to be traversed are selected from them (for example, a first and a second peripheral matching block to be traversed, or a first, a second and a third peripheral matching block to be traversed sequentially). If available motion information exists in both the first and second peripheral matching blocks and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the first and second peripheral matching blocks has no available motion information, or both have available motion information but their motion information is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
If the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode may be added to the motion information prediction mode candidate list. If at least one of the first and second peripheral matching blocks does not have available motion information, or both the first and second peripheral matching blocks have available motion information and the motion information of the first and second peripheral matching blocks is the same, the comparison result of the two peripheral matching blocks is the same, and it is necessary to continue comparing the second and third peripheral matching blocks.
If the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode can be added to the motion information prediction mode candidate list. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or if both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list.
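Illustratively, the notion of the comparison result of two peripheral matching blocks used throughout this application scenario can be sketched as follows; None again stands for "no available motion information".

```python
def comparison_result_differs(mv_a, mv_b):
    """Two peripheral matching blocks compare as "different" only when both have
    available motion information (not None) and that information is not equal;
    in every other case the comparison result counts as "the same"."""
    return mv_a is not None and mv_b is not None and mv_a != mv_b

print(comparison_result_differs((1, 0, 0), (2, 0, 0)))  # True  -> the mode may be added
print(comparison_result_differs(None, (2, 0, 0)))       # False -> same; compare the next pair or skip
```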
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal leftward motion information angle prediction mode, A_{m-1+H/8} is taken as the first peripheral matching block and A_{m+n-1} as the second peripheral matching block. Of course, A_{m-1+H/8} and A_{m+n-1} are only an example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal leftward motion information angle prediction mode may also be used as the first or second peripheral matching block, the implementation is similar and is not repeated below.

Whether the comparison result of A_{m-1+H/8} and A_{m+n-1} is the same is determined by the above comparison method. If it is the same, the horizontal leftward motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. If it is different, the horizontal leftward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.

For the horizontal downward motion information angle prediction mode, A_{W/8-1} is taken as the first peripheral matching block, A_{m-1} as the second peripheral matching block and A_{m-1+H/8} as the third peripheral matching block. Of course, the above is only an example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal downward motion information angle prediction mode may also be used as the first, second or third peripheral matching block, the implementation is similar and is not described in detail later. Whether the comparison result of A_{W/8-1} and A_{m-1} is the same is determined by the above comparison method. If it is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If it is the same, whether the comparison result of A_{m-1} and A_{m-1+H/8} is the same is determined by the above comparison method. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A_{m-1} and A_{m-1+H/8} is the same, the horizontal downward motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical upward motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block and A_{m+n+1} as the second peripheral matching block. Of course, the above is only an example, and the first and second peripheral matching blocks are not limited to these.

Whether the comparison result of A_{m+n+1+W/8} and A_{m+n+1} is the same is determined by the above comparison method. If it is the same, the vertical upward motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. If it is different, the vertical upward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.

For the vertical rightward motion information angle prediction mode, A_{m+n+1+W/8} is taken as the first peripheral matching block, A_{2m+n+1} as the second peripheral matching block and A_{2m+n+1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second and third peripheral matching blocks are not limited to these. Whether the comparison result of A_{m+n+1+W/8} and A_{2m+n+1} is the same is determined by the above comparison method. If it is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If it is the same, whether the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same is determined by the above comparison method. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same, the vertical rightward motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward motion information angle prediction mode, A_{W/8-1} is used as the first peripheral matching block, A_{m-1} as the second peripheral matching block, and A_{m-1+H/8} as the third peripheral matching block; of course, the above is only an example. Whether A_{W/8-1} and A_{m-1} are the same is judged by the above comparison method. If they are different, the horizontal downward motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block. If they are the same, whether A_{m-1} and A_{m-1+H/8} are the same is further judged by the above comparison method. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A_{m-1} and A_{m-1+H/8} is the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited.
For the horizontal leftward motion information angle prediction mode, A_{m-1+H/8} is used as the first peripheral matching block and A_{m+n-1} as the second peripheral matching block. Whether A_{m-1+H/8} and A_{m+n-1} are the same is judged by the above comparison method. If they are the same, the horizontal leftward motion information angle prediction mode is not added to the motion information prediction mode candidate list. If they are different, the horizontal leftward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal upward motion information angle prediction mode, A_{m+n-1} is used as the first peripheral matching block, A_{m+n} as the second peripheral matching block, and A_{m+n+1} as the third peripheral matching block. Whether A_{m+n-1} and A_{m+n} are the same is judged by the above comparison method. If they are different, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. If they are the same, whether A_{m+n} and A_{m+n+1} are the same is further judged by the above comparison method. If the comparison result of A_{m+n} and A_{m+n+1} is different, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{m+n} and A_{m+n+1} is the same, adding the horizontal upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
For the vertical upward motion information angle prediction mode, A_{m+n+1+W/8} is used as the first peripheral matching block and A_{m+n+1} as the second peripheral matching block. Whether A_{m+n+1+W/8} and A_{m+n+1} are the same is judged by the above comparison method. If they are the same, adding the vertical upward motion information angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited. If they are different, the vertical upward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the vertical rightward motion information angle prediction mode, A_{m+n+1+W/8} is used as the first peripheral matching block, A_{2m+n+1} as the second peripheral matching block, and A_{2m+n+1+H/8} as the third peripheral matching block. Whether A_{m+n+1+W/8} and A_{2m+n+1} are the same is judged by the above comparison method. If they are different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If they are the same, whether A_{2m+n+1} and A_{2m+n+1+H/8} are the same is further judged by the above comparison method. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is different, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
Application scenario 2: similar to the implementation of application scenario 1, except that: in the application scenario 2, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. For example, the above-described processing is performed regardless of whether a left neighboring block of the current block exists or not and whether an upper neighboring block of the current block exists or not.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned, and the peripheral blocks are positioned in the image slice where the current block is positioned; the peripheral block of the current block does not exist, which means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned, but the peripheral block is positioned outside the image slice where the current block is positioned. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned, and the peripheral blocks are positioned in the image slice where the current block is positioned; the peripheral block of the current block does not exist, which means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned, but the peripheral block is positioned outside the image slice where the current block is positioned. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 5: The existence of a peripheral block of the current block indicates that the peripheral block is positioned inside the image where the current block is positioned and inside the image slice where the current block is positioned; the absence of a peripheral block of the current block means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned but outside the image slice where the current block is positioned. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, or the prediction mode of the peripheral block is the intra block copy mode, the peripheral block does not have available motion information. If the peripheral block exists, and it is not an uncoded block, not an intra block, and its prediction mode is not the intra block copy mode, the peripheral block has available motion information.
Referring to FIG. 4A, the width of the current block is W and the height of the current block is H. Let m be W/4 and n be H/4, and let the pixel at the upper left corner inside the current block be (x, y). The peripheral block where the pixel (x-1, y+H+W-1) is located is denoted A_0, and the size of A_0 is 4×4. The peripheral blocks are traversed in order from A_0 to A_{2m+2n}, and each 4×4 peripheral block is denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block where the pixel (x+W+H-1, y-1) is located.
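To make the indexing concrete, the following illustrative sketch (in Python, not part of the patent text) maps a traversal index i in [0, 2m+2n] to a pixel lying inside the 4×4 peripheral block A_i of FIG. 4A, assuming the traversal runs from A_0 at (x-1, y+H+W-1) up the left column, through the top-left corner, and then rightward along the top row to A_{2m+2n} at (x+W+H-1, y-1); the function name and the exact per-index formula are assumptions derived from that description.

```python
def peripheral_block_anchor(i, x, y, W, H):
    """Return a pixel (px, py) lying inside peripheral block A_i (FIG. 4A layout).

    Assumed traversal: A_0 contains (x-1, y+H+W-1); indices 0..m+n run up the
    left column (the block containing (x-1, y-1) is reached at i = m+n);
    indices m+n+1..2m+2n run left to right along the top row, ending at
    (x+W+H-1, y-1)."""
    m, n = W // 4, H // 4
    assert 0 <= i <= 2 * m + 2 * n
    if i <= m + n:
        # left column, traversed bottom-up, including the top-left corner block
        return (x - 1, y + H + W - 1 - 4 * i)
    # top row, traversed left to right
    return (x - 1 + 4 * (i - m - n), y - 1)
```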
And for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed by the preset angle from the peripheral blocks based on the preset angle of the motion information angle prediction mode, and selecting the peripheral matching blocks to be traversed from the plurality of peripheral matching blocks. Unlike the application scenario 1, if at least one of the first and second peripheral matching blocks does not have available motion information, or both the first and second peripheral matching blocks have available motion information and the motion information of the first and second peripheral matching blocks is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If the available motion information exists in the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is forbidden to be added to the motion information prediction mode candidate list; or, continuing to compare the second peripheral matched block to the third peripheral matched block.
If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. Or, if the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
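For illustration only, the duplicate check of application scenario 5 can be sketched as follows (Python; the function names are ours, and a peripheral matching block is represented simply as None when it has no available motion information, or as a comparable value holding its motion information otherwise):

```python
def blocks_compare_same(block_a, block_b):
    """Scenario-5 comparison of two peripheral matching blocks: the result is
    "same" only when both blocks have available motion information and that
    motion information is identical; otherwise the result is "different"."""
    if block_a is None or block_b is None:
        return False
    return block_a == block_b


def mode_allowed_scenario5(first, second, third=None):
    """Return True if the motion information angle prediction mode may be added
    to the candidate list: add when the first and second peripheral matching
    blocks compare as "different"; otherwise continue with the second and third
    blocks (when a third block is used), and prohibit only if they also compare
    as "same"."""
    if not blocks_compare_same(first, second):
        return True
    if third is None:
        return False
    return not blocks_compare_same(second, third)
```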
Based on the above comparison method, the corresponding processing flow refers to application scenario 1. For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal leftward motion information angle prediction mode, whether A_{m-1+H/8} and A_{m+n-1} are the same is judged by the above comparison method. If they are the same, adding the horizontal leftward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If they are different, the horizontal leftward motion information angle prediction mode is added to the motion information prediction mode candidate list. For the horizontal downward motion information angle prediction mode, whether A_{W/8-1} and A_{m-1} are the same is judged by the above comparison method. If they are different, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list. If they are the same, whether A_{m-1} and A_{m-1+H/8} are the same is further judged by the above comparison method. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward motion information angle prediction mode may be added to the motion information prediction mode candidate list. If the comparison result of A_{m-1} and A_{m-1+H/8} is the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list may be prohibited.
Application scenario 6: similar to the implementation of application scenario 5, except that: it is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. That is, the processing is performed in the manner of application scenario 5 regardless of whether the left neighboring block of the current block exists and whether the upper neighboring block of the current block exists.
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the nonexistence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 8: similar to the implementation of application scenario 5, the difference is: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the nonexistence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. In the application scenario 8, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 9: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the nonexistence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If a peripheral block exists, and the peripheral block is not an unencoded block, and the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information. For each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and selecting at least one first peripheral matching block (such as one or more) from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks.
Each combination of the first peripheral matching block and the second peripheral matching block is referred to as a matching block group, for example, A1, A3, and A5 are selected from a plurality of peripheral matching blocks as a first peripheral matching block, A2 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A1, A4 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A3, and A6 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A5, so that the matching block group 1 includes A1 and A2, the matching block group 2 includes A3 and A4, and the matching block group 3 includes A5 and A6. A1, A2, A3, A4, A5, and A6 are any peripheral matching blocks among the plurality of peripheral matching blocks, and the selection manner thereof may be configured empirically, without limitation.
For each matching block group, if available motion information exists in both of two peripheral matching blocks in the matching block group, and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If at least one of two peripheral matching blocks in the matching block group does not have available motion information, or both peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list.
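As an illustrative sketch of this group-based duplicate check (Python; the names are ours, and a block with no available motion information is again represented as None):

```python
def group_compares_same(block_a, block_b):
    """Comparison of one matching block group: the result is "different" only
    when both peripheral matching blocks have available motion information and
    that information differs; in every other case the result is "same"."""
    if block_a is None or block_b is None:
        return True
    return block_a == block_b


def mode_allowed_by_groups(matching_block_groups):
    """matching_block_groups is an iterable of (first, second) peripheral
    matching blocks.  The mode is added to the candidate list if any group
    compares as "different"; it is prohibited only if all groups compare as
    "same"."""
    return any(not group_compares_same(a, b) for a, b in matching_block_groups)
```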
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal leftward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m, m+n-1], i and j are different, and i and j may be selected arbitrarily within this range). If the comparison results of all matching block groups are the same, adding the horizontal leftward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal leftward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the horizontal downward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [0, m+n-2], and i and j are different). If the comparison results of all matching block groups are the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical upward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m+n+1, 2m+n], and i and j are different). If the comparison results of all matching block groups are the same, adding the vertical upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical upward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical rightward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m+n+2, 2m+2n], and i and j are different). If the comparison results of all matching block groups are the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [0, m+n-2], and i and j are different). If the comparison results of all matching block groups are the same, adding the horizontal downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal leftward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m, m+n-1], and i and j are different). If the comparison results of all matching block groups are the same, adding the horizontal leftward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal leftward motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the horizontal upward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m+1, 2m+n-1], and i and j are different). If the comparison results of all matching block groups are the same, adding the horizontal upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal upward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical upward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m+n+1, 2m+n], and i and j are different). If the comparison results of all matching block groups are the same, adding the vertical upward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical upward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical rightward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (the values of i and j are in the range [m+n+2, 2m+2n], and i and j are different). If the comparison results of all matching block groups are the same, adding the vertical rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
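The index ranges listed above for the case where both the left and the upper neighboring blocks exist can be summarized as follows (an illustrative sketch; the mode labels are descriptive strings of ours, not identifiers from the patent):

```python
def group_index_range(mode, m, n):
    """Closed range [lo, hi] from which the indices i and j (i != j) of a
    matching block group (A_i, A_j) are drawn, for the case where both the
    left and the upper neighboring blocks of the current block exist."""
    ranges = {
        "horizontal_down": (0, m + n - 2),
        "horizontal_left": (m, m + n - 1),
        "horizontal_up":   (m + 1, 2 * m + n - 1),
        "vertical_up":     (m + n + 1, 2 * m + n),
        "vertical_right":  (m + n + 2, 2 * m + 2 * n),
    }
    return ranges[mode]
```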
Application scenario 10: similar to the implementation of application scenario 9, except that: in the application scenario 10, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not.
Application scenario 11: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned, and the peripheral blocks are positioned in the image slice where the current block is positioned; the peripheral block of the current block does not exist, which means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned, but the peripheral block is positioned outside the image slice where the current block is positioned. Other processes are similar to the application scenario 9, and are not described herein again.
Application scenario 12: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the peripheral block of the current block does not exist, which means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned, but the peripheral block is positioned outside the image slice where the current block is positioned. It is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor is it necessary to distinguish whether an upper adjacent block of the current block exists or not. The other processes are similar to the application scenario 9 and will not be described in detail here.
Application scenario 13: similar to the implementation of application scenario 9, except that: the existence of a peripheral block of the current block indicates that the peripheral block is positioned inside the image where the current block is positioned and inside the image slice where the current block is positioned; the absence of a peripheral block of the current block means that the peripheral block is positioned outside the image where the current block is positioned, or the peripheral block is positioned inside the image where the current block is positioned but outside the image slice where the current block is positioned. Unlike application scenario 9, the comparison may be:
For each matching block group, if at least one of two peripheral matching blocks in the matching block group does not have available motion information, or both peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 14: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned, and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor is it necessary to distinguish whether an upper adjacent block of the current block exists or not. In contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 15: similar to the implementation of the application scenario 9, the difference is that: in contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Other processes are similar to the application scenario 9, and are not repeated here.
Application scenario 16: similar to the implementation of application scenario 9, except that: it is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. In contrast to the comparison in the application scenario 9, the comparison can be seen in the application scenario 10. Other processes are similar to the application scenario 9, and are not repeated here.
Application scenario 17: referring to fig. 4B, in order to reduce the complexity of hardware implementation, a downsampling method is used for performing duplicate checking.
Example 7: In embodiments 1 to 5, during construction of the motion information prediction mode candidate list, the horizontal rightward motion information angle prediction mode, the vertical downward motion information angle prediction mode, and the diagonal down-right motion information angle prediction mode point to right peripheral blocks (i.e., blocks in the column to the right) or lower peripheral blocks (i.e., blocks in the row below) of the current block. When the current block is coded, the right peripheral blocks and the lower peripheral blocks are usually not yet coded, i.e., they have no motion information. However, the motion information of the right peripheral blocks and the lower peripheral blocks can be determined from the motion information at the temporally corresponding positions, so that the horizontal rightward, vertical downward, and diagonal down-right motion information angle prediction modes can still be added to the motion information prediction mode candidate list. In this way, more motion information angle prediction modes are added to the motion information prediction mode candidate list, thereby improving the performance of the MVAP mode and improving coding performance.
In a possible implementation manner, if the peripheral matching block in embodiments 1-5 is located at the right side (i.e. right peripheral block) or the lower side (i.e. lower peripheral block) outside the current block, the motion information of the peripheral matching block can be determined in the following manner: determining a reference frame corresponding to a current frame where a current block is located; selecting a reference matching block corresponding to the position of the peripheral matching block from the reference frame; and determining the motion information of the peripheral matching block according to the motion information of the reference matching block.
Referring to FIG. 4C, the width of the current block is W and the height is H. Let m be W/4 and n be H/4, and let the pixel at the upper left corner inside the current block be (x, y). The peripheral block where the pixel (x+W, y) is located is denoted A_0, and the size of A_0 is 4×4. The peripheral blocks are traversed in order from A_0 to A_{m+n}, and each 4×4 peripheral block is denoted A_1, A_2, ..., A_{m+n}, where A_{m+n} is the peripheral block where the pixel (x, y+H) is located. If a certain 4×4 peripheral block exceeds the current image boundary, the 4×4 peripheral block is clipped (i.e., translated) to within the current image boundary; that is, the size of the peripheral block is kept unchanged, and the peripheral block is translated so that the 4×4 peripheral block lies within the current image boundary. For example, clipping a peripheral block to within the current image boundary means: as shown in FIG. 4D, if the peripheral block exceeds the right boundary of the current image, the peripheral block is translated leftward so that it lies within the boundary of the current image; as shown in FIG. 4E, if the peripheral block exceeds the lower boundary of the current image, the peripheral block is translated upward so that it lies within the boundary of the current image. If the peripheral block exceeds both the lower boundary and the right boundary of the current image, the peripheral block is translated leftward and upward so that it lies within the boundary of the current image. Alternatively, if a certain 4×4 peripheral block exceeds the range of the current maximum coding unit, the 4×4 peripheral block is clipped to within the range of the current maximum coding unit; the specific clipping manner is similar to clipping the peripheral block to within the current image boundary and is not described again here.
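The clipping of FIGs. 4D and 4E can be sketched as follows (illustrative Python; only the right and lower boundaries need handling here because the peripheral blocks of FIG. 4C lie to the right of and below the current block, and clipping to the maximum coding unit range would substitute the coding-unit bounds for the picture bounds):

```python
def clip_block_into_picture(bx, by, pic_w, pic_h, blk=4):
    """Translate a blk x blk peripheral block with top-left corner (bx, by) so
    that it lies inside a pic_w x pic_h picture, keeping its size unchanged."""
    if bx + blk > pic_w:      # exceeds the right boundary -> translate leftward
        bx = pic_w - blk
    if by + blk > pic_h:      # exceeds the lower boundary -> translate upward
        by = pic_h - blk
    return bx, by
```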
As shown in FIG. 4C, the reference frame corresponding to the current frame where the current block is located may be determined first, and the reference block corresponding to the current block is selected from the reference frame; the determination process is not limited here. Illustratively, the width of the reference block is W and the height of the reference block is H. Let m be W/4 and n be H/4, and let the pixel at the upper left corner inside the reference block be (x', y'). The block where the pixel (x'+W, y') is located is denoted A_0', and the size of A_0' is 4×4. These blocks are traversed in order from A_0' to A_{m+n}', and each 4×4 block is denoted A_1', A_2', ..., A_{m+n}', where A_{m+n}' is the block where the pixel (x', y'+H) is located.
In summary, A_0' in the reference frame is the reference matching block corresponding to the peripheral matching block A_0 in the current frame; that is, the motion information of A_0 is determined according to the motion information of A_0'. A_1' in the reference frame is the reference matching block corresponding to the peripheral matching block A_1 in the current frame; that is, the motion information of A_1 is determined according to the motion information of A_1'. By analogy, A_{m+n}' in the reference frame is the reference matching block corresponding to the peripheral matching block A_{m+n} in the current frame; that is, the motion information of A_{m+n} is determined according to the motion information of A_{m+n}'.
For example, the motion information of the peripheral matching block is determined according to the motion information of the reference matching block, including but not limited to:
Mode 1: if the forward motion information of the reference matching block is available, the forward motion information of the reference matching block is scaled based on the position relationship between the current frame and a first target frame in List0 (i.e., the forward reference frame list) of the current frame (for example, the P1-th frame in List0, where P1 may be a positive integer, e.g., the first frame in List0), and the position relationship between the reference frame and a second target frame in List0 of the reference frame (for example, the target frame to which the forward motion information of the reference matching block points), so as to obtain the forward motion information of the peripheral matching block. If the forward motion information of the reference matching block is not available, the forward motion information of the peripheral matching block is not available. If the backward motion information of the reference matching block is available, the backward motion information of the reference matching block is scaled based on the position relationship between the current frame and a third target frame in List1 (i.e., the backward reference frame list) of the current frame (for example, the P2-th frame in List1, where P2 may be a positive integer, e.g., the first frame in List1), and the position relationship between the reference frame and a fourth target frame in List1 of the reference frame (for example, the target frame to which the backward motion information of the reference matching block points), so as to obtain the backward motion information of the peripheral matching block. If the backward motion information of the reference matching block is not available, the backward motion information of the peripheral matching block is not available. An illustrative sketch of this per-direction flow is given after mode 7 below.
Mode 2: if the forward motion information of the reference matching block is available, the forward motion information of the reference matching block is scaled based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame, so as to obtain the forward motion information of the peripheral matching block; and the forward motion information of the reference matching block is scaled based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame, so as to obtain the backward motion information of the peripheral matching block. If the forward motion information of the reference matching block is not available, neither the forward motion information nor the backward motion information of the peripheral matching block is available.
Mode 3: if the forward motion information of the reference matching block is available, the forward motion information of the reference matching block is scaled based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame, so as to obtain the forward motion information of the peripheral matching block. If the forward motion information of the reference matching block is not available, the forward motion information of the peripheral matching block is not available. The backward motion information of the peripheral matching block is not available; that is, the backward motion information of the peripheral matching block is not available regardless of whether the forward motion information of the reference matching block is available.
Mode 4: if the backward motion information of the reference matching block is available, the backward motion information of the reference matching block is scaled based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame, so as to obtain the forward motion information of the peripheral matching block; and the backward motion information of the reference matching block is scaled based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame, so as to obtain the backward motion information of the peripheral matching block. If the backward motion information of the reference matching block is not available, neither the forward motion information nor the backward motion information of the peripheral matching block is available.
Mode 5: if the backward motion information of the reference matching block is available, the backward motion information of the reference matching block is scaled based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame, so as to obtain the forward motion information of the peripheral matching block. If the backward motion information of the reference matching block is not available, the forward motion information of the peripheral matching block is not available. The backward motion information of the peripheral matching block is not available; that is, the backward motion information of the peripheral matching block is not available regardless of whether the backward motion information of the reference matching block is available.
Mode 6: if the forward motion information of the reference matching block is available, the forward motion information of the reference matching block is scaled based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame, so as to obtain the backward motion information of the peripheral matching block; if the forward motion information of the reference matching block is not available, the backward motion information of the peripheral matching block is not available. The forward motion information of the peripheral matching block is not available; that is, the forward motion information of the peripheral matching block is not available regardless of whether the forward motion information of the reference matching block is available.
Mode 7: if the backward motion information of the reference matching block is available, the backward motion information of the reference matching block is scaled based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame, so as to obtain the backward motion information of the peripheral matching block; if the backward motion information of the reference matching block is not available, the backward motion information of the peripheral matching block is not available. The forward motion information of the peripheral matching block is not available; that is, the forward motion information of the peripheral matching block is not available regardless of whether the backward motion information of the reference matching block is available.
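As referenced under mode 1, the per-direction control flow can be sketched as follows (illustrative Python; the two scaling callables stand for the scaling steps described above and are assumptions of this sketch, not APIs defined by the patent):

```python
def derive_peripheral_mv_mode1(ref_fwd_mv, ref_bwd_mv, scale_fwd, scale_bwd):
    """Mode 1: each direction of the reference matching block's motion
    information is scaled independently when it is available (None means
    "not available"); an unavailable direction stays unavailable."""
    fwd = scale_fwd(ref_fwd_mv) if ref_fwd_mv is not None else None
    bwd = scale_bwd(ref_bwd_mv) if ref_bwd_mv is not None else None
    return fwd, bwd
```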
In summary, for the peripheral matching block, if the forward motion information of the peripheral matching block is available and the backward motion information of the peripheral matching block is available, the motion information of the peripheral matching block is bidirectional motion information, and the motion information of the peripheral matching block includes the forward motion information and the backward motion information. If the forward motion information of the peripheral matching block is available and the backward motion information of the peripheral matching block is unavailable, the motion information of the peripheral matching block is available, the motion information of the peripheral matching block is unidirectional motion information, and the motion information of the peripheral matching block comprises the forward motion information. If the forward motion information of the peripheral matching block is unavailable and the backward motion information of the peripheral matching block is available, the motion information of the peripheral matching block is unidirectional motion information, and the motion information of the peripheral matching block comprises backward motion information. If the forward motion information of the peripheral matching block is not available and the backward motion information of the peripheral matching block is not available, the motion information of the peripheral matching block is not available.
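The rule of the preceding paragraph can be sketched as follows (illustrative Python; None again stands for an unavailable direction):

```python
def combine_directions(fwd, bwd):
    """Assemble the peripheral matching block's motion information from the
    derived forward and backward parts: bidirectional if both are available,
    unidirectional if exactly one is available, unavailable otherwise."""
    if fwd is not None and bwd is not None:
        return {"forward": fwd, "backward": bwd}   # bidirectional motion information
    if fwd is not None:
        return {"forward": fwd}                    # unidirectional, forward only
    if bwd is not None:
        return {"backward": bwd}                   # unidirectional, backward only
    return None                                    # no available motion information
```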
In the foregoing modes, the forward motion information of the reference matching block may include a first motion vector of the reference matching block and a target frame (e.g., a target frame index) in List0 of the reference frame (i.e., the frame where the reference matching block is located), such as the second target frame in List0 of the reference frame; the backward motion information of the reference matching block may include a second motion vector of the reference matching block and a target frame in List1 of the reference frame, such as the fourth target frame in List1 of the reference frame. On this basis:
Scaling the forward motion information of the reference matching block based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame to obtain the forward motion information of the peripheral matching block may include: scaling the first motion vector of the reference matching block based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame to obtain a scaled motion vector, where the scaling manner is not limited; and determining the forward motion information of the peripheral matching block according to the scaled motion vector, where the forward motion information of the peripheral matching block may include the first target frame in List0 of the current frame and the scaled motion vector.
Scaling the backward motion information of the reference matching block based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame to obtain the backward motion information of the peripheral matching block may include: scaling the second motion vector of the reference matching block based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame to obtain a scaled motion vector; and determining the backward motion information of the peripheral matching block according to the scaled motion vector, where the backward motion information of the peripheral matching block may include the third target frame in List1 of the current frame and the scaled motion vector.
Scaling the forward motion information of the reference matching block based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame to obtain the backward motion information of the peripheral matching block may include: scaling the first motion vector of the reference matching block based on the position relationship between the current frame and the third target frame in List1 of the current frame and the position relationship between the reference frame and the second target frame in List0 of the reference frame to obtain a scaled motion vector; and determining the backward motion information of the peripheral matching block according to the scaled motion vector, where the backward motion information of the peripheral matching block may include the third target frame in List1 of the current frame and the scaled motion vector.
Scaling the backward motion information of the reference matching block based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame to obtain the forward motion information of the peripheral matching block may include: scaling the second motion vector of the reference matching block based on the position relationship between the current frame and the first target frame in List0 of the current frame and the position relationship between the reference frame and the fourth target frame in List1 of the reference frame to obtain a scaled motion vector; and determining the forward motion information of the peripheral matching block according to the scaled motion vector, where the forward motion information of the peripheral matching block may include the first target frame in List0 of the current frame and the scaled motion vector.
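The text above leaves the scaling manner open ("the scaling manner is not limited"). One plausible realisation, given here only as an illustrative sketch, is linear scaling by the ratio of temporal distances (picture order count differences); a real implementation would additionally round and clip the result:

```python
def scale_mv(mv, cur_poc, cur_target_poc, ref_poc, ref_target_poc):
    """Scale a motion vector mv = (mvx, mvy) of the reference matching block
    by the ratio of the temporal distance between the current frame and its
    target frame to the temporal distance between the reference frame and the
    frame its motion vector points to (an assumed, simplified formula)."""
    d_cur = cur_poc - cur_target_poc
    d_ref = ref_poc - ref_target_poc
    if d_ref == 0:
        return mv
    mvx, mvy = mv
    return (mvx * d_cur // d_ref, mvy * d_cur // d_ref)
```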
The following describes the addition process of the horizontal rightward motion information angle prediction mode, the vertical downward motion information angle prediction mode, and the oblique downward right motion information angle prediction mode with reference to several specific application scenarios, and the order of each motion information angle prediction mode in the motion information prediction mode candidate list is set as required, which is not limited thereto.
Application scenario 1: referring to fig. 4C, for each motion information angle prediction mode, based on the preconfigured angle of the motion information angle prediction mode, a plurality of peripheral matching blocks pointed by the preconfigured angle are selected from all the peripheral blocks, and a peripheral matching block to be traversed is selected from the plurality of peripheral matching blocks (e.g., a first peripheral matching block and a second peripheral matching block to be traversed are selected, or a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed in sequence are selected).
If the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
If the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the second peripheral matching block and the third peripheral matching block are continuously compared. If the second peripheral matching block and the third peripheral matching block both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block is the same as that of the third peripheral matching block, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For example, for the horizontal rightward motion information angle prediction mode, A_0 may be used as the first peripheral matching block and A_{H/8} as the second peripheral matching block, and whether A_0 and A_{H/8} are the same is judged by the above comparison method. If they are the same, adding the horizontal rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If they are different, the horizontal rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical downward motion information angle prediction mode, A_{m+1} may be used as the first peripheral matching block and A_{m+1+W/8} as the second peripheral matching block, and whether A_{m+1} and A_{m+1+W/8} are the same is judged by the above comparison method. If they are the same, adding the vertical downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If they are different, the vertical downward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the diagonal down-right motion information angle prediction mode, A_{m-1} may be used as the first peripheral matching block, A_m as the second peripheral matching block, and A_{m+1} as the third peripheral matching block, and whether A_m and A_{m-1} are the same is judged by the above comparison method. If they are different, the diagonal down-right motion information angle prediction mode is added to the motion information prediction mode candidate list. If they are the same, whether A_m and A_{m+1} are the same is further judged by the above comparison method. If they are different, the diagonal down-right motion information angle prediction mode is added to the motion information prediction mode candidate list. If they are the same, adding the diagonal down-right motion information angle prediction mode to the motion information prediction mode candidate list is prohibited.
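The peripheral matching block indices used above for the three modes of this application scenario can be summarized as follows (illustrative Python; the mode labels are descriptive strings of ours, and the indices refer to A_0..A_{m+n} of FIG. 4C):

```python
def candidate_blocks_fig4c(mode, m, W, H):
    """First, second and (where used) third peripheral matching block indices
    for the horizontal rightward, vertical downward and diagonal down-right
    motion information angle prediction modes of application scenario 1."""
    if mode == "horizontal_right":
        return (0, H // 8)
    if mode == "vertical_down":
        return (m + 1, m + 1 + W // 8)
    if mode == "diagonal_down_right":
        return (m - 1, m, m + 1)
    raise ValueError(mode)
```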
Application scenario 2: The comparison method of application scenario 2 is the same as that of application scenario 1. When the current block is at the right boundary but not at the lower boundary, for the vertical downward motion information angle prediction mode, A_{m+1} is used as the first peripheral matching block and A_{m+1+W/8} as the second peripheral matching block, and whether A_{m+1} and A_{m+1+W/8} are the same is judged by the above comparison method. If they are the same, adding the vertical downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If they are different, the vertical downward motion information angle prediction mode is added to the motion information prediction mode candidate list. When the current block is at the lower boundary but not at the right boundary, for the horizontal rightward motion information angle prediction mode, A_0 is used as the first peripheral matching block and A_{H/8} as the second peripheral matching block, and whether A_0 and A_{H/8} are the same is judged by the above comparison method. If they are the same, adding the horizontal rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited. If they are different, the horizontal rightward motion information angle prediction mode is added to the motion information prediction mode candidate list. When the current block is at neither the lower boundary nor the right boundary, the processing of the horizontal rightward, vertical downward, and diagonal down-right motion information angle prediction modes refers to application scenario 1 and is not described again here.
Application scenario 3: for each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from among the peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and selecting at least one first peripheral matching block (such as one or more) from among the plurality of peripheral matching blocks; for each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. Each combination of the first and second peripheral matching blocks is recorded as a matching block group. For example, A1 and A5 are selected from the plurality of peripheral matching blocks as a first peripheral matching block, A2 is selected from the plurality of peripheral matching blocks as a second peripheral matching block corresponding to A1, A6 is selected from the plurality of peripheral matching blocks as a second peripheral matching block corresponding to A5, the matching block group 1 includes A1 and A2, and the matching block group 2 includes A5 and A6. For each matching block group, if available motion information exists in two peripheral matching blocks in the matching block group, and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If available motion information exists in both the two peripheral matching blocks in the matching block group, and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
For example, for the horizontal rightward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (where i and j take values in the range [0, m-1]). If the comparison results of all matching block groups are the same, adding the horizontal rightward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited; if any comparison result is different, the horizontal rightward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical downward motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (where i and j take values in the range [m+1, m+n]). If the comparison results of all matching block groups are the same, adding the vertical downward motion information angle prediction mode to the motion information prediction mode candidate list is prohibited; if any comparison result is different, the vertical downward motion information angle prediction mode is added to the motion information prediction mode candidate list.
For the oblique downward-right motion information angle prediction mode, the above comparison method is used to determine the comparison result of at least one matching block group A_i and A_j (where i and j take values in the range [0, m+n]). If the comparison results of all matching block groups are the same, adding the oblique downward-right motion information angle prediction mode to the motion information prediction mode candidate list is prohibited; if any comparison result is different, the oblique downward-right motion information angle prediction mode is added to the motion information prediction mode candidate list.
Application scenario 4: the comparison method of application scenario 4 is the same as that of application scenario 3. When the current block is at the right boundary and not at the lower boundary, the vertical downward motion information angle prediction mode is processed as in application scenario 3, which is not described here again. When the current block is at the lower boundary and not at the right boundary, the horizontal rightward motion information angle prediction mode is processed as in application scenario 3, which is not described here again. When the current block is at neither the lower boundary nor the right boundary, the horizontal rightward motion information angle prediction mode, the vertical downward motion information angle prediction mode, and the oblique downward-right motion information angle prediction mode are processed as in application scenario 3, which is not described here again.
Example 8: embodiments 1 to 5 involve the encoding/decoding side filling motion information that is not available in the peripheral blocks. Regarding how to fill the unavailable motion information, in a possible implementation, for the decoding side, after the target motion information prediction mode of the current block is selected from the motion information prediction mode candidate list, if the target motion information prediction mode is a target motion information angle prediction mode and the plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode include peripheral blocks with no available motion information, the unavailable motion information in the peripheral blocks of the current block is filled. For the encoding side, for each motion information angle prediction mode in the motion information prediction mode candidate list, if the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode include peripheral blocks with no available motion information, the unavailable motion information in the peripheral blocks of the current block is filled.
For example, for the encoding side and the decoding side, for a peripheral block with no available motion information, the available motion information of a neighboring block of the peripheral block is filled in as the motion information of the peripheral block; or, the available motion information of the reference block at the corresponding position of the peripheral block in the temporal reference frame is filled in as the motion information of the peripheral block; or, default motion information is filled in as the motion information of the peripheral block. Of course, the above are just a few filling examples, and no limitation is imposed on this.
In another possible implementation, for motion information angle prediction modes such as the horizontal leftward motion information angle prediction mode, the vertical upward motion information angle prediction mode, the horizontal downward motion information angle prediction mode, the vertical rightward motion information angle prediction mode, etc., the peripheral blocks of the current block may be the left peripheral blocks of the current block and/or the upper peripheral blocks of the current block. In this case, the unavailable motion information in the peripheral blocks may be filled as follows: for the encoding side and the decoding side, the peripheral blocks of the current block are traversed in the traversal order from the left peripheral blocks to the upper peripheral blocks of the current block (or in the traversal order from the upper peripheral blocks to the left peripheral blocks of the current block) until the first peripheral block with available motion information is found; if first peripheral blocks with no available motion information precede this peripheral block, the motion information of this peripheral block is filled into those first peripheral blocks; the traversal then continues with the peripheral blocks after this peripheral block, and if the peripheral blocks after it include a second peripheral block with no available motion information, the motion information of the peripheral block traversed immediately before that second peripheral block is filled into the second peripheral block.
For example, traversing in the traversal order from the left peripheral blocks to the upper peripheral blocks of the current block may include: if the current block has no left peripheral blocks, traversing the upper peripheral blocks of the current block; if the current block has no upper peripheral blocks, traversing the left peripheral blocks of the current block. The left peripheral blocks may include blocks adjacent to the left side of the current block and non-adjacent blocks. The upper peripheral blocks may include blocks adjacent to the upper side of the current block and non-adjacent blocks. The number of first peripheral blocks may be one or more, namely all peripheral blocks before the first traversed peripheral block with available motion information. A first peripheral block may be an unencoded block, an intra block, or a peripheral block whose prediction mode is the intra block copy mode; a second peripheral block may be an unencoded block, an intra block, or a peripheral block whose prediction mode is the intra block copy mode.
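A minimal sketch of this fill rule, assuming the peripheral blocks have already been collected in traversal order and using illustrative types; the function name FillUnavailable and the behaviour when no peripheral block has available motion information (nothing is filled) are assumptions, not the reference implementation.

#include <vector>

struct MotionInfo { int mvx = 0, mvy = 0; int refIdx = -1; };

struct PeripheralBlock {
    bool available = false;  // false: no available motion information
    MotionInfo mi;
};

void FillUnavailable(std::vector<PeripheralBlock>& blocks) {
    // Find the first peripheral block with available motion information.
    int first = -1;
    for (int i = 0; i < static_cast<int>(blocks.size()); ++i) {
        if (blocks[i].available) { first = i; break; }
    }
    if (first < 0) return;  // no available motion information anywhere

    // First peripheral blocks: everything before the first available block
    // is filled with that block's motion information.
    for (int i = 0; i < first; ++i) {
        blocks[i].mi = blocks[first].mi;
        blocks[i].available = true;
    }
    // Second peripheral blocks: each later hole takes the motion information
    // of the peripheral block traversed immediately before it.
    for (int i = first + 1; i < static_cast<int>(blocks.size()); ++i) {
        if (!blocks[i].available) {
            blocks[i].mi = blocks[i - 1].mi;
            blocks[i].available = true;
        }
    }
}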
The following describes a filling process of motion information with reference to several specific application scenarios.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the nonexistence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an unencoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that there is no available motion information for the peripheral block. If a peripheral block exists, and the peripheral block is not an unencoded block, and the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, then the presence of available motion information for the peripheral block is indicated.
Referring to FIG. 4A, the width of the current block is W and the height is H. Let m be W/4 and n be H/4, and let the top-left pixel inside the current block be (x, y). The peripheral block containing pixel (x-1, y+H+W-1) is denoted A_0, and the size of A_0 may be 4 × 4. The peripheral blocks are traversed in order from A_0 to A_{2m+2n}, and the 4 × 4 peripheral blocks are denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block containing pixel (x+W+H-1, y-1).
For example, if the left neighboring blocks of the current block exist and the upper neighboring blocks do not exist, the filling process is as follows: traverse in order from A_0 to A_{m+n-1} to find the first peripheral block with available motion information, denoted A_i. If i is greater than 0, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to m+n-1; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{m+n-1}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Referring to FIG. 4A, assume A_i is A_4. Then the peripheral blocks traversed before A_i (e.g. A_0, A_1, A_2, A_3) are all filled with the motion information of A_4. Assume that when traversing to A_5 it is found that A_5 has no available motion information; then A_5 is filled with the motion information of the most adjacent previous peripheral block A_4. Assume that when traversing to A_6 it is found that A_6 has no available motion information; then A_6 is filled with the motion information of the most adjacent previous peripheral block A_5, and so on.
If the left neighboring blocks of the current block do not exist and the upper neighboring blocks exist, the filling process is as follows: traverse in order from A_{m+n+1} to A_{2m+2n} to find the first peripheral block with available motion information, denoted A_i. If i is greater than m+n+1, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
If both the left neighboring blocks and the upper neighboring blocks of the current block exist, the filling process is as follows: traverse in order from A_0 to A_{2m+2n} to find the first peripheral block with available motion information, denoted A_i. If i is greater than 0, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 2: similar to the implementation of application scenario 1, except that whether the left neighboring blocks of the current block exist and whether the upper neighboring blocks of the current block exist are not distinguished. Regardless of whether the left neighboring blocks and the upper neighboring blocks of the current block exist, the processing is as follows: traverse in order from A_0 to A_{2m+2n} to find the first peripheral block with available motion information, denoted A_i. If i is greater than 0, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 3: similar to the implementation of application scenario 1, except that a peripheral block of the current block exists if the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; a peripheral block of the current block does not exist if the peripheral block is located outside the image where the current block is located, or is located inside the image but outside the image slice where the current block is located. Other implementation processes refer to application scenario 1 and are not described in detail here.
Application scenario 4: similar to the implementation of application scenario 1, except that a peripheral block of the current block exists if the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; a peripheral block of the current block does not exist if the peripheral block is located outside the image where the current block is located, or is located inside the image but outside the image slice where the current block is located. In addition, it is not necessary to distinguish whether the left neighboring blocks and the upper neighboring blocks of the current block exist. Other implementation processes refer to application scenario 1 and are not described in detail here.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, or the prediction mode of the peripheral block is an intra block copy mode, it indicates that the peripheral block does not have available motion information. If a peripheral block exists, the peripheral block is not an unencoded block, the peripheral block is not an intra block, and the prediction mode of the peripheral block is not an intra block copy mode, it is indicated that the peripheral block has available motion information.
If the left neighboring blocks of the current block exist and the upper neighboring blocks do not exist, the filling process is as follows: traverse in order from A_0 to A_{m+n-1}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information of the temporally corresponding position of that peripheral block. If the left neighboring blocks of the current block do not exist and the upper neighboring blocks exist, the filling process is as follows: traverse in order from A_{m+n+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally corresponding position of that peripheral block. If both the left neighboring blocks and the upper neighboring blocks of the current block exist, the filling process is as follows: traverse in order from A_0 to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill it with zero motion information or with the motion information of the temporally corresponding position of that peripheral block.
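A hedged sketch of this application scenario 5 variant: each unavailable peripheral block in the traversal range is filled independently, either with zero motion information or with the motion information of its temporally corresponding position, and nothing is propagated from neighbouring peripheral blocks. The types, the temporalLookup callback and the inclusive [begin, end] indexing are illustrative assumptions.

#include <functional>
#include <vector>

struct MotionInfo { int mvx = 0, mvy = 0; int refIdx = -1; };

struct PeripheralBlock {
    bool available = false;
    MotionInfo mi;
};

void FillWithZeroOrTemporal(std::vector<PeripheralBlock>& blocks, int begin, int end,
                            const std::function<bool(int, MotionInfo&)>& temporalLookup) {
    for (int i = begin; i <= end; ++i) {
        if (blocks[i].available) continue;
        MotionInfo temporal;
        if (temporalLookup && temporalLookup(i, temporal)) {
            blocks[i].mi = temporal;      // temporally corresponding position
        } else {
            blocks[i].mi = MotionInfo{};  // zero motion information
        }
        blocks[i].available = true;
    }
}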
Application scenario 6: similar to the implementation of application scenario 5, except that whether the left neighboring blocks and the upper neighboring blocks of the current block exist is not distinguished. Regardless of whether the left neighboring blocks and the upper neighboring blocks of the current block exist, traverse in order from A_0 to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information of the temporally corresponding position of that peripheral block.
Application scenario 7: similar to the implementation of application scenario 5, except that a peripheral block of the current block exists if the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; a peripheral block of the current block does not exist if the peripheral block is located outside the image where the current block is located, or is located inside the image but outside the image slice where the current block is located. Other implementation processes refer to application scenario 5 and are not described in detail here.
Application scenario 8: similar to the implementation of application scenario 5, except that a peripheral block of the current block exists if the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; a peripheral block of the current block does not exist if the peripheral block is located outside the image where the current block is located, or is located inside the image but outside the image slice where the current block is located. In addition, it is not necessary to distinguish whether the left neighboring blocks and the upper neighboring blocks of the current block exist. Other implementation processes refer to application scenario 5 and are not described in detail here.
Application scenarios 9 to 16: similar to the implementations of application scenarios 1 to 8, except that the width of the current block is W, the height of the current block is H, m is W/8, n is H/8, the size of the peripheral block A_0 is 8 × 8, and the 8 × 8 peripheral blocks are denoted A_1, A_2, ..., A_{2m+2n}; that is, the size of each peripheral block is changed from 4 × 4 to 8 × 8. Other implementation processes may refer to application scenarios 1 to 8 and are not repeated here.
Application scenario 17: referring to FIG. 4F, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored with a minimum unit of 4 × 4. Suppose A_{14}, A_{15}, A_{16} and A_{17} are unencoded blocks; these unencoded blocks are filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the corresponding position block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the size of the current block is another size, the filling can also be performed in the above manner, which is not described here again.
Application scenario 18: referring to FIG. 4G, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored with a minimum unit of 4 × 4. Suppose A_7 is an intra block; the intra block needs to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the corresponding position block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting.
Application scenario 19: referring to FIG. 4A, the width of the current block is W, the height is H, m is W/4, and n is H/4. The top-left pixel inside the current block is (x, y), the peripheral block containing pixel (x-1, y+H+W-1) is A_0, and the size of A_0 is 4 × 4. The peripheral blocks are traversed in order from A_0 to A_{2m+2n}, and the 4 × 4 peripheral blocks are denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block containing pixel (x+W+H-1, y-1).
If the motion information angle prediction mode is the horizontal downward motion information angle prediction mode, the traversal range is A_0 to A_{m+n-2}: traverse in order from A_0 to A_{m+n-2} to find the first peripheral block with available motion information, denoted A_i. If i is greater than 0, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to m+n-2; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{m+n-2}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
If the motion information angle prediction mode is the horizontal leftward motion information angle prediction mode, the traversal range is A_m to A_{m+n-1}, and the traversal is performed in order from A_m to A_{m+n-1}. If the motion information angle prediction mode is the horizontal upward motion information angle prediction mode, the traversal range is A_{m+1} to A_{2m+n-1}, and the traversal is performed in order from A_{m+1} to A_{2m+n-1}. If the motion information angle prediction mode is the vertical upward motion information angle prediction mode, the traversal range is A_{m+n+1} to A_{2m+n}, and the traversal is performed in order from A_{m+n+1} to A_{2m+n}. If the motion information angle prediction mode is the vertical rightward motion information angle prediction mode, the traversal range is A_{m+n+2} to A_{2m+2n}, and the traversal is performed in order from A_{m+n+2} to A_{2m+2n}.
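The traversal ranges listed above (together with the horizontal downward case) can be summarized as in the following sketch with m = W/4 and n = H/4; the enum and helper name are illustrative assumptions, and only the inclusive [begin, end] index pairs are taken from the description.

#include <utility>

enum class AngleMode {
    HorizontalDown, HorizontalLeft, HorizontalUp, VerticalUp, VerticalRight
};

// Returns the inclusive traversal range A_begin..A_end for the given mode.
std::pair<int, int> TraversalRange(AngleMode mode, int m, int n) {
    switch (mode) {
        case AngleMode::HorizontalDown: return {0,         m + n - 2};
        case AngleMode::HorizontalLeft: return {m,         m + n - 1};
        case AngleMode::HorizontalUp:   return {m + 1,     2 * m + n - 1};
        case AngleMode::VerticalUp:     return {m + n + 1, 2 * m + n};
        case AngleMode::VerticalRight:  return {m + n + 2, 2 * m + 2 * n};
    }
    return {0, 0};  // unreachable for the listed modes
}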
Application scenario 20: referring to fig. 4H, motion information padding needs to be performed on the intra blocks of the surrounding blocks, or the non-encoded blocks, or the surrounding blocks whose prediction mode is intra block copy mode, and the padding method is similar to the padding method of the intra reference pixels.
Example 9: embodiments 1 to 5 involve the encoding/decoding side filling motion information that is not available in the peripheral blocks of the current block. For the horizontal rightward motion information angle prediction mode, the vertical downward motion information angle prediction mode, the oblique downward-right motion information angle prediction mode, and other motion information angle prediction modes, the peripheral blocks of the current block may be the right peripheral blocks of the current block and/or the lower peripheral blocks of the current block. In this case, regarding how to fill the unavailable motion information in the peripheral blocks of the current block, in a possible implementation, the filling may be performed as follows:
Traverse the peripheral blocks of the current block in the traversal order from the right peripheral blocks to the lower peripheral blocks of the current block until the first peripheral block with available motion information is found; if first peripheral blocks with no available motion information precede this peripheral block, fill the motion information of this peripheral block into those first peripheral blocks; continue traversing the peripheral blocks after this peripheral block, and if the peripheral blocks after it include a second peripheral block with no available motion information, fill the motion information of the peripheral block traversed immediately before that second peripheral block into the second peripheral block. Alternatively, traverse the peripheral blocks of the current block in the traversal order from the lower peripheral blocks to the right peripheral blocks of the current block until the first peripheral block with available motion information is found; if first peripheral blocks with no available motion information precede this peripheral block, fill the motion information of this peripheral block into those first peripheral blocks; continue traversing the peripheral blocks after this peripheral block, and if the peripheral blocks after it include a second peripheral block with no available motion information, fill the motion information of the peripheral block traversed immediately before that second peripheral block into the second peripheral block.
In another possible implementation, traverse the peripheral blocks of the current block in the traversal order from the right peripheral blocks to the lower peripheral blocks of the current block, starting with the first peripheral block; if the first peripheral block has no available motion information, fill motion information into the first peripheral block; continue traversing the peripheral blocks after the first peripheral block, and if the peripheral blocks after the first peripheral block include a third peripheral block with no available motion information, fill the motion information of the peripheral block traversed immediately before that third peripheral block into the third peripheral block. Alternatively, traverse in the traversal order from the lower peripheral blocks to the right peripheral blocks of the current block, starting with the first peripheral block; if the first peripheral block has no available motion information, fill motion information into the first peripheral block; continue traversing the peripheral blocks after the first peripheral block, and if the peripheral blocks after the first peripheral block include a third peripheral block with no available motion information, fill the motion information of the peripheral block traversed immediately before that third peripheral block into the third peripheral block.
For example, the traversing in the traversal order from the right peripheral block to the bottom peripheral block of the current block may include: if the current block does not have the right peripheral block, traversing the lower peripheral block of the current block; and traversing the right peripheral block of the current block if the current block does not have the lower peripheral block. The traversing in the traversal order from the lower peripheral block to the right peripheral block of the current block may include: if the current block does not have the right peripheral block, traversing the lower peripheral block of the current block; and traversing the right peripheral block of the current block if the current block does not have the lower peripheral block.
For example, the right peripheral blocks may include blocks adjacent to the right side of the current block and non-adjacent blocks. The lower peripheral blocks may include blocks adjacent to the lower side of the current block and non-adjacent blocks. The number of first peripheral blocks may be one or more, namely all peripheral blocks before the first traversed peripheral block with available motion information. The number of second peripheral blocks may be one or more. The number of third peripheral blocks may be one or more.
For example, if the first peripheral block has no available motion information, filling motion information into the first peripheral block may include, but is not limited to: filling zero motion information into the first peripheral block; or filling motion information from the historical motion information list into the first peripheral block; or filling the motion information of a peripheral block adjacent to the first peripheral block into the first peripheral block.
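A minimal sketch of the second implementation described above for the right/lower peripheral blocks: the first peripheral block in traversal order is filled explicitly when its motion information is unavailable, and every later hole copies the motion information of the most adjacent previous peripheral block. The types and the firstFill parameter (standing for zero motion information, an entry of the historical motion information list, or the motion information of an adjacent peripheral block) are illustrative assumptions.

#include <cstddef>
#include <vector>

struct MotionInfo { int mvx = 0, mvy = 0; int refIdx = -1; };

struct PeripheralBlock {
    bool available = false;
    MotionInfo mi;
};

void FillRightBottom(std::vector<PeripheralBlock>& blocks, const MotionInfo& firstFill) {
    if (blocks.empty()) return;
    // Fill the first peripheral block explicitly if it has no available motion information.
    if (!blocks[0].available) {
        blocks[0].mi = firstFill;
        blocks[0].available = true;
    }
    // Later holes copy the motion information of the previous peripheral block.
    for (std::size_t i = 1; i < blocks.size(); ++i) {
        if (!blocks[i].available) {
            blocks[i].mi = blocks[i - 1].mi;
            blocks[i].available = true;
        }
    }
}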
The following describes a filling process of motion information with reference to several specific application scenarios.
Application scenario 1: referring to FIG. 4I, the width and height of the current block are W and H, m is W/4, and n is H/4. The top-left pixel inside the current block is (x, y), and the peripheral block containing pixel (x+W, y) is denoted A_0. The peripheral blocks are traversed in the sequential direction from A_0 to A_{m+n}, and the peripheral blocks are denoted A_1, A_2, ..., A_{m+n}, where A_{m+n} is the peripheral block containing pixel (x, y+H). If a peripheral block exceeds the boundary of the current image, the peripheral block is clipped to the boundary of the current image; or, if a peripheral block exceeds the range of the current maximum coding unit, the peripheral block is clipped to the range of the current maximum coding unit. For example, clipping a peripheral block to the boundary of the current image means: if the peripheral block exceeds the right boundary of the current image, the peripheral block is translated leftwards so that it lies within the boundary of the current image; if the peripheral block exceeds the lower boundary of the current image, the peripheral block is translated upwards so that it lies within the boundary of the current image.
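A small sketch of the clipping rule just described, assuming a peripheral block is addressed by the coordinates of its top-left sample with a fixed 4 × 4 size; the names are illustrative assumptions, and only the picture-boundary case is shown (clipping to the current maximum coding unit range is analogous).

// Top-left sample position of a peripheral block.
struct BlockPos { int x; int y; };

BlockPos ClipToPicture(BlockPos pos, int picWidth, int picHeight, int blkSize = 4) {
    // Beyond the right boundary of the current image: translate leftwards.
    if (pos.x + blkSize > picWidth)  pos.x = picWidth - blkSize;
    // Beyond the lower boundary of the current image: translate upwards.
    if (pos.y + blkSize > picHeight) pos.y = picHeight - blkSize;
    return pos;
}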
For example, referring to the several manners of embodiment 7, the motion information of a peripheral matching block may be determined according to the motion information of the reference matching block; that is, it is known whether the motion information of the peripheral matching block is available. When the motion information of a peripheral matching block is available, the motion information of that peripheral matching block (such as forward motion information and/or backward motion information) can be known. On this basis, the unavailable motion information in the peripheral blocks is filled in the following manner: traverse in order from A_0 to A_{m+n} to find the first peripheral block with available motion information, denoted A_i. If i is greater than 0, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to m+n; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i+1} to A_{m+n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 2: referring to FIG. 4I, the unavailable motion information in the peripheral blocks is filled as follows: traverse in order from A_{m+n} to A_0 to find the first peripheral block with available motion information, denoted A_i. If i is less than m+n, the motion information of all peripheral blocks traversed before A_i is filled with the motion information of A_i. Judge whether i is equal to 0; if so, the filling is finished and the filling process exits; otherwise, traverse from A_{i-1} to A_0, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 3: referring to FIG. 4J, the unavailable motion information in the peripheral blocks is filled as follows: first, judge whether the motion information of A_0 is available; if the motion information of A_0 is unavailable, A_0 may be filled with zero motion information. Then, traverse from A_1 to A_{m+n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 4: referring to FIG. 4J, the unavailable motion information in the peripheral blocks is filled as follows: first, judge whether the motion information of A_{m+n} is available; if the motion information of A_{m+n} is unavailable, A_{m+n} may be filled with zero motion information. Then, traverse from A_{m+n-1} to A_0, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 5: referring to FIG. 4J, the unavailable motion information in the peripheral blocks is filled as follows: first, judge whether the motion information of A_0 is available; if the motion information of A_0 is unavailable, A_0 may be filled with motion information from the historical motion information list (i.e. the HMVP list), such as any motion information in the historical motion information list. Then, traverse from A_1 to A_{m+n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 6: referring to FIG. 4J, the unavailable motion information in the peripheral blocks is filled as follows: judge whether the motion information of A_{m+n} is available; if the motion information of A_{m+n} is unavailable, fill A_{m+n} with motion information from the historical motion information list. Then, traverse from A_{m+n-1} to A_0, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 7: referring to FIG. 4K, the unavailable motion information in the peripheral blocks is filled as follows: first, judge whether the motion information of A_0 is available; if the motion information of A_0 is unavailable, A_0 may be filled with the motion information of a peripheral block adjacent to A_0. The peripheral block adjacent to A_0 may be the peripheral block located above A_0, i.e. the peripheral block at the upper right corner of the current block, shown as A_0' in FIG. 4K. In the filling process of example 8, the filling of A_0' has already been completed, i.e. A_0' already has motion information; therefore, A_0 may be filled with the motion information of A_0'. Then, traverse from A_1 to A_{m+n}, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Application scenario 8: referring to FIG. 4K, the unavailable motion information in the peripheral blocks is filled as follows: first, judge whether the motion information of A_{m+n} is available; if the motion information of A_{m+n} is unavailable, A_{m+n} may be filled with the motion information of a peripheral block adjacent to A_{m+n}. The peripheral block adjacent to A_{m+n} may be the peripheral block located to the left of A_{m+n}, i.e. the peripheral block at the lower left corner of the current block, shown as A_{m+n}' in FIG. 4K. In the filling process of example 8, the filling of A_{m+n}' has already been completed, i.e. A_{m+n}' already has motion information; therefore, A_{m+n} may be filled with the motion information of A_{m+n}'. Then, traverse from A_{m+n-1} to A_0, and if the motion information of a traversed peripheral block is unavailable, fill it with the motion information of the most adjacent previous peripheral block, until the traversal is finished.
Example 10: embodiments 1 to 5 involve motion compensation using a motion information angle prediction mode; for example, the motion information of each sub-region of the current block is determined according to the motion information of the peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode, and the target prediction value of each sub-region is determined according to the motion information of that sub-region. The motion compensation process for each sub-region is described below with reference to specific application scenarios.
Application scenario 1: and selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region, and the dividing manner is not limited. And aiming at each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks, and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block. Then, for each sub-region, the target prediction value of the sub-region is determined according to the motion information of the sub-region, and the determination process is not limited.
For example, the motion information of the selected peripheral matching block may be used as the motion information of the sub-region. For example, assuming that the motion information of the peripheral matching block is one-way motion information, the one-way motion information may be taken as the motion information of the sub-area. Assuming that the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information may be used as the motion information of the sub-region, or one of the bidirectional motion information may be used as the motion information of the sub-region, or the other of the bidirectional motion information may be used as the motion information of the sub-region.
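The handling of unidirectional and bidirectional motion information described above can be sketched as follows; the MotionInfo type with optional forward/backward components and the BiPolicy selector are illustrative assumptions rather than the reference data structures.

#include <optional>

struct MotionVector { int x = 0, y = 0; };

// Illustrative motion information with optional forward and backward parts.
struct MotionInfo {
    std::optional<MotionVector> fwd;  // forward motion information
    std::optional<MotionVector> bwd;  // backward motion information
};

enum class BiPolicy { KeepBoth, KeepForward, KeepBackward };

// Unidirectional motion information is used directly as the sub-region's motion
// information; bidirectional motion information keeps both directions or only one.
MotionInfo DeriveSubRegionMotion(const MotionInfo& matchingBlock, BiPolicy policy) {
    MotionInfo out = matchingBlock;
    if (matchingBlock.fwd && matchingBlock.bwd) {
        if (policy == BiPolicy::KeepForward)  out.bwd.reset();
        if (policy == BiPolicy::KeepBackward) out.fwd.reset();
    }
    return out;
}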
For example, the sub-region partition information may be independent of the motion information angle prediction mode; for instance, the sub-region partition information of the current block, according to which the current block is partitioned into at least one sub-region, is determined according to the size of the current block. For example, if the size of the current block satisfies that the width is greater than or equal to a preset size parameter (configured empirically, such as 8) and the height is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8, i.e. the current block is divided into at least one sub-region in the manner of 8 × 8.
For example, the sub-region partition information may be related to the motion information angle prediction mode. For the horizontal upward motion information angle prediction mode, the horizontal downward motion information angle prediction mode, or the vertical rightward motion information angle prediction mode, if the width of the current block is greater than or equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
For the horizontal leftward motion information angle prediction mode, if the width of the current block is greater than the preset size parameter, the size of the sub-region is the width of the current block × 4, or 4 × 4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
For the vertical upward motion information angle prediction mode, if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 × the height of the current block, or 4 × 4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
For the horizontal leftward motion information angular prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4 × 4. For the vertical upward motion information angular prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4 × 4.
Of course, the above are only examples; the preset size parameter may be 8, or may be greater than 8.
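A hedged sketch of the mode-dependent sub-region sizes listed above, assuming the preset size parameter is 8 and choosing the "width × 4" / "4 × height" options for the horizontal leftward and vertical upward cases (the description also allows 4 × 4 there); the enum and function names are illustrative assumptions.

#include <utility>

enum class AngleMode {
    HorizontalLeft, VerticalUp, HorizontalUp, HorizontalDown, VerticalRight
};

// Returns {subWidth, subHeight} for a current block of size W x H.
std::pair<int, int> SubRegionSize(AngleMode mode, int W, int H, int S = 8) {
    switch (mode) {
        case AngleMode::HorizontalLeft:
            if (W > S) return {W, 4};              // one sub-region per 4-high row
            if (W == S && H >= S) return {8, 8};
            return {4, 4};
        case AngleMode::VerticalUp:
            if (H > S) return {4, H};              // one sub-region per 4-wide column
            if (H == S && W >= S) return {8, 8};
            return {4, 4};
        default:  // horizontal upward, horizontal downward, vertical rightward
            return (W >= S && H >= S) ? std::make_pair(8, 8) : std::make_pair(4, 4);
    }
}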
Application scenario 2: and selecting a plurality of peripheral matching blocks pointed by a preset angle from peripheral blocks of the current block based on the preset angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region in a manner of 8 × 8 (i.e., the size of the sub-region is 8 × 8). And aiming at each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks, and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block. And for each sub-area, determining a target predicted value of the sub-area according to the motion information of the sub-area, and not limiting the determination process.
Application scenario 3: referring to fig. 5A, motion compensation is performed at an angle for each 4 × 4 sub-region within the current block. And if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-region, or determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region.
According to FIG. 5A, the size of the current block is 4 × 8. For the horizontal leftward motion information angle prediction mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; the other 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. For the vertical upward motion information angle prediction mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; the other 4 × 4 sub-region also corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1. For the horizontal upward motion information angle prediction mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; the other 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1. For the horizontal downward motion information angle prediction mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; the other 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. For the vertical rightward motion information angle prediction mode, two sub-regions of the same size are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; the other 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3.
Application scenario 4: referring to FIG. 5B, if the width W of the current block is smaller than 8 and the height H of the current block is greater than 8, each sub-region in the current block is motion compensated as follows: for the vertical upward motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle; for the motion information angle prediction modes other than the vertical upward motion information angle prediction mode, motion compensation is performed on each 4 × 4 sub-region in the current block according to the corresponding angle.
According to fig. 5B, the size of the current block is 4 × 16, and for the horizontal leftward motion information angular prediction mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one sub-region with 4 × 4 corresponds to the peripheral matching block A1, and the motion information of the sub-region with 4 × 4 is determined according to the motion information of A1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block A4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of A4. For the vertical upward motion information angle prediction mode, 4 sub-regions with the size of 4 × 4 are divided, each sub-region with the size of 4 × 4 corresponds to the peripheral matching block B1, the motion information of each sub-region with the size of 4 × 4 is determined according to the motion information of B1, and the motion information of the four sub-regions is the same, so in this embodiment, the current block may not be divided into sub-regions, and the current block itself serves as one sub-region and corresponds to one peripheral matching block B1.
For the horizontal upward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. For the horizontal downward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4 × 4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4; one 4 × 4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5. For the vertical rightward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4 × 4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4; one 4 × 4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5.
Application scenario 5: referring to FIG. 5C, if the width W of the current block is greater than 8 and the height H of the current block is smaller than 8, each sub-region in the current block is motion compensated as follows: for the horizontal leftward motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; for the other motion information angle prediction modes, motion compensation may be performed on each 4 × 4 sub-region in the current block according to the corresponding angle.
According to fig. 5C, the size of the current block is 16 × 4, for the horizontal leftward motion information angle prediction mode, 4 sub-regions with the size of 4 × 4 are divided, each sub-region with 4 × 4 corresponds to a peripheral matching block A1, the motion information of each sub-region with 4 × 4 is determined according to the motion information of A1, and the motion information of the four sub-regions is the same, so that in this embodiment, the current block itself may not be divided into sub-regions, and the current block itself serves as a sub-region and corresponds to a peripheral matching block A1. And aiming at the vertical upward motion information angle prediction mode, dividing 4 sub-regions with the size of 4 × 4, wherein one sub-region with the size of 4 × 4 corresponds to the peripheral matching block B1, and determining the motion information of the sub-region with the size of 4 × 4 according to the motion information of B1. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4.
For the horizontal upward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one 4 × 4 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3. For the horizontal downward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4 × 4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4; one 4 × 4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5. For the vertical rightward motion information angle prediction mode, 4 sub-regions of size 4 × 4 are divided: one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4 × 4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4; one 4 × 4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5.
Application scenario 6: if the width W of the current block is equal to 8 and the height H of the current block is equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block (i.e., the sub-region is the current block itself) according to the corresponding angle. If the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block may be selected from the motion information of the plurality of peripheral matching blocks, according to the corresponding angle, as the motion information of the sub-region. For example, as shown in fig. 5D, for the horizontal leftward motion information angle prediction mode, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. Referring to fig. 5E, for the vertical upward motion information angle prediction mode, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block B2 may be selected. Referring to fig. 5F, for the horizontal upward motion information angle prediction mode, the motion information of the peripheral matching block E, the peripheral matching block B1, or the peripheral matching block A1 may be selected. Referring to fig. 5G, for the horizontal downward motion information angle prediction mode, the motion information of the peripheral matching block A2, the peripheral matching block A3, or the peripheral matching block A4 may be selected. Referring to fig. 5H, for the vertical rightward motion information angle prediction mode, the motion information of the peripheral matching block B2, the peripheral matching block B3, or the peripheral matching block B4 may be selected.
According to fig. 5D, for the horizontal leftward motion information angle prediction mode, one sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1; or, the sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2. According to fig. 5E, for the vertical upward motion information angle prediction mode, one sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1; or, the sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2. According to fig. 5F, for the horizontal upward motion information angle prediction mode, one sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block E, and the motion information of the sub-region is determined according to the motion information of E; or, the sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1; or, the sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1. According to fig. 5G, for the horizontal downward motion information angle prediction mode, one sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2; or, the sub-region corresponds to the peripheral matching block A3, and the motion information of the sub-region is determined according to the motion information of A3; or, the sub-region corresponds to the peripheral matching block A4, and the motion information of the sub-region is determined according to the motion information of A4. According to fig. 5H, for the vertical rightward motion information angle prediction mode, one sub-region with the size of 8 × 8 is divided; the sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2; or, the sub-region corresponds to the peripheral matching block B3, and the motion information of the sub-region is determined according to the motion information of B3; or, the sub-region corresponds to the peripheral matching block B4, and the motion information of the sub-region is determined according to the motion information of B4.
Application scenario 7: the width W of the current block is greater than or equal to 16 and the height H of the current block is equal to 8; in this case, motion compensation is performed on each sub-region in the current block in the following way: for the horizontal leftward motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle. For the other types of motion information angle prediction modes, motion compensation is performed on each 8 × 8 sub-region in the current block according to the corresponding angle. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks as the motion information of the sub-region. For example, referring to fig. 5I, for the horizontal leftward motion information angle prediction mode, the motion information of the peripheral matching block A1 may be selected for the first W × 4 sub-region, and the motion information of the peripheral matching block A2 may be selected for the second W × 4 sub-region. Referring to fig. 5J, for the vertical upward motion information angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block B2 may be selected; for the second 8 × 8 sub-region, the motion information of the peripheral matching block B3 may be selected, or the motion information of the peripheral matching block B4 may be selected. The other motion information angle prediction modes are similar and are not described herein again.
According to fig. 5I, the size of the current block is 16 × 8. For the horizontal leftward motion information angle prediction mode, 2 sub-regions with the size of 16 × 4 are divided: one 16 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of that 16 × 4 sub-region is determined according to the motion information of A1; the other 16 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of that 16 × 4 sub-region is determined according to the motion information of A2. According to fig. 5J, the size of the current block is 16 × 8. For the vertical upward motion information angle prediction mode, 2 sub-regions with the size of 8 × 8 are divided: one 8 × 8 sub-region corresponds to the peripheral matching block B1 or B2, and the motion information of that 8 × 8 sub-region is determined according to the motion information of B1 or B2; the other 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of that 8 × 8 sub-region is determined according to the motion information of B3 or B4.
Application scenario 8: the width W of the current block is equal to 8 and the height H of the current block is greater than or equal to 16; in this case, motion compensation is performed on each sub-region in the current block in the following way: for the vertical upward motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle. For the other types of motion information angle prediction modes, motion compensation is performed on each 8 × 8 sub-region within the current block according to the corresponding angle. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks as the motion information of the sub-region. For example, referring to fig. 5K, for the vertical upward motion information angle prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region. Referring to fig. 5L, for the horizontal leftward motion information angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected; for the second 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. The other motion information angle prediction modes are similar and are not described herein again.
According to fig. 5K, the size of the current block is 8 × 16. For the vertical upward motion information angle prediction mode, 2 sub-regions with the size of 4 × 16 are divided: one 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of that 4 × 16 sub-region is determined according to the motion information of B1; the other 4 × 16 sub-region corresponds to the peripheral matching block B2, and the motion information of that 4 × 16 sub-region is determined according to the motion information of B2. According to fig. 5L, the size of the current block is 8 × 16. For the horizontal leftward motion information angle prediction mode, 2 sub-regions with the size of 8 × 8 are divided: one 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of that 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block; the other 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of that 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block.
Application scenario 9: the width W of the current block is greater than or equal to 16 and the height H is greater than or equal to 16; in this case, motion compensation is performed on each sub-region in the current block in the following way: for the vertical upward motion information angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle. For the horizontal leftward motion information angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle. For the other types of motion information angle prediction modes, motion compensation is performed on each 8 × 8 sub-region within the current block according to the corresponding angle. If a sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks as the motion information of the sub-region.
Referring to fig. 5M, for the vertical upward motion information angle prediction mode, the motion information of the peripheral matching block B1 is selected for the first 4 × H sub-region, the motion information of the peripheral matching block B2 is selected for the second 4 × H sub-region, the motion information of the peripheral matching block B3 is selected for the third 4 × H sub-region, and the motion information of the peripheral matching block B4 is selected for the fourth 4 × H sub-region. For the horizontal leftward motion information angle prediction mode, the motion information of the peripheral matching block A1 is selected for the first W × 4 sub-region, the motion information of the peripheral matching block A2 is selected for the second W × 4 sub-region, the motion information of the peripheral matching block A3 is selected for the third W × 4 sub-region, and the motion information of the peripheral matching block A4 is selected for the fourth W × 4 sub-region. The other types of motion information angle prediction modes are similar.
According to fig. 5M, the size of the current block is 16 × 16. For the vertical upward motion information angle prediction mode, 4 sub-regions with the size of 4 × 16 are divided: one 4 × 16 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; one 4 × 16 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4 × 16 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; and one 4 × 16 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4. According to fig. 5M, the size of the current block is 16 × 16; for the horizontal leftward motion information angle prediction mode, 4 sub-regions with the size of 16 × 4 are divided: one 16 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 16 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 16 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; and one 16 × 4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4.
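To summarize application scenarios 7 to 9 above, the following C sketch shows one possible way of choosing the sub-region size from the block dimensions and the motion information angle prediction mode; the enum values and the helper name are illustrative assumptions rather than part of the embodiments.

```c
typedef enum { MODE_HOR_LEFT, MODE_VER_UP, MODE_OTHER } AngleMode;

/* Chooses the sub-region size used for motion compensation (scenarios 7-9,
 * i.e. blocks with W >= 8 and H >= 8 where at least one dimension is >= 16). */
static void subregion_size(int W, int H, AngleMode mode, int *subW, int *subH)
{
    if (mode == MODE_HOR_LEFT && W >= 16 && H >= 8) {
        *subW = W; *subH = 4;      /* W x 4 rows (scenarios 7 and 9)      */
    } else if (mode == MODE_VER_UP && W >= 8 && H >= 16) {
        *subW = 4; *subH = H;      /* 4 x H columns (scenarios 8 and 9)   */
    } else {
        *subW = 8; *subH = 8;      /* default 8 x 8 for the other modes   */
    }
}
```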
Application scenario 10: the width W of the current block may be greater than or equal to 8, and the height H of the current block may be greater than or equal to 8, and then motion compensation is performed on each 8 × 8 sub-region within the current block. Referring to fig. 5N, for each sub-region of 8 × 8, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks with respect to the motion information of the sub-region. The sub-region division size is independent of the motion information angle prediction mode, and as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, the sub-region division size may be 8 × 8 in any motion information angle prediction mode.
According to fig. 5N, the size of the current block is 16 × 16. For the horizontal leftward motion information angle prediction mode, 4 sub-regions with the size of 8 × 8 are divided: two of the 8 × 8 sub-regions correspond to the peripheral matching block A1 or A2, and their motion information is determined according to the motion information of A1 or A2; the other two 8 × 8 sub-regions correspond to the peripheral matching block A3 or A4, and their motion information is determined according to the motion information of A3 or A4. For the vertical upward motion information angle prediction mode, 4 sub-regions with the size of 8 × 8 are divided: two of the 8 × 8 sub-regions correspond to the peripheral matching block B1 or B2, and their motion information is determined according to the motion information of B1 or B2; the other two 8 × 8 sub-regions correspond to the peripheral matching block B3 or B4, and their motion information is determined according to the motion information of B3 or B4. For the horizontal upward motion information angle prediction mode, 4 sub-regions with the size of 8 × 8 may be divided; then, for each 8 × 8 sub-region, a peripheral matching block (E, B2, or A2) corresponding to the 8 × 8 sub-region may be determined (the determination manner is not limited), and the motion information of the 8 × 8 sub-region is determined according to the motion information of that peripheral matching block. For the horizontal downward motion information angle prediction mode, 4 sub-regions with the size of 8 × 8 are divided; then, for each 8 × 8 sub-region, a peripheral matching block (A3, A5, or A7) corresponding to the 8 × 8 sub-region may be determined (the determination manner is not limited), and the motion information of the 8 × 8 sub-region is determined according to the motion information of that peripheral matching block. For the vertical rightward motion information angle prediction mode, 4 sub-regions with the size of 8 × 8 are divided; then, for each 8 × 8 sub-region, a peripheral matching block (B3, B5, or B7) corresponding to the 8 × 8 sub-region may be determined (the determination manner is not limited), and the motion information of the 8 × 8 sub-region is determined according to the motion information of that peripheral matching block.
Application scenario 11: when the width W of the current block is greater than or equal to 8 and the height H of the current block is greater than or equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block; for each sub-region, any one piece of motion information is selected from the motion information of the several peripheral matching blocks according to the corresponding angle, as shown in fig. 5N, which is not described herein again.
Application scenario 12: the width and height of the current block are W and H respectively, and the pixel point at the upper left corner inside the current block is (x, y). The peripheral block where the pixel point (x + W, y) is located is marked as A0; the peripheral blocks are traversed in order from A0 to A(m+n) and are marked as A1, A2, ..., A(m+n), where A(m+n) is the peripheral block where the pixel point (x, y + H) is located. The motion information of each peripheral block (motion information for which padding has been completed) is respectively recorded as NeighborMotionInfo[0], NeighborMotionInfo[1], ..., NeighborMotionInfo[m+n], where m and n are W/4 and H/4 respectively. For the current block, i = 0 to (W >> 2) - 1 and j = 0 to (H >> 2) - 1, where (i, j) is the index of a 4 × 4 sub-region inside the current block and CurMotionInfo[i][j] is the motion information of that sub-region.
For the horizontal rightward motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[(j >> 1) << 1]. For the vertical downward motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[(cbWidth >> 2) + (cbHeight >> 2) - ((i >> 1) << 1)]. For the diagonal down-right motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[(cbHeight >> 2) + ((j >> 1) << 1) - ((i >> 1) << 1)].
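The index arithmetic above translates directly into code. The following C sketch is a non-normative rendering of it; the type and function names are assumptions, and the application-scenario-13 variant given further below differs only in offsetting the horizontal rightward index by +1 and the vertical downward index by -1.

```c
typedef struct { int mvx, mvy, refIdx; } MotionInfo;

typedef enum { MODE_HOR_RIGHT, MODE_VER_DOWN, MODE_DIAG_DOWN_RIGHT } AngleMode;

/* Copies, for every 4x4 sub-region (i, j) of the current block, the motion
 * information of the padded peripheral block selected by the prediction mode.
 * CurMotionInfo holds (cbWidth >> 2) * (cbHeight >> 2) entries in row order. */
static void derive_sub_region_mi(int cbWidth, int cbHeight, AngleMode mode,
                                 const MotionInfo *NeighborMotionInfo,
                                 MotionInfo *CurMotionInfo)
{
    int cols = cbWidth >> 2, rows = cbHeight >> 2;
    for (int j = 0; j < rows; j++) {          /* 4x4 row index    */
        for (int i = 0; i < cols; i++) {      /* 4x4 column index */
            int idx;
            if (mode == MODE_HOR_RIGHT)
                idx = (j >> 1) << 1;
            else if (mode == MODE_VER_DOWN)
                idx = cols + rows - ((i >> 1) << 1);
            else /* MODE_DIAG_DOWN_RIGHT */
                idx = rows + ((j >> 1) << 1) - ((i >> 1) << 1);
            CurMotionInfo[j * cols + i] = NeighborMotionInfo[idx];
        }
    }
}
```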
After the motion information of each sub-region in the current block is obtained, motion compensation is performed according to the respective motion information.
For example, the width and height of the current block are both 32; a schematic diagram of the motion information of each sub-region inside the current block is shown in fig. 5O. According to fig. 5O, the size of the current block is 32 × 32, and for the horizontal rightward motion information angle prediction mode, 16 sub-regions of the same size are divided, and the size of these sub-regions is 8 × 8. The 1st to 4th sub-regions correspond to the peripheral matching block A0, and the motion information of the 1st to 4th sub-regions is determined according to the motion information of A0. The 5th to 8th sub-regions correspond to the peripheral matching block A2, and the motion information of the 5th to 8th sub-regions is determined according to the motion information of A2. The 9th to 12th sub-regions correspond to the peripheral matching block A4, and the motion information of the 9th to 12th sub-regions is determined according to the motion information of A4. The 13th to 16th sub-regions correspond to the peripheral matching block A6, and the motion information of the 13th to 16th sub-regions is determined according to the motion information of A6.
According to fig. 5O, for the vertical downward motion information angle prediction mode, 16 sub-regions of uniform size are divided, and the size of these sub-regions is 8 × 8. The 1st, 5th, 9th and 13th sub-regions correspond to the peripheral matching block A16, and the motion information of the 1st, 5th, 9th and 13th sub-regions is determined according to the motion information of A16. The 2nd, 6th, 10th and 14th sub-regions correspond to the peripheral matching block A14, and the motion information of the 2nd, 6th, 10th and 14th sub-regions is determined according to the motion information of A14. The 3rd, 7th, 11th and 15th sub-regions correspond to the peripheral matching block A12, and the motion information of the 3rd, 7th, 11th and 15th sub-regions is determined according to the motion information of A12. The 4th, 8th, 12th and 16th sub-regions correspond to the peripheral matching block A10, and the motion information of the 4th, 8th, 12th and 16th sub-regions is determined according to the motion information of A10.
According to fig. 5O, for the diagonal down-right motion information angle prediction mode, 16 sub-regions of the same size are divided, and the size of these sub-regions is 8 × 8. The 1st, 6th, 11th and 16th sub-regions correspond to the peripheral matching block A8, and their motion information is determined according to the motion information of A8. The 2nd, 7th and 12th sub-regions correspond to the peripheral matching block A6, and their motion information is determined according to the motion information of A6. The 3rd and 8th sub-regions correspond to the peripheral matching block A4, and their motion information is determined according to the motion information of A4. The 4th sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. The 5th, 10th and 15th sub-regions correspond to the peripheral matching block A10, and their motion information is determined according to the motion information of A10. The 9th and 14th sub-regions correspond to the peripheral matching block A12, and their motion information is determined according to the motion information of A12. The 13th sub-region corresponds to the peripheral matching block A14, and its motion information is determined according to the motion information of A14.
Application scenario 13: the width and the height of the current block are W and H respectively, and the pixel point at the upper left corner inside the current block is (x, y). The peripheral block where the pixel point (x + W, y) is located is marked as A0; the peripheral blocks are traversed in order from A0 to A(m+n) and are marked as A1, A2, ..., A(m+n), where A(m+n) is the peripheral block where the pixel point (x, y + H) is located. The motion information of each peripheral block (motion information for which padding has been completed) is respectively recorded as NeighborMotionInfo[0], NeighborMotionInfo[1], ..., NeighborMotionInfo[m+n], where m and n are W/4 and H/4 respectively. For the current block, i = 0 to (W >> 2) - 1 and j = 0 to (H >> 2) - 1, where (i, j) is the index of a 4 × 4 sub-region inside the current block and CurMotionInfo[i][j] is the motion information of that sub-region.
For the horizontal rightward motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[((j >> 1) << 1) + 1]. For the vertical downward motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[((cbWidth >> 2) + (cbHeight >> 2) - ((i >> 1) << 1)) - 1]. For the diagonal down-right motion information angle prediction mode, CurMotionInfo[i][j] = NeighborMotionInfo[(cbHeight >> 2) + ((j >> 1) << 1) - ((i >> 1) << 1)].
After the motion information of each sub-region in the current block is obtained, motion compensation is performed according to the respective motion information.
For example, the width and height of the current block are both 32, and a schematic diagram of the motion information of each sub-region inside the current block is shown in fig. 5P. According to fig. 5P, the size of the current block is 32 × 32, and for the horizontal rightward motion information angle prediction mode, 16 sub-regions of the same size are divided, and the size of these sub-regions is 8 × 8. The 1st to 4th sub-regions correspond to the peripheral matching block A1, and the motion information of the 1st to 4th sub-regions is determined according to the motion information of A1. The 5th to 8th sub-regions correspond to the peripheral matching block A3, and the motion information of the 5th to 8th sub-regions is determined according to the motion information of A3. The 9th to 12th sub-regions correspond to the peripheral matching block A5, and the motion information of the 9th to 12th sub-regions is determined according to the motion information of A5. The 13th to 16th sub-regions correspond to the peripheral matching block A7, and the motion information of the 13th to 16th sub-regions is determined according to the motion information of A7.
According to fig. 5P, for the vertical downward motion information angle prediction mode, 16 sub-regions of the same size are divided, and the size of these sub-regions is 8 × 8. The 1st, 5th, 9th and 13th sub-regions correspond to the peripheral matching block A15, and their motion information is determined according to the motion information of A15. The 2nd, 6th, 10th and 14th sub-regions correspond to the peripheral matching block A13, and their motion information is determined according to the motion information of A13. The 3rd, 7th, 11th and 15th sub-regions correspond to the peripheral matching block A11, and their motion information is determined according to the motion information of A11. The 4th, 8th, 12th and 16th sub-regions correspond to the peripheral matching block A9, and their motion information is determined according to the motion information of A9.
According to fig. 5P, for the diagonal down-right motion information angle prediction mode, 16 sub-regions of the same size are divided, and the size of these sub-regions is 8 × 8. The 1st, 6th, 11th and 16th sub-regions correspond to the peripheral matching block A8, and their motion information is determined according to the motion information of A8. The 2nd, 7th and 12th sub-regions correspond to the peripheral matching block A6, and their motion information is determined according to the motion information of A6. The 3rd and 8th sub-regions correspond to the peripheral matching block A4, and their motion information is determined according to the motion information of A4. The 4th sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. The 5th, 10th and 15th sub-regions correspond to the peripheral matching block A10, and their motion information is determined according to the motion information of A10. The 9th and 14th sub-regions correspond to the peripheral matching block A12, and their motion information is determined according to the motion information of A12. The 13th sub-region corresponds to the peripheral matching block A14, and its motion information is determined according to the motion information of A14.
Example 11: in the above embodiments 1 to 5, for the encoding end and the decoding end, the target prediction value of the sub-region may be determined according to the motion information of the sub-region, and in a possible implementation, the target prediction value of the sub-region may be determined directly according to the motion information of the sub-region. In another possible embodiment, the motion compensation value for the sub-region may be determined from the motion information of the sub-region. If the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow deviation value of the sub-area, and determining a target predicted value of the sub-area according to a forward motion compensation value in the motion compensation values of the sub-area, a backward motion compensation value in the motion compensation values of the sub-area and the bidirectional optical flow deviation value of the sub-area.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is between the two reference frames in temporal order, the sub-region satisfies the condition of using bidirectional optical flow. One piece of motion information in the bidirectional motion information is forward motion information, and the reference frame corresponding to the forward motion information is a forward reference frame; the other piece of motion information in the bidirectional motion information is backward motion information, and the reference frame corresponding to the backward motion information is a backward reference frame. The current frame in which the sub-region is located is between the forward reference frame and the backward reference frame in temporal order.
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the forward motion compensation value of the sub-region may be determined based on a forward reference frame corresponding to the forward motion information in the bidirectional motion information, the backward motion compensation value of the sub-region may be determined based on a backward reference frame corresponding to the backward motion information in the bidirectional motion information, and the forward motion compensation value and the backward motion compensation value of the sub-region may constitute the motion compensation value of the sub-region.
When determining the target prediction value of the sub-region, the target prediction value of the sub-region may be determined according to the forward motion compensation value of the sub-region, the backward motion compensation value of the sub-region, and the bi-directional optical flow offset value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition of using bidirectional optical flow. If the sub-area does not satisfy the condition of using the bidirectional optical flow, determining a target predicted value of the sub-area according to the motion compensation value of the sub-area without referring to the bidirectional optical flow offset value. When the target prediction value of the sub-region is determined, the motion compensation value of the sub-region is determined as the target prediction value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information, and the current frame where the sub-region is located is not located between two reference frames in the temporal sequence, the sub-region does not satisfy the condition of using bidirectional optical flow. If the sub-area does not satisfy the condition of using the bi-directional optical flow, the target prediction value of the sub-area can be determined according to the motion compensation value of the sub-area without referring to the bi-directional optical flow offset value. For convenience of distinguishing, one piece of motion information in the bidirectional motion information is recorded as first motion information, a reference frame corresponding to the first motion information is recorded as a first reference frame, the other piece of motion information in the bidirectional motion information is recorded as second motion information, and a reference frame corresponding to the second motion information is recorded as a second reference frame. Since the current frame where the sub-region is located is not located between two reference frames in the time sequence, the first reference frame and the second reference frame are both forward reference frames of the sub-region, or the first reference frame and the second reference frame are both backward reference frames of the sub-region.
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the first motion compensation value of the sub-region may be determined based on a first reference frame corresponding to first motion information in the bidirectional motion information, the second motion compensation value of the sub-region may be determined based on a second reference frame corresponding to second motion information in the bidirectional motion information, and the first motion compensation value and the second motion compensation value of the sub-region constitute the motion compensation value of the sub-region.
In determining the target prediction value for the sub-region, the target prediction value for the sub-region may be determined based on the first motion compensation value for the sub-region and the second motion compensation value for the sub-region without referring to the bi-directional optical flow offset value.
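As a compact illustration of the conditions described above, the following C sketch checks whether a sub-region may use bidirectional optical flow; the structure fields and the picture-order-count comparison are assumptions made for the example.

```c
#include <stdbool.h>

typedef struct {
    bool isBi;      /* bidirectional motion information?            */
    int  pocCur;    /* picture order count of the current frame     */
    int  pocRef0;   /* POC of the reference frame of the first MV   */
    int  pocRef1;   /* POC of the reference frame of the second MV  */
} SubRegionMotion;

static bool bio_condition_met(const SubRegionMotion *s)
{
    if (!s->isBi)
        return false;                    /* unidirectional: no BIO           */
    /* the current frame must lie between the two reference frames in time   */
    return (s->pocRef0 < s->pocCur && s->pocCur < s->pocRef1) ||
           (s->pocRef1 < s->pocCur && s->pocCur < s->pocRef0);
}
```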
Illustratively, obtaining the bi-directional optical flow offset value for the sub-region may include, but is not limited to: determining a first pixel value and a second pixel value according to the motion information of the sub-area; the first pixel value is a forward motion compensation value and a forward extension value for the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel location of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or is obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-regions. Then, a bi-directional optical-flow offset value for the sub-region is determined from the first pixel value and the second pixel value.
Example 12: the above embodiment 11 relates to performing bidirectional optical flow processing on the sub-regions of the current block, that is, after the motion compensation value of each sub-region is obtained, for each sub-region inside the current block that satisfies the condition of using bidirectional optical flow, a bidirectional optical flow offset value is superimposed on the motion compensation value of the sub-region using the bidirectional optical flow technique (BIO). For example, for each sub-region of the current block, if the sub-region satisfies the condition of using bidirectional optical flow, the forward motion compensation value and the backward motion compensation value of the sub-region are determined, and the target prediction value of the sub-region is determined according to the forward motion compensation value, the backward motion compensation value and the bidirectional optical flow offset value of the sub-region. Obtaining the bidirectional optical flow offset value of the sub-region can be realized through the following process:
Step e1, determining a first pixel value and a second pixel value according to the motion information of the sub-region.
Step e2, determining, according to the first pixel value and the second pixel value, an autocorrelation coefficient S1 of the horizontal direction gradient sum, a cross-correlation coefficient S2 of the horizontal direction gradient sum and the vertical direction gradient sum, a cross-correlation coefficient S3 of the temporal prediction value difference and the horizontal direction gradient sum, an autocorrelation coefficient S5 of the vertical direction gradient sum, and a cross-correlation coefficient S6 of the temporal prediction value difference and the vertical direction gradient sum.
For example, S1, S2, S3, S5 and S6 can be calculated using the following formulas, where the sums are taken over the pixel positions (i, j) covered by the first pixel value and the second pixel value (i.e., the sub-region and its extension):

S1 = Σ ψx(i, j) · ψx(i, j)

S2 = Σ ψx(i, j) · ψy(i, j)

S3 = Σ θ(i, j) · ψx(i, j)

S5 = Σ ψy(i, j) · ψy(i, j)

S6 = Σ θ(i, j) · ψy(i, j)
Exemplarily, ψx(i, j), ψy(i, j) and θ(i, j) can be calculated as follows:

ψx(i, j) = ∂I(1)(i, j)/∂x + ∂I(0)(i, j)/∂x

ψy(i, j) = ∂I(1)(i, j)/∂y + ∂I(0)(i, j)/∂y

θ(i, j) = I(1)(i, j) - I(0)(i, j)

I(0)(x, y) is the first pixel value, i.e., the forward motion compensation value of the sub-region and its forward extension value; I(1)(x, y) is the second pixel value, i.e., the backward motion compensation value of the sub-region and its backward extension value. Illustratively, the forward extension value may be copied from the forward motion compensation value or may be obtained from the reference pixel positions of the forward reference frame; the backward extension value may be copied from the backward motion compensation value or may be obtained from the reference pixel positions of the backward reference frame. The forward reference frame and the backward reference frame are determined according to the motion information of the sub-region.

ψx(i, j) represents the horizontal direction gradient sum, i.e., the rate of change, in the horizontal direction, of the pixel values at the corresponding positions of the forward and backward reference frames; ψy(i, j) represents the vertical direction gradient sum, i.e., the rate of change, in the vertical direction, of the pixel values at the corresponding positions of the forward and backward reference frames; θ(i, j) represents the pixel difference value of the corresponding positions of the forward reference frame and the backward reference frame, that is, θ(i, j) represents the temporal prediction value difference.
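The following C sketch is a rough, non-normative illustration of step e2: it computes ψx, ψy and θ from the extended forward and backward compensation values and accumulates S1, S2, S3, S5 and S6. The central-difference gradient, the summation window and all names are assumptions for this example.

```c
typedef struct { long s1, s2, s3, s5, s6; } BioSums;

/* I0, I1: extended forward/backward compensation values, w x h samples with
 * the given stride; only interior positions are used so the gradients exist. */
static BioSums bio_correlations(const int *I0, const int *I1,
                                int stride, int w, int h)
{
    BioSums s = {0, 0, 0, 0, 0};
    for (int j = 1; j < h - 1; j++) {
        for (int i = 1; i < w - 1; i++) {
            int p = j * stride + i;
            int gx0 = (I0[p + 1] - I0[p - 1]) >> 1;            /* horiz. gradients    */
            int gx1 = (I1[p + 1] - I1[p - 1]) >> 1;
            int gy0 = (I0[p + stride] - I0[p - stride]) >> 1;  /* vert. gradients     */
            int gy1 = (I1[p + stride] - I1[p - stride]) >> 1;
            int psi_x = gx1 + gx0;                             /* horiz. gradient sum */
            int psi_y = gy1 + gy0;                             /* vert. gradient sum  */
            int theta = I1[p] - I0[p];                         /* temporal difference */
            s.s1 += (long)psi_x * psi_x;
            s.s2 += (long)psi_x * psi_y;
            s.s3 += (long)theta * psi_x;
            s.s5 += (long)psi_y * psi_y;
            s.s6 += (long)theta * psi_y;
        }
    }
    return s;
}
```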
Step e3, determining the horizontal direction velocity vx (the horizontal direction velocity vx is also called the improved motion vector vx) according to the autocorrelation coefficient S1 and the cross-correlation coefficient S3; and determining the vertical direction velocity vy (the vertical direction velocity vy is also called the improved motion vector vy) according to the cross-correlation coefficient S2, the autocorrelation coefficient S5 and the cross-correlation coefficient S6.
For example, the horizontal direction velocity vx and the vertical direction velocity vy may be calculated using the following formulas:

vx = (S1 + r) > m ? clip3(-thBIO, thBIO, (S3 << 5) / (S1 + r)) : 0

vy = (S5 + r) > m ? clip3(-thBIO, thBIO, ((S6 << 6) - vx · S2) / ((S5 + r) << 1)) : 0
In the above formulas, m and thBIO are both thresholds that can be configured based on experience, and r is a regularization term used to avoid division by 0. clip3 indicates that vx is kept between -thBIO and thBIO, and that vy is kept between -thBIO and thBIO.

Exemplarily, if (S1 + r) > m is true, then vx = clip3(-thBIO, thBIO, (S3 << 5) / (S1 + r)); if (S1 + r) > m is not true, then vx = 0. thBIO is used to limit vx between -thBIO and thBIO, that is, vx is greater than or equal to -thBIO and less than or equal to thBIO. For vx, clip3(a, b, x) means: if x is less than a, then x = a; if x is greater than b, then x = b; otherwise x is unchanged. In the above formula, -thBIO is a, thBIO is b, and (S3 << 5) / (S1 + r) is x. In summary, if (S3 << 5) / (S1 + r) is greater than -thBIO and less than thBIO, then vx is (S3 << 5) / (S1 + r).

If (S5 + r) > m is true, then vy = clip3(-thBIO, thBIO, ((S6 << 6) - vx · S2) / ((S5 + r) << 1)); if (S5 + r) > m is not true, then vy = 0. thBIO is used to limit vy between -thBIO and thBIO, that is, vy is greater than or equal to -thBIO and less than or equal to thBIO. For vy, clip3(a, b, x) means: if x is less than a, then x = a; if x is greater than b, then x = b; otherwise x is unchanged. In the above formula, -thBIO is a, thBIO is b, and ((S6 << 6) - vx · S2) / ((S5 + r) << 1) is x. In summary, if ((S6 << 6) - vx · S2) / ((S5 + r) << 1) is greater than -thBIO and less than thBIO, then vy is ((S6 << 6) - vx · S2) / ((S5 + r) << 1).

Of course, the above is only one way to calculate vx and vy; other ways of calculating vx and vy may also be used, which is not limited herein.
Step e4, acquiring the bidirectional optical flow offset value b of the sub-region according to the horizontal direction velocity and the vertical direction velocity.
For example, one example of calculating the bidirectional optical flow offset value b of the sub-region based on the horizontal direction velocity, the vertical direction velocity, the first pixel value and the second pixel value is given by the following formula:

b = ( vx · (∂I(1)(x, y)/∂x - ∂I(0)(x, y)/∂x) + vy · (∂I(1)(x, y)/∂y - ∂I(0)(x, y)/∂y) ) / 2

In the above formula, (x, y) are the coordinates of each pixel inside the current block. Of course, the above formula is only an example of obtaining the bidirectional optical flow offset value b, and the bidirectional optical flow offset value b may also be calculated in other ways, which is not limited herein. I(0)(x, y) is the first pixel value, i.e., the forward motion compensation value and its forward extension value; I(1)(x, y) is the second pixel value, i.e., the backward motion compensation value and its backward extension value.
Step e5, determining the target prediction value of the sub-region according to the motion compensation value and the bidirectional optical flow offset value of the sub-region.
Illustratively, after determining the forward motion compensation value, the backward motion compensation value, and the bidirectional optical flow offset value of the sub-region, the target prediction value of the sub-region may be determined based on the forward motion compensation value, the backward motion compensation value, and the bidirectional optical flow offset value. For example, the target prediction value predBIO(x, y) of a pixel point (x, y) in the sub-region is determined based on the following formula: predBIO(x, y) = (I(0)(x, y) + I(1)(x, y) + b + 1) >> 1. In the above formula, I(0)(x, y) is the forward motion compensation value of the pixel point (x, y), and I(1)(x, y) is the backward motion compensation value of the pixel point (x, y).
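Putting steps e3 to e5 together, the following C sketch derives vx and vy from S1..S6, forms the optical flow offset for one pixel, and combines it with the forward and backward compensation values; the threshold constants, the fixed-point handling and the function name are assumptions, not the normative process.

```c
static int clip3(int a, int b, int x) { return x < a ? a : (x > b ? b : x); }

static int bio_predict_pixel(long S1, long S2, long S3, long S5, long S6,
                             int I0, int I1,        /* fwd/bwd compensation    */
                             int gx0, int gx1,      /* fwd/bwd horiz. gradient */
                             int gy0, int gy1)      /* fwd/bwd vert. gradient  */
{
    const int m = 2, r = 1, thBIO = 255;            /* illustrative constants  */

    int vx = (S1 + r) > m
           ? clip3(-thBIO, thBIO, (int)((S3 << 5) / (S1 + r))) : 0;
    int vy = (S5 + r) > m
           ? clip3(-thBIO, thBIO, (int)(((S6 << 6) - vx * S2) / ((S5 + r) << 1))) : 0;

    /* optical flow offset for this pixel: halved gradient differences         */
    int b = (vx * (gx1 - gx0) + vy * (gy1 - gy0)) >> 1;

    /* average of the two compensation values plus the offset, with rounding   */
    return (I0 + I1 + b + 1) >> 1;
}
```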
Example 13: the current block may use a motion information angle prediction mode (which may also be referred to as a motion vector angle prediction mode), that is, the motion compensation value of each sub-region of the current block is determined based on the motion information angle prediction mode; for the specific determination manner, see the above embodiments. If the current block uses the motion information angle prediction mode, the current block closes the decoding-end motion vector adjustment (DMVR) technique; or, if the current block uses the motion information angle prediction mode, the current block starts the decoding-end motion vector adjustment technique. If the current block uses the motion information angle prediction mode, the current block closes the bidirectional optical flow technique (BIO); or, if the current block uses the motion information angle prediction mode, the current block starts the bidirectional optical flow technique. Illustratively, the bidirectional optical flow technique superimposes an optical flow compensation value on the current block using the gradient information of the pixel values in the forward and backward reference frames. The principle of the decoding-end motion vector adjustment technique is to adjust the motion vector using a matching criterion between forward and backward reference pixel values. The following describes the combination of the motion information angle prediction mode, the decoding-end motion vector adjustment technique, and the bidirectional optical flow technique with reference to specific scenarios.
Application scenario 1: if the current block uses the motion information angle prediction mode, the current block may start the bi-directional optical flow technique, and the current block may close the motion vector adjustment technique at the decoding end. In this application scenario, a motion compensation value for each sub-region of the current block is determined based on the motion information angular prediction mode. Then, based on the bi-directional optical flow technique, the target prediction value of each sub-area of the current block is determined according to the motion compensation value of the sub-area, for example, if the sub-area satisfies the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area and the bi-directional optical flow offset value, and if the sub-area does not satisfy the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area, for example, refer to the above-mentioned embodiment. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 2: if the current block uses the motion information angle prediction mode, the current block may initiate a bi-directional optical flow technique, and the current block may initiate a decoding-side motion vector adjustment technique. In such an application scenario, original motion information of each sub-region of the current block is determined based on the motion information angular prediction mode (for convenience of distinction, the motion information determined based on the motion information angular prediction mode is referred to as original motion information). Then, based on the decoding-end motion vector adjustment technology, determining target motion information of each sub-region of the current block according to the original motion information of the sub-region, for example, if the sub-region meets the condition of using the decoding-end motion vector adjustment, adjusting the original motion information of the sub-region to obtain the adjusted target motion information, and if the sub-region does not meet the condition of using the decoding-end motion vector adjustment, using the original motion information of the sub-region as the target motion information. Then, a motion compensation value of each sub-region is determined according to the target motion information of the sub-region. Then, based on the bi-directional optical flow technique, a target prediction value of each sub-area of the current block is determined according to the motion compensation value of the sub-area, for example, if the sub-area satisfies the condition of using the bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area and the bi-directional optical flow offset value, and if the sub-area does not satisfy the condition of using the bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 3: if the current block uses the motion information angle prediction mode, the current block can close the bidirectional optical flow technology, and the current block can start the motion vector adjustment technology of the decoding end. In this application scenario, the original motion information of each sub-region of the current block is determined based on the motion information angular prediction mode. Then, based on the decoding-end motion vector adjustment technology, determining target motion information of each sub-region of the current block according to the original motion information of the sub-region, for example, if the sub-region meets the condition of using the decoding-end motion vector adjustment, adjusting the original motion information of the sub-region to obtain the adjusted target motion information, and if the sub-region does not meet the condition of using the decoding-end motion vector adjustment, using the original motion information of the sub-region as the target motion information. Then, a target prediction value of each sub-area is determined according to the target motion information of the sub-area, and the process does not need to consider a bidirectional optical flow technology. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 4: if the current block uses the motion information angle prediction mode, the current block may disable the bi-directional optical flow technique, and the current block may disable the motion vector adjustment technique at the decoding end. In the application scenario, the motion information of each sub-region of the current block is determined based on the motion information angle prediction mode, the target prediction value of each sub-region is determined according to the motion information of each sub-region, and the prediction value of the current block is determined according to the target prediction value of each sub-region. In the above process, the motion vector adjustment technique at the decoding end and the bidirectional optical flow technique do not need to be considered.
In application scenarios 2 and 3, for each sub-region of the current block, DMVR may be used for the sub-regions that satisfy the condition of using the decoding-end motion vector adjustment technique. For example, if the motion information of the sub-region is bidirectional motion information, the current frame where the sub-region is located is between the two reference frames (i.e., the forward reference frame and the backward reference frame) in temporal order, and the distance between the current frame and the forward reference frame is the same as the distance between the backward reference frame and the current frame, the sub-region satisfies the condition of using the decoding-end motion vector adjustment. If the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition of using the decoding-end motion vector adjustment. If the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not between the two reference frames in temporal order, the sub-region does not satisfy the condition of using the decoding-end motion vector adjustment. If the motion information of the sub-region is bidirectional motion information, the current frame where the sub-region is located is between the two reference frames (i.e., the forward reference frame and the backward reference frame) in temporal order, but the distance between the current frame and the forward reference frame is different from the distance between the backward reference frame and the current frame, the sub-region does not satisfy the condition of using the decoding-end motion vector adjustment.
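A small C sketch of this eligibility check follows; the parameter names and the use of picture order counts for the temporal distances are assumptions made for illustration.

```c
#include <stdbool.h>

/* Returns true when the sub-region may use decoding-end motion vector
 * adjustment: bidirectional motion information, the current frame between the
 * forward and backward reference frames, and equal temporal distances.       */
static bool dmvr_condition_met(bool isBi, int pocCur, int pocFwd, int pocBwd)
{
    if (!isBi)
        return false;                                /* unidirectional        */
    if (!(pocFwd < pocCur && pocCur < pocBwd))
        return false;                                /* not between the refs  */
    return (pocCur - pocFwd) == (pocBwd - pocCur);   /* equal distances       */
}
```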
In application scenarios 2 and 3, a decoding-end motion vector adjustment technique needs to be started, where the decoding-end motion vector adjustment technique adjusts a motion vector according to a matching criterion of forward and backward reference pixel values, and the decoding-end motion vector adjustment technique can be applied to a direct mode or a skip mode, and an implementation process of the decoding-end motion vector adjustment technique can be as follows:
a) Acquire the reference pixels needed for the prediction block and the search area by using the initial motion vectors.
b) Obtain the optimal integer pixel position. Illustratively, the luminance block of the current block is divided into non-overlapping, adjacent sub-regions, and the initial motion vectors of all the sub-regions are MV0 and MV1. For each sub-region, taking the positions corresponding to the initial MV0 and MV1 as centers, a position with the minimum template matching distortion is searched for within a certain nearby range. The template matching distortion is calculated as the SAD value between a block of the sub-region width multiplied by the sub-region height starting at the center position in the forward search area and a block of the sub-region width multiplied by the sub-region height starting at the center position in the backward search area.
c) Obtain the optimal sub-pixel position. The sub-pixel position is determined using the template matching distortion values at five positions: the optimal integer position, its left side, its right side, its upper side and its lower side. A quadratic distortion surface is estimated near the optimal integer position, and the position with the minimum distortion in the distortion surface is calculated and used as the sub-pixel position. For example, the horizontal sub-pixel position and the vertical sub-pixel position are calculated according to the template matching distortion values at these five positions; the following formulas are one example of calculating the horizontal sub-pixel position and the vertical sub-pixel position:
Horizontal sub-pixel position = (sad_left - sad_right) × N / ((sad_right + sad_left - 2 × sad_mid) × 2)

Vertical sub-pixel position = (sad_btm - sad_top) × N / ((sad_top + sad_btm - 2 × sad_mid) × 2)
Illustratively, sad_mid, sad_left, sad_right, sad_top and sad_btm are the template matching distortion values at the five positions of the optimal integer position, its left side, its right side, its upper side and its lower side, respectively, and N is the precision.
Of course, the above is only an example of calculating the horizontal sub-pixel position and the vertical sub-pixel position; they may also be calculated in other manners according to the template matching distortion values at the five positions of the optimal integer position, its left side, its right side, its upper side and its lower side, which is not limited herein, as long as the horizontal sub-pixel position and the vertical sub-pixel position are calculated with reference to these parameters.
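The sub-pixel formulas above can be captured by a single helper; the following C sketch is illustrative only, and the guard against a zero denominator as well as the names are assumptions.

```c
/* One-dimensional sub-pixel offset from three template matching distortion
 * values (the two neighbours and the centre), scaled by the precision N.     */
static int subpel_offset(int sad_minus, int sad_plus, int sad_mid, int N)
{
    int denom = (sad_minus + sad_plus - 2 * sad_mid) * 2;
    if (denom == 0)
        return 0;                    /* flat distortion surface: no offset    */
    return (sad_minus - sad_plus) * N / denom;
}

/* usage: horizontal = subpel_offset(sad_left, sad_right, sad_mid, N);
 *        vertical   = subpel_offset(sad_btm,  sad_top,   sad_mid, N);        */
```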
d) Calculate the final prediction value according to the optimal MV.
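By way of illustration only, the following Python sketch mirrors the distortion measure of step b) and the sub-pixel estimate of step c). It is a minimal, non-normative example: the function names, the argument layout, and the handling of a zero denominator are assumptions made here and are not specified above.

def sad(block_fwd, block_bwd):
    # sum of absolute differences between two equally sized blocks,
    # used as the template matching distortion in step b)
    return sum(abs(a - b)
               for row_f, row_b in zip(block_fwd, block_bwd)
               for a, b in zip(row_f, row_b))

def subpixel_offsets(sad_mid, sad_left, sad_right, sad_top, sad_btm, precision_n):
    # parabolic sub-pixel offsets from the five distortion values around the
    # optimal integer position, following the formulas given above
    denom_h = (sad_right + sad_left - 2 * sad_mid) * 2
    denom_v = (sad_top + sad_btm - 2 * sad_mid) * 2
    horizontal = (sad_left - sad_right) * precision_n / denom_h if denom_h else 0
    vertical = (sad_btm - sad_top) * precision_n / denom_v if denom_v else 0
    return horizontal, vertical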
For example, Embodiments 1 to 13 may be implemented individually or in combination. For example, Embodiment 1 may be combined with any one of Embodiments 4 to 13; Embodiment 2 may be combined with Embodiment 3, or with any one of Embodiments 5 to 13; and Embodiments 5 to 13 may be combined with one another arbitrarily, and the like. Of course, the above is only an example, and the manner of combination is not limited thereto.
Based on the same application concept as the method, an embodiment of the present application provides a decoding apparatus applied to a decoding end, as shown in fig. 6A, which is a structural diagram of the apparatus, and the apparatus includes:
a constructing module 611, configured to construct a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is different, adding the motion information angular prediction mode to the motion information prediction mode candidate list; a selecting module 612, configured to select a target motion information prediction mode of the current block from the motion information prediction mode candidate list; a filling module 613, configured to fill motion information that is unavailable in the peripheral blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode; a determining module 614, configured to determine motion information of the current block according to motion information of a peripheral matching block pointed by a preconfigured angle of the target motion information angle prediction mode, and determine a prediction value of the current block according to the motion information of the current block;
Wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
In a possible implementation, the building module 611 is specifically configured to: refraining from adding the motion information angular prediction mode to the motion information prediction mode candidate list if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
In a possible implementation, the building module 611 is specifically configured to: if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is the same, then refraining from adding the motion information angular prediction mode to the motion information prediction mode candidate list.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially; the building module 611 is specifically configured to: if the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is different.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed in sequence; the building module 611 is specifically configured to: if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially; the building module 611 is specifically configured to: if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially; the building module 611 is specifically configured to: if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is different.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially; the building module 611 is specifically configured to: if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The at least two peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially; the building module 611 is specifically configured to: if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information; adding the motion information angular prediction mode to the motion information prediction mode candidate list or prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list if there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block.
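Purely as an illustration of the decision flow shared by the variants above, the following Python sketch walks the peripheral matching blocks pointed to by one preconfigured angle in traversal order. The helper names has_motion_info and motion_info are assumptions for illustration, and the behaviour chosen when no pair settles the question is only one of the options the description allows.

def maybe_add_angle_mode(mode, matching_blocks, candidate_list,
                         has_motion_info, motion_info):
    # matching_blocks: the peripheral matching blocks pointed to by the
    # preconfigured angle of `mode`, in traversal order (first, second, third, ...)
    for first, second in zip(matching_blocks, matching_blocks[1:]):
        if has_motion_info(first) and has_motion_info(second):
            if motion_info(first) != motion_info(second):
                # both blocks have available and different motion information:
                # add the angle mode to the candidate list and stop
                candidate_list.append(mode)
                return
            # both available but identical: check the next pair
        # otherwise at least one block is unavailable: check the next pair
    # no pair of available, differing blocks was found; this sketch simply
    # does not add the mode (the description above allows either choice when
    # the last pair is also unavailable)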
The building module 611 is specifically configured to, when determining whether any peripheral matching block has available motion information:
if the peripheral matching block is an inter-frame coded block, determining that available motion information exists in the peripheral matching block;
if the prediction mode of the peripheral matching block is the intra block copy mode, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra-frame block, determining that no available motion information exists in the peripheral matching block.
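A compact, non-normative sketch of these four availability rules is given below; the attribute names (outside_current_picture, prediction_mode, and so on) are assumptions chosen for illustration rather than fields defined by this application.

def has_available_motion_info(block):
    # blocks outside the image or outside the image slice where the current
    # block is located have no available motion information
    if block.outside_current_picture or block.outside_current_slice:
        return False
    # intra-frame blocks and intra block copy blocks have no available
    # motion information
    if block.prediction_mode in ("intra", "intra_block_copy"):
        return False
    # inter-frame coded blocks have available motion information
    return block.prediction_mode == "inter"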
If the peripheral matching block is located on the right side or the lower side outside the current block, the building module 611 determines the motion information of the peripheral matching block by using the following method: determining a reference frame corresponding to the current frame where the current block is located; selecting a reference matching block corresponding to the position of the peripheral matching block from the reference frame; and determining the motion information of the peripheral matching block according to the motion information of the reference matching block.
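For the right-side and lower-side case just described, the derivation can be pictured with the following sketch. The method reference_frame.block_at and the direct reuse of the reference matching block's motion information are assumptions made for illustration, since the text only states that the peripheral block's motion information is determined according to that of the reference matching block.

def derive_peripheral_motion_info(peripheral_block_pos, reference_frame):
    # select, from the reference frame corresponding to the current frame,
    # the reference matching block at the same position as the peripheral
    # matching block located to the right of or below the current block
    reference_matching_block = reference_frame.block_at(peripheral_block_pos)
    # determine the peripheral matching block's motion information according
    # to the reference matching block's motion information (directly reused
    # in this sketch)
    return reference_matching_block.motion_info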
The filling module 613 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block until a peripheral block with available motion information is traversed; if a first peripheral block without available motion information is located before that peripheral block, filling the first peripheral block with the motion information of that peripheral block; and continuing to traverse the peripheral blocks after that peripheral block, and if the peripheral blocks after that peripheral block comprise a second peripheral block without available motion information, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
The filling module 613 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the right-side peripheral blocks to the lower-side peripheral blocks of the current block, or from the lower-side peripheral blocks to the right-side peripheral blocks of the current block, until a peripheral block with available motion information is traversed; if a first peripheral block without available motion information is located before that peripheral block, filling the first peripheral block with the motion information of that peripheral block; and continuing to traverse the peripheral blocks after that peripheral block, and if the peripheral blocks after that peripheral block comprise a second peripheral block without available motion information, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
The filling module 613 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the right-side peripheral blocks to the lower-side peripheral blocks of the current block, or from the lower-side peripheral blocks to the right-side peripheral blocks of the current block, so as to traverse the first peripheral block; if the first peripheral block does not have available motion information, filling motion information into the first peripheral block; and continuing to traverse the peripheral blocks after the first peripheral block, and if the peripheral blocks after the first peripheral block comprise a third peripheral block without available motion information, filling the third peripheral block with the motion information of the peripheral block traversed immediately before the third peripheral block.
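The filling variants above share one propagation idea, sketched below in Python. The available flag and motion_info field are assumed names, and the case in which no peripheral block has available motion information is left unhandled because the text above does not specify it.

def fill_unavailable_motion_info(blocks):
    # `blocks` holds the peripheral blocks of the current block in the
    # traversal order described above (for example, left side then upper side)
    first = next((i for i, b in enumerate(blocks) if b.available), None)
    if first is None:
        return  # no peripheral block has available motion information
    # fill every peripheral block before the first available one with the
    # motion information of that first available peripheral block
    for b in blocks[:first]:
        b.motion_info = blocks[first].motion_info
        b.available = True
    # fill every later unavailable block with the motion information of the
    # peripheral block traversed immediately before it
    for prev, cur in zip(blocks[first:], blocks[first + 1:]):
        if not cur.available:
            cur.motion_info = prev.motion_info
            cur.available = True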
Based on the same application concept as the method, an embodiment of the present application provides an encoding apparatus applied to an encoding end, as shown in fig. 6B, which is a structural diagram of the apparatus, and the apparatus includes:
a constructing module 621, configured to construct a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is different, adding the motion information angular prediction mode to the motion information prediction mode candidate list; a filling module 622, configured to fill, if a motion information angle prediction mode exists in the motion information prediction mode candidate list, motion information that is not available in the neighboring blocks of the current block; a determining module 623, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a peripheral matching block pointed by a preconfigured angle of the motion information angle prediction mode; determining a prediction value of the current block according to the motion information of the current block;
Wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
If the peripheral matching block is located on the right side or the lower side outside the current block, the constructing module 621 is specifically configured to determine the motion information of the peripheral matching block by using the following method: determining a reference frame corresponding to the current frame where the current block is located; selecting a reference matching block corresponding to the position of the peripheral matching block from the reference frame; and determining the motion information of the peripheral matching block according to the motion information of the reference matching block.
The filling module 622 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block until a peripheral block with available motion information is traversed; if a first peripheral block without available motion information is located before that peripheral block, filling the first peripheral block with the motion information of that peripheral block; and continuing to traverse the peripheral blocks after that peripheral block, and if the peripheral blocks after that peripheral block comprise a second peripheral block without available motion information, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
The filling module 622 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the right-side peripheral blocks to the lower-side peripheral blocks of the current block, or from the lower-side peripheral blocks to the right-side peripheral blocks of the current block, until a peripheral block with available motion information is traversed; if a first peripheral block without available motion information is located before that peripheral block, filling the first peripheral block with the motion information of that peripheral block; and continuing to traverse the peripheral blocks after that peripheral block, and if the peripheral blocks after that peripheral block comprise a second peripheral block without available motion information, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
The filling module 622 is specifically configured to: traversing the peripheral blocks of the current block in a traversal order from the right-side peripheral blocks to the lower-side peripheral blocks of the current block, or from the lower-side peripheral blocks to the right-side peripheral blocks of the current block, so as to traverse the first peripheral block; if the first peripheral block does not have available motion information, filling motion information into the first peripheral block; and continuing to traverse the peripheral blocks after the first peripheral block, and if the peripheral blocks after the first peripheral block comprise a third peripheral block without available motion information, filling the third peripheral block with the motion information of the peripheral block traversed immediately before the third peripheral block.
As for a decoding-end device (such as a video decoder) provided in the embodiment of the present application, from a hardware level, a schematic diagram of a hardware architecture of the decoding-end device may specifically refer to fig. 6C. The device comprises: a processor 631 and a machine-readable storage medium 632, the machine-readable storage medium 632 storing machine-executable instructions executable by the processor 631; the processor 631 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 631 is configured to execute machine-executable instructions to implement the following steps:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block;
wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; diagonal down-right motion information angle prediction mode.
In terms of hardware, a schematic diagram of a hardware architecture of the encoding-end device (such as a video encoder) provided in the embodiment of the present application may specifically refer to fig. 6D. The device comprises: a processor 641 and a machine-readable storage medium 642, the machine-readable storage medium 642 storing machine-executable instructions executable by the processor 641; the processor 641 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 641 is configured to execute machine-executable instructions to perform the following steps:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of the current block according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode;
Determining a prediction value of the current block according to the motion information of the current block;
wherein the motion information angle prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; a vertical rightward motion information angle prediction mode; a horizontal right motion information angle prediction mode; a vertical downward motion information angle prediction mode; and a diagonal down-right motion information angle prediction mode.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various units may be implemented in the same one or more pieces of software and/or hardware in the practice of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (22)

1. A decoding method, applied to a decoding side, the method comprising:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block;
wherein the motion information angle prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angle prediction mode.
2. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
refraining from adding the motion information angular prediction mode to the motion information prediction mode candidate list if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
3. The method of claim 1,
the constructing of the motion information prediction mode candidate list of the current block includes:
if there is available motion information for both the first and second peripheral matching blocks and the motion information for the first and second peripheral matching blocks are the same, then refraining from adding the motion information angular prediction mode to the motion information prediction mode candidate list.
4. The method of claim 1, wherein the at least two perimeter matching blocks include at least a first perimeter matching block, a second perimeter matching block, and a third perimeter matching block to be traversed in sequence;
the constructing of the motion information prediction mode candidate list of the current block includes:
if the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block are different.
5. The method according to claim 1, characterized in that said at least two peripheral matching blocks comprise at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed in sequence;
the constructing of the motion information prediction mode candidate list of the current block includes:
if the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block are the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block have available motion information;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list when the motion information in the second peripheral matching block and the third peripheral matching block is the same.
6. The method according to claim 1, characterized in that said at least two peripheral matching blocks comprise at least a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed in sequence;
the constructing of the motion information prediction mode candidate list of the current block includes:
If there is available motion information for both the first and second peripheral matching blocks and the motion information for the first and second peripheral matching blocks are different, adding the motion information angular prediction mode to the motion information prediction mode candidate list.
7. The method of claim 1, wherein the at least two perimeter matching blocks include at least a first perimeter matching block, a second perimeter matching block, and a third perimeter matching block to be traversed in sequence;
the constructing of the motion information prediction mode candidate list of the current block includes:
if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list when the motion information of the second peripheral matching block and the third peripheral matching block is different.
8. The method of claim 1, wherein the at least two perimeter matching blocks include at least a first perimeter matching block, a second perimeter matching block, and a third perimeter matching block to be traversed in sequence;
The constructing of the motion information prediction mode candidate list of the current block includes:
if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list when the motion information in the second peripheral matching block and the third peripheral matching block is the same.
9. The method of claim 1, wherein the at least two perimeter matching blocks include at least a first perimeter matching block, a second perimeter matching block, and a third perimeter matching block to be traversed in sequence;
the constructing of the motion information prediction mode candidate list of the current block includes:
if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information;
Refraining from adding the motion information angular prediction mode to the motion information prediction mode candidate list if there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block.
10. The method according to any one of claims 1 to 9,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is an inter-frame coded block, determining that available motion information exists in the peripheral matching block;
if the prediction mode of the peripheral matching block is the intra block copy mode, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra-frame block, determining that no available motion information exists in the peripheral matching block.
11. The method of claim 1,
the padding motion information that is not available in surrounding blocks of the current block comprises:
if a second peripheral block without available motion information is included in the peripheral blocks after the peripheral block, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
12. The method of claim 1, wherein determining the motion information of the current block according to the motion information of the peripheral matching block pointed to by the pre-configured angle of the target motion information angular prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region of the current block, determining motion information of the sub-region according to motion information of a peripheral matching block pointed by a pre-configured angle of the target motion information angle prediction mode;
the determining the prediction value of the current block according to the motion information of the current block comprises:
and aiming at each sub-area of the current block, determining a target prediction value of the sub-area according to the motion information of the sub-area, and determining the prediction value of the current block according to the target prediction value of each sub-area.
13. The method of claim 12,
the determining the target prediction value of the sub-region according to the motion information of the sub-region includes:
determining a motion compensation value of the sub-area according to the motion information of the sub-area;
if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area;
And determining a target predicted value of the sub-area according to a forward motion compensation value in the motion compensation values of the sub-area, a backward motion compensation value in the motion compensation values of the sub-area and a bidirectional optical flow offset value of the sub-area.
14. The method of claim 13,
after determining the motion compensation value of the sub-region according to the motion information of the sub-region, the method further includes:
if the sub-area does not meet the condition of using the bidirectional optical flow, determining a target prediction value of the sub-area according to the motion compensation value of the sub-area;
if the motion information of the sub-area is unidirectional motion information, the sub-area does not meet the condition of using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between two reference frames in the time sequence, the sub-region does not satisfy the condition of using bidirectional optical flow;
and if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is located between two reference frames in the time sequence, the sub-region meets the condition of using bidirectional optical flow.
15. An encoding method applied to an encoding end, the method comprising:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in the peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of the current block according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode;
Determining a prediction value of the current block according to the motion information of the current block;
wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angle prediction mode.
16. The method of claim 15,
the padding motion information that is not available in surrounding blocks of the current block comprises:
if a second peripheral block without available motion information is included in the peripheral blocks after the peripheral block, filling the second peripheral block with the motion information of the peripheral block traversed immediately before the second peripheral block.
17. The method of claim 15, wherein the determining the motion information of the current block according to the motion information of the neighboring matching block pointed to by the pre-configured angle of the motion information angular prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region of the current block, determining motion information of the sub-region according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode;
The determining the prediction value of the current block according to the motion information of the current block includes:
and aiming at each sub-area of the current block, determining a target predicted value of the sub-area according to the motion information of the sub-area, and determining the predicted value of the current block according to the target predicted value of each sub-area.
18. The method of claim 17,
the determining the target prediction value of the sub-region according to the motion information of the sub-region includes:
determining a motion compensation value of the sub-area according to the motion information of the sub-area;
if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow deviant of the sub-area;
and determining a target predicted value of the sub-area according to a forward motion compensation value in the motion compensation values of the sub-area, a backward motion compensation value in the motion compensation values of the sub-area and a bidirectional optical flow offset value of the sub-area.
19. A decoding apparatus, applied to a decoding side, the apparatus comprising:
a construction module for constructing a motion information prediction mode candidate list of a current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
A selection module for selecting a target motion information prediction mode for the current block from the motion information prediction mode candidate list;
a filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode;
the determining module is used for determining the motion information of the current block according to the motion information of the peripheral matching block pointed by the pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block;
wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angle prediction mode.
20. An encoding apparatus applied to an encoding side, the apparatus comprising:
a construction module for constructing a motion information prediction mode candidate list of a current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if there is available motion information in both the first and second peripheral matching blocks and the motion information in the first and second peripheral matching blocks is different, adding the motion information angular prediction mode to the motion information prediction mode candidate list;
A filling module, configured to fill motion information that is unavailable in neighboring blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a peripheral matching block pointed by a preconfigured angle of the motion information angle prediction mode; determining a prediction value of the current block according to the motion information of the current block;
wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angular prediction mode.
21. A decoding-side apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
Constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling unavailable motion information in the peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a peripheral matching block pointed by a pre-configured angle of the target motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block;
Wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angle prediction mode.
22. An encoding side apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of the current block; when a motion information prediction mode candidate list of a current block is constructed, aiming at any motion information angle prediction mode of the current block, selecting at least two peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode, wherein the at least two peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed; if the available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to the motion information prediction mode candidate list;
If the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling unavailable motion information in peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of the current block according to motion information of a peripheral matching block pointed by a pre-configured angle of the motion information angle prediction mode;
determining a prediction value of the current block according to the motion information of the current block;
wherein the motion information angular prediction mode of the current block comprises at least one of: horizontal leftward motion information angle prediction mode; a vertical upward motion information angle prediction mode; a horizontal upward motion information angle prediction mode; a horizontal downward motion information angle prediction mode; vertical right motion information angle prediction mode.
CN202010508179.7A 2020-06-05 2020-06-05 Decoding and encoding method, device and equipment Active CN113766234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010508179.7A CN113766234B (en) 2020-06-05 2020-06-05 Decoding and encoding method, device and equipment

Publications (2)

Publication Number Publication Date
CN113766234A CN113766234A (en) 2021-12-07
CN113766234B true CN113766234B (en) 2022-12-23

Family

ID=78785206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508179.7A Active CN113766234B (en) 2020-06-05 2020-06-05 Decoding and encoding method, device and equipment

Country Status (1)

Country Link
CN (1) CN113766234B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017537529A (en) * 2014-10-31 2017-12-14 サムスン エレクトロニクス カンパニー リミテッド Video encoding apparatus and video decoding apparatus using high-precision skip encoding, and method thereof
CN109104609A (en) * 2018-09-12 2018-12-28 浙江工业大学 A kind of lens boundary detection method merging HEVC compression domain and pixel domain
CN110225346A (en) * 2018-12-28 2019-09-10 杭州海康威视数字技术股份有限公司 A kind of decoding method and its equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant