CN113709487A - Encoding and decoding method, device and equipment - Google Patents

Encoding and decoding method, device and equipment

Info

Publication number: CN113709487A (granted as CN113709487B)
Application number: CN202111153142.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: motion information, sub-block, prediction mode, peripheral
Inventors: 方树清, 陈方栋, 王莉
Assignee (original and current): Hangzhou Hikvision Digital Technology Co Ltd
Legal status: Active (application granted)

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 — using adaptive coding
    • H04N19/169 — adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object of the adaptive coding
    • H04N19/17 — the unit being an image region, e.g. an object
    • H04N19/176 — the region being a block, e.g. a macroblock
    • H04N19/44 — Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/50 — using predictive coding
    • H04N19/503 — involving temporal prediction
    • H04N19/51 — Motion estimation or motion compensation
    • H04N19/513 — Processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present application provides an encoding and decoding method, apparatus, and device. The method comprises the following steps: if the target motion information prediction mode of the current block is a motion information angle prediction mode, dividing the current block into at least one sub-region; for each sub-region of the current block, determining the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode; determining a motion compensation value of the sub-region according to the motion information of the sub-region; if the sub-region satisfies the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target prediction value of the sub-region according to the forward motion compensation value among the motion compensation values of the sub-region, the backward motion compensation value among the motion compensation values, and the bidirectional optical flow offset value of the sub-region; and determining the prediction value of the current block according to the target prediction value of each sub-region. This scheme improves coding performance.

Description

Encoding and decoding method, device and equipment
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to an encoding and decoding method, apparatus, and device.
Background
To save bandwidth, video images are encoded before transmission. A complete video encoding pipeline may include prediction, transform, quantization, entropy coding, filtering, and other processes. Predictive coding includes intra-frame coding and inter-frame coding. Inter-frame coding exploits the temporal correlation of video: pixels of the current image are predicted from pixels of neighboring coded images, effectively removing temporal redundancy. In inter-frame coding, a motion vector represents the relative displacement between the current image block of the current frame and a reference image block of a reference frame. For example, if video image A of the current frame has a strong temporal correlation with video image B of a reference frame, then when image block A1 (the current block) of image A needs to be transmitted, a motion search can be performed in image B to find the image block B1 (the reference block) that best matches A1; the relative displacement between image block A1 and image block B1 is the motion vector of image block A1.
In the prior art, the current coding unit is not divided into blocks; instead, a single piece of motion information is determined for the whole coding unit by signaling a motion information index or a difference information index. Because all sub-blocks in the current coding unit share this one piece of motion information, for small moving objects the best motion information can only be obtained after the coding unit is split into blocks. Splitting the current coding unit into multiple sub-blocks, however, incurs additional bit overhead.
Disclosure of Invention
The present application provides an encoding and decoding method, apparatus, and device that can improve coding performance.
The application provides a coding and decoding method, which comprises the following steps:
if the target motion information prediction mode of the current block is a motion information angle prediction mode, dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-region satisfies the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target prediction value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region;
and determining the prediction value of the current block according to the target prediction value of each sub-region.
The present application provides a coding and decoding device, the device includes:
a first determining module, configured to divide the current block into at least one sub-region if a target motion information prediction mode of the current block is a motion information angle prediction mode; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
a second determining module, configured to determine a motion compensation value of the sub-region according to the motion information of the sub-region;
an obtaining module, configured to obtain a bidirectional optical flow offset value of the sub-region if the sub-region satisfies the condition for using bidirectional optical flow, and to determine a target prediction value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region;
and a third determining module, configured to determine the prediction value of the current block according to the target prediction value of each sub-region.
The present application provides a decoding-side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is a motion information angle prediction mode, dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-region satisfies the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target prediction value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region;
and determining the prediction value of the current block according to the target prediction value of each sub-region.
The present application provides an encoding-side device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute the machine-executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is a motion information angle prediction mode, dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-region satisfies the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target prediction value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region;
and determining the prediction value of the current block according to the target prediction value of each sub-region.
According to the above technical solutions, the current block does not need to be divided, which effectively avoids the bit overhead caused by sub-block division. For example, motion information is provided for each sub-region of the current block without splitting the current block into sub-blocks, and different sub-regions of the current block may correspond to the same or different motion information. This improves coding performance, avoids transmitting a large amount of motion information, and saves a large number of bits.
Drawings
FIG. 1 is a schematic diagram of a video coding framework in one embodiment of the present application;
FIGS. 2A-2B are schematic diagrams illustrating the partitioning of a current block according to an embodiment of the present application;
FIG. 3 is a schematic view of several sub-regions in one embodiment of the present application;
FIG. 4 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 5A and 5B are schematic diagrams of a motion information angle prediction mode in one embodiment of the present application;
FIG. 6 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 7 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 8A-8C are schematic diagrams of peripheral blocks of a current block in one embodiment of the present application;
FIGS. 9A-9N are diagrams of motion compensation in one embodiment of the present application;
FIG. 10 is a structural diagram of a codec apparatus according to an embodiment of the present application;
FIG. 11A is a hardware configuration diagram of a decoding-side device according to an embodiment of the present application;
FIG. 11B is a hardware configuration diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items. It should be understood that, although the terms first, second, etc. may be used herein to describe various information, the information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The embodiment of the application provides a coding and decoding method, a coding and decoding device and equipment thereof, which can relate to the following concepts:
motion Vector (MV): in inter coding, a motion vector is used to represent the relative displacement between the current block of the current frame image and the reference block of the reference frame image, for example, there is a strong temporal correlation between image a of the current frame and image B of the reference frame, when image block a1 (current block) of image a is transmitted, a motion search can be performed in image B to find image block B1 (reference block) that best matches image block a1, and to determine the relative displacement between image block a1 and image block B1, which is also the motion vector of image block a 1. Each divided image block has a corresponding motion vector transmitted to a decoding side, and if the motion vector of each image block is independently encoded and transmitted, especially divided into a large number of image blocks of small size, a considerable number of bits are consumed. In order to reduce the bit number used for encoding the motion vector, the spatial correlation between adjacent image blocks can be utilized, the motion vector of the current image block to be encoded is predicted according to the motion vector of the adjacent encoded image block, and then the prediction difference is encoded, so that the bit number representing the motion vector can be effectively reduced. In the process of encoding the Motion Vector of the current block, the Motion Vector of the current block can be predicted by using the Motion Vector of the adjacent encoded block, and then the Difference value (MVD) between the predicted value (MVP) of the Motion Vector and the true estimate value of the Motion Vector can be encoded, thereby effectively reducing the encoding bit number of the Motion Vector.
Motion Information: to accurately identify the reference block, index information of the reference frame is required in addition to the motion vector, to indicate which reference frame is used. In video coding, a reference frame list is usually established for the current frame, and the reference frame index indicates which reference frame in the list the current block uses. Many coding techniques also support multiple reference frame lists, so an additional index value, which may be referred to as the reference direction, may be used to indicate which reference frame list is used. Motion-related information such as the motion vector, reference frame index, and reference direction may be collectively referred to as motion information.
Rate-Distortion Optimization (RDO): there are two major indicators for evaluating coding efficiency: bit rate and Peak Signal-to-Noise Ratio (PSNR). The smaller the bit stream, the higher the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a joint evaluation of the two. For example, the cost of a mode is J(mode) = D + λ·R, where D denotes distortion, usually measured by SSE, the sum of squared differences between the reconstructed image block and the source image block; λ is the Lagrange multiplier; and R is the actual number of bits required to encode the image block in this mode, including the bits required for the mode information, motion information, residual, and so on.
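As a minimal sketch of this cost function (the sample values, bit counts, and λ below are made-up illustrations, not values from the patent):

```python
def sse(reconstructed, source):
    """Sum of squared errors between reconstructed and source samples."""
    return sum((r - s) ** 2 for r, s in zip(reconstructed, source))

def rd_cost(distortion, bits, lmbda):
    """J(mode) = D + lambda * R, the cost used for mode selection."""
    return distortion + lmbda * bits

# Mode selection picks the candidate with the smallest joint cost:
# a slightly worse reconstruction can win if it is much cheaper to code.
modes = [
    {"name": "mode_a", "d": sse([10, 12, 9], [10, 11, 10]), "r": 8},
    {"name": "mode_b", "d": sse([10, 11, 10], [10, 11, 10]), "r": 20},
]
best = min(modes, key=lambda m: rd_cost(m["d"], m["r"], lmbda=0.5))
# mode_a costs 2 + 0.5*8 = 6; mode_b costs 0 + 0.5*20 = 10
```

Here λ trades distortion against rate: a larger λ pushes the decision toward cheaper modes.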
Intra prediction and inter prediction: intra prediction performs predictive coding using the reconstructed pixel values of spatially neighboring blocks of the current block (i.e., blocks in the same frame as the current block). Inter prediction performs predictive coding using the reconstructed pixel values of temporally neighboring blocks (i.e., blocks in frames other than the current frame). Inter prediction exploits the temporal correlation of video: because a video sequence contains strong temporal correlation, pixels of the current image are predicted from pixels of neighboring coded images, effectively removing temporal redundancy.
Video coding framework: referring to FIG. 1, a schematic diagram of a video encoding framework is shown; this framework can be used to implement the encoding-side processing flow of the embodiments of the present application. The video decoding framework is similar to FIG. 1 and is not described again; it can be used to implement the decoding-side processing flow. Illustratively, the video encoding and decoding frameworks may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and entropy coding. At the encoding side, the cooperation of these modules implements the encoding-side processing flow; at the decoding side, their cooperation implements the decoding-side processing flow.
In the conventional approach, the current block has only one piece of motion information, i.e., all sub-blocks inside the current block share it. For a scene with a small moving object, the optimal motion information can only be obtained after the current block is divided; without division, the current block has only one piece of motion information, so the prediction accuracy is low. Referring to FIG. 2A, regions C, G, and H are regions within the current block, not sub-blocks into which the current block has been divided. Assume the current block uses the motion information of block F; then every region within the current block uses the motion information of block F. Since region H is far from block F, if region H also uses the motion information of block F, the prediction accuracy for region H is low. Moreover, the motion information of a sub-block inside the current block cannot exploit the coded motion information around the current block, which reduces the available motion information and lowers its accuracy. For example, sub-block I of the current block can only use the motion information of sub-blocks C, G, and H, but not that of image blocks A, B, F, D, and E.
In view of the above, an encoding and decoding method is provided in the embodiments of the present application, which enables the current block to correspond to multiple pieces of motion information without dividing it, i.e., without the overhead caused by sub-block division, thereby improving the prediction accuracy of the motion information of the current block. Because the current block is not divided, no extra bits are consumed for transmitting a division pattern, saving bit overhead. For each region of the current block (any region whose size is smaller than that of the current block and which is not a sub-block obtained by division), the motion information can be derived from the coded motion information around the current block. Referring to FIG. 2B, C is a sub-region inside the current block, and A, B, D, E, and F are coded blocks around the current block; the motion information of sub-region C can be obtained directly using an angular prediction method, and the other sub-regions inside the current block are handled the same way. Thus, different motion information can be obtained for the current block without block division, saving part of the bit cost of block division.
Referring to FIG. 3, the current block includes 9 regions (hereinafter referred to as sub-regions), sub-region f1 through sub-region f9, which are sub-regions within the current block, not sub-blocks into which the current block is divided. Different sub-regions among f1 to f9 may be associated with the same or different motion information; therefore, without dividing the current block, the current block can still correspond to multiple pieces of motion information, e.g., sub-region f1 corresponds to motion information 1, sub-region f2 to motion information 2, and so on. For example, when determining the motion information of sub-region f5, the motion information of blocks A1, A2, A3, E, B1, B2, and B3, i.e., the motion information of the coded blocks around the current block, may be used, providing more candidate motion information for sub-region f5. Of course, the motion information of blocks A1, A2, A3, etc. may also be used for the other sub-regions of the current block.
The embodiments of the present application involve a process of constructing a motion information prediction mode candidate list: for example, for any motion information angle prediction mode, it is determined whether to add that mode to the candidate list or to prohibit adding it. The embodiments also involve a motion information filling process, e.g., filling in motion information that is unavailable in the peripheral blocks of the current block, and the timing of that filling. Further, the motion information of the current block is determined from the motion information of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode, and the motion compensation value of the current block is determined from that motion information. Finally, if a sub-region satisfies the condition for using bidirectional optical flow, the motion compensation value of the sub-region is combined with the bidirectional optical flow offset value to obtain the target prediction value of the sub-region.
In one embodiment, a motion compensation process and bidirectional optical flow processing of the sub-regions of the current block may be implemented. In another embodiment, the construction of the motion information prediction mode candidate list, the motion compensation process, and bidirectional optical flow processing of the sub-regions may be implemented. In another embodiment, the motion information filling process, the motion compensation process, and bidirectional optical flow processing of the sub-regions may be implemented. In yet another embodiment, all four may be implemented: candidate list construction, motion information filling, motion compensation, and bidirectional optical flow processing of the sub-regions of the current block. Of course, these embodiments are only a few examples, and the present application is not limited thereto.
In the embodiments of the present application, when implementing both the candidate list construction and the motion information filling, the motion information angle prediction modes are first checked for duplicates, and only afterwards is the unavailable motion information in the peripheral blocks filled in; this reduces the complexity of the decoding side and improves decoding performance. For example, for the horizontal, vertical, horizontal-up, horizontal-down, vertical-right, and other angle prediction modes, the duplicate check is performed first, and the non-duplicate modes, e.g., the horizontal-down and vertical-right angle prediction modes, are added to the motion information prediction mode candidate list. The candidate list can thus be obtained before any unavailable motion information of the peripheral blocks is filled in.
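The "de-duplicate first, fill later" ordering can be sketched as below. The key function here, comparing the peripheral motion information each mode points at, is an illustrative assumption; the patent's actual duplicate criterion may differ:

```python
def build_candidate_list(angle_modes, mode_key):
    """Add each angle mode to the candidate list only if its key
    (e.g. the motion information of the peripheral blocks it points
    at) has not been seen before.  Unavailable peripheral motion
    information is NOT filled in at this stage."""
    candidates, seen = [], set()
    for mode in angle_modes:
        key = mode_key(mode)
        if key not in seen:
            seen.add(key)
            candidates.append(mode)
    return candidates

# two modes point at peripheral blocks carrying identical motion info,
# so only the first of them enters the list
peripheral = {"horizontal": (3, 0), "vertical": (3, 0), "horizontal_down": (1, 2)}
modes = ["horizontal", "vertical", "horizontal_down"]
assert build_candidate_list(modes, peripheral.get) == ["horizontal", "horizontal_down"]
```

Deferring the filling step means a decoder that ultimately selects a non-angle mode never pays for it.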
After the decoding side selects the target motion information prediction mode of the current block from the candidate list, if the target mode is not a motion information angle prediction mode, the decoding side does not fill in the unavailable motion information of the peripheral blocks at all. This reduces filling operations at the decoding side, lowering its complexity and improving decoding performance.
In the embodiments of the present application, bidirectional optical flow processing is applied to a sub-region, so that the motion compensation value of the sub-region is combined with the bidirectional optical flow offset value to obtain the target prediction value of the sub-region, which makes the target prediction value more accurate and improves prediction accuracy.
The following describes the encoding and decoding method in the embodiments of the present application with reference to several specific embodiments.
Example 1: referring to fig. 4, a schematic flow chart of the encoding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a decoding end or an encoding end, and the method may include the following steps:
step 401, if the target motion information prediction mode of the current block is a motion information angle prediction mode, dividing the current block into at least one sub-region; and determining the motion information of each sub-region of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the motion information angle prediction mode.
Step 402, determining a motion compensation value of the sub-region according to the motion information of the sub-region.
Step 403, if the sub-region satisfies the condition of using the bi-directional optical flow, acquiring a bi-directional optical flow offset value of the sub-region, and determining a target prediction value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region, and the bi-directional optical flow offset value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information, and the current frame in which the sub-region is located lies between the two reference frames in temporal order, the sub-region satisfies the condition for using bidirectional optical flow. One piece of the bidirectional motion information is forward motion information, whose corresponding reference frame is the forward reference frame; the other piece is backward motion information, whose corresponding reference frame is the backward reference frame. The current frame in which the sub-region is located lies between the forward reference frame and the backward reference frame in temporal order.
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the forward motion compensation value of the sub-region may be determined based on a forward reference frame corresponding to the forward motion information in the bidirectional motion information, the backward motion compensation value of the sub-region may be determined based on a backward reference frame corresponding to the backward motion information in the bidirectional motion information, and the forward motion compensation value and the backward motion compensation value of the sub-region constitute the motion compensation value of the sub-region.
When determining the target prediction value of the sub-region, the target prediction value of the sub-region may be determined according to the forward motion compensation value of the sub-region, the backward motion compensation value of the sub-region, and the bi-directional optical flow offset value of the sub-region.
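As a rough illustration of combining the forward and backward motion compensation values with the per-pixel offset, the averaging-with-rounding form below is an assumption for illustration; the actual weighting, bit depth, and shift are codec-specific:

```python
def target_prediction(fwd, bwd, bdof_offset):
    """Combine a sub-region's forward motion compensation values, backward
    motion compensation values, and bi-directional optical flow offsets
    into target prediction values. All three inputs are equally sized 2-D
    lists of integers; the (a + b + o + 1) >> 1 rounding average is an
    illustrative assumption, not the normative formula."""
    return [[(f + b + o + 1) >> 1
             for f, b, o in zip(fwd_row, bwd_row, off_row)]
            for fwd_row, bwd_row, off_row in zip(fwd, bwd, bdof_offset)]
```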
For example, for each sub-region of the current block, if the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition for using bi-directional optical flow. In that case, the target prediction value of the sub-region is determined according to the motion compensation value of the sub-region without referring to a bi-directional optical flow offset value; that is, the motion compensation value of the sub-region is taken as the target prediction value of the sub-region.
For example, for each sub-region of the current block, if the motion information of the sub-region is bidirectional motion information but the current frame where the sub-region is located does not lie between the two reference frames in temporal order, the sub-region does not satisfy the condition for using bi-directional optical flow. In that case, the target prediction value of the sub-region can be determined according to the motion compensation value of the sub-region without referring to a bi-directional optical flow offset value. For ease of distinction, one piece of the bidirectional motion information is denoted as first motion information, with its corresponding reference frame denoted as a first reference frame, and the other piece is denoted as second motion information, with its corresponding reference frame denoted as a second reference frame. Since the current frame where the sub-region is located does not lie between the two reference frames in temporal order, the first reference frame and the second reference frame are both forward reference frames of the sub-region, or both backward reference frames of the sub-region.
When determining the motion compensation value of the sub-region according to the motion information of the sub-region, the first motion compensation value of the sub-region may be determined based on a first reference frame corresponding to first motion information in the bidirectional motion information, the second motion compensation value of the sub-region may be determined based on a second reference frame corresponding to second motion information in the bidirectional motion information, and the first motion compensation value and the second motion compensation value of the sub-region constitute the motion compensation value of the sub-region.
In determining the target prediction value for the sub-region, the target prediction value for the sub-region may be determined based on the first motion compensation value for the sub-region and the second motion compensation value for the sub-region without referring to the bi-directional optical flow offset value.
Illustratively, obtaining the bi-directional optical flow offset value of the sub-region may include, but is not limited to, the following: determine a first pixel value and a second pixel value according to the motion information of the sub-region. The first pixel value consists of the forward motion compensation value and a forward extension value of the sub-region, where the forward extension value is either copied from the forward motion compensation value or obtained from reference pixel positions of the forward reference frame. The second pixel value consists of the backward motion compensation value and a backward extension value of the sub-region, where the backward extension value is either copied from the backward motion compensation value or obtained from reference pixel positions of the backward reference frame. The forward reference frame and the backward reference frame are determined according to the motion information of the sub-region. Then, the bi-directional optical flow offset value of the sub-region is determined from the first pixel value and the second pixel value.
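The "copied from the motion compensation value" option for the extension values can be sketched as one-pixel border replication around the motion compensation block; the block layout (2-D list, one-sample ring) and the function name are illustrative assumptions:

```python
def extend_by_copy(block):
    """Pad an H x W motion compensation block to (H + 2) x (W + 2) by
    replicating its border samples, modeling the case where the extension
    value is copied from the motion compensation value rather than fetched
    from reference pixel positions of the reference frame."""
    h, w = len(block), len(block[0])
    ext = [[0] * (w + 2) for _ in range(h + 2)]
    for y in range(h + 2):
        for x in range(w + 2):
            # Clamp each extended position back into the block interior.
            sy = min(max(y - 1, 0), h - 1)
            sx = min(max(x - 1, 0), w - 1)
            ext[y][x] = block[sy][sx]
    return ext
```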
Step 404: determine the prediction value of the current block according to the target prediction value of each sub-region.
For example, before the target motion information prediction mode of the current block is determined to be a motion information angle prediction mode, a motion information prediction mode candidate list of the current block may be constructed, and the target motion information prediction mode of the current block is selected from this candidate list. The process of constructing the motion information prediction mode candidate list of the current block may include:
Step a1: for any motion information angle prediction mode of the current block, based on the preconfigured angle of the motion information angle prediction mode, select a plurality of peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block.
The motion information angle prediction mode indicates a preconfigured angle. According to the preconfigured angle, a peripheral matching block is selected for each sub-region of the current block from the peripheral blocks of the current block, and the motion information of the sub-region is determined according to the motion information of that peripheral matching block; in this way, one or more pieces of motion information are determined for the current block. The peripheral matching block is a block at a specified position, determined from the peripheral blocks of the current block by the preconfigured angle.
For example, the peripheral blocks may include blocks adjacent to the current block; alternatively, the peripheral blocks may include blocks adjacent to the current block and non-adjacent blocks. Of course, the peripheral block may also include other blocks, which is not limited in this regard.
For example, the motion information angle prediction mode may include, but is not limited to, one or any combination of the following: a horizontal angle prediction mode, a vertical angle prediction mode, a horizontal upward angle prediction mode, a horizontal downward angle prediction mode, and a vertical rightward angle prediction mode. Of course, these are just a few examples; other motion information angle prediction modes are possible, since each such mode is defined by its preconfigured angle, and the preconfigured angle may also be, for example, 10 degrees or 20 degrees. Fig. 5A shows a schematic diagram of the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward angle prediction modes, where different motion information angle prediction modes correspond to different preconfigured angles.
In summary, a plurality of peripheral matching blocks pointed to by the preconfigured angle may be selected from the peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode. For example, fig. 5A shows, for each of the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward angle prediction modes, the plurality of peripheral matching blocks pointed to by its preconfigured angle.
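A minimal sketch of how a preconfigured angle points from a sub-region to a peripheral matching block is given below. Coordinates are in sub-block units, with (0, 0) the top-left sub-region of the current block and negative coordinates denoting peripheral (left or top) positions; the direction vectors are assumptions chosen to mimic fig. 5A, not values taken from this application:

```python
# Assumed direction vectors (dx, dy) for each motion information angle
# prediction mode, in sub-block units.
ANGLE_DIRS = {
    "horizontal":      (-1,  0),  # points into the left peripheral column
    "vertical":        ( 0, -1),  # points into the top peripheral row
    "horizontal_up":   (-1, -1),  # up-left diagonal
    "horizontal_down": (-1,  1),  # down-left diagonal
    "vertical_right":  ( 1, -1),  # up-right diagonal
}

def peripheral_matching_block(sub_x, sub_y, mode):
    """Step from sub-region (sub_x, sub_y) along the preconfigured angle
    until a peripheral position (x < 0 or y < 0) is reached, and return
    that position as the peripheral matching block for the sub-region."""
    dx, dy = ANGLE_DIRS[mode]
    x, y = sub_x, sub_y
    while x >= 0 and y >= 0:
        x, y = x + dx, y + dy
    return x, y
```

Under these assumed directions, every sub-region in one row shares a left peripheral matching block in horizontal mode, while the diagonal modes spread different sub-regions onto different peripheral blocks.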
Step a2: if the plurality of peripheral matching blocks include at least a first peripheral matching block and a second peripheral matching block to be traversed, then, for the first and second peripheral matching blocks to be traversed, if available motion information exists in both of them and their motion information is different, add the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is different, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks: if an intra block and/or an unencoded block exists among the first and second peripheral matching blocks, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block. Or, if an intra block and/or an unencoded block exists among the first and second peripheral matching blocks, the motion information angle prediction mode is prohibited from being added to the candidate list. Or, if at least one of the first and second peripheral matching blocks is located outside the image where the current block is located, or outside the slice where the current block is located, the motion information angle prediction mode is added to the candidate list. Or, if at least one of the first and second peripheral matching blocks is located outside the image where the current block is located, or outside the slice where the current block is located, the motion information angle prediction mode is prohibited from being added to the candidate list.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. And for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if both have available motion information and their motion information is the same, it is further determined whether both the second and third peripheral matching blocks have available motion information. If both the second and third peripheral matching blocks have available motion information and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if both have available motion information and their motion information is the same, it is further determined whether both the second and third peripheral matching blocks have available motion information. If both the second and third peripheral matching blocks have available motion information and their motion information is also the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For a first and a second peripheral matching block to be traversed, if at least one of the first and the second peripheral matching block does not have available motion information, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block may be prohibited.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of the first and second peripheral matching blocks does not have available motion information, it may be continuously determined whether both the second and third peripheral matching blocks have available motion information. If there is available motion information for both the second peripheral matching block and the third peripheral matching block, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of the first and second peripheral matching blocks does not have available motion information, it may be continuously determined whether both the second and third peripheral matching blocks have available motion information. If there is available motion information for both the second and third perimeter matching blocks, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second and third perimeter matching blocks is the same.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of them does not have available motion information, it is further determined whether both the second and third peripheral matching blocks have available motion information. If at least one of the second and third peripheral matching blocks does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of them does not have available motion information, it is further determined whether both the second and third peripheral matching blocks have available motion information. If at least one of the second and third peripheral matching blocks does not have available motion information, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In the above embodiments, the process of determining whether any peripheral matching block has available motion information may include, but is not limited to, the following: if the peripheral matching block is located outside the image where the current block is located, or outside the slice where the current block is located, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an unencoded block, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an intra block, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an inter-coded block, it is determined that the peripheral matching block has available motion information.
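The availability rules above can be sketched as follows; the record type, its field names, and modeling out-of-image positions as None are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class PeripheralBlock:
    slice_id: int   # slice the block belongs to
    is_coded: bool  # whether the block has already been encoded/decoded
    is_intra: bool  # whether the block is intra-coded

def has_available_motion_info(block, cur_slice_id):
    """Availability check for one peripheral matching block, following the
    rules listed above. A position outside the image is modeled as None;
    a block outside the current slice carries a different slice id."""
    if block is None:
        return False          # outside the image where the current block is
    if block.slice_id != cur_slice_id:
        return False          # outside the slice where the current block is
    if not block.is_coded:
        return False          # unencoded block
    if block.is_intra:
        return False          # intra block carries no motion information
    return True               # inter-coded block: motion information available
```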
In a possible implementation, for the encoding end, after the motion information prediction mode candidate list of the current block is constructed and duplicate-checked according to the foregoing embodiments, the method further includes: if a motion information angle prediction mode exists in the candidate list, filling the unavailable motion information in the peripheral blocks of the current block. For the decoding end, after the candidate list of the current block is constructed and duplicate-checked according to the foregoing embodiments and the target motion information prediction mode of the current block is selected from the candidate list, the method further includes: if the target motion information prediction mode is a motion information angle prediction mode, filling the unavailable motion information in the peripheral blocks of the current block.
As an example, for the encoding end or the decoding end, filling the unavailable motion information in the peripheral blocks of the current block includes: traversing the peripheral blocks of the current block in order from the left peripheral blocks to the upper peripheral blocks, until the first peripheral block with available motion information is found; if peripheral blocks without available motion information precede it, filling them with the motion information of that found peripheral block; then continuing to traverse the subsequent peripheral blocks, and, whenever a second peripheral block without available motion information is encountered, filling it with the motion information of the peripheral block traversed immediately before it.
For example, traversing in order from the left peripheral blocks to the upper peripheral blocks of the current block may include: if the current block has no left peripheral blocks, traversing the upper peripheral blocks of the current block; and if the current block has no upper peripheral blocks, traversing the left peripheral blocks of the current block. The left peripheral blocks may include blocks adjacent to the left of the current block as well as non-adjacent blocks; the upper peripheral blocks may include blocks adjacent to the top of the current block as well as non-adjacent blocks. There may be one or more first peripheral blocks, namely all peripheral blocks preceding the first traversed peripheral block that has available motion information. The first peripheral block may be an unencoded block or an intra block, and so may the second peripheral block.
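The filling traversal described above can be sketched as follows, with the peripheral blocks flattened into a single traversal-ordered list (left blocks first, then top blocks) and None marking blocks without available motion information; both modeling choices are assumptions for illustration:

```python
def fill_unavailable(peripheral_mv):
    """Fill unavailable motion information along the left-then-top
    traversal described above. `peripheral_mv` is the traversal-ordered
    list of motion information, with None marking blocks that have no
    available motion information; the list is filled in place."""
    # Find the first peripheral block that has available motion information.
    first = next((i for i, mv in enumerate(peripheral_mv) if mv is not None),
                 None)
    if first is None:
        return peripheral_mv  # no available motion information to copy from
    # All "first peripheral blocks" before it copy its motion information.
    for i in range(first):
        peripheral_mv[i] = peripheral_mv[first]
    # Each later "second peripheral block" copies from the block traversed
    # immediately before it.
    for i in range(first + 1, len(peripheral_mv)):
        if peripheral_mv[i] is None:
            peripheral_mv[i] = peripheral_mv[i - 1]
    return peripheral_mv
```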
In another possible implementation, for the encoding end, after the motion information prediction mode candidate list of the current block is constructed and duplicate-checked according to the foregoing embodiments, then, for each motion information angle prediction mode in the candidate list, if the plurality of peripheral matching blocks pointed to by the preconfigured angle of that mode include peripheral blocks without available motion information, the unavailable motion information in the peripheral blocks of the current block is filled. For the decoding end, after the candidate list of the current block is constructed and duplicate-checked according to the foregoing embodiments and the target motion information prediction mode of the current block is selected from the candidate list, if the target motion information prediction mode is a motion information angle prediction mode and the plurality of peripheral matching blocks pointed to by its preconfigured angle include peripheral blocks without available motion information, the unavailable motion information in the peripheral blocks of the current block is filled.
For example, for a peripheral block with no available motion information: the available motion information of a neighboring block of the peripheral block may be filled in as its motion information; or the available motion information of the reference block at the corresponding position in a temporal reference frame may be filled in as its motion information; or default motion information may be filled in as its motion information.
For another example, when filling the peripheral blocks of the current block, the peripheral blocks are traversed in order from the left peripheral blocks to the upper peripheral blocks, until the first peripheral block with available motion information is found; if first peripheral blocks without available motion information precede it, they are filled with the motion information of that found peripheral block; then the subsequent peripheral blocks are traversed, and whenever a second peripheral block without available motion information is encountered, it is filled with the motion information of the peripheral block traversed immediately before it.
As can be seen from the above technical solutions, in the embodiments of the present application the current block does not need to be divided into sub-blocks: the motion information of each sub-region of the current block can be determined based on the motion information angle prediction mode, so the bit overhead caused by sub-block division is effectively avoided. By adding to the motion information prediction mode candidate list only those motion information angle prediction modes whose motion information is not completely identical, the motion information angle prediction modes that yield only a single piece of motion information are removed, which reduces the number of motion information angle prediction modes in the candidate list, reduces the number of bits used for encoding multiple pieces of motion information, and thus improves encoding performance.
Fig. 5B is a schematic diagram of the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward angle prediction modes. As can be seen from fig. 5B, some motion information angle prediction modes, such as the horizontal, vertical, and horizontal upward angle prediction modes here, may make the motion information of every sub-region inside the current block the same; such modes need to be eliminated. Other modes, such as the horizontal downward and vertical rightward angle prediction modes here, may give different motion information to different sub-regions inside the current block; such modes need to be retained, i.e., they may be added to the motion information prediction mode candidate list.
Obviously, if the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward angle prediction modes were all added to the motion information prediction mode candidate list, then, when encoding the index of the horizontal downward angle prediction mode, since the horizontal, vertical, and horizontal upward angle prediction modes precede it (the order of the motion information angle prediction modes is not fixed; this is only an example), it might be necessary to encode 0001 to represent it.
However, in the embodiment of the present application, only the horizontal downward and vertical rightward angle prediction modes are added to the motion information prediction mode candidate list, and the horizontal, vertical, and horizontal upward angle prediction modes are prohibited from being added; that is, no modes precede the horizontal downward angle prediction mode, so, when encoding its index, it may be necessary to encode only a single bit 0. In summary, the above manner reduces the bit overhead of encoding the index information of the motion information angle prediction mode, reduces hardware complexity while saving bit overhead, avoids the low performance gain caused by motion information angle prediction modes with only a single piece of motion information, and reduces the number of bits for encoding multiple motion information angle prediction modes.
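The bit saving in this example can be illustrated with a unary-style index code in which index k costs k + 1 bits, consistent with the 0001 (index 3, four bits) and 0 (index 0, one bit) codewords above; this coding is an assumption for illustration, not a normative syntax definition:

```python
def index_bits(candidate_index):
    """Bits consumed by a unary-style index code where candidate index k
    costs k + 1 bits. Matches the '0001' / '0' examples in the text; the
    real entropy coding of the candidate index is codec-specific."""
    return candidate_index + 1

# With all five modes in the list, the horizontal downward mode sits at
# index 3 and costs 4 bits; after duplicate checking it sits at index 0
# and costs 1 bit.
saving = index_bits(3) - index_bits(0)
```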
In the embodiment of the present application, the motion information angle prediction modes are first duplicate-checked, and only afterwards is the unavailable motion information in the peripheral blocks filled, which reduces the complexity of the decoding end and improves decoding performance. For example, for the horizontal, vertical, horizontal upward, horizontal downward, and vertical rightward angle prediction modes, duplicate checking is performed first, and the non-duplicate horizontal downward and vertical rightward angle prediction modes are added to the motion information prediction mode candidate list; the candidate list is thus obtained while the unavailable motion information in the peripheral blocks has not yet been filled.
After the decoding end selects the target motion information prediction mode of the current block from the motion information prediction mode candidate list, if the target motion information prediction mode is neither the horizontal downward angle prediction mode nor the vertical rightward angle prediction mode, the unavailable motion information in the peripheral blocks does not need to be filled, so the decoding end can reduce motion information filling operations.
Example 2: based on the same application concept as the above method, referring to fig. 6, a schematic flow chart of a coding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a coding end, and the method may include:
in step 601, an encoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e. not motion information angle prediction modes), which is not limited herein.
For example, a single motion information prediction mode candidate list may be constructed for the current block, that is, all sub-regions in the current block correspond to the same motion information prediction mode candidate list; alternatively, multiple motion information prediction mode candidate lists may be constructed for the current block, that is, sub-regions within the current block may correspond to the same or different motion information prediction mode candidate lists. For convenience of description, the case of constructing one motion information prediction mode candidate list for the current block is described as an example.
The motion information angle prediction mode is an angle prediction mode for predicting motion information; that is, it is used for inter-frame coding rather than intra-frame coding, and it selects a matching block rather than a matching pixel point.
The motion information prediction mode candidate list may be constructed in a conventional manner, or in the manner described in embodiment 1; the construction manner is not limited herein.
Step 602: if a motion information angle prediction mode exists in the motion information prediction mode candidate list, the encoding end fills the unavailable motion information in the peripheral blocks of the current block. For example, the peripheral blocks of the current block are traversed in order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found. If a first peripheral block without available motion information precedes this block, the motion information of this block is filled into that first peripheral block. Traversal then continues with the subsequent peripheral blocks; if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
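The filling procedure above can be sketched as follows. This is a minimal Python illustration, not the normative implementation: motion information is modelled as arbitrary values and `None` marks a peripheral block whose motion information is unavailable.

```python
def fill_peripheral_motion_info(blocks):
    """Fill unavailable motion information in left-to-top traversal order."""
    filled = list(blocks)
    # Find the first peripheral block with available motion information.
    first_avail = next((i for i, b in enumerate(filled) if b is not None), None)
    if first_avail is None:
        return filled  # nothing available: nothing to fill
    # Blocks before the first available one are filled with its motion info.
    for i in range(first_avail):
        filled[i] = filled[first_avail]
    # Later unavailable blocks borrow from the previously traversed block.
    for i in range(first_avail + 1, len(filled)):
        if filled[i] is None:
            filled[i] = filled[i - 1]
    return filled

# Example: the two leading/trailing gaps are filled from their neighbours.
print(fill_peripheral_motion_info([None, "mvA", None, "mvB", None]))
# → ['mvA', 'mvA', 'mvA', 'mvB', 'mvB']
```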
Step 603: the encoding end sequentially traverses each motion information angle prediction mode in the motion information prediction mode candidate list, divides the current block into at least one sub-region according to the currently traversed motion information angle prediction mode, and, for each sub-region, determines the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of that motion information angle prediction mode. For example, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since step 602 fills the peripheral blocks of the current block that do not have available motion information, all of the peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode in step 603 have available motion information, and the motion information of each sub-region can be determined from that available motion information.
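As a rough illustration of how each sub-region obtains motion information from the peripheral matching block its preconfigured angle points to, the sketch below assumes a simplified grid geometry. The mode names and the down-left index mapping are illustrative assumptions, not the mapping defined in this application:

```python
def sub_region_motion_info(left_neighbors, top_neighbors, mode, rows, cols):
    """Assign each sub-region the motion information of the peripheral
    matching block pointed to by the mode's preconfigured angle (sketch)."""
    mv = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mode == "horizontal":          # angle points left
                mv[r][c] = left_neighbors[r]
            elif mode == "vertical":          # angle points up
                mv[r][c] = top_neighbors[c]
            elif mode == "horizontal_down":   # angle points down-left (assumed mapping)
                mv[r][c] = left_neighbors[min(r + c + 1, len(left_neighbors) - 1)]
    return mv

# Horizontal mode: each sub-region row reuses the left neighbour of that row.
print(sub_region_motion_info(["L0", "L1"], ["T0", "T1"], "horizontal", 2, 2))
# → [['L0', 'L0'], ['L1', 'L1']]
```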
Step 604: for each sub-region, the encoding end determines a motion compensation value of the sub-region according to the motion information of the sub-region. If the sub-region satisfies the condition for using bidirectional optical flow, the encoding end obtains the bidirectional optical flow offset value of the sub-region and determines the target prediction value of the sub-region according to the motion compensation value of the sub-region (i.e., the forward motion compensation value and the backward motion compensation value of the sub-region) and the bidirectional optical flow offset value. If the sub-region does not satisfy the condition for using bidirectional optical flow, the encoding end determines the target prediction value of the sub-region according to the motion compensation value of the sub-region.
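The two branches of step 604 can be illustrated per sample as below. The rounded-average combination of the forward and backward motion compensation values is an assumption for illustration; the application does not specify the combination formula here:

```python
def target_prediction(fwd_mc, bwd_mc, bio_offset, use_bdof):
    """Combine forward/backward motion compensation values for one sample;
    the bidirectional optical-flow offset is added only when the sub-region
    satisfies the condition for using bidirectional optical flow."""
    avg = (fwd_mc + bwd_mc + 1) >> 1  # rounded average (assumed combination)
    return avg + bio_offset if use_bdof else avg

# With the BDOF condition met, the offset refines the averaged prediction.
print(target_prediction(100, 104, 2, True))   # → 104
print(target_prediction(100, 104, 2, False))  # → 102
```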
Step 605, the encoding end determines the prediction value of the current block according to the target prediction value of each sub-region.
In step 606, the encoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode is a motion information angle prediction mode or other types of motion information prediction modes.
For example, after performing steps 603-605 for each motion information angle prediction mode (e.g., the horizontal downward angle prediction mode) in the motion information prediction mode candidate list, the target prediction value of the current block can be obtained. Based on the target prediction value of the current block, the encoding end determines the rate-distortion cost value of the motion information angle prediction mode according to the rate-distortion principle; the determination manner is not limited. For any other type of motion information prediction mode R (obtained in a conventional manner) in the motion information prediction mode candidate list, the motion information of the current block is determined according to the motion information prediction mode R, the target prediction value of the current block is determined according to that motion information, and the rate-distortion cost value of the motion information prediction mode R is then determined; this is likewise not limited.
Then, the motion information prediction mode corresponding to the minimum rate-distortion cost value is determined as the target motion information prediction mode, which may be a motion information angle prediction mode or another type of motion information prediction mode R.
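The encoder-side selection above reduces to a minimum search over the candidate list. In this sketch, `rd_cost_of` is a hypothetical caller-supplied function returning the rate-distortion cost of a candidate mode; how that cost is computed is not limited by the application:

```python
def select_target_mode(candidate_list, rd_cost_of):
    """Return the candidate mode with the minimum rate-distortion cost."""
    return min(candidate_list, key=rd_cost_of)

# Hypothetical costs for three candidates; the cheapest mode wins.
costs = {"horizontal_down": 3.0, "vertical_right": 1.5, "mode_R": 2.0}
print(select_target_mode(list(costs), costs.get))  # → vertical_right
```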
Example 3: based on the same application concept as the above method, referring to fig. 7, a schematic flow chart of the encoding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a decoding end, and the method may include:
in step 701, a decoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (i.e. not motion information angle prediction modes), which is not limited herein.
In step 702, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode may be a motion information angle prediction mode or another type of motion information prediction mode.
The process of selecting the target motion information prediction mode at the decoding end may include: upon receiving the coded bitstream, obtaining indication information from the coded bitstream, the indication information indicating the index information of the target motion information prediction mode in the motion information prediction mode candidate list. For example, when the encoding end sends the coded bitstream to the decoding end, the coded bitstream carries indication information used to indicate the index information of the target motion information prediction mode in the motion information prediction mode candidate list. Assume that the motion information prediction mode candidate list sequentially includes the horizontal downward angle prediction mode, the vertical rightward angle prediction mode, and the motion information prediction mode R, and that the indication information indicates index information 1, where index information 1 represents the first motion information prediction mode in the motion information prediction mode candidate list. Based on this, the decoding end obtains index information 1 from the coded bitstream.
The decoding end selects the motion information prediction mode corresponding to the index information from the motion information prediction mode candidate list, and determines the selected motion information prediction mode as the target motion information prediction mode of the current block. For example, when the indication information is used to indicate index information 1, the decoding end may determine the 1 st motion information prediction mode in the motion information prediction mode candidate list as the target motion information prediction mode of the current block.
In step 703, if the target motion information prediction mode is a motion information angle prediction mode, the decoding end fills the unavailable motion information in the peripheral blocks of the current block. For example, the peripheral blocks of the current block are traversed in order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found. If a first peripheral block without available motion information precedes this block, the motion information of this block is filled into that first peripheral block. Traversal then continues with the subsequent peripheral blocks; if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
In a possible implementation manner, if the target motion information prediction mode is not a motion information angle prediction mode, the unavailable motion information in the peripheral blocks of the current block does not need to be filled, so the decoding end can reduce motion information filling operations.
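Steps 702-703, including the conditional filling above, can be sketched as follows. The 1-based index follows the example given earlier; mode names and the `angle_modes` set are hypothetical, and `None` marks unavailable motion information:

```python
def decoder_prepare(candidate_list, index_info, peripheral, angle_modes):
    """Select the target mode via the signalled index (1-based) and fill
    unavailable peripheral motion information only for angle modes."""
    target = candidate_list[index_info - 1]
    if target in angle_modes:
        first = next((i for i, b in enumerate(peripheral) if b is not None), None)
        if first is not None:
            for i in range(first):              # fill leading gaps
                peripheral[i] = peripheral[first]
            for i in range(first + 1, len(peripheral)):
                if peripheral[i] is None:       # fill from previous block
                    peripheral[i] = peripheral[i - 1]
    return target, peripheral

modes = ["hd_angle", "vr_angle", "mode_R"]
angle = {"hd_angle", "vr_angle"}
# Angle mode selected: peripheral blocks are filled.
print(decoder_prepare(modes, 1, [None, "mv", None], angle))
# Non-angle mode selected: no filling is performed.
print(decoder_prepare(modes, 3, [None, "mv"], angle))
```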
Step 704: the decoding end divides the current block into at least one sub-region and, for each sub-region, determines the motion information of the sub-region according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode (i.e., the selected target motion information prediction mode). For example, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block. It should be noted that, since the peripheral blocks of the current block that do not have available motion information are filled in step 703, all of the peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode in step 704 have available motion information, and the motion information of each sub-region can be determined from that available motion information.
Step 705: for each sub-region, the decoding end determines a motion compensation value of the sub-region according to the motion information of the sub-region. If the sub-region satisfies the condition for using bidirectional optical flow, the decoding end obtains the bidirectional optical flow offset value of the sub-region and determines the target prediction value of the sub-region according to the motion compensation value of the sub-region (i.e., the forward motion compensation value and the backward motion compensation value of the sub-region) and the bidirectional optical flow offset value. If the sub-region does not satisfy the condition for using bidirectional optical flow, the decoding end determines the target prediction value of the sub-region according to the motion compensation value of the sub-region.
Step 706, the decoding end determines the prediction value of the current block according to the target prediction value of each sub-region.
Example 4: in the above-described embodiment, a process of constructing a motion information prediction mode candidate list, that is, determining whether to add the motion information angle prediction mode to the motion information prediction mode candidate list or to prohibit the motion information angle prediction mode from being added to the motion information prediction mode candidate list for any one motion information angle prediction mode, includes:
step b1, obtaining at least one motion information angle prediction mode of the current block.
For example, the following motion information angle prediction modes may be obtained: a horizontal angle prediction mode, a vertical angle prediction mode, a horizontal upward angle prediction mode, a horizontal downward angle prediction mode, and a vertical rightward angle prediction mode. Of course, the above is only an example; the preconfigured angle may be any angle between 0 and 360 degrees. The horizontal rightward direction from the center point of the sub-region may be defined as 0 degrees, so that any angle rotated counterclockwise from 0 degrees may serve as a preconfigured angle; alternatively, another direction from the center point of the sub-region may be defined as 0 degrees. In practical applications, the preconfigured angle may also be a fractional angle, such as 22.5 degrees.
Step b2: for any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks of the current block based on the preconfigured angle of that motion information angle prediction mode.
Step b3, based on the characteristics of whether the motion information is available in the plurality of peripheral matching blocks, whether the available motion information in the plurality of peripheral matching blocks is the same, etc., adding the motion information angle prediction mode to the motion information prediction mode candidate list, or prohibiting adding the motion information angle prediction mode to the motion information prediction mode candidate list.
The determination process of step b3 will be described below with reference to several specific cases.
Case one: a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, and if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if an intra block and/or an uncoded block exists among the first peripheral matching block and the second peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, if an intra block and/or an uncoded block exists among the first peripheral matching block and the second peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
For another example, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Case two: a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks. If available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the two blocks is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks. If available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the two blocks is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Case three: a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the two blocks is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is the same, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
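The case-three decision chain above can be sketched as follows. `None` marks unavailable motion information; the `'undecided'` result indicates that case three does not apply and the unavailability branches (case four) take over:

```python
def case_three_decision(mv1, mv2, mv3):
    """Duplicate check over three sequentially traversed matching blocks."""
    if mv1 is None or mv2 is None:
        return "undecided"            # handled by case four, not case three
    if mv1 != mv2:
        return "add"                  # first pair differs: mode is useful
    if mv3 is None:
        return "undecided"
    return "add" if mv2 != mv3 else "prohibit"

print(case_three_decision("x", "x", "x"))  # → prohibit (all identical)
print(case_three_decision("x", "x", "y"))  # → add
```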
Case four: a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, it is further judged whether the second peripheral matching block and the third peripheral matching block both have available motion information. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
Case five: if all of the plurality of peripheral matching blocks have available motion information and the motion information of the plurality of peripheral matching blocks is not completely the same, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
Alternatively, if all of the plurality of peripheral matching blocks have available motion information and the motion information of the plurality of peripheral matching blocks is completely the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
Case six, if there is no available motion information in at least one of the plurality of peripheral matching blocks, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
If there is no motion information available for at least one of the plurality of peripheral matching blocks, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
If at least one of the plurality of peripheral matching blocks does not have available motion information and the motion information of the plurality of peripheral matching blocks is not identical, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block.
If at least one of the plurality of peripheral matching blocks does not have available motion information and the motion information of the plurality of peripheral matching blocks is completely the same, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
For the case five and the case six, the determination manner in which the motion information of the plurality of peripheral matching blocks is not exactly the same/exactly the same may include, but is not limited to: selecting at least one first peripheral matching block (e.g., all or a portion of all peripheral matching blocks) from the plurality of peripheral matching blocks; for each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. If the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, determining that the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different; and if the motion information of the first peripheral matching block is the same as that of the second peripheral matching block, determining that the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are the same. Based on this, if the motion information of any pair of peripheral matching blocks to be compared is different, it is determined that the motion information of the plurality of peripheral matching blocks is not completely the same. And if the motion information of all the peripheral matching blocks to be compared is the same, determining that the motion information of the peripheral matching blocks is completely the same.
For cases five and six, the determination that there is no available motion information in at least one of the plurality of peripheral matching blocks may include, but is not limited to: selecting at least one first peripheral matching block from the plurality of peripheral matching blocks; for each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. If at least one of any pair of peripheral matching blocks to be compared (i.e. the first peripheral matching block and the second peripheral matching block) does not have available motion information, determining that at least one of the plurality of peripheral matching blocks does not have available motion information. And if all the peripheral matching blocks to be compared have available motion information, determining that the plurality of peripheral matching blocks have available motion information.
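The pairwise comparison described for cases five and six can be sketched as below: each first peripheral matching block is paired with the block `step` positions later, and the pairs are checked for availability and identity. `None` marks unavailable motion information; this is an illustrative sketch, not the normative procedure:

```python
def check_matching_blocks(blocks, step):
    """Return (all_pairs_available, all_compared_identical) over the pairs
    (blocks[i], blocks[i + step]) for i = 0, step, 2*step, ..."""
    all_avail, all_same = True, True
    for i in range(0, len(blocks) - step, step):
        a, b = blocks[i], blocks[i + step]
        if a is None or b is None:
            all_avail = False          # a compared block lacks motion info
        elif a != b:
            all_same = False           # a compared pair differs
    return all_avail, all_same

# Step 2 over A1..A5 compares the pairs (A1, A3) and (A3, A5).
print(check_matching_blocks(["m", "x", "m", "x", "m"], 2))  # → (True, True)
```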
In each of the above cases, selecting a first peripheral matching block from the plurality of peripheral matching blocks may include: taking any one of the plurality of peripheral matching blocks as the first peripheral matching block; alternatively, taking a specified one of the plurality of peripheral matching blocks as the first peripheral matching block. Selecting a second peripheral matching block from the plurality of peripheral matching blocks may include: selecting, according to the traversal step size and the position of the first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks; the traversal step size may be the block spacing between the first peripheral matching block and the second peripheral matching block.
For cases three and four, selecting a third peripheral matching block from the plurality of peripheral matching blocks may include: selecting, according to the traversal step size and the position of the second peripheral matching block, a third peripheral matching block corresponding to the second peripheral matching block from the plurality of peripheral matching blocks; the traversal step size may be the block spacing between the second peripheral matching block and the third peripheral matching block.
For example, for peripheral matching block A1, peripheral matching block A2, peripheral matching block A3, peripheral matching block A4, and peripheral matching block A5 arranged in this order, examples of the respective peripheral matching blocks for the different cases are as follows:
For cases one and two, assuming that peripheral matching block A1 is the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to peripheral matching block A1 is peripheral matching block A3. For cases three and four, assuming that peripheral matching block A1 is the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to peripheral matching block A1 is peripheral matching block A3, and the third peripheral matching block corresponding to peripheral matching block A3 is peripheral matching block A5.
For cases five and six, assuming that peripheral matching block A1 and peripheral matching block A3 are both first peripheral matching blocks and the traversal step size is 2: when peripheral matching block A1 is the first peripheral matching block, the second peripheral matching block is peripheral matching block A3; when peripheral matching block A3 is the first peripheral matching block, the second peripheral matching block is peripheral matching block A5.
For example, before the peripheral matching blocks are selected from the plurality of peripheral matching blocks, the traversal step size may be determined based on the size of the current block, and the number of motion information comparisons is controlled through the traversal step size. For example, assuming that the size of each peripheral matching block is 4×4 and the size of the current block is 16×16, the current block corresponds to 4 peripheral matching blocks for the horizontal angle prediction mode. To limit the number of motion information comparisons to 1, the traversal step size may be 2 or 3. If the traversal step size is 2, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 3rd peripheral matching block; or the first peripheral matching block is the 2nd peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. If the traversal step size is 3, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. For another example, to limit the number of motion information comparisons to 2, the traversal step size may be 1: the first peripheral matching blocks are the 1st and 3rd peripheral matching blocks, the second peripheral matching block corresponding to the 1st peripheral matching block is the 2nd peripheral matching block, and the second peripheral matching block corresponding to the 3rd peripheral matching block is the 4th peripheral matching block. Of course, the above is only an example for the horizontal angle prediction mode, and the traversal step size may also be determined in other ways, which is not limited.
Moreover, for the motion information angle prediction modes other than the horizontal angle prediction mode, the traversal step size may be determined in a manner similar to that of the horizontal angle prediction mode, which is not repeated here.
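As a sketch of how the traversal step size bounds the number of comparisons, the disjoint pairing below follows the 16×16 example above; the helper name and the policy of starting the next pair right after the previous one are assumptions, one possible reading of the example rather than a normative rule:

```python
def comparison_pairs(num_blocks, traversal_step):
    """Enumerate disjoint (first, second) peripheral-matching-block index
    pairs (0-based) under a given traversal step size: each second block
    sits traversal_step positions after its first block, and the next
    pair starts right after the previous second block."""
    pairs = []
    first = 0
    while first + traversal_step < num_blocks:
        pairs.append((first, first + traversal_step))
        first += traversal_step + 1  # move past the pair just formed
    return pairs

# 16x16 current block with 4x4 peripheral blocks -> 4 blocks for the
# horizontal angle prediction mode: step 2 or 3 gives one comparison,
# step 1 gives two comparisons, as in the example above.
```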
In each of the above cases, the process of determining whether any one peripheral matching block has available motion information may include, but is not limited to: if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an uncoded block, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an intra block, it is determined that the peripheral matching block has no available motion information. If the peripheral matching block is an inter-coded block, it is determined that the peripheral matching block has available motion information.
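The availability rule just listed can be condensed into one predicate. This is a sketch; the dict keys are illustrative and do not correspond to any real codec API:

```python
def has_available_motion_info(block):
    """A peripheral matching block has available motion information only
    when it lies inside the current image and image slice, has already
    been coded, and is an inter-coded (not intra) block."""
    if not block["inside_image"] or not block["inside_slice"]:
        return False   # outside the image or the image slice
    if not block["coded"]:
        return False   # uncoded block
    if block["intra"]:
        return False   # intra block carries no motion information
    return True        # inter-coded block
```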
Example 5: in the above embodiments, the process of constructing the motion information prediction mode candidate list is described below with reference to several specific application scenarios.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
Referring to FIG. 8A, the width of the current block is W and the height of the current block is H. Let m = W/4 and n = H/4, and let the pixel at the upper left corner of the current block be (x, y). The peripheral block in which the pixel (x-1, y+H+W-1) is located is denoted A(0), and the size of A(0) is 4×4. The peripheral blocks are traversed in the clockwise direction, and the 4×4 peripheral blocks are denoted A(1), A(2), …, A(2m+2n) in turn, where A(2m+2n) is the peripheral block in which the pixel (x+W+H-1, y-1) is located. For each motion information angle prediction mode, based on the preconfigured angle of the motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks, and the peripheral matching blocks to be traversed are selected from the plurality of peripheral matching blocks (for example, a first peripheral matching block and a second peripheral matching block to be traversed are selected, or a first, a second and a third peripheral matching block to be traversed in sequence are selected). If both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
When only the first and second peripheral matching blocks are to be traversed: if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both have available motion information and their motion information is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
When the first, second and third peripheral matching blocks are to be traversed in sequence: if both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the first and second peripheral matching blocks does not have available motion information, or if both have available motion information and their motion information is the same, the comparison result of the two peripheral matching blocks is the same, and the second peripheral matching block is further compared with the third peripheral matching block.
If the available motion information exists in the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the motion information of the third peripheral matching block are different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or if both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
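The two-block and three-block decision rules above can be sketched as follows, under this scenario's convention that an unavailable block makes a pair compare as the same. The function names and the use of None to model unavailable motion information are assumptions:

```python
UNAVAILABLE = None  # models a block with no available motion information

def pair_differs(a, b):
    """A pair compares as 'different' only when both blocks have
    available motion information and that motion information differs;
    an unavailable block always makes the pair compare as 'same'."""
    return a is not UNAVAILABLE and b is not UNAVAILABLE and a != b

def add_mode_to_candidate_list(chain):
    """chain holds the motion information of the first/second (and
    optionally third) peripheral matching blocks in comparison order;
    the angle prediction mode is added as soon as one adjacent pair
    compares as 'different', otherwise it is prohibited."""
    return any(pair_differs(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
```

For instance, a chain whose first and second blocks carry identical motion vectors but whose third block differs still causes the mode to be added, via the second comparison.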
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal angle prediction mode, A(m-1+H/8) is taken as the first peripheral matching block and A(m+n-1) as the second peripheral matching block. Of course, A(m-1+H/8) and A(m+n-1) are just one example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal angle prediction mode may also be used as the first or second peripheral matching block, and similar implementations are not described again below. The above comparison method is used to judge whether the comparison result of A(m-1+H/8) and A(m+n-1) is the same. If the same, the addition of the horizontal angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited. If different, the horizontal angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the horizontal down angle prediction mode, A(W/8-1) is taken as the first peripheral matching block, A(m-1) as the second peripheral matching block, and A(m-1+H/8) as the third peripheral matching block. Of course, the above is just one example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal down angle prediction mode may also be used as the first, second or third peripheral matching block, and similar implementations are not described again below. The above comparison method is used to judge whether the comparison result of A(W/8-1) and A(m-1) is the same. If different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the same, the above comparison method is further used to judge whether the comparison result of A(m-1) and A(m-1+H/8) is the same. If the comparison result of A(m-1) and A(m-1+H/8) is different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A(m-1) and A(m-1+H/8) is the same, the addition of the horizontal down angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical angle prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block and A(m+n+1) as the second peripheral matching block. Of course, the above is just one example, and the first and second peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m+n+1+W/8) and A(m+n+1) is the same. If the same, the addition of the vertical angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited. If different, the vertical angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the vertical right angle prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block, A(2m+n+1) as the second peripheral matching block, and A(2m+n+1+H/8) as the third peripheral matching block. Of course, the above is just one example, and the first, second and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m+n+1+W/8) and A(2m+n+1) is the same. If different, the vertical right angle prediction mode is added to the motion information prediction mode candidate list. If the same, the above comparison method is further used to judge whether the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is different, the vertical right angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same, the addition of the vertical right angle prediction mode to the motion information prediction mode candidate list is prohibited.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal down angle prediction mode, A(W/8-1) is taken as the first peripheral matching block, A(m-1) as the second peripheral matching block, and A(m-1+H/8) as the third peripheral matching block. Of course, the above is just one example, and the first, second and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(W/8-1) and A(m-1) is the same. If different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the same, the above comparison method is further used to judge whether the comparison result of A(m-1) and A(m-1+H/8) is the same. If the comparison result of A(m-1) and A(m-1+H/8) is different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A(m-1) and A(m-1+H/8) is the same, the addition of the horizontal down angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited.
For the horizontal angle prediction mode, A(m-1+H/8) is taken as the first peripheral matching block and A(m+n-1) as the second peripheral matching block. Of course, the above is just one example, and the first and second peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m-1+H/8) and A(m+n-1) is the same. If the same, the horizontal angle prediction mode is not added to the motion information prediction mode candidate list. If different, the horizontal angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal upward angle prediction mode, A(m+n-1) is taken as the first peripheral matching block, A(m+n) as the second peripheral matching block, and A(m+n+1) as the third peripheral matching block. Of course, the above is just one example, and the first, second and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m+n-1) and A(m+n) is the same. If different, the horizontal upward angle prediction mode is added to the motion information prediction mode candidate list of the current block. If the same, the above comparison method is further used to judge whether the comparison result of A(m+n) and A(m+n+1) is the same. If the comparison result of A(m+n) and A(m+n+1) is different, the horizontal upward angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A(m+n) and A(m+n+1) is the same, the addition of the horizontal upward angle prediction mode to the motion information prediction mode candidate list is prohibited.
For the vertical angle prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block and A(m+n+1) as the second peripheral matching block. Of course, the above is just one example, and the first and second peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m+n+1+W/8) and A(m+n+1) is the same. If the same, the addition of the vertical angle prediction mode to the motion information prediction mode candidate list of the current block is prohibited. If different, the vertical angle prediction mode is added to the motion information prediction mode candidate list of the current block.
For the vertical right angle prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block, A(2m+n+1) as the second peripheral matching block, and A(2m+n+1+H/8) as the third peripheral matching block. Of course, the above is just one example, and the first, second and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the comparison result of A(m+n+1+W/8) and A(2m+n+1) is the same. If different, the vertical right angle prediction mode is added to the motion information prediction mode candidate list. If the same, the above comparison method is further used to judge whether the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is different, the vertical right angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same, the addition of the vertical right angle prediction mode to the motion information prediction mode candidate list is prohibited.
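For reference, the block indices quoted in the paragraphs above can be collected into one table. This is a sketch: the mode-name strings are shorthand introduced here, the indices are transcribed from the text, and W/8 = m/2, H/8 = n/2 since m = W/4, n = H/4 and each peripheral block is 4×4:

```python
def mode_block_indices(mode, m, n):
    """Peripheral block indices A(i) used as the first/second(/third)
    matching blocks for each motion information angle prediction mode,
    with m = W/4 and n = H/4 (so W/8 = m // 2 and H/8 = n // 2)."""
    w8, h8 = m // 2, n // 2
    table = {
        "horizontal":      (m - 1 + h8, m + n - 1),
        "horizontal_down": (w8 - 1, m - 1, m - 1 + h8),
        "horizontal_up":   (m + n - 1, m + n, m + n + 1),
        "vertical":        (m + n + 1 + w8, m + n + 1),
        "vertical_right":  (m + n + 1 + w8, 2 * m + n + 1, 2 * m + n + 1 + h8),
    }
    return table[mode]

# Example: a 16x16 current block gives m = n = 4, so the horizontal
# angle prediction mode compares A(5) and A(7).
```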
Application scenario 2: similar to the implementation of application scenario 1, except that: in the application scenario 2, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. For example, the above-described processing is performed regardless of whether a left neighboring block of the current block exists or not and whether an upper neighboring block of the current block exists or not.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other processes are similar to the application scenario 1 and are not described in detail herein.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral blocks of the current block do not exist means that the peripheral blocks are located outside the image where the current block is located, or the peripheral blocks are located inside the image where the current block is located, but the peripheral blocks are located outside the image slice where the current block is located. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
Referring to FIG. 8A, the width of the current block is W and the height of the current block is H. Let m = W/4 and n = H/4, and let the pixel at the upper left corner of the current block be (x, y). The peripheral block in which the pixel (x-1, y+H+W-1) is located is denoted A(0), and the size of A(0) is 4×4. The peripheral blocks are traversed in the clockwise direction, and the 4×4 peripheral blocks are denoted A(1), A(2), …, A(2m+2n) in turn, where A(2m+2n) is the peripheral block in which the pixel (x+W+H-1, y-1) is located.
For each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and the peripheral matching blocks to be traversed are selected from the plurality of peripheral matching blocks. Unlike application scenario 1, if at least one of the first and second peripheral matching blocks does not have available motion information, or if both the first and second peripheral matching blocks have available motion information and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If the available motion information exists in the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is forbidden to be added to the motion information prediction mode candidate list; or, continuing to compare the second peripheral matched block to the third peripheral matched block.
If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. Or, if the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
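The inverted availability handling of application scenario 5 can be sketched as follows; illustrative only, with None again modelling a block that has no available motion information:

```python
def pair_differs_scenario5(a, b):
    """Application scenario 5: a pair compares as 'different' as soon as
    at least one block has no available motion information, or when both
    are available but carry different motion information -- the opposite
    availability handling to application scenario 1."""
    if a is None or b is None:
        return True   # unavailable block => pair counts as 'different'
    return a != b
```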
Based on the above comparison method, the corresponding processing flow refers to application scenario 1. For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal angle prediction mode, the above comparison method is used to judge whether the comparison result of A(m-1+H/8) and A(m+n-1) is the same. If the same, the addition of the horizontal angle prediction mode to the motion information prediction mode candidate list is prohibited. If different, the horizontal angle prediction mode is added to the motion information prediction mode candidate list.
For the horizontal down angle prediction mode, the above comparison method is used to judge whether the comparison result of A(W/8-1) and A(m-1) is the same. If different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list. If the same, the above comparison method is further used to judge whether the comparison result of A(m-1) and A(m-1+H/8) is the same. If the comparison result of A(m-1) and A(m-1+H/8) is different, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A(m-1) and A(m-1+H/8) is the same, the addition of the horizontal down angle prediction mode to the motion information prediction mode candidate list is prohibited.
Application scenario 6: similar to the implementation of application scenario 5, except that: it is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. For example, whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not, the processing is performed in the manner of the application scenario 5.
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. In the application scenario 8, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 9: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information. For each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and selecting at least one first peripheral matching block (such as one or more) from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks.
Each combination of a first peripheral matching block and a second peripheral matching block is taken as a matching block group. For example, A1, A3 and A5 are selected from the plurality of peripheral matching blocks as first peripheral matching blocks; A2 is selected from the plurality of peripheral matching blocks as the second peripheral matching block corresponding to A1, A4 as the second peripheral matching block corresponding to A3, and A6 as the second peripheral matching block corresponding to A5. Then matching block group 1 includes A1 and A2, matching block group 2 includes A3 and A4, and matching block group 3 includes A5 and A6. A1 through A6 above are any peripheral matching blocks among the plurality of peripheral matching blocks, and their selection manner may be configured based on experience, which is not limited.
For each matching block group, if available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If at least one of the two peripheral matching blocks in the matching block group does not have available motion information, or both of the two peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
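The matching-block-group rule above can be sketched as follows. A minimal illustration: each group is a (first, second) pair of motion-information values, with None meaning no available motion information:

```python
def add_mode_by_groups(groups):
    """Application scenario 9: the angle prediction mode is added if any
    matching block group compares as 'different' (both blocks available
    and their motion information unequal); it is prohibited only when
    every group compares as 'same'."""
    for first, second in groups:
        if first is not None and second is not None and first != second:
            return True   # one differing group is enough to add the mode
    return False
```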
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal angle prediction mode, the above comparison method is used to judge at least one matching block group A(i) and A(j) (the value range of i and j is [m, m+n-1], i and j are different, and i and j may be selected arbitrarily within the value range). If the comparison results of all the matching block groups are the same, the addition of the horizontal angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal angle prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal down angle prediction mode, the above comparison method is used to judge at least one matching block group A(i) and A(j) (the value range of i and j is [0, m+n-2], and i and j are different). If the comparison results of all the matching block groups are the same, the addition of the horizontal down angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal down angle prediction mode is added to the motion information prediction mode candidate list. For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical angle prediction mode, the above comparison method is used to judge at least one matching block group A(i) and A(j) (the value range of i and j is [m+n+1, 2m+n], and i and j are different). If the comparison results of all the matching block groups are the same, the addition of the vertical angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical right angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [m+n+2, 2m+2n], and i and j differ). If the comparison results of all matching block groups are the same, adding the vertical right angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical right angle prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [0, m+n-2], and i and j differ). If the comparison results of all matching block groups are the same, adding the horizontal downward angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward angle prediction mode is added to the motion information prediction mode candidate list. For the horizontal angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [m, m+n-1], and i and j differ). If the comparison results of all matching block groups are the same, adding the horizontal angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal angle prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal upward angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [m+1, 2m+n-1], and i and j differ). If the comparison results of all matching block groups are the same, adding the horizontal upward angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal upward angle prediction mode is added to the motion information prediction mode candidate list.
For the vertical angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [m+n+1, 2m+n], and i and j differ). If the comparison results of all matching block groups are the same, adding the vertical angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical angle prediction mode is added to the motion information prediction mode candidate list. For the vertical right angle prediction mode, the above comparison method is used to judge at least one matching block group Ai and Aj (i and j take values in the range [m+n+2, 2m+2n], and i and j differ). If the comparison results of all matching block groups are the same, adding the vertical right angle prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical right angle prediction mode is added to the motion information prediction mode candidate list.
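The per-mode index ranges enumerated above can be collected in one table. The sketch below is only a summary of the ranges as stated in the text (inclusive endpoints, with i ≠ j chosen inside the range); the mode names and the function are ours.

```python
def candidate_index_range(mode, m, n):
    """Inclusive [lo, hi] range of peripheral-block indices i, j checked
    for each motion information angle prediction mode (m = W/4, n = H/4
    in the 4x4 peripheral-block example)."""
    ranges = {
        "horizontal_down": (0, m + n - 2),
        "horizontal":      (m, m + n - 1),
        "horizontal_up":   (m + 1, 2 * m + n - 1),
        "vertical":        (m + n + 1, 2 * m + n),
        "vertical_right":  (m + n + 2, 2 * m + 2 * n),
    }
    return ranges[mode]
```

For a 16 × 8 current block (m = 4, n = 2), the horizontal mode therefore compares groups drawn from A4..A5, and the vertical right mode from A8..A12.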
Application scenario 10: similar to the implementation of application scenario 9, except that: in the application scenario 10, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not.
Application scenario 11: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. The other processes are similar to the application scenario 9 and will not be described in detail here.
Application scenario 12: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. The other processes are similar to the application scenario 9 and will not be described in detail here.
Application scenario 13: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Unlike application scenario 9, the comparison may be:
for each matching block group, if at least one of the two peripheral matching blocks in the matching block group does not have available motion information, or both of them have available motion information and the motion information of the two peripheral matching blocks differs, the comparison result of the matching block group is different. If both of the two peripheral matching blocks in the matching block group have available motion information and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all matching block groups are the same, adding the motion information angle prediction mode to the motion information prediction mode candidate list is prohibited; if the comparison result of any matching block group is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list.
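This variant rule inverts how unavailable motion information is treated: here an unavailable block makes the group compare as different. A minimal illustrative sketch (motion information modeled as a tuple, None meaning unavailable; the name is ours):

```python
def group_is_different_v13(mv_a, mv_b):
    """Scenario-13 variant of the group comparison: an unavailable block
    already makes the comparison result 'different'."""
    if mv_a is None or mv_b is None:
        return True           # unavailable motion information -> 'different'
    return mv_a != mv_b       # both available -> compare the motion information
```

Note the only change from the scenario-9 rule is the value returned when a block has no available motion information.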
Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 14: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. The comparison method differs from that of application scenario 9 and can be seen in application scenario 13. Based on that comparison method, the other processes are similar to application scenario 9 and are not repeated here.
Application scenario 15: similar to the implementation of application scenario 9, except that: the comparison method differs from that of application scenario 9 and can be seen in application scenario 13. Other processes are similar to application scenario 9 and are not repeated here.
Application scenario 16: similar to the implementation of application scenario 9, except that: it is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. The comparison method differs from that of application scenario 9 and can be seen in application scenario 13. Other processes are similar to application scenario 9 and are not repeated here.
Example 6: in the above embodiments, regarding the filling of motion information for peripheral blocks without available motion information, the filling process is described below in connection with several specific application scenarios.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
Referring to FIG. 8A, the width of the current block is W and its height is H; let m = W/4 and n = H/4. The pixel at the upper-left corner of the current block is (x, y), and the peripheral block containing the pixel (x-1, y+H+W-1) is A0, where A0 is 4 × 4. The peripheral blocks are traversed in the clockwise direction, and each 4 × 4 peripheral block is denoted A1, A2, …, A2m+2n in turn, where A2m+2n is the peripheral block containing the pixel (x+W+H-1, y-1).
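The clockwise indexing just described can be made concrete by computing the top-left sample of each 4 × 4 peripheral block Ak. This is our own illustrative reconstruction of the layout (left column traversed bottom-to-top from A0, corner block Am+n, then the top row left-to-right up to A2m+2n); coordinate conventions are an assumption, not the patent's code.

```python
def peripheral_block_topleft(k, x, y, w, h):
    """Top-left sample of the k-th 4x4 peripheral block around a w x h
    current block whose top-left pixel is (x, y), indexed clockwise."""
    m, n = w // 4, h // 4
    if k <= m + n - 1:                        # left column, bottom to top
        return (x - 4, y + h + w - 4 - 4 * k)
    if k == m + n:                            # top-left corner block
        return (x - 4, y - 4)
    j = k - (m + n + 1)                       # top row, left to right
    return (x + 4 * j, y - 4)
```

For a 16 × 8 block at (0, 0): A0 contains pixel (-1, 23), so its top-left is (-4, 20); A2m+2n = A12 contains pixel (23, -1), so its top-left is (20, -4).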
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, the padding process is as follows: traverse sequentially from A0 to Am+n-1 to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the peripheral blocks traversed before Ai are all filled with the motion information of Ai. Judge whether i equals m+n-1; if so, the filling is complete and the filling process exits. Otherwise, traverse from Ai+1 to Am+n-1; if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest preceding peripheral block, until the traversal is finished.
Referring to FIG. 8A, assume Ai is A4. Then the peripheral blocks traversed before A4 (i.e. A0, A1, A2, A3) are all filled with the motion information of A4. Suppose that when traversing to A5 it is found that A5 has no available motion information; then A5 is filled with the motion information of its nearest preceding peripheral block A4. Suppose that when traversing to A6 it is found that A6 has no available motion information; then A6 is filled with the motion information of its nearest preceding peripheral block A5, and so on.
If the left neighboring block of the current block does not exist and the upper neighboring block exists, the padding process is as follows: traverse sequentially from Am+n+1 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than m+n+1, the peripheral blocks traversed before Ai are all filled with the motion information of Ai. Judge whether i equals 2m+2n; if so, the filling is complete and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest preceding peripheral block, until the traversal is finished.
If both the left neighboring block and the upper neighboring block of the current block exist, the padding process is as follows: traverse sequentially from A0 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the peripheral blocks traversed before Ai are all filled with the motion information of Ai. Judge whether i equals 2m+2n; if so, the filling is complete and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest preceding peripheral block, until the traversal is finished.
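The three cases above all apply the same two-pass procedure to some contiguous run of peripheral blocks. A minimal Python sketch under our own data model (a list of motion-information tuples, None meaning no available motion information; not the patent's implementation):

```python
def fill_motion_info(blocks):
    """Back-fill everything before the first available block with that
    block's motion information, then fill each later unavailable block
    with the motion information of its nearest preceding block."""
    first = next((i for i, mv in enumerate(blocks) if mv is not None), None)
    if first is None:
        return blocks                 # nothing available; left to other rules
    for i in range(first):            # blocks before A_first use A_first
        blocks[i] = blocks[first]
    for i in range(first + 1, len(blocks)):
        if blocks[i] is None:         # nearest preceding peripheral block
            blocks[i] = blocks[i - 1]
    return blocks
```

This matches the worked example of FIG. 8A: with the first available block at index 4, indices 0..3 inherit its motion information, and later gaps propagate forward from their immediate predecessor.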
In the above embodiment, a peripheral block without available motion information may be an uncoded block or an intra block.
Application scenario 2: similar to the implementation of application scenario 1, except that: it is not distinguished whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. Regardless of whether the left neighboring block and the upper neighboring block of the current block exist, the processing is as follows: traverse sequentially from A0 to A2m+2n to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the peripheral blocks traversed before Ai are all filled with the motion information of Ai. Judge whether i equals 2m+2n; if so, the filling is complete and the filling process exits. Otherwise, traverse from Ai+1 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest preceding peripheral block, until the traversal is finished.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. For other implementation processes, refer to application scenario 1; they are not described in detail here.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. For other implementation processes, refer to application scenario 1; they are not described in detail here.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
If the left neighboring block of the current block exists and the upper neighboring block does not exist, the padding process is as follows: traverse sequentially from A0 to Am+n-1; if the motion information of a traversed peripheral block is unavailable, the motion information of that peripheral block is filled with zero motion information or with the motion information of the temporally co-located position of the peripheral block. If the left neighboring block of the current block does not exist and the upper neighboring block exists, the padding process is as follows: traverse sequentially from Am+n+1 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, the motion information of that peripheral block is filled with zero motion information or with the motion information of the temporally co-located position of the peripheral block. If both the left neighboring block and the upper neighboring block of the current block exist, the padding process is as follows: traverse sequentially from A0 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, the motion information of that peripheral block is filled with zero motion information or with the motion information of the temporally co-located position of the peripheral block.
Application scenario 6: similar to the implementation of application scenario 5, except that: it is not distinguished whether the left neighboring block and the upper neighboring block of the current block exist. Regardless of whether they exist, traverse sequentially from A0 to A2m+2n; if the motion information of a traversed peripheral block is unavailable, the motion information of that peripheral block is filled with zero motion information or with the motion information of the temporally co-located position of the peripheral block.
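Unlike the nearest-neighbor filling of scenarios 1-4, the filling in scenarios 5 and 6 treats each unavailable block independently. An illustrative sketch of the zero-motion option (the temporal co-located alternative is mentioned in the text but not modeled here; the constant and names are ours):

```python
ZERO_MV = (0, 0)  # placeholder zero motion information; representation is ours

def fill_with_zero(blocks):
    """Fill each peripheral block whose motion information is unavailable
    (None) with zero motion information, independently of its neighbors."""
    return [mv if mv is not None else ZERO_MV for mv in blocks]
```

Because each block is filled on its own, the result does not depend on traversal order, in contrast to the propagation-based filling of the earlier scenarios.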
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenarios 9-16: similar to the implementations of application scenarios 1-8, except that: the width of the current block is W, the height of the current block is H, m = W/8, n = H/8, the peripheral block A0 is 8 × 8, and each 8 × 8 peripheral block is denoted A1, A2, …, A2m+2n. That is, the size of each peripheral block changes from 4 × 4 to 8 × 8; for the other implementation processes, refer to the application scenarios above, which are not repeated here.
Application scenario 17: referring to fig. 8B, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in minimum units of 4 × 4. Suppose A14, A15, A16 and A17 are uncoded blocks; they need to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the co-located block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, it can also be filled in the above manner, which is not described again here.
Application scenario 18: referring to fig. 8C, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in minimum units of 4 × 4. Suppose A7 is an intra block; it needs to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the co-located block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, it can also be filled in the above manner, which is not described again here.
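The three filling options listed in scenarios 17 and 18 might be sketched as follows. This is purely illustrative: the function name, the list-based data model, the default motion information value, and the neighbor-selection rule are all our own assumptions, since the text leaves them open.

```python
def fill_unit(units, i, method="neighbor", default_mv=(0, 0), temporal=None):
    """Fill the i-th stored-motion unit (e.g. an uncoded or intra 4x4 unit)
    using one of the three options listed in the text."""
    if method == "neighbor":      # motion info of an adjacent unit (here: previous)
        return units[i - 1] if i > 0 else units[i + 1]
    if method == "default":       # default (e.g. zero) motion information
        return default_mv
    if method == "temporal":      # co-located unit in the temporal reference frame
        return temporal[i]
    raise ValueError(method)
```

Any of the three options would resolve the A14..A17 (uncoded) or A7 (intra) gaps described above; which one is used is an encoder/decoder design choice.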
Application scenario 19: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
Referring to FIG. 8A, the width of the current block is W and its height is H; let m = W/4 and n = H/4. The pixel at the upper-left corner of the current block is (x, y), and the peripheral block containing the pixel (x-1, y+H+W-1) is A0, where A0 is 4 × 4. The peripheral blocks are traversed in the clockwise direction, and each 4 × 4 peripheral block is denoted A1, A2, …, A2m+2n in turn, where A2m+2n is the peripheral block containing the pixel (x+W+H-1, y-1).
If the motion information angle prediction mode is the horizontal downward angle prediction mode, the traversal range is A0 to Am+n-2. Traverse sequentially from A0 to Am+n-2 to find the first peripheral block with available motion information, denoted Ai. If i is greater than 0, the peripheral blocks traversed before Ai are all filled with the motion information of Ai. Judge whether i equals m+n-2; if so, the filling is complete and the filling process exits. Otherwise, traverse from Ai+1 to Am+n-2; if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest preceding peripheral block, until the traversal is finished.
If the motion information angle prediction mode is the horizontal angle prediction mode, the traversal range is Am to Am+n-1, traversed sequentially from Am to Am+n-1; for the specific filling process, refer to the above embodiment, which is not repeated here.

If the motion information angle prediction mode is the horizontal upward angle prediction mode, the traversal range is Am+1 to A2m+n-1, traversed sequentially from Am+1 to A2m+n-1; for the specific filling process, refer to the above embodiment, which is not repeated here.

If the motion information angle prediction mode is the vertical angle prediction mode, the traversal range is Am+n+1 to A2m+n, traversed sequentially from Am+n+1 to A2m+n; for the specific filling process, refer to the above embodiment, which is not repeated here.

If the motion information angle prediction mode is the vertical right angle prediction mode, the traversal range is Am+n+2 to A2m+2n, traversed sequentially from Am+n+2 to A2m+2n; for the specific filling process, refer to the above embodiment, which is not repeated here.
Example 7: in the above embodiments, the motion compensation is performed by using a motion information angle prediction mode, for example, motion information of each sub-region of a current block is determined according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode, and for each sub-region, a motion compensation value of the sub-region is determined according to the motion information of the sub-region. The determination of the motion compensation value for each sub-region is described below with reference to a specific application scenario.
Application scenario 1: based on the preset angle corresponding to the motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by the preset angle are selected from the peripheral blocks of the current block. The current block is divided into at least one sub-region; the dividing manner is not limited. For each sub-region of the current block, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block. Then, for each sub-region, the motion compensation value of the sub-region is determined according to the motion information of the sub-region; the determination process is not limited.
For example, the motion information of the selected peripheral matching block may be used as the motion information of the sub-region. If the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information can be used as the motion information of the sub-region; assuming that the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information may be used as the motion information of the sub-region, or one of the bidirectional motion information may be used as the motion information of the sub-region, or the other of the bidirectional motion information may be used as the motion information of the sub-region.
For example, the sub-region partition information may be independent of the motion information angle prediction mode; for instance, the sub-region partition information of the current block, according to which the current block is partitioned into at least one sub-region, may be determined according to the size of the current block. For example, if the width of the current block is greater than or equal to a preset size parameter (configured empirically, such as 8) and the height is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8, i.e. the current block is partitioned into at least one sub-region of size 8 × 8.
For example, the sub-region partition information may relate to the motion information angle prediction mode. When the motion information angle prediction mode is the horizontal upward angle prediction mode, the horizontal downward angle prediction mode, or the vertical right angle prediction mode: if the width of the current block is greater than or equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4. When the motion information angle prediction mode is the horizontal angle prediction mode: if the width of the current block is greater than the preset size parameter, the size of the sub-region is (width of the current block) × 4, or the size of the sub-region is 4 × 4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4. When the motion information angle prediction mode is the vertical angle prediction mode: if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 × (height of the current block), or the size of the sub-region is 4 × 4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8 × 8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4 × 4.
When the motion information angle prediction mode is the horizontal angle prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4 × 4. When the motion information angle prediction mode is the vertical angle prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4 × 4. Of course, the above are merely examples; the preset size parameter may be 8, or may be greater than 8.
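The mode-dependent sizing rules above can be sketched as follows. This is a non-authoritative reading of the translated text, with the preset size parameter assumed to be 8; where the text offers a full-row/full-column alternative for the horizontal and vertical modes, the 4 × 4 option is returned and the alternative is noted in a comment. All names are ours.

```python
def subregion_size(mode, w, h, preset=8):
    """Sub-region (width, height) for a w x h current block under one
    reading of the mode-dependent partition rules."""
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if (w >= preset and h >= preset) else (4, 4)
    if mode == "horizontal":
        if w > preset:
            return (4, 4)      # the text also allows (w, 4) full-width rows
        if w == preset and h >= preset:
            return (8, 8)
        return (4, 4)
    if mode == "vertical":
        if h > preset:
            return (4, 4)      # the text also allows (4, h) full-height columns
        if h == preset and w >= preset:
            return (8, 8)
        return (4, 4)
    raise ValueError(mode)
```

Under this reading, a 16 × 16 block uses 8 × 8 sub-regions for the diagonal-like modes, while small blocks fall back to 4 × 4 everywhere.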
Application scenario 2: a plurality of peripheral matching blocks pointed to by a preset angle are selected from the peripheral blocks of the current block based on the preset angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region in an 8 × 8 manner (i.e., the size of each sub-region is 8 × 8). For each sub-region of the current block, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block. For each sub-region, the motion compensation value of the sub-region is determined according to the motion information of the sub-region; the determination process is not limited.
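The 8 × 8 division step above can be sketched as a one-line tiling helper; the angle-based selection of the matching block for each tile is outside this sketch, and the function name is hypothetical.

```python
def split_into_sub_regions(width, height, size=8):
    """Top-left coordinates of each size x size sub-region of the block."""
    return [(x, y) for y in range(0, height, size) for x in range(0, width, size)]
```

A 16 × 16 block splits into four 8 × 8 sub-regions, and an 8 × 8 block into a single sub-region that coincides with the block itself.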
Application scenario 3: referring to fig. 9A, motion compensation is performed at an angle for each 4 × 4 sub-region in the current block. And if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-area. And if the motion information of the peripheral matching block is bidirectional motion information, determining the bidirectional motion information as the motion information of the sub-region, or determining the forward motion information or the backward motion information in the bidirectional motion information as the motion information of the sub-region.
According to fig. 9A, the size of the current block is 4 × 8. When the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into two sub-regions of the same size; one 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1, while the other 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. When the target motion information prediction mode of the current block is the vertical mode, two sub-regions of the same size are divided; both 4 × 4 sub-regions correspond to the peripheral matching block B1, and their motion information is determined according to the motion information of B1. When the target motion information prediction mode of the current block is the horizontal upward mode, two sub-regions of the same size are divided; one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E, while the other corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1. When the target motion information prediction mode of the current block is the horizontal downward mode, two sub-regions of the same size are divided; one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2.
The other 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. When the target motion information prediction mode of the current block is the vertical rightward mode, two sub-regions of the same size are divided; one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2, while the other corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3.
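The correspondences for the 4 × 8 block of fig. 9A can be written as a small lookup table, one matching-block label per 4 × 4 sub-region from top to bottom. The labels follow the figure as described above; the function name and the dictionary form are illustrative assumptions.

```python
# Matching blocks per prediction mode for the 4 x 8 block of fig. 9A.
FIG_9A_MATCHING = {
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B1"],  # both sub-regions lie in the same column
    "horizontal_up":   ["E",  "A1"],
    "horizontal_down": ["A2", "A3"],
    "vertical_right":  ["B2", "B3"],
}

def sub_region_motion_info(mode, neighbour_mv):
    """Look up each sub-region's motion information from its matching block."""
    return [neighbour_mv[label] for label in FIG_9A_MATCHING[mode]]
```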
Application scenario 4: referring to fig. 9B, if the width W of the current block is less than 8 and the height H of the current block is greater than 8, motion compensation can be performed on each sub-region in the current block as follows: and if the angle prediction mode is the vertical angle prediction mode, performing motion compensation on each 4 × H sub-region according to the vertical angle. If the angular prediction mode is other angular prediction modes (such as a horizontal angular prediction mode, a horizontal upward angular prediction mode, a horizontal downward angular prediction mode, a vertical right angular prediction mode, etc.), motion compensation may be performed at an angle for each 4 × 4 sub-region in the current block.
According to fig. 9B, the size of the current block is 4 × 16. When the target motion information prediction mode of the current block is the horizontal mode, four sub-regions of size 4 × 4 are divided; they correspond to the peripheral matching blocks A1, A2, A3, and A4, and the motion information of each 4 × 4 sub-region is determined according to the motion information of its corresponding peripheral matching block. When the target motion information prediction mode of the current block is the vertical mode, four sub-regions of size 4 × 4 may be divided; every 4 × 4 sub-region corresponds to the peripheral matching block B1, and the motion information of every 4 × 4 sub-region is determined according to the motion information of B1. The motion information of the four sub-regions is the same, so in this embodiment the current block itself may not be divided into sub-regions; the current block itself serves as one sub-region corresponding to the peripheral matching block B1, and the motion information of the current block is determined according to the motion information of B1.
When the target motion information prediction mode of the current block is the horizontal upward mode, four sub-regions of size 4 × 4 are divided; one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E. One 4 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1. One 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. One 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. When the target motion information prediction mode of the current block is the horizontal downward mode, four sub-regions of size 4 × 4 are divided; one 4 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. One 4 × 4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. One 4 × 4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4. One 4 × 4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5.
When the target motion information prediction mode of the current block is the vertical rightward mode, four sub-regions of size 4 × 4 are divided; one 4 × 4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2. One 4 × 4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3. One 4 × 4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4. One 4 × 4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5.
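The per-mode correspondences for the 4 × 16 block of fig. 9B can likewise be tabulated; the labels follow the description above, and the helper illustrating the "need not divide" remark for the vertical mode is an assumption of this sketch.

```python
# Matching blocks per prediction mode for the 4 x 16 block of fig. 9B,
# one label per 4 x 4 sub-region.
FIG_9B_MATCHING = {
    "horizontal":      ["A1", "A2", "A3", "A4"],
    "vertical":        ["B1", "B1", "B1", "B1"],
    "horizontal_up":   ["E",  "A1", "A2", "A3"],
    "horizontal_down": ["A2", "A3", "A4", "A5"],
    "vertical_right":  ["B2", "B3", "B4", "B5"],
}

def uses_single_matching_block(mode):
    """True when every sub-region shares one matching block, so the
    current block need not be divided at all (the vertical case above)."""
    return len(set(FIG_9B_MATCHING[mode])) == 1
```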
Application scenario 5: referring to fig. 9C, if the width W of the current block is greater than 8 and the height H of the current block is less than 8, each sub-region in the current block may be motion compensated as follows: if the angle prediction mode is the horizontal angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle. If the angle prediction mode is another angle prediction mode, motion compensation may be performed according to a certain angle for each 4 × 4 sub-region in the current block.
According to fig. 9C, the size of the current block is 16 × 4. When the target motion information prediction mode of the current block is the horizontal mode, four sub-regions of size 4 × 4 may be divided; every 4 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of every 4 × 4 sub-region is determined according to the motion information of A1. The motion information of the four sub-regions is the same, so in this embodiment the current block itself may not be divided into sub-regions; the current block itself serves as one sub-region corresponding to the peripheral matching block A1, and the motion information of the current block is determined according to the motion information of A1. When the target motion information prediction mode of the current block is the vertical mode, four sub-regions of size 4 × 4 are divided; they correspond to the peripheral matching blocks B1, B2, B3, and B4, and the motion information of each 4 × 4 sub-region is determined according to the motion information of its corresponding peripheral matching block.
When the target motion information prediction mode of the current block is the horizontal upward mode, four sub-regions of size 4 × 4 are divided; one 4 × 4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E, while the remaining 4 × 4 sub-regions correspond to the peripheral matching blocks B1, B2, and B3, and the motion information of each is determined according to the motion information of its corresponding peripheral matching block. When the target motion information prediction mode of the current block is the horizontal downward mode, four sub-regions of size 4 × 4 are divided; they correspond to the peripheral matching blocks A2, A3, A4, and A5, and the motion information of each 4 × 4 sub-region is determined according to the motion information of its corresponding peripheral matching block.
When the target motion information prediction mode of the current block is a vertical right mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B2. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B3, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B3. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B4, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B4. One of the 4 × 4 sub-regions corresponds to the peripheral matching block B5, and the motion information of the 4 × 4 sub-region is determined according to the motion information of B5.
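Scenarios 4 and 5 together form a simple dispatch rule for narrow blocks: keep the long axis whole only when the prediction angle runs along it, and fall back to 4 × 4 otherwise. A hedged sketch (function name and mode strings are assumptions):

```python
def compensation_unit(width, height, mode):
    """Sub-region size used for motion compensation when one side is
    under 8 (scenarios 4 and 5)."""
    if width < 8 and height > 8:
        # tall, narrow block (fig. 9B)
        return (4, height) if mode == "vertical" else (4, 4)
    if width > 8 and height < 8:
        # short, wide block (fig. 9C)
        return (width, 4) if mode == "horizontal" else (4, 4)
    raise ValueError("one side must be under 8 for these scenarios")
```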
Application scenario 6: if the width W of the current block is equal to 8 and the height H of the current block is equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block (i.e., the sub-region is the current block itself) according to a certain angle. If the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks may be selected, according to the corresponding angle, as the motion information of the sub-region. For example, referring to fig. 9D, for the horizontal angle prediction mode, the motion information of the peripheral matching block A1 may be selected, and the motion information of the peripheral matching block A2 may also be selected. Referring to fig. 9E, for the vertical angle prediction mode, the motion information of the peripheral matching block B1 may be selected, and the motion information of the peripheral matching block B2 may also be selected. Referring to fig. 9F, for the horizontal upward angle prediction mode, the motion information of the peripheral matching block E, B1, or A1 may be selected. Referring to fig. 9G, for the horizontal downward angle prediction mode, the motion information of the peripheral matching block A2, A3, or A4 may be selected. Referring to fig. 9H, for the vertical rightward angle prediction mode, the motion information of the peripheral matching block B2, B3, or B4 may be selected.
According to fig. 9D, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into one sub-region of size 8 × 8; the sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; or the sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. According to fig. 9E, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the vertical mode, one sub-region of size 8 × 8 is divided; the sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; or the sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2. According to fig. 9F, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal upward mode, one sub-region of size 8 × 8 is divided; the sub-region corresponds to the peripheral matching block E, B1, or A1, and its motion information is determined according to the motion information of the corresponding peripheral matching block. According to fig. 9G, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the horizontal downward mode, one sub-region of size 8 × 8 is divided; the sub-region corresponds to the peripheral matching block A2, A3, or A4, and its motion information is determined according to the motion information of the corresponding peripheral matching block. According to fig. 9H, when the size of the current block is 8 × 8 and the target motion information prediction mode of the current block is the vertical rightward mode, one sub-region of size 8 × 8 is divided; the sub-region corresponds to the peripheral matching block B2, B3, or B4, and its motion information is determined according to the motion information of the corresponding peripheral matching block.
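The per-mode candidate lists of figs. 9D-9H can be tabulated, with any one available candidate usable as the sub-region's motion information. The "first available" policy below is an illustrative assumption; the text only says any one candidate may be selected.

```python
# Candidate matching blocks per mode for an 8 x 8 block (figs. 9D-9H).
CANDIDATES_8X8 = {
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B2"],
    "horizontal_up":   ["E", "B1", "A1"],
    "horizontal_down": ["A2", "A3", "A4"],
    "vertical_right":  ["B2", "B3", "B4"],
}

def pick_motion_info(mode, neighbour_mv):
    """Pick the first candidate whose motion information is available."""
    for label in CANDIDATES_8X8[mode]:
        if label in neighbour_mv:
            return neighbour_mv[label]
    return None  # no candidate available for this mode
```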
Application scenario 7: the width W of the current block is greater than or equal to 16, and the height H of the current block is equal to 8; on this basis, each sub-region within the current block can be motion compensated in the following way: if the angle prediction mode is the horizontal angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle. If the angle prediction mode is another angle prediction mode, motion compensation is performed according to a certain angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 9I, for the horizontal angle prediction mode, the motion information of the peripheral matching block A1 may be selected for the first W × 4 sub-region, and the motion information of the peripheral matching block A2 may be selected for the second W × 4 sub-region. Referring to fig. 9J, for the vertical angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block B1 or B2 may be selected; for the second 8 × 8 sub-region, the motion information of the peripheral matching block B3 or B4 may be selected. Other angle prediction modes are similar and are not described herein again. According to fig. 9I, the size of the current block is 16 × 8. When the target motion information prediction mode of the current block is the horizontal mode, two sub-regions of size 16 × 4 are divided; one 16 × 4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1, while the other 16 × 4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2.
According to fig. 9J, the size of the current block is 16 × 8, and when the target motion information prediction mode is the vertical mode, 2 sub-regions with the size of 8 × 8 are divided, wherein one sub-region with 8 × 8 corresponds to the peripheral matching block B1 or B2, and the motion information of the sub-region with 8 × 8 is determined according to the motion information of B1 or B2. The other 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 × 8 sub-region is determined according to the motion information of B3 or B4.
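The scenario-7 partition can be sketched as follows: W × 4 rows for the horizontal mode, 8 × 8 tiles otherwise. Regions are given as (x, y, w, h) tuples; the function name and the fixed height of 8 mirror the scenario's precondition.

```python
def partition_w16_h8(width, mode):
    """Partition a width x 8 block (scenario 7): W x 4 rows for the
    horizontal angle prediction mode, 8 x 8 tiles for other modes."""
    if mode == "horizontal":
        return [(0, y, width, 4) for y in (0, 4)]
    return [(x, 0, 8, 8) for x in range(0, width, 8)]
```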
Application scenario 8: the width W of the current block is equal to 8, and the height H of the current block is greater than or equal to 16; on this basis, each sub-region within the current block can be motion compensated in the following way: if the angle prediction mode is the vertical angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle. If the angle prediction mode is another angle prediction mode, motion compensation is performed according to a certain angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 9K, for the vertical angle prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region. Referring to fig. 9L, for the horizontal angle prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block A1 or A2 may be selected; for the second 8 × 8 sub-region, the motion information of the peripheral matching block A1 or A2 may also be selected. Other angle prediction modes are similar and are not described herein again. According to fig. 9K, the size of the current block is 8 × 16. When the target motion information prediction mode of the current block is the vertical mode, two sub-regions of size 4 × 16 are divided; one 4 × 16 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1, while the other 4 × 16 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2.
According to fig. 9L, the size of the current block is 8 × 16. When the target motion information prediction mode is the horizontal mode, two sub-regions of size 8 × 8 are divided; each 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of each 8 × 8 sub-region is determined according to the motion information of its corresponding peripheral matching block.
Application scenario 9: the width W of the current block may be greater than or equal to 16, and the height H of the current block may be greater than or equal to 16; on this basis, each sub-region within the current block may be motion compensated in the following way: if the angle prediction mode is the vertical angle prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle. If the angle prediction mode is the horizontal angle prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle. If the angle prediction mode is another angle prediction mode, motion compensation is performed according to a certain angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region.
Referring to fig. 9M, for the vertical angle prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, the motion information of the peripheral matching block B2 for the second 4 × H sub-region, the motion information of the peripheral matching block B3 for the third 4 × H sub-region, and the motion information of the peripheral matching block B4 for the fourth 4 × H sub-region. For the horizontal angle prediction mode, the motion information of the peripheral matching block A1 is selected for the first W × 4 sub-region, the motion information of the peripheral matching block A2 for the second W × 4 sub-region, the motion information of the peripheral matching block A3 for the third W × 4 sub-region, and the motion information of the peripheral matching block A4 for the fourth W × 4 sub-region. Other angle prediction modes are similar and are not described herein again.
According to fig. 9M, the size of the current block is 16 × 16. When the target motion information prediction mode is the vertical mode, four sub-regions of size 4 × 16 are divided; one 4 × 16 sub-region corresponds to the peripheral matching block B1, one to B2, one to B3, and one to B4, and the motion information of each 4 × 16 sub-region is determined according to the motion information of its corresponding peripheral matching block.
According to fig. 9M, when the size of the current block is 16 × 16 and the target motion information prediction mode of the current block is the horizontal mode, four sub-regions of size 16 × 4 are divided; one 16 × 4 sub-region corresponds to the peripheral matching block A1, one to A2, one to A3, and one to A4, and the motion information of each 16 × 4 sub-region is determined according to the motion information of its corresponding peripheral matching block.
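The scenario-9 partition for large blocks can be sketched as follows, with regions as (x, y, w, h) tuples: 4 × H columns for the vertical mode, W × 4 rows for the horizontal mode, and 8 × 8 tiles otherwise. The function name is an assumption of this sketch.

```python
def partition_large_block(width, height, mode):
    """Scenario-9 partition (W >= 16 and H >= 16)."""
    if mode == "vertical":
        return [(x, 0, 4, height) for x in range(0, width, 4)]
    if mode == "horizontal":
        return [(0, y, width, 4) for y in range(0, height, 4)]
    # other angle prediction modes: 8 x 8 tiles
    return [(x, y, 8, 8) for y in range(0, height, 8) for x in range(0, width, 8)]
```

For a 16 × 16 block this yields four 4 × 16 columns in the vertical mode, four 16 × 4 rows in the horizontal mode, and four 8 × 8 tiles in the remaining modes, matching the fig. 9M and fig. 9N descriptions.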
Application scenario 10: the width W of the current block may be greater than or equal to 8, and the height H of the current block may be greater than or equal to 8, and then motion compensation is performed on each 8 × 8 sub-region within the current block. Referring to fig. 9N, for each sub-area of 8 × 8, if the sub-area corresponds to a plurality of peripheral matching blocks, the motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks with respect to the motion information of the sub-area. The sub-region division size is independent of the motion information angle prediction mode, and as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, the sub-region division size may be 8 × 8 in any motion information angle prediction mode.
According to fig. 9N, the size of the current block is 16 × 16. When the target motion information prediction mode of the current block is the horizontal mode, four sub-regions of size 8 × 8 are divided; each of two 8 × 8 sub-regions corresponds to the peripheral matching block A1 or A2, and its motion information is determined according to the motion information of A1 or A2, while each of the other two 8 × 8 sub-regions corresponds to the peripheral matching block A3 or A4, and its motion information is determined according to the motion information of A3 or A4. When the target motion information prediction mode of the current block is the vertical mode, four sub-regions of size 8 × 8 are divided; each of two 8 × 8 sub-regions corresponds to the peripheral matching block B1 or B2, and its motion information is determined according to the motion information of B1 or B2, while each of the other two 8 × 8 sub-regions corresponds to the peripheral matching block B3 or B4, and its motion information is determined according to the motion information of B3 or B4.
When the target motion information prediction mode of the current block is the horizontal-up mode, 4 sub-regions with the size of 8 × 8 may likewise be divided. Then, for each 8 × 8 sub-region, a peripheral matching block (e.g., E, B2 or A2) corresponding to the sub-region may be determined, the determination manner being not limited, and the motion information of the sub-region is determined based on the motion information of that peripheral matching block. When the target motion information prediction mode of the current block is the horizontal-down mode, 4 sub-regions with the size of 8 × 8 are divided; for each 8 × 8 sub-region, a peripheral matching block (e.g., A3, A5 or A7) corresponding to the sub-region may be determined, without limitation, and the motion information of the sub-region is determined based on the motion information of that peripheral matching block. When the target motion information prediction mode of the current block is the vertical-right mode, 4 sub-regions with the size of 8 × 8 are divided; for each 8 × 8 sub-region, a peripheral matching block (e.g., B3, B5 or B7) corresponding to the sub-region may be determined, without limitation, and the motion information of the sub-region is determined based on the motion information of that peripheral matching block.
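Illustratively, the assignment of peripheral motion information to 8 × 8 sub-regions under the horizontal and vertical modes may be sketched as follows. This is only an illustrative sketch: the helper name `fill_motion_info`, the use of one representative peripheral block per 8-pixel band, and the block labels are assumptions, not the reference implementation of this embodiment.

```python
def fill_motion_info(mode, left_mvs, top_mvs, block_w=16, block_h=16, sub=8):
    """Return a dict {(sx, sy): mv} mapping each 8x8 sub-region origin to the
    motion information copied from one corresponding peripheral matching block.

    left_mvs: one representative left-neighbour motion info per 8-row band
              (e.g. A1-or-A2 for the top band, A3-or-A4 for the bottom band).
    top_mvs:  one representative top-neighbour motion info per 8-column band.
    """
    result = {}
    for sy in range(0, block_h, sub):
        for sx in range(0, block_w, sub):
            if mode == "horizontal":
                # horizontal mode: copy from the left column, by row band
                result[(sx, sy)] = left_mvs[sy // sub]
            elif mode == "vertical":
                # vertical mode: copy from the upper row, by column band
                result[(sx, sy)] = top_mvs[sx // sub]
            else:
                raise ValueError("only horizontal/vertical are sketched here")
    return result

# 16x16 block: the two top-band sub-regions copy A1 (or A2), the two
# bottom-band sub-regions copy A3 (or A4).
mvs = fill_motion_info("horizontal", left_mvs=["A1", "A3"], top_mvs=["B1", "B3"])
```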
Application scenario 11: when the width W of the current block is greater than or equal to 8 and the height H of the current block is greater than or equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block, and for each sub-region, any one piece of motion information is selected from the several peripheral matching blocks indicated by the corresponding angle, as shown in fig. 9N, which is not described herein again.
Based on the above application scenarios, for each sub-region of the current block, the motion information of the sub-region may be determined according to the motion information of the peripheral matching block, and the motion compensation value of the sub-region may be determined according to the motion information of the sub-region, where the determination manner of the motion compensation value is not limited. Referring to the foregoing embodiment 1, if the motion information of the sub-region is bidirectional motion information, and the current frame in which the sub-region is located lies between the two reference frames in temporal order, the motion compensation value of the sub-region includes a forward motion compensation value and a backward motion compensation value; if the motion information of the sub-region is unidirectional motion information, the sub-region corresponds to one motion compensation value. If the motion information of the sub-region is bidirectional motion information and the current frame in which the sub-region is located does not lie between the two reference frames in temporal order, the motion compensation value of the sub-region includes a first motion compensation value and a second motion compensation value.
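Illustratively, the case analysis above may be sketched as follows. The helper name and the use of picture order counts (POC) to express "between the two reference frames in temporal order" are assumptions made for illustration only.

```python
def compensation_values(bidirectional, cur_poc, ref0_poc=None, ref1_poc=None):
    """Which motion compensation values a sub-region produces, assuming
    display order is given by picture order counts (an assumed convention)."""
    if not bidirectional:
        return ("single",)                      # one motion compensation value
    if min(ref0_poc, ref1_poc) < cur_poc < max(ref0_poc, ref1_poc):
        return ("forward", "backward")          # current frame between the refs
    return ("first", "second")                  # both refs on the same side

# bidirectional motion info, current frame between its two reference frames:
assert compensation_values(True, cur_poc=8, ref0_poc=4, ref1_poc=12) == ("forward", "backward")
```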
Example 8: in the above embodiments, bidirectional optical flow processing is performed on the sub-region of the current block, that is, after obtaining the motion compensation value of each sub-region, for each sub-region inside the current block that satisfies the condition of using bidirectional optical flow, a bidirectional optical flow offset value is superimposed on the motion compensation value of the sub-region using bidirectional optical flow technology (BIO). For example, for each sub-region of the current block, if the sub-region satisfies the condition of using bidirectional optical flow, a forward motion compensation value and a backward motion compensation value of the sub-region are determined, and a target prediction value of the sub-region is determined according to the forward motion compensation value, the backward motion compensation value and a bidirectional optical flow offset value of the sub-region. And if the sub-area does not meet the condition of using the bidirectional optical flow, determining a motion compensation value of the sub-area, and then determining a target prediction value of the sub-area according to the motion compensation value.
Illustratively, obtaining the bi-directional optical flow offset values for the sub-regions may include, but is not limited to: determining a first pixel value and a second pixel value according to the motion information of the sub-area; the first pixel value is a forward motion compensation value and a forward extension value for the subregion, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel location of a forward reference frame; the second pixel value is a backward motion compensation value and a backward extension value of the sub-region, and the backward extension value is copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-regions. Then, a bi-directional optical-flow offset value for the sub-area is determined based on the first pixel value and the second pixel value.
Illustratively, obtaining the bi-directional optical flow offset value of the sub-area may be achieved by:
Step c1: determine the first pixel value and the second pixel value according to the motion information of the sub-region.
Step c2: determine, according to the first pixel value and the second pixel value, the autocorrelation coefficient S1 of the horizontal gradient sum, the cross-correlation coefficient S2 of the horizontal gradient sum and the vertical gradient sum, the cross-correlation coefficient S3 of the temporal prediction difference and the horizontal gradient sum, the autocorrelation coefficient S5 of the vertical gradient sum, and the cross-correlation coefficient S6 of the temporal prediction difference and the vertical gradient sum.
For example, the coefficients S1, S2, S3, S5 and S6 can be calculated using the following formulas, where Ω denotes the set of pixel positions (i, j) used for the sub-region:

S1 = Σ(i,j)∈Ω ψx(i,j)·ψx(i,j)

S2 = Σ(i,j)∈Ω ψx(i,j)·ψy(i,j)

S3 = Σ(i,j)∈Ω θ(i,j)·ψx(i,j)

S5 = Σ(i,j)∈Ω ψy(i,j)·ψy(i,j)

S6 = Σ(i,j)∈Ω θ(i,j)·ψy(i,j)
Exemplarily, ψx(i,j), ψy(i,j) and θ(i,j) can be calculated as follows, where ∂I/∂x and ∂I/∂y denote the horizontal and vertical gradients of the pixel values:

ψx(i,j) = ∂I(1)(i,j)/∂x + ∂I(0)(i,j)/∂x

ψy(i,j) = ∂I(1)(i,j)/∂y + ∂I(0)(i,j)/∂y

θ(i,j) = I(1)(i,j) − I(0)(i,j)
I(0)(x,y) is the first pixel value, i.e. the forward motion compensation value of the sub-region and its forward extension value; I(1)(x,y) is the second pixel value, i.e. the backward motion compensation value of the sub-region and its backward extension value. Illustratively, the forward extension value may be copied from the forward motion compensation value or may be obtained from a reference pixel position of the forward reference frame. The backward extension value may be copied from the backward motion compensation value or may be obtained from a reference pixel position of the backward reference frame. The forward reference frame and the backward reference frame are determined according to the motion information of the sub-region.
ψx(i,j) is the sum of the horizontal gradients of the pixel at position (i,j) in the forward and backward reference frames, i.e. ψx(i,j) represents the horizontal gradient sum; ψy(i,j) is the sum of the vertical gradients of the pixel at position (i,j) in the forward and backward reference frames, i.e. ψy(i,j) represents the vertical gradient sum; and θ(i,j) represents the pixel difference between the corresponding positions of the forward reference frame and the backward reference frame, i.e. θ(i,j) represents the temporal prediction difference.
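Illustratively, step c2 may be sketched with numpy as follows. This is only an illustrative sketch: the function name, the use of `np.gradient` (central differences) for the gradients, and the summation window covering the whole extended sub-region are assumptions, not the gradient filter mandated by this embodiment.

```python
import numpy as np

def bio_correlations(I0, I1):
    """Given forward/backward pixel values I0, I1 over the extended sub-region,
    compute the correlation coefficients S1, S2, S3, S5, S6 defined above."""
    gx0, gy0 = np.gradient(I0, axis=1), np.gradient(I0, axis=0)
    gx1, gy1 = np.gradient(I1, axis=1), np.gradient(I1, axis=0)
    psi_x = gx1 + gx0            # horizontal gradient sum
    psi_y = gy1 + gy0            # vertical gradient sum
    theta = I1 - I0              # temporal prediction difference
    S1 = np.sum(psi_x * psi_x)   # autocorrelation of horizontal gradient sum
    S2 = np.sum(psi_x * psi_y)   # cross-correlation of the two gradient sums
    S3 = np.sum(theta * psi_x)   # cross-correlation of theta and psi_x
    S5 = np.sum(psi_y * psi_y)   # autocorrelation of vertical gradient sum
    S6 = np.sum(theta * psi_y)   # cross-correlation of theta and psi_y
    return S1, S2, S3, S5, S6
```

When the two predictions are identical, θ is zero everywhere, so S3 and S6 vanish and the derived velocities are zero.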
Step c3: determine the horizontal velocity vx (also called the refined motion vector vx) according to the autocorrelation coefficient S1 and the cross-correlation coefficient S3; determine the vertical velocity vy (also called the refined motion vector vy) according to the cross-correlation coefficient S2, the autocorrelation coefficient S5 and the cross-correlation coefficient S6.
For example, the horizontal velocity vx and the vertical velocity vy may be calculated using the following formulas:

vx = (S1 + r) > m ? clip3(−thBIO, thBIO, (S3 << 5)/(S1 + r)) : 0

vy = (S5 + r) > m ? clip3(−thBIO, thBIO, ((S6 << 6) − vx·S2)/((S5 + r) << 1)) : 0
In the above formulas, m and thBIO are both thresholds that can be configured according to experience, and r is a regularization term that avoids division by 0. clip3 ensures that vx lies between −thBIO and thBIO, and that vy lies between −thBIO and thBIO.
Exemplarily, if (S1 + r) > m holds, then vx = clip3(−thBIO, thBIO, (S3 << 5)/(S1 + r)); if (S1 + r) > m does not hold, vx = 0. thBIO limits vx to the range [−thBIO, thBIO], i.e. vx is greater than or equal to −thBIO and less than or equal to thBIO. For vx, clip3(a, b, x) means: if x is smaller than a, x = a; if x is greater than b, x = b; otherwise x is unchanged. In the above formula, −thBIO is a, thBIO is b, and (S3 << 5)/(S1 + r) is x; in summary, if (S3 << 5)/(S1 + r) is greater than −thBIO and less than thBIO, then vx is (S3 << 5)/(S1 + r).
If (S5 + r) > m holds, then vy = clip3(−thBIO, thBIO, ((S6 << 6) − vx·S2)/((S5 + r) << 1)); if (S5 + r) > m does not hold, vy = 0. thBIO limits vy to the range [−thBIO, thBIO], i.e. vy is greater than or equal to −thBIO and less than or equal to thBIO. For vy, clip3(a, b, x) means: if x is smaller than a, x = a; if x is greater than b, x = b; otherwise x is unchanged. In the above formula, −thBIO is a, thBIO is b, and ((S6 << 6) − vx·S2)/((S5 + r) << 1) is x; in summary, if ((S6 << 6) − vx·S2)/((S5 + r) << 1) is greater than −thBIO and less than thBIO, then vy is ((S6 << 6) − vx·S2)/((S5 + r) << 1).
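Illustratively, the clipped velocity derivation above may be sketched as follows, using integer arithmetic. The default values of m, r and thBIO are placeholders chosen for illustration, not values mandated by this embodiment.

```python
def clip3(a, b, x):
    """clip3(a, b, x): x clamped to the range [a, b]."""
    return a if x < a else b if x > b else x

def bio_velocity(S1, S2, S3, S5, S6, m=0, r=1, th_bio=255):
    """Step c3: derive the horizontal velocity vx and vertical velocity vy
    from the correlation coefficients, per the formulas above."""
    vx = clip3(-th_bio, th_bio, (S3 << 5) // (S1 + r)) if (S1 + r) > m else 0
    vy = (clip3(-th_bio, th_bio, ((S6 << 6) - vx * S2) // ((S5 + r) << 1))
          if (S5 + r) > m else 0)
    return vx, vy

# (10 << 5) // (10 + 1) = 320 // 11 = 29, within [-255, 255], so vx = 29
assert bio_velocity(S1=10, S2=0, S3=10, S5=10, S6=0) == (29, 0)
```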
Of course, the above is only one way to calculate vx and vy; vx and vy may also be calculated in other ways, which is not limited.
Step c4: obtain the bidirectional optical flow offset value b of the sub-region according to the horizontal velocity and the vertical velocity.
For example, the bidirectional optical flow offset value b of the sub-region may be calculated from the horizontal velocity, the vertical velocity, the first pixel value and the second pixel value, see the following formula:

b = vx·(∂I(1)(x,y)/∂x − ∂I(0)(x,y)/∂x) + vy·(∂I(1)(x,y)/∂y − ∂I(0)(x,y)/∂y)

In the above formula, (x,y) are the coordinates of each pixel inside the current block. Of course, the above formula is only an example of obtaining the bidirectional optical flow offset value b, and the bidirectional optical flow offset value b may also be calculated in other ways, which is not limited. I(0)(x,y) is the first pixel value, i.e. the forward motion compensation value and its forward extension value; I(1)(x,y) is the second pixel value, i.e. the backward motion compensation value and its backward extension value.
Step c5: determine the target prediction value of the sub-region according to the motion compensation value of the sub-region and the bidirectional optical flow offset value.
For example, after the forward motion compensation value, the backward motion compensation value and the bidirectional optical flow offset value of the sub-region are determined, the target prediction value of the sub-region may be determined based on the forward motion compensation value, the backward motion compensation value and the bidirectional optical flow offset value. For example, the target prediction value predBIO(x,y) of a pixel (x,y) in the sub-region is determined based on the following formula: predBIO(x,y) = (I(0)(x,y) + I(1)(x,y) + b + 1) >> 1. In the above formula, I(0)(x,y) is the forward motion compensation value of pixel (x,y), and I(1)(x,y) is the backward motion compensation value of pixel (x,y).
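Illustratively, steps c4 and c5 may be sketched for a single pixel as follows. The gradient arguments are the horizontal/vertical gradients of the forward and backward predictions at (x, y); the exact form of b follows the standard bidirectional optical flow model and is an assumption for illustration, while the final rounding (I0 + I1 + b + 1) >> 1 is taken from the formula above.

```python
def bio_pixel(I0, I1, gx0, gx1, gy0, gy1, vx, vy):
    """Target prediction value of one pixel: bidirectional average plus the
    optical flow offset b derived from the velocities and gradient differences.

    I0, I1:  forward / backward motion compensation values at (x, y)
    gx0,gx1: horizontal gradients of the forward / backward predictions
    gy0,gy1: vertical gradients of the forward / backward predictions
    vx, vy:  horizontal / vertical velocities from step c3
    """
    b = vx * (gx1 - gx0) + vy * (gy1 - gy0)    # bidirectional optical flow offset
    return (I0 + I1 + b + 1) >> 1              # pred_BIO = (I0 + I1 + b + 1) >> 1

# with zero velocities the result reduces to the plain bidirectional average:
assert bio_pixel(100, 102, 0, 0, 0, 0, 0, 0) == 101
```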
Example 9: in this embodiment, whether a motion vector angular prediction (MVAP) technique, which may also be referred to as a motion information angle prediction technique, is enabled may be determined; hereinafter, the motion information angle prediction technique is taken as an example for description. When the motion information angle prediction technique is enabled, the technical solutions of the embodiments of the present application, i.e., the implementation processes of the above embodiments 1 to 8, may be adopted.
The following describes a process of determining whether to start a motion information angle prediction technique in conjunction with a specific application scenario.
Application scenario 1: the motion information angle prediction technique may be turned on or off using a Sequence Parameter Set (SPS) level syntax, for example, the SPS level syntax is added to control the turning on or off of the motion information angle prediction technique.
Illustratively, first indication information is obtained, the first indication information being located in the SPS stage. When the value of the first indication information is a first value, the first indication information is used for indicating the starting of the motion information angle prediction technology; and when the value of the first indication information is the second value, the first indication information is used for indicating to close the motion information angle prediction technology.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is started, when an encoding end sends an encoding bit stream to a decoding end, the encoding bit stream can carry first indication information, the value of the first indication information is a first value, the decoding end obtains the first indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to start the motion information angle prediction technology because the value of the first indication information is the first value.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is closed, when an encoding end sends an encoding bit stream to a decoding end, the encoding bit stream can carry first indication information, the value of the first indication information is a second value, the decoding end obtains the first indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to close the motion information angle prediction technology because the value of the first indication information is the second value.
For example, the first indication information is located in the SPS level, and when the motion information angle prediction technique is turned on, the motion information angle prediction technique may be turned on for each image corresponding to the SPS level, and when the motion information angle prediction technique is turned off, the motion information angle prediction technique may be turned off for each image corresponding to the SPS level.
Illustratively, when the encoding end encodes the first indication information into the bitstream, u(n), u(v), ue(n), ue(v) or the like may be used for encoding: u(n) or u(v) indicates that n consecutive bits are read and decoded as an unsigned integer, while ue(n) or ue(v) indicates unsigned exponential-Golomb entropy coding. For u(n) and ue(n), the parameter in parentheses is n, indicating that the syntax element is fixed-length coded; for u(v) and ue(v), the parameter in parentheses is v, indicating that the syntax element is variable-length coded. The application scenario does not limit the encoding mode; for example, if u(1) is adopted, only one bit is needed to indicate whether the motion information angle prediction technique is enabled.
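Illustratively, the ue(v) descriptor mentioned above is standard unsigned exponential-Golomb coding, which may be sketched as follows; the bit strings here are illustrative and do not represent the actual bitstream layout of this embodiment.

```python
def ue_encode(value):
    """Unsigned exponential-Golomb: for codeNum = value + 1 with M + 1
    significant bits, emit M leading zero bits followed by codeNum in binary."""
    code = value + 1
    m = code.bit_length() - 1
    return "0" * m + format(code, "b")

def ue_decode(bits):
    """Inverse: count M leading zeros, read the next M + 1 bits, subtract 1."""
    m = len(bits) - len(bits.lstrip("0"))
    return int(bits[m:2 * m + 1], 2) - 1

# small values cost few bits, which suits flags and small size indices:
assert ue_encode(0) == "1"
assert ue_encode(3) == "00100"
assert ue_decode("00100") == 3
```

A one-bit switch such as the first indication information is cheaper with u(1), which is exactly one bit, while ue(v) suits values with a wider range, such as size indications.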
Application scenario 2: the SPS-level syntax may be used to control the maximum size at which the motion information angular prediction technique may be used, for example, the SPS-level syntax may be added to control the maximum size at which the motion information angular prediction technique may be used, e.g., 32 x 32.
Illustratively, second indication information is obtained, the second indication information is located in the SPS, and the second indication information is used for indicating the maximum size. If the size of the current block is not larger than the maximum size, starting a motion information angle prediction technology for the current block; if the size of the current block is larger than the maximum size, the current block closes the motion information angle prediction technology.
When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry second indication information, where the second indication information is used to indicate a maximum size, such as 32 × 32, that the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires second indication information from the coded bit stream, and because the second indication information is used for indicating the maximum size, when the decoding end decodes the current block, if the size of the current block is not larger than the maximum size, the motion information angle prediction technology is started for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technology is closed for the current block.
For example, the second indication information is located in the SPS level, and when each current block in the image corresponding to the SPS level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the second indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
Application scenario 3: the SPS-level syntax may be used to control the minimum size at which the motion information angle prediction technique may be used, for example, the SPS-level syntax may be added to control the minimum size at which the motion information angle prediction technique may be used, e.g., the minimum size is 8 x 8.
Illustratively, third indication information is obtained, the third indication information is located in the SPS, and the third indication information is used for indicating the minimum size. If the size of the current block is not smaller than the minimum size, starting a motion information angle prediction technology for the current block; if the size of the current block is smaller than the minimum size, the current block closes the motion information angle prediction technology.
When the encoding end sends the coded bit stream to the decoding end, the coded bit stream may carry third indication information, where the third indication information is used to indicate a minimum size, such as 8 × 8, where the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires third indication information from the coded bit stream, and because the third indication information is used for indicating the minimum size, when the decoding end decodes the current block, if the size of the current block is not smaller than the minimum size, the motion information angle prediction technology is started for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technology is closed for the current block.
For example, the third indication information is located in the SPS level, and when each current block in the image corresponding to the SPS level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the third indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
For example, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may only carry the second indication information, may also only carry the third indication information, and may also simultaneously carry the second indication information and the third indication information.
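Illustratively, the decoder-side gating described in application scenarios 1 to 3 may be sketched as follows. The helper name is an assumption, and the choice to compare both the width and the height against the maximum/minimum size is one possible reading of "the size of the current block"; the embodiment does not fix the comparison rule.

```python
def mvap_enabled(sps_flag, w, h, max_size=None, min_size=None):
    """Whether the motion information angle prediction technique is enabled for
    a w x h block, given the SPS-level switch (first indication information)
    and optional maximum/minimum size indications (second/third)."""
    if not sps_flag:
        return False                                  # technique switched off
    if max_size is not None and (w > max_size or h > max_size):
        return False                                  # block larger than maximum
    if min_size is not None and (w < min_size or h < min_size):
        return False                                  # block smaller than minimum
    return True

# e.g. maximum size 32 and minimum size 8, as in the scenarios above:
assert mvap_enabled(True, 16, 16, max_size=32, min_size=8)
assert not mvap_enabled(True, 64, 64, max_size=32)
```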
Application scenario 4: slice (Slice) -level syntax may be used to turn on or off the motion information angle prediction technique, for example, Slice-level syntax is added to control the turning on or off of the motion information angle prediction technique.
Illustratively, fourth indication information is obtained, and the fourth indication information is located in Slice level. When the value of the fourth indication information is the first value, the fourth indication information is used for indicating to start the motion information angle prediction technology; and when the value of the fourth indication information is the second value, the fourth indication information is used for indicating to close the motion information angle prediction technology.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is started, when the encoding end sends an encoding bit stream to the decoding end, the encoding bit stream can carry fourth indication information, the value of the fourth indication information is a first value, the decoding end obtains the fourth indication information from the encoding bit stream after receiving the encoding bit stream, and the decoding end determines to start the motion information angle prediction technology because the value of the fourth indication information is the first value.
The encoding side can choose whether to turn on the motion information angle prediction technique. If the motion information angle prediction technology is closed, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream can carry fourth indication information, the value of the fourth indication information is a second value, the decoding end obtains the fourth indication information from the encoded bit stream after receiving the encoded bit stream, and the decoding end determines to close the motion information angle prediction technology because the value of the fourth indication information is the second value.
For example, the fourth indication information is located in Slice level, when the motion information angle prediction technology is turned on, the motion information angle prediction technology may be turned on for an image corresponding to Slice level, and when the motion information angle prediction technology is turned off, the motion information angle prediction technology may be turned off for an image corresponding to Slice level.
For example, when the encoding end encodes the fourth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, u (1) may be used for encoding.
Application scenario 5: the Slice-level syntax may be used to control the maximum size at which the motion information angle prediction technique may be used, for example, adding Slice-level syntax to control the maximum size at which the motion information angle prediction technique may be used, such as 32 × 32.
Illustratively, fifth indication information is obtained, the fifth indication information is located in Slice, and the fifth indication information is used for indicating the maximum size. If the size of the current block is not larger than the maximum size, starting a motion information angle prediction technology for the current block; if the size of the current block is larger than the maximum size, the current block closes the motion information angle prediction technology.
When the encoding end sends the coded bitstream to the decoding end, the coded bitstream may carry fifth indication information, where the fifth indication information is used to indicate a maximum size, such as 32 × 32, that the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires fifth indication information from the coded bit stream, and because the fifth indication information is used for indicating the maximum size, when the decoding end decodes the current block, if the size of the current block is not larger than the maximum size, the motion information angle prediction technology is started for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technology is closed for the current block.
For example, the fifth indication information is located in the Slice level, and when each current block in the image corresponding to the Slice level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the fifth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
Application scenario 6: slice-level syntax may be used to control the minimum size at which motion information angle prediction techniques may be used, for example, adding Slice-level syntax to control the minimum size at which motion information angle prediction techniques may be used, e.g., a minimum size of 8 x 8.
Illustratively, sixth indication information is obtained, the sixth indication information is located in Slice, and the sixth indication information is used for indicating the minimum size. If the size of the current block is not smaller than the minimum size, starting a motion information angle prediction technology for the current block; if the size of the current block is smaller than the minimum size, the current block closes the motion information angle prediction technology.
When the encoding end sends the coded bit stream to the decoding end, the coded bit stream may carry sixth indication information, where the sixth indication information is used to indicate a minimum size, such as 8 × 8, where the motion information angle prediction technique may be used. After receiving the coded bit stream, the decoding end acquires sixth indication information from the coded bit stream, and because the sixth indication information is used for indicating the minimum size, when the decoding end decodes the current block, if the size of the current block is not smaller than the minimum size, the motion information angle prediction technology is started for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technology is closed for the current block.
For example, the sixth indication information is located in the Slice level, and when each current block in the image corresponding to the Slice level is decoded, it is required to determine whether to start a motion information angle prediction technique for the current block according to the size of the current block.
For example, when the encoding end encodes the sixth indication information into the bitstream, u (n), u (v), ue (n), ue (v), or the like may be used for encoding, and the application scenario does not limit the encoding method, for example, ue (v) may be used for encoding.
For example, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may only carry the fifth indication information, may also only carry the sixth indication information, and may also simultaneously carry the fifth indication information and the sixth indication information.
Example 10: the current block may use a motion information angle prediction mode, that is, the motion compensation value of each sub-region of the current block is determined based on the motion information angle prediction mode, as described in the above embodiments. If the current block uses the motion information angle prediction mode, the current block may disable the decoding-end motion vector adjustment (DMVR) technique; alternatively, if the current block uses the motion information angle prediction mode, the current block may enable the decoding-end motion vector adjustment technique. If the current block uses the motion information angle prediction mode, the current block may disable the bidirectional optical flow (BIO) technique; alternatively, if the current block uses the motion information angle prediction mode, the current block may enable the bidirectional optical flow technique. Illustratively, the bidirectional optical flow technique superimposes an optical flow compensation value on the current block using gradient information of the pixel values in the forward and backward reference frames. The principle of the decoding-end motion vector adjustment technique is to adjust a motion vector using a matching criterion between forward and backward reference pixel values. The following describes the combination of the motion information angle prediction mode, the decoding-end motion vector adjustment technique and the bidirectional optical flow technique with reference to specific application scenarios.
Application scenario 1: if the current block uses the motion information angle prediction mode, the current block may start the bi-directional optical flow technique, and the current block may close the motion vector adjustment technique at the decoding end. In this application scenario, a motion compensation value for each sub-region of the current block is determined based on the motion information angular prediction mode. Then, based on the bi-directional optical flow technique, the target prediction value of each sub-area of the current block is determined according to the motion compensation value of the sub-area, for example, if the sub-area satisfies the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area and the bi-directional optical flow offset value, and if the sub-area does not satisfy the condition of using bi-directional optical flow, the target prediction value of the sub-area is determined according to the motion compensation value of the sub-area, for example, refer to the above-mentioned embodiment. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 2: if the current block uses the motion information angle prediction mode, the current block may initiate a bi-directional optical flow technique, and the current block may initiate a decoding-side motion vector adjustment technique. In such an application scenario, original motion information of each sub-region of the current block is determined based on the motion information angular prediction mode (for convenience of distinction, the motion information determined based on the motion information angular prediction mode is referred to as original motion information). Then, based on the decoding-end motion vector adjustment technology, determining target motion information of each sub-region of the current block according to the original motion information of the sub-region, for example, if the sub-region meets the condition of using the decoding-end motion vector adjustment, adjusting the original motion information of the sub-region to obtain the adjusted target motion information, and if the sub-region does not meet the condition of using the decoding-end motion vector adjustment, using the original motion information of the sub-region as the target motion information. Then, a motion compensation value of each sub-region is determined according to the target motion information of the sub-region. 
Then, based on the bi-directional optical flow technique, the target prediction value of each sub-region of the current block is determined according to the motion compensation value of the sub-region: if the sub-region satisfies the condition of using bi-directional optical flow, the target prediction value of the sub-region is determined according to the motion compensation value and the bi-directional optical flow offset value of the sub-region; if the sub-region does not satisfy the condition, the target prediction value is determined according to the motion compensation value alone. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 3: if the current block uses the motion information angle prediction mode, the current block may disable the bi-directional optical flow technique and enable the decoding-end motion vector adjustment technique. In this application scenario, the original motion information of each sub-region of the current block is determined based on the motion information angle prediction mode. Then, based on the decoding-end motion vector adjustment technique, the target motion information of each sub-region of the current block is determined according to the original motion information of the sub-region: if the sub-region satisfies the condition of using decoding-end motion vector adjustment, the original motion information of the sub-region is adjusted to obtain the adjusted target motion information; if the sub-region does not satisfy the condition, the original motion information of the sub-region is used as the target motion information. Then, the target prediction value of each sub-region is determined according to the target motion information of the sub-region, without considering the bi-directional optical flow technique. Then, the prediction value of the current block is determined according to the target prediction value of each sub-region.
Application scenario 4: if the current block uses the motion information angle prediction mode, the current block may disable both the bi-directional optical flow technique and the decoding-end motion vector adjustment technique. In this application scenario, the motion information of each sub-region of the current block is determined based on the motion information angle prediction mode, the target prediction value of each sub-region is determined according to that motion information, and the prediction value of the current block is determined according to the target prediction value of each sub-region. Neither the decoding-end motion vector adjustment technique nor the bi-directional optical flow technique needs to be considered in this process.
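The four application scenarios differ only in which of the two tools is enabled. Purely to illustrate the control flow, the following Python sketch threads one sub-region through that pipeline; all callables (`dmvr_ok`, `refine`, `compensate`, `bdof_ok`, `bdof_offset`) are hypothetical stand-ins for the per-step operations described above, not names from any codec specification.

```python
def predict_sub_region(motion_info, bdof_enabled, dmvr_enabled,
                       dmvr_ok, refine, compensate, bdof_ok, bdof_offset):
    """Run one sub-region through the scenario-dependent pipeline of
    application scenarios 1-4 above (a sketch, not a normative process)."""
    if dmvr_enabled and dmvr_ok(motion_info):
        # Decoding-end motion vector adjustment: refine the original motion
        # information into the target motion information.
        motion_info = refine(motion_info)
    mc = compensate(motion_info)  # motion compensation value of the sub-region
    if bdof_enabled and bdof_ok(motion_info):
        # Bi-directional optical flow: add the offset to the compensation value.
        return mc + bdof_offset(motion_info)
    return mc
```

Scenario 1 corresponds to `bdof_enabled=True, dmvr_enabled=False`; scenario 2 enables both; scenario 3 enables only DMVR; scenario 4 disables both.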
In application scenarios 2 and 3, for each sub-region of the current block, DMVR may be used for the sub-regions that satisfy the condition of using the decoding-end motion vector adjustment technique. For example, if the motion information of the sub-region is bidirectional motion information, the current frame containing the sub-region lies between the two reference frames (i.e., a forward reference frame and a backward reference frame) in temporal order, and the distance between the current frame and the forward reference frame equals the distance between the backward reference frame and the current frame, the sub-region satisfies the condition of using decoding-end motion vector adjustment. If the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition. If the motion information of the sub-region is bidirectional motion information but the current frame containing the sub-region does not lie between the two reference frames in temporal order, the sub-region does not satisfy the condition. If the motion information of the sub-region is bidirectional motion information and the current frame containing the sub-region lies between the two reference frames in temporal order, but the distance between the current frame and the forward reference frame differs from the distance between the backward reference frame and the current frame, the sub-region does not satisfy the condition.
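The eligibility conditions just listed reduce to a simple predicate on display order. The sketch below uses hypothetical POC (picture order count) values for the forward and backward reference frames, with `None` standing for "no reference in that direction" (i.e., unidirectional motion information); the names are illustrative, not the patent's own identifiers.

```python
def dmvr_allowed(fwd_ref_poc, bwd_ref_poc, cur_poc):
    """True only when the sub-region satisfies the condition of using
    decoding-end motion vector adjustment described above."""
    # Unidirectional motion information: not eligible.
    if fwd_ref_poc is None or bwd_ref_poc is None:
        return False
    # The current frame must lie between the two reference frames in time.
    if not (fwd_ref_poc < cur_poc < bwd_ref_poc):
        return False
    # The two temporal distances must be equal.
    return (cur_poc - fwd_ref_poc) == (bwd_ref_poc - cur_poc)
```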
In application scenarios 2 and 3, the decoding-end motion vector adjustment technique is enabled. This technique adjusts a motion vector according to a matching criterion over the forward and backward reference pixel values, and can be applied in the direct mode or the skip mode. An implementation of the decoding-end motion vector adjustment technique may proceed as follows:
a) Acquire, using the initial motion vectors, the reference pixels needed for the prediction block and the search areas.
b) Obtain the optimal integer-pixel position. Illustratively, the luminance image block of the current block is divided into non-overlapping, adjacent sub-regions, and the initial motion vectors of all sub-regions are MV0 and MV1. For each sub-region, taking the positions corresponding to the initial MV0 and the initial MV1 as centers, the position with the minimum template matching distortion within a certain nearby range is searched. The template matching distortion is computed as the SAD value between a block of size sub-region width by sub-region height starting at the center position in the forward search area and the corresponding block of the same size starting at the center position in the backward search area.
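A minimal NumPy sketch of step b): for each candidate integer offset, the forward block is shifted by (dy, dx) and the backward block by the mirrored (-dy, -dx), and the offset with the smallest SAD wins. The search range, array layout, and function names are assumptions of this illustration, not normative.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def best_integer_offset(fwd_area, bwd_area, h, w, search_range=2):
    """Return the (dy, dx) minimizing the mirrored template matching SAD
    between h x w blocks taken from the two padded search areas."""
    cy = cx = search_range  # block origin at the center of each padded area
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            f = fwd_area[cy + dy:cy + dy + h, cx + dx:cx + dx + w]
            b = bwd_area[cy - dy:cy - dy + h, cx - dx:cx - dx + w]
            cost = sad(f, b)
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]
```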
c) Obtain the optimal sub-pixel position. The sub-pixel position is determined using the template matching distortion values at five positions: the optimal integer position and the positions to its left, right, above, and below. A quadratic distortion surface is estimated near the optimal integer position, and the position with the minimum distortion on that surface is computed as the sub-pixel position. For example, the horizontal and vertical sub-pixel positions are calculated from the template matching distortion values at these five positions; the following formulas are one example of this calculation:
horizontal subpixel position = (sad_left - sad_right) * N / ((sad_right + sad_left - 2 * sad_mid) * 2)
vertical subpixel position = (sad_btm - sad_top) * N / ((sad_top + sad_btm - 2 * sad_mid) * 2)
Illustratively, sad_mid, sad_left, sad_right, sad_top, and sad_btm are the template matching distortion values at the five positions (the optimal integer position and the positions to its left, right, above, and below, respectively), and N is the precision.
Of course, the above is only one example of calculating the horizontal and vertical sub-pixel positions; they may also be calculated in other manners from the template matching distortion values at the five positions (the optimal integer position and the positions to its left, right, above, and below), and this is not limited, as long as the horizontal and vertical sub-pixel positions are calculated with reference to these parameters.
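Under the two formulas above, the sub-pixel offsets follow from a parabolic fit over the five SAD values. The sketch below is one fixed-point reading of those formulas; the integer division, the default precision, and the zero-denominator guard are assumptions of this illustration.

```python
def subpixel_offsets(sad_mid, sad_left, sad_right, sad_top, sad_btm, n=16):
    """Horizontal and vertical sub-pixel positions per the formulas above;
    n is the precision (sub-pel units per integer pel)."""
    dh = (sad_right + sad_left - 2 * sad_mid) * 2
    dv = (sad_top + sad_btm - 2 * sad_mid) * 2
    horiz = (sad_left - sad_right) * n // dh if dh else 0
    vert = (sad_btm - sad_top) * n // dv if dv else 0
    return horiz, vert
```

A flat SAD surface (zero denominator) yields a zero offset in this sketch; a real decoder may clamp or skip refinement instead.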
d) Calculate the final prediction value according to the optimal MV.
Based on the same application concept as the method, an embodiment of the present application provides an encoding and decoding apparatus applied to a decoding end or an encoding end, as shown in fig. 10, which is a structural diagram of the apparatus, and the apparatus includes:
a first determining module 1001, configured to divide a current block into at least one sub-region if a target motion information prediction mode of the current block is a motion information angle prediction mode; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
a second determining module 1002, configured to determine a motion compensation value of the sub-region according to the motion information of the sub-region;
an obtaining module 1003, configured to obtain a bidirectional optical flow offset value of the sub-area if the sub-area meets a condition of using a bidirectional optical flow; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region;
a third determining module 1004, configured to determine the predictor of the current block according to the target predictor of each sub-region.
The obtaining module 1003 is further configured to: if the sub-region does not satisfy the condition of using bi-directional optical flow, determine the target prediction value of the sub-region according to the motion compensation value of the sub-region. If the motion information of the sub-region is unidirectional motion information, the sub-region does not satisfy the condition of using bi-directional optical flow; or, if the motion information of the sub-region is bidirectional motion information but the current frame containing the sub-region does not lie between the two reference frames in temporal order, the sub-region does not satisfy the condition of using bi-directional optical flow.
If the motion information of the sub-region is bidirectional motion information and the current frame containing the sub-region lies between the two reference frames in temporal order, the sub-region satisfies the condition of using bi-directional optical flow.
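The bi-directional optical flow condition stated in the two paragraphs above can likewise be written as a predicate over display-order positions (hypothetical POC values; `None` marks a missing reference direction):

```python
def bdof_allowed(fwd_ref_poc, bwd_ref_poc, cur_poc):
    """True when the sub-region's motion information is bidirectional and the
    current frame lies between the two reference frames in temporal order."""
    if fwd_ref_poc is None or bwd_ref_poc is None:
        return False  # unidirectional motion information
    return fwd_ref_poc < cur_poc < bwd_ref_poc
```

Note that, unlike the DMVR condition, no equal-distance requirement appears here.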
The obtaining module 1003 is specifically configured to, when obtaining the bidirectional optical flow offset value of the sub-area:
determine a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value comprises the forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from reference pixel positions of a forward reference frame; the second pixel value comprises the backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from reference pixel positions of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region;
determining a bi-directional optical flow offset value for the sub-region from the first pixel value and the second pixel value.
The obtaining module 1003 is further configured to: obtain first indication information, where the first indication information is located at the sequence parameter set level; when the value of the first indication information is a first value, the first indication information indicates that the motion information angle prediction technique is enabled; when the value of the first indication information is a second value, the first indication information indicates that the motion information angle prediction technique is disabled.
The obtaining module 1003 is further configured to: obtain second indication information, where the second indication information is located at the sequence parameter set level and indicates a maximum size; if the size of the current block is not larger than the maximum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technique is disabled for the current block; and/or obtain third indication information, where the third indication information is located at the sequence parameter set level and indicates a minimum size; if the size of the current block is not smaller than the minimum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technique is disabled for the current block.
The obtaining module 1003 is further configured to: obtain fourth indication information, where the fourth indication information is located at the slice level; when the value of the fourth indication information is a first value, the fourth indication information indicates that the motion information angle prediction technique is enabled; when the value of the fourth indication information is a second value, the fourth indication information indicates that the motion information angle prediction technique is disabled.
The obtaining module 1003 is further configured to: obtain fifth indication information, where the fifth indication information is located at the slice level and indicates a maximum size; if the size of the current block is not larger than the maximum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is larger than the maximum size, the motion information angle prediction technique is disabled for the current block; and/or obtain sixth indication information, where the sixth indication information is located at the slice level and indicates a minimum size; if the size of the current block is not smaller than the minimum size, the motion information angle prediction technique is enabled for the current block; if the size of the current block is smaller than the minimum size, the motion information angle prediction technique is disabled for the current block.
The device further comprises: a construction module for constructing a motion information prediction mode candidate list of a current block; selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
the building module is specifically configured to: selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angular prediction mode to a motion information prediction mode candidate list of a current block.
The building module is specifically configured to: adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
The building module is specifically configured to: if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module is specifically configured to: and if the available motion information exists in the first peripheral matching block and the second peripheral matching block, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block when the motion information of the first peripheral matching block and the second peripheral matching block is the same.
The building module is specifically configured to: if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; or, if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, prohibiting the motion information angle prediction mode from being added to a motion information prediction mode candidate list of the current block; or, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image where the current block is located or outside the image slice where the current block is located, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block; if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; alternatively, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block which are to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block which are to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The building module is specifically configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is no motion information available for at least one of the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block, or prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block.
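The traversal rules above admit several alternative policies ("adding ... or prohibiting ..."). One possible reading — add the mode as soon as two consecutively traversed matching blocks both have available motion information that differs, and otherwise keep scanning — can be sketched as follows, with `None` standing for "no available motion information"; this encodes only that single policy, not all the alternatives listed.

```python
def should_add_angular_mode(motion_infos):
    """Decide whether a motion information angle prediction mode is added to
    the candidate list, under ONE of the alternative policies above: scan
    consecutive peripheral matching blocks in traversal order and add the
    mode at the first pair whose motion information is available on both
    sides and differs; if no such pair exists, do not add the mode."""
    for a, b in zip(motion_infos, motion_infos[1:]):
        if a is not None and b is not None and a != b:
            return True
    return False
```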
The building module is specifically configured to: if the peripheral matching block is located outside the image where the current block is located, or outside the image slice where the current block is located, determine that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determine that the peripheral matching block does not have available motion information;
if the peripheral matching block is an intra-frame block, determine that the peripheral matching block does not have available motion information;
if the peripheral matching block is an inter-frame coded block, determine that the peripheral matching block has available motion information.
The device further comprises a filling module. When the apparatus is applied to an encoding end, after the motion information prediction mode candidate list of the current block is constructed, if a motion information angle prediction mode exists in the candidate list, the filling module fills in the unavailable motion information of the peripheral blocks of the current block. When the apparatus is applied to an encoding end, after the target motion information prediction mode of the current block is selected from the candidate list, if the target motion information prediction mode is a motion information angle prediction mode, the filling module fills in the unavailable motion information of the peripheral blocks of the current block.
The filling module is specifically configured to: traverse the peripheral blocks of the current block in order from the left-side peripheral blocks to the upper peripheral blocks, until the first peripheral block with available motion information is reached; if any first peripheral block without available motion information precedes that block, fill the motion information of that block into the preceding first peripheral block; then continue traversing the peripheral blocks after that block, and if a second peripheral block without available motion information is encountered, fill the motion information of the last peripheral block traversed before the second peripheral block into the second peripheral block.
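The filling rule just described can be sketched directly. The list holds per-peripheral-block motion information in traversal order (left-side blocks, then upper blocks), with `None` marking unavailable motion information; the behavior when no block at all has available motion information (left unfilled here) is an assumption of this sketch.

```python
def fill_unavailable(motion_list):
    """Back-fill leading unavailable entries with the first available motion
    information; fill each later unavailable entry with the motion
    information of the immediately preceding traversed block."""
    out = list(motion_list)
    first = next((i for i, m in enumerate(out) if m is not None), None)
    if first is None:
        return out  # no available motion information anywhere
    for i in range(first):
        out[i] = out[first]
    for i in range(first + 1, len(out)):
        if out[i] is None:
            out[i] = out[i - 1]
    return out
```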
In terms of hardware, the hardware architecture diagram of the decoding-end device provided in the embodiment of the present application may specifically refer to fig. 11A. The device includes: a processor 111 and a machine-readable storage medium 112, the machine-readable storage medium 112 storing machine-executable instructions executable by the processor 111; the processor 111 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 111 is configured to execute machine-executable instructions to perform the following steps:
if the target motion information prediction mode of the current block is a motion information angle prediction mode, then
Dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region;
and determining the predicted value of the current block according to the target predicted value of each sub-area.
In terms of hardware, the hardware architecture diagram of the encoding-end device provided in the embodiment of the present application may specifically refer to fig. 11B. The device includes: a processor 113 and a machine-readable storage medium 114, the machine-readable storage medium 114 storing machine-executable instructions executable by the processor 113; the processor 113 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 113 is configured to execute machine-executable instructions to perform the following steps:
if the target motion information prediction mode of the current block is a motion information angle prediction mode, then
Dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-area meets the condition of using the bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-area; determining a target predicted value of the sub-region according to a forward motion compensation value in the motion compensation values of the sub-region, a backward motion compensation value in the motion compensation values of the sub-region and a bidirectional optical flow offset value of the sub-region;
and determining the predicted value of the current block according to the target predicted value of each sub-area.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium storing a plurality of computer instructions which, when executed by a processor, implement the encoding and decoding methods disclosed in the above examples of the present application. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disc or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules, or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices. For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware. As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (17)

1. A decoding method applied to a decoding end, the method comprising:
if the target motion information prediction mode of the current block is a motion information angle prediction mode,
dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-region meets the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target predicted value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region; if the sub-region does not meet the condition for using bidirectional optical flow, determining the target predicted value of the sub-region according to the motion compensation value of the sub-region;
and determining the predicted value of the current block according to the target predicted value of each sub-region.
2. The method of claim 1,
if the motion information of the sub-region is unidirectional motion information, the sub-region does not meet the condition for using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between the two reference frames in the time sequence, the sub-region does not meet the condition for using bidirectional optical flow;
and if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is located between the two reference frames in the time sequence, the sub-region meets the condition for using bidirectional optical flow.
3. The method of claim 1, wherein obtaining a bi-directional optical flow offset value for the sub-region comprises:
determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value comprises a forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value comprises a backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region;
determining a bi-directional optical flow offset value for the sub-region from the first pixel value and the second pixel value.
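The "extension value" construction recited above can be illustrated with a small sketch: the motion-compensated sample block is padded on each side, here by copying its own edge samples (the first alternative in the claim; reading the corresponding positions from the reference frame is the other alternative and is omitted here). The function name and the one-sample margin are illustrative assumptions, not limitations of the claim.

```python
def extend_by_edge_copy(block):
    """Pad a 2-D list of motion-compensated samples with one replicated
    row/column per side, yielding the enlarged array over which the
    bidirectional optical flow gradients can be evaluated."""
    rows = [[row[0]] + row + [row[-1]] for row in block]  # left/right columns
    return [rows[0]] + rows + [rows[-1]]                  # top/bottom rows
```

For a 2x2 block this produces a 4x4 array whose border repeats the nearest interior sample.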
4. The method of claim 1,
before determining that the target motion information prediction mode of the current block is a motion information angle prediction mode, the method further comprises:
acquiring first indication information, wherein the first indication information is located at the sequence parameter set level; when the value of the first indication information is a first value, the first indication information is used for indicating that the motion information angle prediction technique is enabled; and when the value of the first indication information is a second value, the first indication information is used for indicating that the motion information angle prediction technique is disabled.
5. The method of claim 1,
before determining that the target motion information prediction mode of the current block is a motion information angle prediction mode, the method further comprises:
constructing a motion information prediction mode candidate list of the current block;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
wherein the constructing of the motion information prediction mode candidate list of the current block includes:
selecting, from peripheral blocks of the current block according to a pre-configured angle of any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the pre-configured angle; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
6. The method of claim 5,
after the selecting the target motion information prediction mode for the current block from the motion information prediction mode candidate list, the method further comprises: and if the target motion information prediction mode is a motion information angle prediction mode, filling motion information of peripheral blocks of the current block.
7. An encoding method applied to an encoding end, the method comprising:
if the target motion information prediction mode of the current block is a motion information angle prediction mode,
dividing the current block into at least one sub-region; for each sub-region of the current block, determining motion information of the sub-region according to motion information of a plurality of peripheral matching blocks pointed to by a pre-configured angle of the motion information angle prediction mode;
determining a motion compensation value of the sub-region according to the motion information of the sub-region;
if the sub-region meets the condition for using bidirectional optical flow, acquiring a bidirectional optical flow offset value of the sub-region; determining a target predicted value of the sub-region according to a forward motion compensation value among the motion compensation values of the sub-region, a backward motion compensation value among the motion compensation values of the sub-region, and the bidirectional optical flow offset value of the sub-region; if the sub-region does not meet the condition for using bidirectional optical flow, determining the target predicted value of the sub-region according to the motion compensation value of the sub-region;
and determining the predicted value of the current block according to the target predicted value of each sub-region.
8. The method of claim 7,
if the motion information of the sub-region is unidirectional motion information, the sub-region does not meet the condition for using bidirectional optical flow; or, if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is not located between the two reference frames in the time sequence, the sub-region does not meet the condition for using bidirectional optical flow;
and if the motion information of the sub-region is bidirectional motion information and the current frame where the sub-region is located is located between the two reference frames in the time sequence, the sub-region meets the condition for using bidirectional optical flow.
9. The method of claim 7, wherein obtaining a bi-directional optical flow offset value for the sub-region comprises:
determining a first pixel value and a second pixel value according to the motion information of the sub-region; the first pixel value comprises a forward motion compensation value and a forward extension value of the sub-region, the forward extension value being copied from the forward motion compensation value or obtained from a reference pixel position of a forward reference frame; the second pixel value comprises a backward motion compensation value and a backward extension value of the sub-region, the backward extension value being copied from the backward motion compensation value or obtained from a reference pixel position of a backward reference frame; the forward reference frame and the backward reference frame are determined according to the motion information of the sub-region;
determining a bi-directional optical flow offset value for the sub-region from the first pixel value and the second pixel value.
10. The method of claim 7,
before determining that the target motion information prediction mode of the current block is a motion information angle prediction mode, the method further comprises:
acquiring first indication information, wherein the first indication information is located at the sequence parameter set level; when the value of the first indication information is a first value, the first indication information is used for indicating that the motion information angle prediction technique is enabled; and when the value of the first indication information is a second value, the first indication information is used for indicating that the motion information angle prediction technique is disabled.
11. The method of claim 7,
before determining that the target motion information prediction mode of the current block is a motion information angle prediction mode, the method further comprises:
constructing a motion information prediction mode candidate list of the current block;
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
wherein the constructing of the motion information prediction mode candidate list of the current block includes:
selecting, from peripheral blocks of the current block according to a pre-configured angle of any motion information angle prediction mode of the current block, a plurality of peripheral matching blocks pointed to by the pre-configured angle; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
12. The method of claim 11,
after the constructing the motion information prediction mode candidate list of the current block, the method further includes:
and if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling the motion information of the peripheral blocks of the current block.
13. A decoding apparatus, applied to a decoding end, characterized in that
the decoding apparatus comprises means for implementing the method of any one of claims 1-6.
14. An encoding apparatus, applied to an encoding end, characterized in that
the encoding apparatus comprises means for implementing the method of any one of claims 7-12.
15. A decoding device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein
the processor is configured to execute the machine-executable instructions to implement the method of any of claims 1-6.
16. An encoding device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute the machine-executable instructions to implement the method of any of claims 7-12.
17. A machine-readable storage medium having stored thereon machine-executable instructions executable by a processor; wherein the processor is configured to execute the machine-executable instructions to implement the method of any of claims 1-6 or to implement the method of any of claims 7-12.
CN202111153142.8A 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment Active CN113709487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153142.8A CN113709487B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111153142.8A CN113709487B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment
CN201910844633.3A CN112468817B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910844633.3A Division CN112468817B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Publications (2)

Publication Number Publication Date
CN113709487A true CN113709487A (en) 2021-11-26
CN113709487B CN113709487B (en) 2022-12-23

Family

ID=74807821

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202111153141.3A Active CN113709486B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment
CN202111153142.8A Active CN113709487B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment
CN201910844633.3A Active CN112468817B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111153141.3A Active CN113709486B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910844633.3A Active CN112468817B (en) 2019-09-06 2019-09-06 Encoding and decoding method, device and equipment

Country Status (1)

Country Link
CN (3) CN113709486B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109104609A (en) * 2018-09-12 2018-12-28 浙江工业大学 Lens boundary detection method fusing the HEVC compressed domain and the pixel domain
CN109510991A (en) * 2017-09-15 2019-03-22 浙江大学 Motion vector derivation method and device
KR20190038371A (en) * 2017-09-29 2019-04-08 한국전자통신연구원 Method and apparatus for encoding/decoding image and recording medium for storing bitstream
CN110024394A (en) * 2016-11-28 2019-07-16 韩国电子通信研究院 Method and apparatus for encoding/decoding an image, and recording medium storing a bitstream

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1225127C (en) * 2003-09-12 2005-10-26 中国科学院计算技术研究所 Bidirectional prediction method for video coding at the encoding/decoding end
US20180249172A1 (en) * 2015-09-02 2018-08-30 Mediatek Inc. Method and apparatus of motion compensation for video coding based on bi prediction optical flow techniques
MY201069A (en) * 2016-02-05 2024-02-01 Hfi Innovation Inc Method and apparatus of motion compensation based on bi-directional optical flow techniques for video coding
CN117528111A (en) * 2016-11-28 2024-02-06 英迪股份有限公司 Image encoding method, image decoding method, and method for transmitting bit stream
TW201842782A (en) * 2017-04-06 2018-12-01 美商松下電器(美國)知識產權公司 Encoding device, decoding device, encoding method, and decoding method
KR20190093172A (en) * 2018-01-31 2019-08-08 가온미디어 주식회사 A method of video processing for moving information, a method and appratus for decoding and encoding video using the processing.


Also Published As

Publication number Publication date
CN112468817B (en) 2022-07-29
CN113709486A (en) 2021-11-26
CN113709487B (en) 2022-12-23
CN112468817A (en) 2021-03-09
CN113709486B (en) 2022-12-23

Similar Documents

Publication Publication Date Title
CN108781283B (en) Video coding using hybrid intra prediction
CN110933426B (en) Decoding and encoding method and device thereof
CN111698500B (en) Encoding and decoding method, device and equipment
CN112135145B (en) Encoding and decoding method, device and equipment
CN112565747B (en) Decoding and encoding method, device and equipment
CN113794883B (en) Encoding and decoding method, device and equipment
CN112468817B (en) Encoding and decoding method, device and equipment
CN115834904A (en) Inter-frame prediction method and device
CN113709499B (en) Encoding and decoding method, device and equipment
CN110662033A (en) Decoding and encoding method and device thereof
CN112449180B (en) Encoding and decoding method, device and equipment
CN112449181B (en) Encoding and decoding method, device and equipment
CN113766234B (en) Decoding and encoding method, device and equipment
CN110662074B (en) Motion vector determination method and device
CN111669592B (en) Encoding and decoding method, device and equipment
CN112055220B (en) Encoding and decoding method, device and equipment
CN110691247B (en) Decoding and encoding method and device
US20160366434A1 (en) Motion estimation apparatus and method
WO2012114561A1 (en) Moving image coding device and moving image coding method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40064023

Country of ref document: HK

GR01 Patent grant