CN113794884B - Encoding and decoding method, device and equipment - Google Patents

Encoding and decoding method, device and equipment

Info

Publication number
CN113794884B
CN113794884B
Authority
CN
China
Prior art keywords
motion information
block
peripheral matching
peripheral
matching block
Prior art date
Legal status
Active
Application number
CN202111153196.4A
Other languages
Chinese (zh)
Other versions
CN113794884A (en)
Inventor
方树清
陈方栋
王莉
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202111153196.4A
Publication of CN113794884A
Application granted
Publication of CN113794884B


Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/184 Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/51 Motion estimation or motion compensation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides an encoding and decoding method, apparatus, and device. The method includes: for any motion information angle prediction mode of a current block, selecting, based on the preconfigured angle of that mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle from the peripheral blocks of the current block, the peripheral matching blocks including at least a first peripheral matching block and a second peripheral matching block to be traversed; and, for the first and second peripheral matching blocks to be traversed, if both have available motion information and their motion information differs, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block. This scheme improves coding performance.

Description

Encoding and decoding method, device and equipment
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to an encoding and decoding method, apparatus, and device.
Background
To save space, video images are encoded before transmission. A complete video coding method may include prediction, transform, quantization, entropy coding, filtering, and other processes. Predictive coding includes intra-frame coding and inter-frame coding; inter-frame coding uses the temporal correlation of the video to predict the pixels of the current image from the pixels of neighboring coded images, thereby effectively removing temporal redundancy. In inter-frame coding, a motion vector represents the relative displacement between a current image block of the current frame and a reference image block of a reference frame. For example, when video image A of the current frame and video image B of the reference frame have strong temporal correlation and image block A1 (the current block) of video image A needs to be transmitted, a motion search can be performed in video image B to find the image block B1 (the reference block) that best matches A1, and the relative displacement between A1 and B1, which is the motion vector of A1, is determined.
In the prior art, the current coding unit does not need to be partitioned into blocks; only one piece of motion information is determined for it directly by signaling a motion information index or a difference information index. Because all sub-blocks within the current coding unit then share one piece of motion information, for small moving objects the best motion information can only be obtained after the coding unit is partitioned into blocks; however, partitioning the current coding unit into multiple sub-blocks generates additional bit overhead.
Disclosure of Invention
The application provides a coding and decoding method, apparatus, and device, which can improve coding performance.
The application provides a coding and decoding method, which comprises the following steps:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angular prediction mode to a motion information prediction mode candidate list of a current block.
The application provides a coding and decoding method, which is applied to a coding end, and the method comprises the following steps:
constructing a motion information prediction mode candidate list of a current block, and filling motion information of peripheral blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding and decoding method, which is applied to a decoding end, and the method comprises the following steps:
constructing a motion information prediction mode candidate list of a current block, and selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The present application provides a coding and decoding device, the device includes:
a selection module, configured to select, for any one motion information angle prediction mode of a current block, a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
a processing module, configured to, for the first peripheral matching block and the second peripheral matching block to be traversed, add the motion information angle prediction mode to a motion information prediction mode candidate list of the current block if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the two blocks is different.
The application provides a coding and decoding device applied to an encoding end, the device comprising:
the device comprises a filling module, a motion information prediction mode candidate list and a motion information prediction mode selection module, wherein the filling module is used for constructing the motion information prediction mode candidate list of the current block, and filling the motion information of the peripheral blocks of the current block if the motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of a current block according to motion information of a plurality of neighboring matching blocks pointed by a pre-configured angle of the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding and decoding device applied to a decoding end, the device comprising:
the selection module is used for constructing a motion information prediction mode candidate list of a current block and selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
the processing module is used for filling motion information of peripheral blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode;
a determining module, configured to determine motion information of a current block according to motion information of a plurality of neighboring matching blocks pointed by preconfigured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The application provides a decoding side device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
aiming at any motion information angle prediction mode of a current block, selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of the current block based on the pre-configuration angle of the motion information angle prediction mode; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the first peripheral matching block and the second peripheral matching block is different; or,
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of a current block, and selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding end device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the first peripheral matching block and the second peripheral matching block is different; or,
the processor is configured to execute machine executable instructions to perform the steps of:
constructing a motion information prediction mode candidate list of a current block, and filling motion information of peripheral blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
for each motion information angle prediction mode in the motion information angle prediction mode candidate list, determining motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
According to the above technical solution, the current block does not need to be partitioned, which effectively avoids the bit overhead caused by sub-block partitioning: motion information is provided for each sub-region of the current block without splitting the current block into sub-blocks, and different sub-regions of the current block may correspond to the same or different motion information. This improves coding performance, avoids transmitting a large amount of motion information, and saves a large number of bits. Coding performance is further improved by adding to the motion information prediction mode candidate list only those motion information angle prediction modes whose pointed-to motion information is not entirely identical, which reduces the number of motion information angle prediction modes in the candidate list.
Drawings
FIG. 1 is a schematic diagram of a video coding framework in one embodiment of the present application;
FIGS. 2A-2B are schematic diagrams illustrating the partitioning of a current block according to an embodiment of the present application;
FIG. 3 is a schematic view of several sub-regions in one embodiment of the present application;
FIG. 4 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 5A-5B are schematic diagrams of a motion information angle prediction mode in an embodiment of the present application;
FIG. 6 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 7 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIGS. 8A-8C are schematic diagrams of peripheral blocks of a current block in one embodiment of the present application;
FIGS. 9A-9N are schematic diagrams of motion compensation in an embodiment of the present application;
FIGS. 10A-10C are structural diagrams of a codec device according to an embodiment of the present application;
FIG. 11A is a hardware configuration diagram of a decoding-side device according to an embodiment of the present application;
FIG. 11B is a hardware configuration diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items. It should be understood that although the terms first, second, etc. may be used herein to describe various information, the information should not be limited by these terms; these terms are only used to distinguish one type of information from another. For example, first information may be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The embodiment of the application provides a coding and decoding method, which can relate to the following concepts:
motion Vector (MV): in inter-frame coding, a motion vector is used to represent a relative displacement between a current block of a current frame image and a reference block of a reference frame image, for example, there is a strong temporal correlation between an image a of the current frame and an image B of the reference frame, when an image block A1 (current block) of the image a is transmitted, a motion search can be performed in the image B to find an image block B1 (reference block) that most matches the image block A1, and a relative displacement between the image block A1 and the image block B1, that is, a motion vector of the image block A1, is determined. Each divided image block has a corresponding motion vector transmitted to a decoding side, and if the motion vector of each image block is independently encoded and transmitted, especially divided into a large number of image blocks of small size, a considerable number of bits are consumed. In order to reduce the bit number used for encoding the motion vector, the spatial correlation between adjacent image blocks can be utilized, the motion vector of the current image block to be encoded is predicted according to the motion vector of the adjacent encoded image block, and then the prediction difference is encoded, so that the bit number representing the motion vector can be effectively reduced. In the process of encoding the Motion Vector of the current block, the Motion Vector of the current block can be predicted by using the Motion Vector of the adjacent encoded block, and then the Difference value (MVD) between the predicted value (MVP) of the Motion Vector and the true estimate value of the Motion Vector can be encoded, thereby effectively reducing the encoding bit number of the Motion Vector.
Motion Information: to accurately identify the reference block, index information of the reference frame image is required in addition to the motion vector, indicating which reference frame image is used. In video coding technology, a reference frame picture list is typically established for the current frame picture, and the reference frame index indicates which reference frame picture in that list is used by the current block. Many coding techniques also support multiple reference picture lists, in which case a further index, which may be called the reference direction, indicates which reference picture list is used. In video coding technology, motion-related information such as the motion vector, reference frame index, and reference direction is collectively referred to as motion information.
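The following is a minimal, hypothetical sketch of how the motion information described above (motion vector, reference frame index, reference direction) might be bundled into one structure; the field names are assumptions, not terminology from this application.

```python
from dataclasses import dataclass

# Minimal sketch of "motion information": a motion vector plus the reference frame
# index and the reference direction (which reference picture list is used).
@dataclass(frozen=True)
class MotionInfo:
    mv_x: int      # horizontal motion vector component
    mv_y: int      # vertical motion vector component
    ref_idx: int   # index into the chosen reference picture list
    ref_list: int  # reference direction: 0 = list 0, 1 = list 1

# Two pieces of motion information count as identical only if every field matches,
# which is the kind of comparison used later when checking peripheral matching blocks.
a = MotionInfo(2, 2, 0, 0)
b = MotionInfo(2, 2, 1, 0)
print(a == b)  # False: same motion vector, different reference frame index
```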
Rate-Distortion Optimization (Rate-Distortion Optimized): there are two main indicators for evaluating coding efficiency: bit rate and Peak Signal to Noise Ratio (PSNR). The smaller the bit stream, the higher the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a combined evaluation of the two. For example, the cost of a mode is J(mode) = D + λR, where D denotes distortion, usually measured by the SSE index, i.e., the sum of squared differences between the reconstructed image block and the source image; λ is the Lagrange multiplier; and R is the actual number of bits required to encode the image block in this mode, including the bits required for mode information, motion information, the residual, and so on.
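To make the cost formula concrete, a small self-contained sketch follows; the sample values, λ, and helper names are illustrative assumptions only and are not taken from this application.

```python
# Sketch of the cost J(mode) = D + lambda * R described above. D is measured as SSE
# between the reconstructed and source blocks; R is the bit count of the mode.

def sse(src, rec):
    return sum((s - r) ** 2 for s, r in zip(src, rec))

def rd_cost(src, rec, bits, lam):
    return sse(src, rec) + lam * bits

src = [100, 102, 98, 101]
rec_a, bits_a = [100, 101, 99, 101], 12   # candidate mode A: small distortion, few bits
rec_b, bits_b = [100, 102, 98, 101], 40   # candidate mode B: perfect reconstruction, many bits
lam = 0.8
cost_a = rd_cost(src, rec_a, bits_a, lam)  # 2 + 0.8 * 12 = 11.6
cost_b = rd_cost(src, rec_b, bits_b, lam)  # 0 + 0.8 * 40 = 32.0
print("A" if cost_a < cost_b else "B")     # mode A wins under the combined criterion
```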
Intra and inter prediction techniques: intra-frame prediction refers to predictive coding using reconstructed pixel values of spatially neighboring blocks of the current block (blocks in the same frame image as the current block). Inter-frame prediction refers to predictive coding using reconstructed pixel values of temporally neighboring blocks of the current block (blocks located in frame images other than that of the current block), exploiting the temporal correlation of the video.
A video coding framework: referring to fig. 1, which shows a schematic diagram of a video encoding framework, the video encoding framework can be used to implement the encoding-side processing flow of the embodiments of this application. The video decoding framework is similar to fig. 1 and is not repeated here; it can be used to implement the decoding-side processing flow of the embodiments of this application. The video encoding and decoding frameworks may include modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and entropy coder. At the encoding end, the encoding-side processing flow is realized through the cooperation of these modules; at the decoding end, the decoding-side processing flow is realized through the cooperation of these modules.
In the conventional manner, the current block has only one piece of motion information, i.e., all sub-blocks inside the current block share the same motion information. For scenes with small moving targets, optimal motion information can only be obtained after the current block is partitioned; if the current block is not partitioned, it has only one piece of motion information and the prediction accuracy is low. Referring to fig. 2A, region C, region G, and region H are regions within the current block, not sub-blocks into which the current block is divided. Suppose the current block uses the motion information of block F; then every region within the current block uses the motion information of block F. Since region H is far from block F, if region H also uses the motion information of block F, the prediction accuracy of the motion information of region H is low. Moreover, the motion information of sub-blocks inside the current block cannot make use of the coded motion information around the current block, which reduces the amount of available motion information and its accuracy. For example, the sub-block I of the current block can only use the motion information of sub-blocks C, G, and H, and cannot use the motion information of image blocks A, B, F, D, and E.
In view of the above, the encoding and decoding method provided in the embodiments of this application allows the current block to correspond to multiple pieces of motion information without partitioning the current block, i.e., without the overhead caused by sub-block partitioning, thereby improving the prediction accuracy of the motion information of the current block. Because the current block is not partitioned, no extra bits are consumed to transmit a partitioning mode, which saves bit overhead. For each region of the current block (any region within the current block whose size is smaller than that of the current block and which is not a sub-block obtained by partitioning the current block), the motion information can be obtained using the coded motion information around the current block. Referring to fig. 2B, C is a sub-region inside the current block, and A, B, D, E, and F are coded blocks around the current block; the motion information of sub-region C can be obtained directly using an angle prediction method, and the other sub-regions inside the current block are handled in the same way. Therefore, different motion information can be obtained for the current block without block partitioning, saving the bit cost of some block partitioning.
Referring to fig. 3, the current block includes 9 regions (hereinafter, referred to as sub-regions within the current block), such as sub-regions f1 to f9, which are sub-regions within the current block, and are not sub-blocks into which the current block is divided. For different sub-areas in the sub-areas f1 to f9, the same or different motion information may be associated, so that the current block may be associated with multiple pieces of motion information on the basis that the current block is not divided, for example, the sub-area f1 is associated with the motion information 1, the sub-area f2 is associated with the motion information 2, and so on. For example, when determining the motion information of the sub-region f5, the motion information of the image block A1, the image block A2, the image block A3, the image block E, the image block B1, the image block B2, and the image block B3, that is, the motion information of the encoded blocks around the current block, may be utilized, so as to provide more motion information for the sub-region f 5. Of course, the motion information of the image block A1, the image block A2, the image block A3, etc. may also be utilized for the motion information of other sub-regions of the current block.
The embodiments of this application involve a process for constructing a motion information prediction mode candidate list. For any motion information angle prediction mode, a duplicate check is performed against the motion information prediction mode candidate list, that is, it is decided whether to add the motion information angle prediction mode to the candidate list or to prohibit adding it. The embodiments also address how to fill motion information of the peripheral blocks of the current block when some peripheral blocks have no available motion information, and when that filling should take place. In addition, the motion information of the current block is determined using the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of a motion information angle prediction mode, and the prediction value of the current block is determined from the motion information of the current block.
In one embodiment, a construction process of a motion information prediction mode candidate list may be implemented. In another embodiment, a construction process of a motion information prediction mode candidate list and a filling process of motion information may be implemented. In another embodiment, a construction process of a motion information prediction mode candidate list, a filling process of motion information, and a motion compensation process may be implemented. In another embodiment, a construction process of a motion information prediction mode candidate list and a motion compensation process may be implemented. In another embodiment, a padding process for motion information may be implemented. In another embodiment, a padding process and a motion compensation process of motion information may be implemented. In the following embodiments, the detailed processing flow of these several processes will be described.
In the embodiments of this application, when both the construction of the motion information prediction mode candidate list and the filling of motion information are implemented, the duplicate check on the motion information angle prediction modes is performed first and the motion information of the peripheral blocks is filled afterwards, which reduces the complexity of the decoding end and improves decoding performance. For example, the duplicate check may first be performed for the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, the vertical rightward prediction mode, and so on, and the non-duplicated modes (e.g., the horizontal downward prediction mode and the vertical rightward prediction mode) are added to the motion information prediction mode candidate list; at this point, the motion information of the peripheral blocks has not yet been filled.
After the decoding end selects the target motion information prediction mode of the current block from the motion information prediction mode candidate list, if the target motion information prediction mode is not the motion information angle prediction mode, the motion information of the peripheral blocks does not need to be filled, so that the decoding end can reduce the filling operation of the motion information.
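A hedged sketch of this decoder-side ordering follows: the duplicate check that builds the candidate list runs before any filling of peripheral motion information, and filling is performed only when the parsed target mode is an angle prediction mode. Every callable passed in is an assumption supplied by the caller, not an API defined by this application.

```python
def decode_current_block(bitstream, build_candidate_list, is_angle_mode,
                         fill_peripheral_info, derive_motion_info, motion_compensate):
    # 1) Build the candidate list: duplicate check only, peripheral info not filled yet.
    candidate_list = build_candidate_list()
    # 2) Parse the index of the target motion information prediction mode.
    target_mode = candidate_list[bitstream.read_mode_index()]
    # 3) Fill peripheral motion information only if the target mode actually needs it.
    if is_angle_mode(target_mode):
        fill_peripheral_info()
    # 4) Derive the current block's motion information and perform motion compensation.
    return motion_compensate(derive_motion_info(target_mode))
```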
The following describes the encoding and decoding method in the embodiment of the present application with reference to several specific embodiments.
Example 1: referring to fig. 4, a schematic flow chart of the encoding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a decoding end or an encoding end, and the method may include the following steps:
step 401, for any motion information angle prediction mode of the current block, based on a preconfigured angle of the motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed by the preconfigured angle from peripheral blocks of the current block.
The motion information angle prediction mode indicates a preconfigured angle: for each sub-region of the current block, a peripheral matching block is selected from the peripheral blocks of the current block according to the preconfigured angle, and the motion information of that sub-region is determined from the motion information of the peripheral matching block, so that one or more pieces of motion information are determined for the current block. The peripheral matching block is a block at a specified position determined from the peripheral blocks of the current block by the preconfigured angle.
For example, the peripheral blocks may include blocks adjacent to the current block; alternatively, the peripheral blocks may include blocks adjacent to the current block and non-adjacent blocks. Of course, the peripheral block may also include other blocks, which is not limited in this regard.
For example, the motion information angle prediction mode may include, but is not limited to, one or any combination of the following: horizontal prediction mode, vertical prediction mode, horizontal up prediction mode, horizontal down prediction mode, vertical right prediction mode. Of course, the above are just a few examples of the motion information angle prediction mode, and there may be other types of motion information angle prediction modes, and the motion information angle prediction mode is related to the preconfigured angle, for example, the preconfigured angle may also be 10 degrees, 20 degrees, and the like. Referring to fig. 5A, a schematic diagram of a horizontal prediction mode, a vertical prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode is shown, where different motion information angle prediction modes correspond to different preconfigured angles.
In summary, a plurality of peripheral matching blocks pointed to by the preconfigured angle may be selected from the peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode. For example, referring to fig. 5A, a plurality of peripheral matching blocks pointed to by a preconfigured angle for horizontal prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle for vertical prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle for horizontal up prediction mode, a plurality of peripheral matching blocks pointed to by a preconfigured angle for horizontal down prediction mode, and a plurality of peripheral matching blocks pointed to by a preconfigured angle for vertical right prediction mode are shown.
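As a rough illustration (the geometry and mode names are assumptions, not the exact layout of fig. 5A), the following sketch models each motion information angle prediction mode as a step direction; from every sub-region position it walks along the preconfigured angle until it leaves the current block, and the positions reached stand in for the peripheral matching blocks pointed to by that angle.

```python
ANGLE_MODES = {
    "horizontal":      (-1,  0),   # points into the left peripheral column
    "vertical":        ( 0, -1),   # points into the top peripheral row
    "horizontal_up":   (-1, -1),   # up-left diagonal
    "horizontal_down": (-1,  1),   # down-left diagonal
    "vertical_right":  ( 1, -1),   # up-right diagonal
}

def peripheral_matching_positions(block_w, block_h, mode):
    """Peripheral positions (in sub-region units) pointed to by the preconfigured angle."""
    dx, dy = ANGLE_MODES[mode]
    hits = []
    for sy in range(block_h):
        for sx in range(block_w):
            x, y = sx, sy
            while 0 <= x < block_w and 0 <= y < block_h:
                x += dx
                y += dy
            hits.append((x, y))
    return list(dict.fromkeys(hits))   # de-duplicated, traversal order preserved

# A block of 4x4 sub-regions in vertical mode maps every column to the top peripheral row.
print(peripheral_matching_positions(4, 4, "vertical"))  # [(0, -1), (1, -1), (2, -1), (3, -1)]
```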
Step 402, the plurality of peripheral matching blocks include at least a first peripheral matching block and a second peripheral matching block to be traversed; for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both and their motion information is different, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is different, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if available motion information exists for both the first peripheral matching block and the second peripheral matching block to be traversed, when motion information of the first peripheral matching block and the second peripheral matching block is the same, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible embodiment, after the first peripheral matching block and the second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image in which the current block is located or outside the image slice in which the current block is located, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
In a possible embodiment, after selecting the first peripheral matching block and the second peripheral matching block to be traversed from the plurality of peripheral matching blocks, for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block is located outside the image where the current block is located or outside the image slice where the current block is located, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block may be prohibited.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. And for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. If the first and second peripheral matching blocks to be traversed have available motion information and the motion information of the first and second peripheral matching blocks is the same, it may be determined whether the second and third peripheral matching blocks have available motion information. If there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
In a possible implementation, a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed in sequence are selected from the plurality of peripheral matching blocks. If the first and second peripheral matching blocks to be traversed have available motion information and the motion information of the first and second peripheral matching blocks is the same, it may be determined whether the second and third peripheral matching blocks have available motion information. If the available motion information exists in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For a first and a second peripheral matching block to be traversed, if at least one of the first and the second peripheral matching block does not have available motion information, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For a first and second peripheral matching block to be traversed, if at least one of the first and second peripheral matching blocks does not have available motion information, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of the first and second peripheral matching blocks does not have available motion information, it may be continuously determined whether both the second and third peripheral matching blocks have available motion information. If there is available motion information for both the second peripheral matching block and the third peripheral matching block, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. For the first and second peripheral matching blocks to be traversed, if at least one of the first and second peripheral matching blocks does not have available motion information, it may be continuously determined whether both the second and third peripheral matching blocks have available motion information. If there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. And for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
In one possible implementation, a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially are selected from the plurality of peripheral matching blocks. And for the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information. If there is no available motion information in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
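The sketch below implements one combination of the variants listed above (an assumption about which variants apply): the peripheral matching blocks are traversed as pairs (first, second), (second, third), and so on; the mode is added as soon as both blocks of a pair have available motion information that differs, a pair with identical or unavailable motion information falls through to the next pair, and the mode is otherwise prohibited.

```python
def should_add_angle_mode(blocks, has_available, motion_info):
    """blocks: peripheral matching blocks in traversal order.
    has_available(b): whether block b has available motion information.
    motion_info(b):   the motion information of b (queried only when available)."""
    for a, b in zip(blocks, blocks[1:]):
        if has_available(a) and has_available(b):
            if motion_info(a) != motion_info(b):
                return True    # differing motion information: add the mode to the list
        # identical or partly unavailable: continue with the next pair
    return False               # no differing pair found: prohibit adding the mode
```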
In the above embodiments, the process of determining whether any peripheral matching block has available motion information may include, but is not limited to, the following: if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, it is determined that the peripheral matching block has no available motion information; if the peripheral matching block is an unencoded block, it is determined that the peripheral matching block has no available motion information; if the peripheral matching block is an intra block, it is determined that the peripheral matching block has no available motion information; if the peripheral matching block is an inter-coded block, it is determined that the peripheral matching block has available motion information.
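A minimal sketch of this availability test follows; the dictionary keys are illustrative assumptions, not fields defined by this application.

```python
def has_available_motion_info(blk):
    if blk["outside_picture"] or blk["outside_slice"]:
        return False   # outside the image or image slice containing the current block
    if not blk["is_coded"]:
        return False   # unencoded block
    if blk["is_intra"]:
        return False   # intra block carries no motion information
    return True        # inter-coded block: motion information is available
```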
In a possible implementation manner, for the encoding side, after the motion information prediction mode candidate list of the current block is constructed according to the foregoing embodiments, the method further includes:
if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling motion information of peripheral blocks of the current block; determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the motion information angle prediction mode aiming at each motion information angle prediction mode in the motion information angle prediction mode candidate list; and determining the predicted value of the current block according to the motion information of the current block.
In a possible implementation manner, for the decoding side, after the motion information prediction mode candidate list of the current block is constructed according to the foregoing embodiments, the method further includes: selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode; and determining the prediction value of the current block according to the motion information of the current block.
As an example, filling the motion information of the peripheral blocks of the current block includes: traversing the peripheral blocks of the current block in order from the left-side peripheral blocks to the upper-side peripheral blocks until the first peripheral block with available motion information is found; if there are peripheral blocks before it (first peripheral blocks) that have no available motion information, filling the motion information of this found peripheral block into those first peripheral blocks; and continuing to traverse the peripheral blocks after it, and, if a subsequent peripheral block (a second peripheral block) has no available motion information, filling the motion information of the peripheral block traversed immediately before that second peripheral block into the second peripheral block.
For example, traversing in order from the left-side peripheral blocks to the upper-side peripheral blocks may include: if the current block has no left-side peripheral blocks, traversing the upper-side peripheral blocks of the current block; if the current block has no upper-side peripheral blocks, traversing the left-side peripheral blocks of the current block. The left-side peripheral blocks may include blocks adjacent to the left of the current block as well as non-adjacent blocks; the upper-side peripheral blocks may include blocks adjacent to the top of the current block as well as non-adjacent blocks. There may be one or more first peripheral blocks, namely all peripheral blocks before the first traversed peripheral block that has available motion information. A first peripheral block may be an unencoded block or an intra block; a second peripheral block may be an unencoded block or an intra block.
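A hedged sketch of this filling rule follows; the helper callables are assumptions supplied by the caller. Peripheral blocks are scanned in traversal order (left-side column first, then the top row): the first block found with available motion information also fills every earlier block, and each later block without available motion information copies from the block traversed immediately before it.

```python
def fill_peripheral_motion_info(blocks, has_available, get_info, set_info):
    first = next((i for i, b in enumerate(blocks) if has_available(b)), None)
    if first is None:
        return                                    # nothing available to copy from
    for b in blocks[:first]:                      # first peripheral blocks (before the hit)
        set_info(b, get_info(blocks[first]))
    for i in range(first + 1, len(blocks)):       # second peripheral blocks (after the hit)
        if not has_available(blocks[i]):
            # get_info must reflect values written by set_info so consecutive gaps chain
            set_info(blocks[i], get_info(blocks[i - 1]))
```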
In another possible implementation manner, for the encoding side, after constructing and obtaining the motion information prediction mode candidate list of the current block according to the foregoing embodiment, the method further includes: for each motion information angle prediction mode in the motion information prediction mode candidate list, if the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode include peripheral blocks without available motion information, filling motion information of the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of the filled plurality of peripheral matching blocks; and determining the predicted value of the current block according to the motion information of the current block.
In another possible implementation manner, for the decoding end, after constructing and obtaining the motion information prediction mode candidate list of the current block according to the foregoing embodiment, the method further includes: selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode and the plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode include peripheral blocks without available motion information, filling motion information of the peripheral blocks of the current block; determining the motion information of the current block according to the motion information of the filled plurality of peripheral matching blocks; and determining the predicted value of the current block according to the motion information of the current block.
In the above embodiment, determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks may include, but is not limited to: dividing the current block into at least one sub-region; for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks; and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block.
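As an illustration only, the following sketch derives the per-sub-region motion information for the horizontal prediction mode, where each row of 4×4 sub-regions copies the motion information of the left peripheral matching block on the same row; the 4×4 sub-region size and this row-to-matching-block mapping are assumptions made for the example, and the actual mapping follows the pre-configured angle of each motion information angle prediction mode.

```python
def derive_motion_info_horizontal(left_matching_blocks, width, height, sub_size=4):
    # left_matching_blocks[r] is the left peripheral matching block pointed to
    # horizontally from the r-th row of sub-regions (an assumed layout).
    rows, cols = height // sub_size, width // sub_size
    motion = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        row_motion_info = left_matching_blocks[r].motion_info
        for c in range(cols):
            motion[r][c] = row_motion_info  # every sub-region in the row reuses it
    return motion
```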
As can be seen from the above technical solutions, in the embodiments of the present application, the current block does not need to be divided into sub-blocks: the division information of each sub-region of the current block can be determined based on the motion information angle prediction mode, so the bit overhead caused by sub-block division can be effectively avoided. By adding only those motion information angle prediction modes whose motion information is not completely the same to the motion information prediction mode candidate list, the motion information angle prediction modes with only a single piece of motion information are excluded, the number of motion information angle prediction modes in the motion information prediction mode candidate list is reduced, the number of bits for encoding the motion information angle prediction modes can be reduced, and the encoding performance is further improved.
Fig. 5B is a schematic diagram of the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode. As can be seen from Fig. 5B, some motion information angle prediction modes make the motion information of each sub-region inside the current block the same, for example, the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode, and such modes need to be eliminated. Some motion information angle prediction modes, such as the horizontal downward prediction mode and the vertical rightward prediction mode, may make the motion information of the sub-regions inside the current block different, and such motion information angle prediction modes need to be reserved, i.e., may be added to the motion information prediction mode candidate list.
Obviously, if the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode are all added to the motion information prediction mode candidate list, then when the index of the horizontal downward prediction mode is coded, since the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode are located before it (the order of the motion information angle prediction modes is not fixed and is only an example here), 0001 may need to be coded to represent it.
However, in the embodiment of the present application, only the horizontal downward prediction mode and the vertical rightward prediction mode are added to the motion information prediction mode candidate list, and the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode are prohibited from being added. That is, the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode do not exist before the horizontal downward prediction mode, so when encoding the index of the horizontal downward prediction mode, it may only be necessary to encode 0 to represent it. In summary, the above manner can reduce the bit overhead caused by encoding the index information of the motion information angle prediction mode, reduce hardware complexity while saving bit overhead, avoid the problem of low performance gain caused by motion information angle prediction modes with a single piece of motion information, and reduce the number of bits for encoding the plurality of motion information angle prediction modes.
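The bit saving can be illustrated with a simple unary-style binarization of the candidate index; the exact codewords used by the codec are not specified here, and the binarization below is only an assumption whose codeword length grows by one bit per list position.

```python
def index_codeword_length(index):
    # Assumed unary-style binarization: index 0 costs 1 bit ("0"-style codeword),
    # index 3 costs 4 bits ("0001"-style codeword).
    return index + 1

# With all five angle modes in the candidate list, the horizontal downward
# prediction mode sits at index 3 and costs index_codeword_length(3) == 4 bits;
# with the three single-motion-information modes removed, it sits at index 0
# and costs only 1 bit.
```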
In the embodiment of the application, the motion information angle prediction modes are first subjected to duplicate checking, and the motion information of the peripheral blocks is filled afterwards, so that the complexity of the decoding end is reduced and the decoding performance is improved. For example, by performing the duplicate checking process on the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, the vertical rightward prediction mode, and the like, and adding the horizontal downward prediction mode and the vertical rightward prediction mode, whose motion information is not repeated, to the motion information prediction mode candidate list, the motion information prediction mode candidate list can be obtained first; at this time, the motion information of the peripheral blocks is not yet filled.
After the decoding end selects the target motion information prediction mode of the current block from the motion information prediction mode candidate list, if the target motion information prediction mode is neither the horizontal downward prediction mode nor the vertical rightward prediction mode, the motion information of the peripheral blocks does not need to be filled, so that the decoding end can reduce motion information filling operations.
Example 2: based on the same application concept as the above method, referring to fig. 6, a schematic flow chart of a coding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a coding end, and the method may include:
in step 601, an encoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (obtained in a conventional manner), which is not limited thereto.
For example, a motion information prediction mode candidate list may be constructed for the current block, that is, all sub-regions in the current block may correspond to the same motion information prediction mode candidate list; alternatively, multiple motion information prediction mode candidate lists may be constructed for the current block, i.e., all sub-regions within the current block may correspond to the same or different motion information prediction mode candidate lists. For convenience of description, an example of constructing a motion information prediction mode candidate list for a current block will be described.
The motion information angle prediction mode may be an angle prediction mode for predicting motion information, i.e., used for inter-frame coding, rather than intra-frame coding, and the motion information angle prediction mode selects a matching block rather than a matching pixel point.
The motion information prediction mode candidate list may be constructed in a conventional manner, or may be constructed in the manner of embodiment 1, which is not limited to this.
Step 602, if a motion information angle prediction mode exists in the motion information prediction mode candidate list, the encoding end fills the motion information of the peripheral blocks of the current block. For example, the peripheral blocks of the current block are traversed in the traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there are first peripheral blocks without available motion information before that peripheral block, the motion information of that peripheral block is filled into each first peripheral block; the peripheral blocks after that peripheral block are then traversed continuously, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
Step 603, the encoding end sequentially traverses each motion information angle prediction mode in the motion information prediction mode candidate list. For the currently traversed motion information angle prediction mode, the motion information of the current block is determined according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode. For example, the current block is divided into at least one sub-region; for each sub-region, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode; and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since step 602 fills the peripheral blocks of the current block that have no available motion information, all of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode in step 603 have available motion information.
Step 604, the encoding end determines the prediction value of the current block according to the motion information of the current block. For example, the prediction value of each sub-region within the current block is determined based on the motion information of each sub-region within the current block, which is called motion compensation.
Step 605, the encoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode is a target motion information angle prediction mode or other types of motion information prediction modes.
For example, for each motion information angle prediction mode (such as the horizontal downward prediction mode, the vertical rightward prediction mode, etc.) in the motion information prediction mode candidate list, after determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode, and determining the prediction value of the current block according to the motion information of the current block, the encoding end may determine the rate-distortion cost value of the motion information angle prediction mode. For example, the rate-distortion cost value of the motion information angle prediction mode is determined by using the rate-distortion principle, and the determination manner is not limited.
For other types of motion information prediction modes R (obtained in a conventional manner) in the motion information prediction mode candidate list, the motion information of the current block may be determined according to the motion information prediction mode R, the prediction value of the current block may be determined according to the motion information of the current block, and then the rate-distortion cost value of the motion information prediction mode R may be determined, which is not limited to this process.
Then, the encoding end may determine the motion information prediction mode corresponding to the minimum rate-distortion cost as a target motion information prediction mode, where the target motion information prediction mode may be a target motion information angle prediction mode (such as a horizontal downward prediction mode or a vertical rightward prediction mode), or may be another type of motion information prediction mode R.
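A minimal sketch of this selection step is given below; evaluate_rd_cost is a hypothetical callback that, for a given mode, derives the motion information of the current block, performs motion compensation, and returns the rate-distortion cost value as described in steps 603 and 604.

```python
def select_target_mode(candidate_list, evaluate_rd_cost):
    # Traverse every candidate mode and keep the one with the minimum
    # rate-distortion cost value as the target motion information prediction mode.
    best_mode, best_cost = None, float("inf")
    for mode in candidate_list:
        cost = evaluate_rd_cost(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```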
Example 3: based on the same application concept as the above method, referring to fig. 7, a schematic flow chart of the encoding and decoding method provided in the embodiment of the present application is shown, where the method may be applied to a decoding end, and the method may include:
in step 701, a decoding end constructs a motion information prediction mode candidate list of a current block, where the motion information prediction mode candidate list may include at least one motion information angle prediction mode. Of course, the motion information prediction mode candidate list may also include other types of motion information prediction modes (obtained in a conventional manner), which is not limited thereto.
In step 702, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, where the target motion information prediction mode is a target motion information angle prediction mode or other types of motion information prediction modes.
The process of selecting the target motion information prediction mode at the decoding end may include: after receiving the coded bit stream, acquiring indication information from the coded bit stream, where the indication information is used to indicate index information of the target motion information prediction mode in the motion information prediction mode candidate list. For example, when the encoding end sends the coded bit stream to the decoding end, the coded bit stream carries the indication information, and the indication information is used to indicate the index information of the target motion information prediction mode in the motion information prediction mode candidate list. It is assumed that the motion information prediction mode candidate list sequentially includes a horizontal downward prediction mode, a vertical rightward prediction mode, and a motion information prediction mode R, and that the indication information indicates index information 1, where index information 1 represents the first motion information prediction mode in the motion information prediction mode candidate list. Based on this, the decoding end acquires index information 1 from the coded bit stream.
The decoding end selects the motion information prediction mode corresponding to the index information from the motion information prediction mode candidate list, and determines the selected motion information prediction mode as the target motion information prediction mode of the current block. For example, when the indication information indicates index information 1, the decoding end may determine the first motion information prediction mode in the motion information prediction mode candidate list as the target motion information prediction mode of the current block.
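A minimal sketch of this decoding-end selection, assuming the index information has already been parsed from the coded bit stream and, as in the example above, is counted starting from 1:

```python
def select_target_mode_from_index(candidate_list, index_information):
    # Index information 1 selects the first entry of the candidate list.
    return candidate_list[index_information - 1]

# Example from the text: with a list of [horizontal-down, vertical-right, R],
# index information 1 selects the horizontal downward prediction mode.
```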
In step 703, if the target motion information prediction mode is the target motion information angle prediction mode, the decoding end fills the motion information of the peripheral blocks of the current block.
For example, the peripheral blocks of the current block are traversed in the traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there are first peripheral blocks without available motion information before that peripheral block, the motion information of that peripheral block is filled into each first peripheral block; the peripheral blocks after that peripheral block are then traversed continuously, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
In a possible implementation manner, if the target motion information prediction mode is not the motion information angle prediction mode, the motion information of the peripheral blocks of the current block is not needed to be filled, so that the decoding end can reduce the filling operation of the motion information.
Step 704, the decoding end determines the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode. For example, the current block is divided into at least one sub-region; for each sub-region, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode; and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
It should be noted that, since step 703 fills the peripheral blocks of the current block that have no available motion information, all of the peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode in step 704 have available motion information.
Step 705, the decoding end determines the prediction value of the current block according to the motion information of the current block. For example, a prediction value for each sub-region within the current block is determined based on the motion information for each sub-region within the current block, a process also referred to as motion compensation.
Example 4: the embodiment of the present application provides another encoding and decoding method, which may include:
step a1, an encoding end constructs a motion information prediction mode candidate list of a current block, wherein the motion information prediction mode candidate list can comprise at least one motion information angle prediction mode.
And a2, the encoding end sequentially traverses each motion information angle prediction mode in the motion information prediction mode candidate list. For the currently traversed motion information angle prediction mode, if the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode include peripheral blocks without available motion information, the motion information of the peripheral blocks of the current block is filled.
For example, for a peripheral block for which no available motion information exists, the available motion information of a neighboring block of the peripheral block is filled as the motion information of the peripheral block; or, the available motion information of the reference block at the corresponding position of the peripheral block in the temporal reference frame is filled as the motion information of the peripheral block; or, default motion information is filled as the motion information of the peripheral block.
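The three filling options just listed can be sketched as follows; the cascade order shown here (spatial neighbor first, then temporal reference block, then default motion information) is an assumption made for illustration, since the text presents the options as alternatives, and spatial_neighbor, temporal_colocated and default_motion_info are hypothetical stand-ins.

```python
def fill_unavailable_block(block, spatial_neighbor, temporal_colocated, default_motion_info):
    if spatial_neighbor is not None and spatial_neighbor.motion_info is not None:
        block.motion_info = spatial_neighbor.motion_info        # option 1: neighboring block
    elif temporal_colocated is not None and temporal_colocated.motion_info is not None:
        block.motion_info = temporal_colocated.motion_info      # option 2: temporal reference block
    else:
        block.motion_info = default_motion_info                 # option 3: default motion information
```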
For another example, when the peripheral blocks of the current block are filled, the peripheral blocks are traversed in the traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there are first peripheral blocks without available motion information before that peripheral block, the motion information of that peripheral block is filled into each first peripheral block; the peripheral blocks after that peripheral block are then traversed continuously, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
And a3, the encoding end determines the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode. For example, the current block is divided into at least one sub-region; for each sub-region, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the pre-configured angle of the motion information angle prediction mode; and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
And a4, the encoding end determines the predicted value of the current block according to the motion information of the current block. For example, the prediction value of each sub-region within the current block is determined based on the motion information of each sub-region within the current block, which is called motion compensation.
And a5, selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list by the encoding end.
Example 5: the embodiment of the present application provides another encoding and decoding method, which may include:
step b1, a decoding end constructs a motion information prediction mode candidate list of the current block, wherein the motion information prediction mode candidate list can comprise at least one motion information angle prediction mode.
And b2, the decoding end selects a target motion information prediction mode of the current block from the motion information prediction mode candidate list, wherein the target motion information prediction mode is a target motion information angle prediction mode or other types of motion information prediction modes.
And b3, if the target motion information prediction mode is a target motion information angle prediction mode and the peripheral matching blocks pointed by the pre-configuration angles of the target motion information angle prediction mode comprise peripheral blocks without available motion information, filling the motion information of the peripheral blocks of the current block by the decoding end.
For example, for a peripheral block for which no available motion information exists, the available motion information of a neighboring block of the peripheral block is filled as the motion information of the peripheral block; or, the available motion information of the reference block at the corresponding position of the peripheral block in the temporal reference frame is filled as the motion information of the peripheral block; or, default motion information is filled as the motion information of the peripheral block.
For another example, when the peripheral blocks of the current block are filled, the peripheral blocks are traversed in the traversal order from the left peripheral blocks to the upper peripheral blocks until the first peripheral block with available motion information is found; if there are first peripheral blocks without available motion information before that peripheral block, the motion information of that peripheral block is filled into each first peripheral block; the peripheral blocks after that peripheral block are then traversed continuously, and if a second peripheral block without available motion information is encountered, the motion information of the peripheral block traversed immediately before the second peripheral block is filled into the second peripheral block.
And b4, the decoding end determines the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode. For example, the current block is divided into at least one sub-region; for each sub-region, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode; and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block.
And step b5, the decoding end determines the predicted value of the current block according to the motion information of the current block. For example, a prediction value for each sub-region within the current block is determined based on the motion information for each sub-region within the current block.
Example 6: in the above-described embodiment, the process of constructing the motion information prediction mode candidate list, that is, for any one motion information angle prediction mode, deciding whether to add the motion information angle prediction mode to the motion information prediction mode candidate list or to prohibit it from being added to the motion information prediction mode candidate list, includes:
step c1, obtaining at least one motion information angle prediction mode of the current block.
For example, the following motion information angle prediction modes may be obtained: the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode. Of course, the above manner is only an example, and the preconfigured angle is not limited thereto; the preconfigured angle may be any angle between 0 and 360 degrees. For example, the horizontal rightward direction from the center point of the sub-region may be defined as 0 degrees, so that any angle rotated counterclockwise from 0 degrees may be a preconfigured angle; alternatively, 0 degrees may be defined in another direction from the center point of the sub-region. In practical applications, the preconfigured angle may also be a fractional angle, such as 22.5 degrees.
And c2, for any motion information angle prediction mode of the current block, based on the pre-configured angle of the motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by the pre-configured angle from the peripheral blocks of the current block.
And c3, adding the motion information angle prediction mode into a motion information prediction mode candidate list or forbidding adding the motion information angle prediction mode into the motion information prediction mode candidate list based on the characteristics of whether the plurality of peripheral matching blocks have available motion information, whether the available motion information of the plurality of peripheral matching blocks is the same and the like.
The decision making process of step c3 will be described below with reference to several specific cases.
In case one, a first peripheral matching block and a second peripheral matching block to be traversed are selected from the plurality of peripheral matching blocks, and if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block and a second peripheral matching block to be traversed from the plurality of peripheral matching blocks, and if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if an intra block and/or an unencoded block exists in the first and second peripheral matching blocks, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block.
And if the intra block and/or the non-coded block exists in the first peripheral matching block and the second peripheral matching block, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
And if at least one of the first peripheral matching block and the second peripheral matching block is positioned outside the image of the current block or outside the image slice of the current block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
And if at least one of the first peripheral matching block and the second peripheral matching block is positioned outside the image of the current block or outside the image slice of the current block, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
And in case two, selecting a first peripheral matching block and a second peripheral matching block to be traversed from the plurality of peripheral matching blocks, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, and if the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block and a second peripheral matching block to be traversed from the plurality of peripheral matching blocks, and if available motion information exists in both the first peripheral matching block and the second peripheral matching block, when the motion information of the first peripheral matching block and the second peripheral matching block is the same, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block may be prohibited.
And in case three, selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. And if the available motion information exists in the first peripheral matching block and the second peripheral matching block, and the motion information of the first peripheral matching block and the second peripheral matching block is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. And if the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information. If there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. And if the first peripheral matching block and the second peripheral matching block both have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether the second peripheral matching block and the third peripheral matching block both have available motion information. If the available motion information exists in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
And in case four, selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block has no available motion information, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information or not; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, the motion information angular prediction mode is added to the motion information prediction mode candidate list of the current block.
And selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially from the plurality of peripheral matching blocks. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is no motion information available in at least one of the second peripheral matching block and the third peripheral matching block, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
And in case five, if the plurality of peripheral matching blocks all have available motion information and the motion information of the plurality of peripheral matching blocks is not completely the same, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block.
If there is available motion information in all the peripheral matching blocks and the motion information of the peripheral matching blocks is completely the same, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
And in case six, if there is no available motion information in at least one of the plurality of peripheral matching blocks, the motion information angle prediction mode may be added to the motion information prediction mode candidate list of the current block.
If there is no motion information available for at least one of the plurality of peripheral matching blocks, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
If at least one of the plurality of peripheral matching blocks does not have available motion information and the motion information of the plurality of peripheral matching blocks is not identical, the motion information angular prediction mode may be added to the motion information prediction mode candidate list of the current block.
If there is no motion information available in at least one of the plurality of neighboring matching blocks and the motion information of the plurality of neighboring matching blocks is identical, the motion information angular prediction mode may be prohibited from being added to the motion information prediction mode candidate list of the current block.
For case five and case six, the manner of determining whether the motion information of the plurality of peripheral matching blocks is completely the same or not completely the same may include, but is not limited to: selecting at least one first peripheral matching block (e.g., all or a portion of all peripheral matching blocks) from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks. If the motion information of the first peripheral matching block is different from the motion information of the second peripheral matching block, it is determined that the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different; and if the motion information of the first peripheral matching block is the same as that of the second peripheral matching block, it is determined that the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are the same. Based on this, if the motion information of any pair of peripheral matching blocks to be compared is different, it is determined that the motion information of the plurality of peripheral matching blocks is not completely the same; and if the motion information of all the pairs of peripheral matching blocks to be compared is the same, it is determined that the motion information of the plurality of peripheral matching blocks is completely the same.
For cases five and six, the determination that there is no available motion information in at least one of the plurality of peripheral matching blocks may include, but is not limited to: selecting at least one first peripheral matching block from the plurality of peripheral matching blocks; for each first peripheral matching block, a second peripheral matching block corresponding to the first peripheral matching block is selected from the plurality of peripheral matching blocks. If at least one of any pair of peripheral matching blocks to be compared (i.e. the first peripheral matching block and the second peripheral matching block) does not have available motion information, determining that at least one of the plurality of peripheral matching blocks does not have available motion information. And if all the peripheral matching blocks to be compared have available motion information, determining that the plurality of peripheral matching blocks have available motion information.
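One combination of the alternatives in case five and case six can be sketched as follows: the mode is added only when every pair of peripheral matching blocks to be compared has available motion information and at least one pair differs. The pairs are assumed to have been selected already (for example, by the traversal step described next).

```python
def should_add_angle_mode(pairs):
    # pairs: list of (first_block, second_block) tuples; motion_info is None
    # when the block has no available motion information.
    all_available = all(a.motion_info is not None and b.motion_info is not None
                        for a, b in pairs)
    any_different = any(a.motion_info != b.motion_info
                        for a, b in pairs
                        if a.motion_info is not None and b.motion_info is not None)
    return all_available and any_different
```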
In each of the above cases, selecting the first peripheral matching block from the plurality of peripheral matching blocks may include: taking any one of the plurality of peripheral matching blocks as a first peripheral matching block; alternatively, a specified one of the plurality of peripheral matching blocks is set as a first peripheral matching block. Selecting a second peripheral matching block from the plurality of peripheral matching blocks may include: selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks according to the traversal step size and the position of the first peripheral matching block; the traversal step may be a block interval between the first and second peripheral matching blocks.
For cases three and four, selecting a third peripheral matching block from the plurality of peripheral matching blocks may include: according to the traversal step size and the position of the second peripheral matching block, selecting a third peripheral matching block corresponding to the second peripheral matching block from the plurality of peripheral matching blocks; the traversal step may be the block interval between the second peripheral matching block and the third peripheral matching block.
For example, for a peripheral matching block A1, a peripheral matching block A2, a peripheral matching block A3, a peripheral matching block A4, and a peripheral matching block A5 arranged in this order, examples of the respective peripheral matching blocks for different cases are as follows:
for cases one and two, assuming that the peripheral matching block A1 is taken as the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to the peripheral matching block A1 is the peripheral matching block A3. For cases three and four, assuming that the peripheral matching block A1 is taken as the first peripheral matching block and the traversal step size is 2, the second peripheral matching block corresponding to the peripheral matching block A1 is the peripheral matching block A3. The third peripheral matching block corresponding to the peripheral matching block A3 is the peripheral matching block A5.
For the fifth case and the sixth case, it is assumed that the peripheral matching block A1 and the peripheral matching block A3 are both regarded as the first peripheral matching block, and the traversal step size is 2, and when the peripheral matching block A1 is regarded as the first peripheral matching block, the second peripheral matching block is the peripheral matching block A3. When the peripheral matching block A3 is the first peripheral matching block, then the second peripheral matching block is the peripheral matching block A5.
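The selection by traversal step can be sketched as follows, reproducing the A1 to A5 examples above: the second peripheral matching block lies one traversal step after the first, and the third (when needed) lies one traversal step after the second.

```python
def blocks_to_traverse(matching_blocks, first_index, step, need_third=False):
    picked = [matching_blocks[first_index], matching_blocks[first_index + step]]
    if need_third:
        picked.append(matching_blocks[first_index + 2 * step])
    return picked

# With matching_blocks = [A1, A2, A3, A4, A5] (0-based indices) and step 2:
#   blocks_to_traverse(matching_blocks, 0, 2)        -> [A1, A3]
#   blocks_to_traverse(matching_blocks, 0, 2, True)  -> [A1, A3, A5]
```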
For example, before selecting the peripheral matching blocks from the plurality of peripheral matching blocks, the traversal step size may be determined based on the size of the current block, and the number of motion information comparisons is controlled through the traversal step size. For example, assuming that the size of each peripheral matching block is 4×4 and the size of the current block is 16×16, the current block corresponds to 4 peripheral matching blocks for the horizontal prediction mode. In order to control the number of motion information comparisons to 1, the traversal step size may be 2 or 3. If the traversal step size is 2, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 3rd peripheral matching block; or, the first peripheral matching block is the 2nd peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. If the traversal step size is 3, the first peripheral matching block is the 1st peripheral matching block and the second peripheral matching block is the 4th peripheral matching block. For another example, in order to control the number of motion information comparisons to 2, the traversal step size may be 1; the first peripheral matching blocks are the 1st peripheral matching block and the 3rd peripheral matching block, the second peripheral matching block corresponding to the 1st peripheral matching block is the 2nd peripheral matching block, and the second peripheral matching block corresponding to the 3rd peripheral matching block is the 4th peripheral matching block. Of course, the above is only an example for the horizontal prediction mode, and the traversal step size may also be determined in other ways, which is not limited. Moreover, for the motion information angle prediction modes other than the horizontal prediction mode, the manner of determining the traversal step size refers to that of the horizontal prediction mode and is not repeated herein.
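The relation between the traversal step size and the number of comparisons can be sketched as follows; the pairing rule used here (pair block i with block i + step, then skip to block i + step + 1) is an assumption that reproduces the 16×16 example above.

```python
def comparison_pairs(num_matching_blocks, step):
    # Returns 0-based (first, second) index pairs of the matching blocks to compare.
    pairs, i = [], 0
    while i + step < num_matching_blocks:
        pairs.append((i, i + step))
        i += step + 1
    return pairs

# comparison_pairs(4, 2) -> [(0, 2)]            one comparison (1st vs 3rd block)
# comparison_pairs(4, 3) -> [(0, 3)]            one comparison (1st vs 4th block)
# comparison_pairs(4, 1) -> [(0, 1), (2, 3)]    two comparisons (1st vs 2nd, 3rd vs 4th)
```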
In each of the above cases, the process of determining whether there is available motion information in any one peripheral matching block may include, but is not limited to: and if the peripheral matching block is positioned outside the image of the current block or the peripheral matching block is positioned outside the image slice of the current block, determining that the peripheral matching block has no available motion information. And if the peripheral matching block is an uncoded block, determining that no available motion information exists in the peripheral matching block. And if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block. If the peripheral matching block is an inter-frame coded block, determining that available motion information exists in the peripheral matching block.
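The availability test just described can be sketched as a single predicate; the block is assumed to be a hypothetical object exposing its position and coding type.

```python
def has_available_motion_info(block, current_image, current_slice):
    # Outside the image or outside the image slice of the current block: unavailable.
    if block.is_outside(current_image) or block.is_outside(current_slice):
        return False
    # Unencoded block or intra block: unavailable.
    if block.is_unencoded or block.is_intra:
        return False
    # Inter-frame coded block: available motion information exists.
    return True
```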
Example 7: in the above embodiments, the process of constructing the motion information prediction mode candidate list is described below with reference to several specific application scenarios.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, then the peripheral block has available motion information.
Referring to FIG. 8A, the width W of the current block is greater than or equal to 8, and the height H of the current block is greater than or equal to 8. Let m be W/4 and n be H/4, let the pixel at the upper left corner of the current block be (x, y), and let the peripheral block containing pixel (x-1, y+H+W-1) be A0, where A0 has a size of 4×4. The peripheral blocks are traversed in the clockwise direction, and each 4×4 peripheral block is denoted A1, A2, ..., A(2m+2n), where A(2m+2n) is the peripheral block containing pixel (x+W+H-1, y-1).
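As an illustration only, the clockwise indexing of Fig. 8A can be sketched as below, assuming A0 through A(m+n) run up the left column (with A(m+n) containing pixel (x-1, y-1)) and A(m+n+1) through A(2m+2n) run rightwards along the top row; this split of the index range is an inference from the index examples that follow, and the function returns one representative pixel inside each 4×4 peripheral block.

```python
def peripheral_block_pixel(i, x, y, W, H):
    m, n = W // 4, H // 4
    if i <= m + n:
        # Left column, from A0 containing (x-1, y+H+W-1) up to A(m+n) containing (x-1, y-1).
        return (x - 1, y + H + W - 1 - 4 * i)
    # Top row, from A(m+n+1) containing (x, y-1) to A(2m+2n) containing (x+W+H-4, y-1).
    return (x + 4 * (i - m - n) - 4, y - 1)
```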
For each motion information angle prediction mode, based on the preconfigured angle of the motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed by the preconfigured angle from the peripheral blocks, and selecting a peripheral matching block to be traversed from the plurality of peripheral matching blocks (for example, selecting a first peripheral matching block and a second peripheral matching block to be traversed, or selecting a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially). If the available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
If a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed are selected, and the available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both the first peripheral matching block and the second peripheral matching block have available motion information and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the second peripheral matching block and the third peripheral matching block are then compared.
If the second peripheral matching block and the third peripheral matching block both have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or if both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
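A minimal sketch of this comparison procedure: two peripheral matching blocks compare as different only when both have available motion information and that motion information differs; otherwise the comparison result is the same. A mode with two matching blocks to be traversed is added only when the first comparison result is different, and a mode with three matching blocks falls back to comparing the second and third blocks when the first comparison result is the same.

```python
def compare_result(a, b):
    # "different" only when both blocks have available motion information and it differs.
    if a.motion_info is None or b.motion_info is None:
        return "same"
    return "different" if a.motion_info != b.motion_info else "same"

def add_to_candidate_list(first, second, third=None):
    if compare_result(first, second) == "different":
        return True
    if third is not None and compare_result(second, third) == "different":
        return True
    return False
```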
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal prediction mode, A(m-1+H/8) is taken as the first peripheral matching block and A(m+n-1) is taken as the second peripheral matching block. Of course, A(m-1+H/8) and A(m+n-1) are just one example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal prediction mode may also be used as the first peripheral matching block or the second peripheral matching block, the implementation is similar, and this is not repeated in the following. Whether the comparison result of A(m-1+H/8) and A(m+n-1) is the same is judged by the above comparison method. If the same, the horizontal prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. If not the same, the horizontal prediction mode is added to the motion information prediction mode candidate list of the current block.
For the horizontal downward prediction mode, A(W/8-1) is taken as the first peripheral matching block, A(m-1) is taken as the second peripheral matching block, and A(m-1+H/8) is taken as the third peripheral matching block. Of course, the above is only an example; other peripheral matching blocks pointed to by the preconfigured angle of the horizontal downward prediction mode may also be used as the first peripheral matching block, the second peripheral matching block, or the third peripheral matching block, the implementation is similar, and this is not repeated in the following. Whether the comparison result of A(W/8-1) and A(m-1) is the same is judged by the above comparison method. If not the same, the horizontal downward prediction mode is added to the motion information prediction mode candidate list of the current block. If the same, whether the comparison result of A(m-1) and A(m-1+H/8) is the same is judged by the above comparison method. If the comparison result of A(m-1) and A(m-1+H/8) is not the same, the horizontal downward prediction mode is added to the motion information prediction mode candidate list of the current block. If the comparison result of A(m-1) and A(m-1+H/8) is the same, the horizontal downward prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block and A(m+n+1) is taken as the second peripheral matching block. Of course, the above is only an example, and the first peripheral matching block and the second peripheral matching block are not limited thereto. Whether the comparison result of A(m+n+1+W/8) and A(m+n+1) is the same is judged by the above comparison method. If the same, the vertical prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block. If not the same, the vertical prediction mode is added to the motion information prediction mode candidate list of the current block.
For the vertical rightward prediction mode, A(m+n+1+W/8) is taken as the first peripheral matching block, A(2m+n+1) is taken as the second peripheral matching block, and A(2m+n+1+H/8) is taken as the third peripheral matching block. Of course, the above is only an example, and the first peripheral matching block, the second peripheral matching block, and the third peripheral matching block are not limited thereto. Whether the comparison result of A(m+n+1+W/8) and A(2m+n+1) is the same is judged by the above comparison method. If not the same, the vertical rightward prediction mode is added to the motion information prediction mode candidate list. If the same, whether the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same is judged by the above comparison method. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is not the same, the vertical rightward prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A(2m+n+1) and A(2m+n+1+H/8) is the same, the vertical rightward prediction mode is prohibited from being added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward prediction mode, A_{W/8-1} is used as the first peripheral matching block, A_{m-1} as the second peripheral matching block, and A_{m-1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the motion information of A_{W/8-1} and A_{m-1} is the same. If not, the horizontal downward prediction mode may be added to the motion information prediction mode candidate list of the current block. If the same, the above comparison method is used to judge whether the motion information of A_{m-1} and A_{m-1+H/8} is the same. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward prediction mode is added to the motion information prediction mode candidate list of the current block. If the motion information of A_{m-1} and A_{m-1+H/8} is the same, the addition of the horizontal downward prediction mode to the motion information prediction mode candidate list of the current block is prohibited.
For the horizontal prediction mode, A_{m-1+H/8} is used as the first peripheral matching block and A_{m+n-1} as the second peripheral matching block. Of course, this is only an example, and the first peripheral matching block and the second peripheral matching block are not limited thereto. The above comparison method is used to judge whether the motion information of A_{m-1+H/8} and A_{m+n-1} is the same. If the same, the horizontal prediction mode is not added to the motion information prediction mode candidate list. If not, the horizontal prediction mode is added to the motion information prediction mode candidate list.
For the horizontal upward prediction mode, A_{m+n-1} is used as the first peripheral matching block, A_{m+n} as the second peripheral matching block, and A_{m+n+1} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the motion information of A_{m+n-1} and A_{m+n} is the same. If not, the horizontal upward prediction mode is added to the motion information prediction mode candidate list of the current block. If the same, the above comparison method is used to judge whether the motion information of A_{m+n} and A_{m+n+1} is the same. If the comparison result of A_{m+n} and A_{m+n+1} is different, the horizontal upward prediction mode is added to the motion information prediction mode candidate list. If the motion information of A_{m+n} and A_{m+n+1} is the same, the addition of the horizontal upward prediction mode to the motion information prediction mode candidate list is prohibited.
For the vertical prediction mode, A_{m+n+1+W/8} is used as the first peripheral matching block and A_{m+n+1} as the second peripheral matching block. Of course, the above is just one example, and the first peripheral matching block and the second peripheral matching block are not limited thereto. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{m+n+1} is the same. If the same, the addition of the vertical prediction mode to the motion information prediction mode candidate list of the current block may be prohibited. If not, the vertical prediction mode may be added to the motion information prediction mode candidate list of the current block.
For the vertical rightward prediction mode, A_{m+n+1+W/8} is used as the first peripheral matching block, A_{2m+n+1} as the second peripheral matching block, and A_{2m+n+1+H/8} as the third peripheral matching block. Of course, the above is only an example, and the first, second, and third peripheral matching blocks are not limited thereto. The above comparison method is used to judge whether the motion information of A_{m+n+1+W/8} and A_{2m+n+1} is the same. If not, the vertical rightward prediction mode is added to the motion information prediction mode candidate list. If the same, the above comparison method is used to judge whether the motion information of A_{2m+n+1} and A_{2m+n+1+H/8} is the same. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is different, the vertical rightward prediction mode is added to the motion information prediction mode candidate list. If the comparison result of A_{2m+n+1} and A_{2m+n+1+H/8} is the same, the addition of the vertical rightward prediction mode to the motion information prediction mode candidate list is prohibited.
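The per-mode checks above all follow the same cascade over two or three peripheral matching blocks pointed to by the preconfigured angle. The following is a minimal sketch of that cascade in Python; comparison_same() stands for the comparison method described above, the helper signatures are illustrative assumptions, and the block list passed in mirrors the examples of this scenario (e.g. [A_{m-1+H/8}, A_{m+n-1}] for the horizontal prediction mode).

```python
def maybe_add_mode(mode, matching_blocks, candidate_list, comparison_same):
    """Cascade used above for one motion information angle prediction mode.

    matching_blocks: the two or three peripheral matching blocks pointed to by
    the preconfigured angle, in the order first/second/(third).
    comparison_same(a, b): assumed helper implementing the comparison method
    described earlier in this document.
    """
    for first, second in zip(matching_blocks, matching_blocks[1:]):
        if not comparison_same(first, second):
            # Any adjacent pair whose comparison result is "different":
            # the mode is added to the candidate list.
            candidate_list.append(mode)
            return True
    # Every compared pair is "the same": adding the mode is prohibited.
    return False
```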
Application scenario 2: similar to the implementation of application scenario 1, the difference is: in the application scenario 2, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. For example, the above-described processing is performed regardless of whether a left neighboring block of the current block exists or not and whether an upper neighboring block of the current block exists or not.
Application scenario 3: similar to the implementation of application scenario 1, the difference is: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other processes are similar to the application scenario 1, and are not described herein again.
Application scenario 4: similar to the implementation of application scenario 1, except that: the existence of a peripheral block of the current block indicates that the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; the absence of a peripheral block of the current block indicates that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located but outside the image slice where the current block is located. It is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. Other processes are similar to application scenario 1 and are not described in detail here.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral blocks of the current block do not exist means that the peripheral blocks are located outside the image where the current block is located, or the peripheral blocks are located inside the image where the current block is located, but the peripheral blocks are located outside the image slice where the current block is located. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information.
Referring to FIG. 8A, the width W of the current block is greater than or equal to 8 and the height H of the current block is greater than or equal to 8. Let m be W/4 and n be H/4. The pixel at the upper-left corner of the current block is (x, y), and A_0 is the peripheral block where the pixel (x-1, y+H+W-1) is located; the size of A_0 is 4×4. Traversing the peripheral blocks in the clockwise direction, the 4×4 peripheral blocks are respectively denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block where the pixel (x+W+H-1, y-1) is located.
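As an aid to reading the indices used below, the positions of the 4×4 peripheral blocks of FIG. 8A can be enumerated as in the following sketch; the coordinate convention (one representative pixel per block) is an illustrative assumption, not part of the described method.

```python
def peripheral_block_positions(x, y, W, H):
    """Return one representative pixel for each 4x4 peripheral block A_0 .. A_{2m+2n}
    of a current block whose upper-left pixel is (x, y), following FIG. 8A:
    A_0 contains (x-1, y+H+W-1), A_{m+n} contains (x-1, y-1), and
    A_{2m+2n} contains (x+W+H-1, y-1)."""
    m, n = W // 4, H // 4
    positions = []
    # Left side, traversed clockwise from bottom to top: A_0 .. A_{m+n}.
    for i in range(m + n + 1):
        positions.append((x - 1, y + H + W - 1 - 4 * i))
    # Upper side, traversed from left to right: A_{m+n+1} .. A_{2m+2n}.
    for j in range(1, m + n + 1):
        positions.append((x - 1 + 4 * j, y - 1))
    return positions  # positions[k] lies inside peripheral block A_k
```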
For each motion information angle prediction mode, a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and the peripheral matching blocks to be traversed are selected from the plurality of peripheral matching blocks. Unlike application scenario 1, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, or if both the first peripheral matching block and the second peripheral matching block have available motion information and their motion information is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list.
If available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list; or, the comparison continues between the second peripheral matching block and the third peripheral matching block.
If at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, or both the second peripheral matching block and the third peripheral matching block have available motion information and the motion information of the second peripheral matching block and the third peripheral matching block is different, the comparison result of the two peripheral matching blocks is different, and the motion information angle prediction mode is added to the motion information prediction mode candidate list. Or, if the available motion information exists in both the second peripheral matching block and the third peripheral matching block, and the motion information of the second peripheral matching block and the third peripheral matching block is the same, the comparison result of the two peripheral matching blocks is the same, and the motion information angle prediction mode is prohibited from being added to the motion information prediction mode candidate list.
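In other words, application scenario 5 treats "no available motion information" as a difference. A minimal sketch of this comparison rule follows, assuming illustrative helpers has_available_mi() and motion_info() that report and return a block's motion information.

```python
def comparison_same_scenario5(block_a, block_b):
    # "Same" only when both peripheral matching blocks have available motion
    # information and that motion information is identical; if at least one of
    # them has no available motion information, or the information differs,
    # the comparison result is "different".
    if not (has_available_mi(block_a) and has_available_mi(block_b)):
        return False
    return motion_info(block_a) == motion_info(block_b)
```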
Based on the above comparison method, the corresponding processing flow refers to application scenario 1. For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal prediction mode, the above comparison method is used to judge whether the motion information of A_{m-1+H/8} and A_{m+n-1} is the same. If the same, the addition of the horizontal prediction mode to the motion information prediction mode candidate list is prohibited. If not, the horizontal prediction mode is added to the motion information prediction mode candidate list.
For the horizontal downward prediction mode, the above comparison method is used to judge whether the motion information of A_{W/8-1} and A_{m-1} is the same. If not, the horizontal downward prediction mode is added to the motion information prediction mode candidate list. If the same, the above comparison method is used to judge whether the motion information of A_{m-1} and A_{m-1+H/8} is the same. If the comparison result of A_{m-1} and A_{m-1+H/8} is different, the horizontal downward prediction mode is added to the motion information prediction mode candidate list. If the motion information of A_{m-1} and A_{m-1+H/8} is the same, the addition of the horizontal downward prediction mode to the motion information prediction mode candidate list may be prohibited.
Application scenario 6: similar to the implementation of application scenario 5, except that: it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor is it necessary to distinguish whether an upper adjacent block of the current block exists or not. For example, whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not, the processing is performed in the manner of the application scenario 5.
Application scenario 7: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. In the application scenario 8, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not. Other processing procedures are similar to the application scenario 5, and are not repeated here.
Application scenario 9: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is an uncoded block, or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, the peripheral block is indicated to have available motion information. For each motion information angle prediction mode, selecting a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks based on the preconfigured angle of the motion information angle prediction mode, and selecting at least one first peripheral matching block (such as one or more) from the plurality of peripheral matching blocks; for each first peripheral matching block, selecting a second peripheral matching block corresponding to the first peripheral matching block from the plurality of peripheral matching blocks.
Each combination of the first peripheral matching block and the second peripheral matching block is referred to as a matching block group, for example, A1, A3, and A5 are selected from a plurality of peripheral matching blocks as a first peripheral matching block, A2 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A1, A4 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A3, and A6 is selected from a plurality of peripheral matching blocks as a second peripheral matching block corresponding to A5, so that the matching block group 1 includes A1 and A2, the matching block group 2 includes A3 and A4, and the matching block group 3 includes A5 and A6. A1, A2, A3, A4, A5, and A6 are any peripheral matching blocks among the plurality of peripheral matching blocks, and the selection manner thereof may be configured empirically, without limitation.
For each matching block group, if available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If at least one of the two peripheral matching blocks in the matching block group does not have available motion information, or both of the two peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to the motion information prediction mode candidate list.
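A minimal sketch of this group-based decision is given below, assuming the same illustrative helpers has_available_mi() and motion_info(); block_groups holds the (first, second) peripheral matching block pairs such as (A1, A2), (A3, A4), (A5, A6).

```python
def maybe_add_mode_by_groups(mode, block_groups, candidate_list):
    def group_differs(a, b):
        # Scenario 9 rule: a group counts as "different" only when both blocks
        # have available motion information and that information differs.
        return (has_available_mi(a) and has_available_mi(b)
                and motion_info(a) != motion_info(b))

    if any(group_differs(a, b) for a, b in block_groups):
        candidate_list.append(mode)   # any group differs: add the mode
        return True
    return False                      # all groups are the same: adding is prohibited
```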
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, then for the horizontal prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m, m+n-1], i and j are different, and i and j can be selected arbitrarily within the value range). If the comparison results of all matching block groups are the same, the addition of the horizontal prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal downward prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [0, m+n-2], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the horizontal downward prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block does not exist and the upper neighboring block exists, then for the vertical prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m+n+1, 2m+n], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the vertical prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical prediction mode is added to the motion information prediction mode candidate list. For the vertical rightward prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m+n+2, 2m+2n], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the vertical rightward prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward prediction mode is added to the motion information prediction mode candidate list.
For example, if the left neighboring block of the current block exists and the upper neighboring block of the current block also exists, then for the horizontal downward prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [0, m+n-2], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the horizontal downward prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal downward prediction mode is added to the motion information prediction mode candidate list. For the horizontal prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m, m+n-1], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the horizontal prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal prediction mode is added to the motion information prediction mode candidate list of the current block. For the horizontal upward prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m+1, 2m+n-1], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the horizontal upward prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the horizontal upward prediction mode is added to the motion information prediction mode candidate list. For the vertical prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m+n+1, 2m+n], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the vertical prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical prediction mode is added to the motion information prediction mode candidate list. For the vertical rightward prediction mode, the above comparison method is used to judge at least one matching block group A_i and A_j (the value range of i and j is [m+n+2, 2m+2n], and i and j are different). If the comparison results of all matching block groups are the same, the addition of the vertical rightward prediction mode to the motion information prediction mode candidate list is prohibited. Otherwise, the vertical rightward prediction mode is added to the motion information prediction mode candidate list.
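For reference, the index ranges quoted above for the case where both the left and the upper neighboring blocks exist can be restated as a simple lookup, with m = W/4 and n = H/4 as defined for FIG. 8A; this is an illustrative summary only, and any distinct i and j inside the listed closed interval may form a matching block group A_i, A_j.

```python
def matching_index_ranges(m, n):
    # Closed intervals [low, high] for the indices i, j of a matching block
    # group A_i, A_j, per prediction mode (both neighboring blocks existing).
    return {
        "horizontal_down": (0, m + n - 2),
        "horizontal":      (m, m + n - 1),
        "horizontal_up":   (m + 1, 2 * m + n - 1),
        "vertical":        (m + n + 1, 2 * m + n),
        "vertical_right":  (m + n + 2, 2 * m + 2 * n),
    }
```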
Application scenario 10: similar to the implementation of the application scenario 9, the difference is that: in the application scenario 10, it is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor to distinguish whether an upper adjacent block of the current block exists or not.
Application scenario 11: similar to the implementation of application scenario 9, except that: the existence of a peripheral block of the current block indicates that the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; the absence of a peripheral block of the current block indicates that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located but outside the image slice where the current block is located. The other processes are similar to application scenario 9 and are not described in detail here.
Application scenario 12: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether a left adjacent block of the current block exists or not, nor is it necessary to distinguish whether an upper adjacent block of the current block exists or not. Other processes are similar to the application scenario 9, and are not described herein again.
Application scenario 13: similar to the implementation of application scenario 9, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Unlike application scenario 9, the comparison may be:
for each matching block group, if at least one of two peripheral matching blocks in the matching block group does not have available motion information, or both the two peripheral matching blocks have available motion information and the motion information of the two peripheral matching blocks is different, the comparison result of the matching block group is different. If the available motion information exists in both the two peripheral matching blocks in the matching block group and the motion information of the two peripheral matching blocks is the same, the comparison result of the matching block group is the same. If the comparison results of all the matching block groups are the same, forbidding adding the motion information angle prediction mode to the motion information prediction mode candidate list; and if the comparison result of any matching block group is different, adding the motion information angle prediction mode to a motion information prediction mode candidate list.
Based on the comparison method, other processes are similar to the application scenario 9, and are not repeated herein.
Application scenario 14: similar to the implementation of application scenario 9, except that: the existence of a peripheral block of the current block indicates that the peripheral block is located inside the image where the current block is located and inside the image slice where the current block is located; the absence of a peripheral block of the current block indicates that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located but outside the image slice where the current block is located. It is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. The comparison differs from that in application scenario 9; the comparison described in application scenario 13 may be used. Based on this comparison method, the other processes are similar to application scenario 9 and are not described again here.
Application scenario 15: similar to the implementation of application scenario 9, except that: the comparison differs from that in application scenario 9; the comparison described in application scenario 13 may be used. The other processes are similar to application scenario 9 and are not described again here.
Application scenario 16: similar to the implementation of application scenario 9, except that: it is not necessary to distinguish whether the left neighboring block of the current block exists, nor whether the upper neighboring block of the current block exists. The comparison differs from that in application scenario 9; the comparison described in application scenario 13 may be used. The other processes are similar to application scenario 9 and are not described again here.
Example 8: on the basis of the above embodiments, the process of filling motion information is described below with reference to several specific application scenarios, which relate to filling the motion information of peripheral blocks.
Application scenario 1: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image of the current block, and the absence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image of the current block. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an unencoded block), or the peripheral block is an intra block, it indicates that there is no available motion information for the peripheral block. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, then the peripheral block has available motion information.
Referring to FIG. 8A, the width W of the current block is greater than or equal to 8 and the height H of the current block is greater than or equal to 8. Let m be W/4 and n be H/4. The pixel at the upper-left corner of the current block is (x, y), and A_0 is the peripheral block where the pixel (x-1, y+H+W-1) is located; the size of A_0 is 4×4. Traversing the peripheral blocks in the clockwise direction, the 4×4 peripheral blocks are respectively denoted A_1, A_2, ..., A_{2m+2n}, where A_{2m+2n} is the peripheral block where the pixel (x+W+H-1, y-1) is located.
For example, if the left neighboring block of the current block exists and the upper neighboring block does not exist, the filling process is as follows: traverse sequentially from A_0 to A_{m+n-1} to find the first peripheral block having available motion information, denoted A_i. If i is greater than 1, the motion information of the peripheral blocks traversed before A_i is all filled with the motion information of A_i. It is then judged whether i is equal to m+n-1; if so, the filling is finished and the filling process exits; otherwise, the traversal continues from A_{i+1} to A_{m+n-1}, and if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest previous peripheral block, until the traversal is finished.
Referring to FIG. 8A, assume A_i is A_4. Then the peripheral blocks traversed before A_i (e.g. A_0, A_1, A_2, A_3) are all filled with the motion information of A_4. Suppose that when A_5 is traversed, A_5 has no available motion information; then the motion information of the nearest previous peripheral block A_4 is used for filling. Suppose that when A_6 is traversed, A_6 has no available motion information; then the motion information of the nearest previous peripheral block A_5 is used for filling, and so on.
If the left neighboring block of the current block does not exist and the upper neighboring block exists, the filling process is as follows: traverse sequentially from A_{m+n+1} to A_{2m+2n} to find the first peripheral block having available motion information, denoted A_i. If i is greater than m+n+1, the motion information of the peripheral blocks traversed before A_i is all filled with the motion information of A_i. It is then judged whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, the traversal continues from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest previous peripheral block, until the traversal is finished.
If the left neighboring block and the upper neighboring block of the current block both exist, the filling process is as follows: traverse sequentially from A_0 to A_{2m+2n} to find the first peripheral block having available motion information, denoted A_i. If i is greater than 1, the motion information of the peripheral blocks traversed before A_i is all filled with the motion information of A_i. It is then judged whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, the traversal continues from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest previous peripheral block, until the traversal is finished.
In the above embodiment, the peripheral block where no motion information is available may be an unencoded block or an intra block.
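The three filling cases above differ only in the traversal range (A_0 to A_{m+n-1}, A_{m+n+1} to A_{2m+2n}, or A_0 to A_{2m+2n}). A minimal sketch follows, assuming illustrative helpers has_available_mi(), motion_info(), and set_motion_info(); the behaviour when no block in the range has available motion information is an assumption not spelled out in the text.

```python
def fill_peripheral_motion_info(A, start, end):
    """Filling process of application scenario 1 over A[start] .. A[end]."""
    # Find the first peripheral block in the range with available motion information.
    i = next((k for k in range(start, end + 1) if has_available_mi(A[k])), None)
    if i is None:
        return  # no block in the range has available motion information (assumed no-op)
    for k in range(start, i):
        # Blocks traversed before A_i are all filled with A_i's motion information.
        set_motion_info(A[k], motion_info(A[i]))
    for k in range(i + 1, end + 1):
        # A later block without available motion information is filled with the
        # motion information of its nearest previous peripheral block.
        if not has_available_mi(A[k]):
            set_motion_info(A[k], motion_info(A[k - 1]))
```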
Application scenario 2: similar to the implementation of application scenario 1, except that: whether the left neighboring block of the current block exists and whether the upper neighboring block of the current block exists are not distinguished. For example, regardless of whether the left neighboring block and the upper neighboring block of the current block exist, the processing is as follows: traverse sequentially from A_0 to A_{2m+2n} to find the first peripheral block having available motion information, denoted A_i. If i is greater than 1, the motion information of the peripheral blocks traversed before A_i is all filled with the motion information of A_i. It is then judged whether i is equal to 2m+2n; if so, the filling is finished and the filling process exits; otherwise, the traversal continues from A_{i+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, it is filled with the motion information of its nearest previous peripheral block, until the traversal is finished.
Application scenario 3: similar to the implementation of application scenario 1, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other implementation processes are referred to as application scenario 1, and are not described in detail herein.
Application scenario 4: similar to the implementation of application scenario 1, the difference is: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other implementation processes are referred to as application scenario 1, and are not described in detail herein.
Application scenario 5: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image in which the current block is positioned, and the nonexistence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned outside the image in which the current block is positioned. If the peripheral block does not exist, or the peripheral block is not decoded (i.e. the peripheral block is an uncoded block), or the peripheral block is an intra block, it indicates that the peripheral block does not have available motion information. If the peripheral block exists, the peripheral block is not an uncoded block, and the peripheral block is not an intra block, then the peripheral block has available motion information.
If the left neighboring block of the current block exists and the upper neighboring block does not exist, the filling process is as follows: traverse sequentially from A_0 to A_{m+n-1}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information at the temporal co-located position of that peripheral block. If the left neighboring block of the current block does not exist and the upper neighboring block exists, the filling process is as follows: traverse sequentially from A_{m+n+1} to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information at the temporal co-located position of that peripheral block. If the left neighboring block of the current block exists and the upper neighboring block exists, the filling process is as follows: traverse sequentially from A_0 to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information at the temporal co-located position of that peripheral block.
Application scenario 6: similar to the implementation of application scenario 5, except that: whether the left neighboring block and the upper neighboring block of the current block exist is not distinguished. Regardless of whether the left neighboring block and the upper neighboring block of the current block exist, traverse sequentially from A_0 to A_{2m+2n}, and if the motion information of a traversed peripheral block is unavailable, fill the motion information of that peripheral block with zero motion information or with the motion information at the temporal co-located position of that peripheral block.
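A corresponding sketch for application scenarios 5 and 6, where an unavailable block is filled directly with zero motion information or with the motion information at its temporal co-located position; ZERO_MI and temporal_motion_info() are illustrative placeholders.

```python
def fill_with_defaults(A, start, end, use_temporal=False):
    for k in range(start, end + 1):
        if not has_available_mi(A[k]):
            # Either zero motion information or the motion information at the
            # temporal co-located position of the peripheral block may be used.
            fill = temporal_motion_info(A[k]) if use_temporal else ZERO_MI
            set_motion_info(A[k], fill)
```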
Application scenario 7: similar to the implementation of application scenario 5, the difference is: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenario 8: similar to the implementation of application scenario 5, except that: the existence of the peripheral blocks of the current block indicates that the peripheral blocks are positioned in the image where the current block is positioned and the peripheral blocks are positioned in the image slice where the current block is positioned; the fact that the peripheral block of the current block does not exist means that the peripheral block is located outside the image where the current block is located, or the peripheral block is located inside the image where the current block is located, but the peripheral block is located outside the image slice where the current block is located. It is not necessary to distinguish whether the left adjacent block of the current block exists or not and whether the upper adjacent block of the current block exists or not. Other implementation processes are referred to in application scenario 5, and are not described in detail herein.
Application scenario 9 to application scenario 16: similar to the implementations of application scenarios 1 to 8, except that: the width W of the current block is greater than or equal to 8, the height H of the current block is greater than or equal to 8, m is W/8, n is H/8, the peripheral block A_0 has a size of 8×8, and each 8×8 peripheral block is denoted A_1, A_2, ..., A_{2m+2n}. That is, the size of each peripheral block is changed from 4×4 to 8×8; the other implementation processes refer to the above application scenarios and are not described again here.
Application scenario 17: referring to FIG. 8B, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4×4. Suppose A_14, A_15, A_16, and A_17 are uncoded blocks; these uncoded blocks need to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the corresponding position block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, the above filling methods may likewise be adopted, which is not described again here.
Application scenario 18: referring to FIG. 8C, the width and height of the current block are both 16, and the motion information of the peripheral blocks is stored in a minimum unit of 4×4. Suppose A_7 is an intra block; the intra block needs to be filled, and the filling method may be any one of the following: filling with the available motion information of a neighboring block; filling with default motion information; filling with the available motion information of the corresponding position block in the temporal reference frame. Of course, the above manners are merely examples and are not limiting. If the current block has another size, the padding may also be performed in the above manner, which is not described again here.
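The three filling options listed in application scenarios 17 and 18 can be sketched as follows; which option is used is a design choice, and available_neighbor_motion_info(), DEFAULT_MI, and temporal_motion_info() are illustrative placeholders.

```python
def fill_special_block(block, method="neighbor"):
    if method == "neighbor":
        # Fill with available motion information of a neighboring block.
        set_motion_info(block, available_neighbor_motion_info(block))
    elif method == "default":
        # Fill with default motion information.
        set_motion_info(block, DEFAULT_MI)
    else:
        # Fill with the available motion information of the corresponding
        # position block in the temporal reference frame.
        set_motion_info(block, temporal_motion_info(block))
```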
Example 9: in the above embodiments, for example, the motion compensation is performed by using a motion information angle prediction mode, and the motion information of the current block is determined according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the motion information angle prediction mode, and the prediction value of the current block is determined according to the motion information of the current block. The encoding end determines the predicted value of the current block based on each motion information angle prediction mode in the motion information prediction mode candidate list; the decoding end determines the predicted value of the current block based on the target motion information angle prediction mode. In the following embodiments, the prediction value of the current block is determined based on a certain motion information angle prediction mode as an example. The following describes the motion compensation process with reference to a specific application scenario.
Application scenario 1: a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks of the current block based on the preconfigured angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region, and the dividing manner is not limited. For each sub-region of the current block, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block. The motion information of all the sub-regions of the current block is determined as the motion information of the current block, so as to obtain the motion information of the current block.
For example, the motion information of the selected peripheral matching block may be used as the motion information of the sub-region. If the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information can be used as the motion information of the sub-region; assuming that the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information may be used as the motion information of the sub-region, or forward motion information in the bidirectional motion information may be used as the motion information of the sub-region, or backward motion information in the bidirectional motion information may be used as the motion information of the sub-region.
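Putting the two steps together, the per-sub-region motion compensation of this scenario can be sketched as follows; every helper name (select_matching_blocks, split_into_sub_regions, pick_matching_block, motion_compensate) is an illustrative assumption rather than a defined interface.

```python
def predict_current_block(current_block, mode, peripheral_blocks):
    # Peripheral matching blocks pointed to by the preconfigured angle of the mode.
    matched = select_matching_blocks(peripheral_blocks, mode)
    for sub_region in split_into_sub_regions(current_block, mode):
        # Each sub-region takes the motion information of its matching block
        # (unidirectional, bidirectional, or one direction of the bidirectional pair).
        src = pick_matching_block(sub_region, matched, mode)
        set_motion_info(sub_region, motion_info(src))
    # The prediction value of the current block is obtained by motion compensating
    # each sub-region with its own motion information.
    return motion_compensate(current_block)
```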
For example, the sub-region partition information may be independent of the motion information angle prediction mode; for instance, the sub-region partition information of the current block, according to which the current block is partitioned into at least one sub-region, may be determined according to the size of the current block. For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (configured empirically, such as 8) and the height is greater than or equal to the preset size parameter, then the size of the sub-region is 8×8, i.e. the current block is divided into at least one sub-region according to 8×8.
For example, the sub-region partition information may relate to the motion information angle prediction mode. For example, when the motion information angle prediction mode is the horizontal upward prediction mode, the horizontal downward prediction mode, or the vertical rightward prediction mode, if the width of the current block is greater than or equal to a preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4. When the motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is greater than the preset size parameter, the size of the sub-region is (the width of the current block) × 4, or the size of the sub-region is 4×4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4. When the motion information angle prediction mode is the vertical prediction mode, if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 × (the height of the current block), or the size of the sub-region is 4×4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4. When the motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4×4. When the motion information angle prediction mode is the vertical prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4×4.
Of course, the above are only examples, and the preset size parameter may be 8 or more than 8 without limitation.
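The size rules above can be summarized in a small helper. This sketch follows the corrected reading that the horizontal mode may use (width of the current block)×4 sub-regions and the vertical mode 4×(height of the current block) sub-regions, with S as the preset size parameter (8 in the example); it is an illustrative restatement, not a normative rule.

```python
def sub_region_size(mode, W, H, S=8):
    # Returns (sub_width, sub_height) for one motion information angle prediction mode.
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if (W >= S and H >= S) else (4, 4)
    if mode == "horizontal":
        if W > S:
            return (W, 4)      # alternatively (4, 4)
        if W == S and H >= S:
            return (8, 8)
        return (4, 4)          # W < S
    if mode == "vertical":
        if H > S:
            return (4, H)      # alternatively (4, 4)
        if H == S and W >= S:
            return (8, 8)
        return (4, 4)          # H < S
    return (4, 4)
```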
Application scenario 2: a plurality of peripheral matching blocks pointed to by the preconfigured angle are selected from the peripheral blocks of the current block based on the preconfigured angle corresponding to the motion information angle prediction mode. The current block is divided into at least one sub-region in the manner of 8×8 (i.e., the size of each sub-region is 8×8). For each sub-region of the current block, a peripheral matching block corresponding to the sub-region is selected from the plurality of peripheral matching blocks, and the motion information of the sub-region is determined according to the motion information of the selected peripheral matching block. The motion information of all the sub-regions of the current block is determined as the motion information of the current block, so as to obtain the motion information of the current block.
Application scenario 3: referring to FIG. 9A, the width W (4) of the current block multiplied by the height H (8) of the current block is less than or equal to 32, and motion compensation is performed according to the preconfigured angle for each 4×4 sub-region within the current block. If the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region, or the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
As shown in FIG. 9A, the size of the current block is 4×8. When the target motion information prediction mode of the current block is the horizontal mode, the current block is divided into two sub-regions of the same size: one 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; the other 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2. When the target motion information prediction mode of the current block is the vertical mode, two sub-regions of the same size are divided, both of which correspond to the peripheral matching block B1, and the motion information of each 4×4 sub-region is determined according to the motion information of B1. When the target motion information prediction mode of the current block is the horizontal upward mode, two sub-regions of the same size are divided: one 4×4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; the other 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1. When the target motion information prediction mode of the current block is the horizontal downward mode, two sub-regions of the same size are divided: one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; the other 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. When the target motion information prediction mode of the current block is the vertical rightward mode, two sub-regions of the same size are divided: one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; the other 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3.
Application scenario 4: referring to FIG. 9B, if the width W of the current block is less than 8 and the height H of the current block is greater than 8, motion compensation can be performed on each sub-region in the current block as follows: if the angle prediction mode is the vertical prediction mode, motion compensation is performed on each 4×H sub-region according to the vertical angle. If the angle prediction mode is another angle prediction mode (e.g., the horizontal prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, or the vertical rightward prediction mode), motion compensation may be performed according to the angle for each 4×4 sub-region within the current block.
As shown in FIG. 9B, the size of the current block is 4×16. When the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions of size 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4×4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4. When the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions of size 4×4 can be divided, each 4×4 sub-region corresponds to the peripheral matching block B1, and the motion information of each 4×4 sub-region is determined according to the motion information of B1. The motion information of the four sub-regions is the same, so in this embodiment the current block may not be divided into sub-regions; the current block itself serves as one sub-region corresponding to the peripheral matching block B1, and the motion information of the current block is determined according to the motion information of B1.
When the target motion information prediction mode of the current block is the horizontal upward mode, 4 sub-regions of size 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. When the target motion information prediction mode of the current block is the horizontal downward mode, 4 sub-regions of size 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4×4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4; one 4×4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5. When the target motion information prediction mode of the current block is the vertical rightward mode, 4 sub-regions of size 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4×4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4; one 4×4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5.
Application scenario 5: referring to fig. 9C, if the width W of the current block is greater than 8 and the height H of the current block is less than 8, each sub-region in the current block may be motion compensated as follows: if the angular prediction mode is the horizontal prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angular prediction mode is another angular prediction mode, motion compensation may be performed at the corresponding angle for each 4 × 4 sub-region within the current block.
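The following minimal Python sketch expresses this sub-region size rule of application scenario 5. It is illustrative only, not the patent's normative pseudocode; the mode names and the function name are assumptions.

def subregion_size_scenario5(width, height, mode):
    # Application scenario 5: W > 8 and H < 8.
    assert width > 8 and height < 8
    if mode == "horizontal":
        return (width, 4)   # horizontal prediction: W x 4 strips
    return (4, 4)           # any other angular prediction mode: 4 x 4 sub-regions

# Example: a 16 x 4 block is one 16 x 4 strip in horizontal mode, four 4 x 4 sub-regions otherwise.
print(subregion_size_scenario5(16, 4, "horizontal"))  # (16, 4)
print(subregion_size_scenario5(16, 4, "vertical"))    # (4, 4)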
According to fig. 9C, the size of the current block is 16 × 4. When the target motion information prediction mode of the current block is a horizontal mode, 4 sub-regions with the size of 4 × 4 may be divided, each 4 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of each 4 × 4 sub-region is determined according to the motion information of A1. Since the motion information of the four sub-regions is the same, in this embodiment the current block may not be divided into sub-regions; the current block itself serves as a sub-region corresponding to the peripheral matching block A1, and the motion information of the current block is determined according to the motion information of A1. When the target motion information prediction mode of the current block is a vertical mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block B1, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B1. One 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B2. One 4 × 4 sub-region corresponds to the peripheral matching block B3, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B3. One 4 × 4 sub-region corresponds to the peripheral matching block B4, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B4.
When the target motion information prediction mode of the current block is a horizontal upward mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block E, and the motion information of this 4 × 4 sub-region is determined according to the motion information of E. One 4 × 4 sub-region corresponds to the peripheral matching block B1, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B1. One 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B2. One 4 × 4 sub-region corresponds to the peripheral matching block B3, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B3. When the target motion information prediction mode of the current block is a horizontal downward mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of this 4 × 4 sub-region is determined according to the motion information of A2. One 4 × 4 sub-region corresponds to the peripheral matching block A3, and the motion information of this 4 × 4 sub-region is determined according to the motion information of A3. One 4 × 4 sub-region corresponds to the peripheral matching block A4, and the motion information of this 4 × 4 sub-region is determined according to the motion information of A4. One 4 × 4 sub-region corresponds to the peripheral matching block A5, and the motion information of this 4 × 4 sub-region is determined according to the motion information of A5. When the target motion information prediction mode of the current block is a vertical right mode, 4 sub-regions with the size of 4 × 4 are divided, wherein one 4 × 4 sub-region corresponds to the peripheral matching block B2, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B2. One 4 × 4 sub-region corresponds to the peripheral matching block B3, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B3. One 4 × 4 sub-region corresponds to the peripheral matching block B4, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B4. One 4 × 4 sub-region corresponds to the peripheral matching block B5, and the motion information of this 4 × 4 sub-region is determined according to the motion information of B5.
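The listings for figs. 9B and 9C above follow a simple regularity for the horizontal and vertical modes, which the illustrative Python helper below summarizes. The block label scheme follows the figures; the helper itself is an assumption and not part of the patent.

def matching_block_label(mode, sub_x, sub_y):
    # sub_x, sub_y: top-left position of a sub-region inside the current block.
    if mode == "horizontal":
        return "A{}".format(sub_y // 4 + 1)   # i-th 4-row strip -> left-side block A(i+1)
    if mode == "vertical":
        return "B{}".format(sub_x // 4 + 1)   # j-th 4-column strip -> upper-side block B(j+1)
    raise ValueError("only the horizontal/vertical cases are sketched here")

# For the 4 x 16 block of fig. 9B in horizontal mode:
print([matching_block_label("horizontal", 0, y) for y in range(0, 16, 4)])  # ['A1', 'A2', 'A3', 'A4']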
Application scenario 6: the width W of the current block is equal to 8 and the height H of the current block is equal to 8; motion compensation is then performed according to the corresponding angle for the 8 × 8 sub-region in the current block (i.e. the sub-region is the current block itself). If the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks may be selected, according to the corresponding angle, as the motion information of the sub-region.
For example, as shown in fig. 9D, for the horizontal prediction mode, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. Referring to fig. 9E, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block B2 may be selected. Referring to fig. 9F, for the horizontal upward prediction mode, the motion information of the peripheral matching block E may be selected, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block A1 may be selected. Referring to fig. 9G, for the horizontal downward prediction mode, the motion information of the peripheral matching block A2 may be selected, the motion information of the peripheral matching block A3 may be selected, or the motion information of the peripheral matching block A4 may be selected. Referring to fig. 9H, for the vertical right prediction mode, the motion information of the peripheral matching block B2 may be selected, the motion information of the peripheral matching block B3 may be selected, or the motion information of the peripheral matching block B4 may be selected.
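As an illustrative sketch only, the per-mode candidate sets of figs. 9D to 9H for an 8 × 8 block could be tabulated as follows. The dictionary layout and the first-available selection policy are assumptions; the patent allows any one candidate to be selected.

CANDIDATES_8X8 = {
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B2"],
    "horizontal_up":   ["E", "B1", "A1"],
    "horizontal_down": ["A2", "A3", "A4"],
    "vertical_right":  ["B2", "B3", "B4"],
}

def pick_motion_info(mode, motion_info_of):
    # Return the motion information of any one candidate; here the first available one.
    for label in CANDIDATES_8X8[mode]:
        info = motion_info_of(label)
        if info is not None:
            return info
    return None

# Example with assumed motion vectors: A1 has no available motion information, A2 does.
mi = {"A1": None, "A2": (3, -1)}
print(pick_motion_info("horizontal", lambda label: mi.get(label)))  # (3, -1)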
According to fig. 9D, the size of the current block is 8 × 8. When the target motion information prediction mode of the current block is a horizontal mode, one sub-region with the size of 8 × 8 is divided; this sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1. Alternatively, the sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2.
According to fig. 9E, the size of the current block is 8 × 8. When the target motion information prediction mode of the current block is a vertical mode, one sub-region with the size of 8 × 8 is divided; this sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1. Alternatively, the sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2.
According to fig. 9F, the size of the current block is 8 × 8. When the target motion information prediction mode of the current block is a horizontal up mode, one sub-region with the size of 8 × 8 is divided; this sub-region corresponds to the peripheral matching block E, and the motion information of the sub-region is determined according to the motion information of E. Alternatively, the sub-region corresponds to the peripheral matching block B1, and the motion information of the sub-region is determined according to the motion information of B1. Alternatively, the sub-region corresponds to the peripheral matching block A1, and the motion information of the sub-region is determined according to the motion information of A1.
According to fig. 9G, the size of the current block is 8 × 8. When the target motion information prediction mode of the current block is the horizontal down mode, one sub-region with the size of 8 × 8 is divided; this sub-region corresponds to the peripheral matching block A2, and the motion information of the sub-region is determined according to the motion information of A2. Alternatively, the sub-region corresponds to the peripheral matching block A3, and the motion information of the sub-region is determined according to the motion information of A3. Alternatively, the sub-region corresponds to the peripheral matching block A4, and the motion information of the sub-region is determined according to the motion information of A4.
According to fig. 9H, the size of the current block is 8 × 8. When the target motion information prediction mode of the current block is the vertical right mode, one sub-region with the size of 8 × 8 is divided; this sub-region corresponds to the peripheral matching block B2, and the motion information of the sub-region is determined according to the motion information of B2. Alternatively, the sub-region corresponds to the peripheral matching block B3, and the motion information of the sub-region is determined according to the motion information of B3. Alternatively, the sub-region corresponds to the peripheral matching block B4, and the motion information of the sub-region is determined according to the motion information of B4.
Application scenario 7: the width W of the current block may be equal to or greater than 16 and the height H of the current block may be equal to 8. Based on this, each sub-region within the current block may be motion compensated in the following manner: if the angular prediction mode is the horizontal prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angular prediction mode is another angular prediction mode, motion compensation is performed at the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 9I, for the horizontal prediction mode, the motion information of the peripheral matching block A1 may be selected for the first W × 4 sub-region, and the motion information of the peripheral matching block A2 may be selected for the second W × 4 sub-region. Referring to fig. 9J, for the vertical prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block B1 may be selected, or the motion information of the peripheral matching block B2 may be selected. For the second 8 × 8 sub-region, the motion information of the peripheral matching block B3 may be selected, or the motion information of the peripheral matching block B4 may be selected. Other angular prediction modes are similar and will not be described herein.
According to fig. 9I, the size of the current block is 16 × 8. When the target motion information prediction mode of the current block is the horizontal mode, 2 sub-regions with the size of 16 × 4 are divided, wherein one 16 × 4 sub-region corresponds to the peripheral matching block A1, and the motion information of this 16 × 4 sub-region is determined according to the motion information of A1. The other 16 × 4 sub-region corresponds to the peripheral matching block A2, and the motion information of this 16 × 4 sub-region is determined according to the motion information of A2.
According to fig. 9J, the size of the current block is 16 × 8. When the target motion information prediction mode is a vertical mode, 2 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block B1 or B2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B1 or B2. The other 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B3 or B4.
Application scenario 8: the width W of the current block may be equal to 8 and the height H of the current block may be equal to or greater than 16. Based on this, each sub-region within the current block may be motion compensated in the following manner: if the angular prediction mode is the vertical prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle; if the angular prediction mode is another angular prediction mode, motion compensation is performed at the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region. For example, referring to fig. 9K, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region. Referring to fig. 9L, for the horizontal prediction mode, for the first 8 × 8 sub-region, the motion information of the peripheral matching block A1 may be selected, or the motion information of the peripheral matching block A2 may be selected. For the second 8 × 8 sub-region, the motion information of the peripheral matching block A3 may be selected, or the motion information of the peripheral matching block A4 may be selected. Other angular prediction modes are similar and will not be described herein.
According to fig. 9K, the size of the current block is 8 × 16. When the target motion information prediction mode of the current block is the vertical mode, 2 sub-regions with the size of 4 × 16 are divided, wherein one 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B1. The other 4 × 16 sub-region corresponds to the peripheral matching block B2, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B2.
According to fig. 9L, the size of the current block is 8 × 16. When the target motion information prediction mode is the horizontal mode, 2 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block. The other 8 × 8 sub-region corresponds to the peripheral matching block A3 or A4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of the corresponding peripheral matching block.
Application scenario 9: the width W of the current block may be equal to or greater than 16, and the height H of the current block may be equal to or greater than 16. Based on this, each sub-region within the current block may be motion compensated in the following manner: if the angular prediction mode is the vertical prediction mode, motion compensation is performed on each 4 × H sub-region according to the vertical angle; if the angular prediction mode is the horizontal prediction mode, motion compensation is performed on each W × 4 sub-region according to the horizontal angle; if the angular prediction mode is another angular prediction mode, motion compensation is performed at the corresponding angle for each 8 × 8 sub-region in the current block. For each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region.
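An illustrative Python sketch combining the sub-region size rules of application scenarios 7, 8 and 9 is given below; the mode names are assumed identifiers, and this is a sketch rather than the normative specification.

def subregion_size(width, height, mode):
    # Scenarios 7-9: W >= 16 and/or H >= 16, the other dimension >= 8.
    if mode == "horizontal" and width >= 16:
        return (width, 4)    # one W x 4 strip per 4 rows
    if mode == "vertical" and height >= 16:
        return (4, height)   # one 4 x H strip per 4 columns
    return (8, 8)            # other angular prediction modes: 8 x 8 sub-regions

print(subregion_size(16, 8, "horizontal"))      # (16, 4), scenario 7 / fig. 9I
print(subregion_size(8, 16, "vertical"))        # (4, 16), scenario 8 / fig. 9K
print(subregion_size(16, 16, "horizontal_up"))  # (8, 8), scenario 9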
Referring to fig. 9M, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4 × H sub-region, the motion information of the peripheral matching block B2 may be selected for the second 4 × H sub-region, the motion information of the peripheral matching block B3 may be selected for the third 4 × H sub-region, and the motion information of the peripheral matching block B4 may be selected for the fourth 4 × H sub-region. For the horizontal prediction mode, the motion information of the peripheral matching block A1 is selected for the first W × 4 sub-region, the motion information of the peripheral matching block A2 is selected for the second W × 4 sub-region, the motion information of the peripheral matching block A3 is selected for the third W × 4 sub-region, and the motion information of the peripheral matching block A4 is selected for the fourth W × 4 sub-region. Other angular prediction modes are similar and will not be described herein.
According to fig. 9M, the size of the current block is 16 × 16. When the target motion information prediction mode is the vertical mode, 4 sub-regions with the size of 4 × 16 are divided, wherein one 4 × 16 sub-region corresponds to the peripheral matching block B1, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B1. One 4 × 16 sub-region corresponds to the peripheral matching block B2, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B2. One 4 × 16 sub-region corresponds to the peripheral matching block B3, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B3. One 4 × 16 sub-region corresponds to the peripheral matching block B4, and the motion information of this 4 × 16 sub-region is determined according to the motion information of B4.
According to fig. 9M, the size of the current block is 16 × 16, and when the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with the size of 16 × 4 are divided, wherein one of the 16 × 4 sub-regions corresponds to the peripheral matching block A1, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A1. One of the 16 × 4 sub-regions corresponds to the peripheral matching block A2, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A2. One of the 16 × 4 sub-regions corresponds to the peripheral matching block A3, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A3. One of the 16 × 4 sub-regions corresponds to the peripheral matching block A4, and the motion information of the 16 × 4 sub-region is determined according to the motion information of A4.
Application scenario 10: the width W of the current block may be greater than or equal to 8, and the height H of the current block may be greater than or equal to 8; motion compensation is then performed on each 8 × 8 sub-region in the current block. Referring to fig. 9N, for each 8 × 8 sub-region, if the sub-region corresponds to a plurality of peripheral matching blocks, the motion information of any one of these peripheral matching blocks is selected as the motion information of the sub-region. The sub-region division size is independent of the motion information angle prediction mode; as long as the width is greater than or equal to 8 and the height is greater than or equal to 8, the sub-region division size may be 8 × 8 in any motion information angle prediction mode.
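A minimal sketch of this fixed 8 × 8 tiling, assuming luma sample coordinates and a simple (x, y, width, height) tuple per sub-region, is shown below.

def tile_8x8(width, height):
    # Scenario 10: whenever W >= 8 and H >= 8, tile the block into 8 x 8 sub-regions.
    assert width >= 8 and height >= 8
    return [(x, y, 8, 8) for y in range(0, height, 8) for x in range(0, width, 8)]

# A 16 x 16 block yields four 8 x 8 sub-regions.
print(tile_8x8(16, 16))  # [(0, 0, 8, 8), (8, 0, 8, 8), (0, 8, 8, 8), (8, 8, 8, 8)]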
According to fig. 9N, the size of the current block is 16 × 16. When the target motion information prediction mode of the current block is a horizontal mode, 4 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of A1 or A2. One 8 × 8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of A1 or A2. One 8 × 8 sub-region corresponds to the peripheral matching block A3 or A4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of A3 or A4. One 8 × 8 sub-region corresponds to the peripheral matching block A3 or A4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of A3 or A4. When the target motion information prediction mode of the current block is a vertical mode, 4 sub-regions with the size of 8 × 8 are divided, wherein one 8 × 8 sub-region corresponds to the peripheral matching block B1 or B2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B1 or B2. One 8 × 8 sub-region corresponds to the peripheral matching block B1 or B2, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B1 or B2. One 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B3 or B4. One 8 × 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of this 8 × 8 sub-region is determined according to the motion information of B3 or B4. When the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions of size 8 × 8 may be divided. Then, for each 8 × 8 sub-region, a peripheral matching block (E, B2 or A2) corresponding to this 8 × 8 sub-region can be determined, the determination method being not limited, and the motion information of this 8 × 8 sub-region is determined according to the motion information of that peripheral matching block. When the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions of size 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (A3, A5 or A7) corresponding to this 8 × 8 sub-region can be determined, without limitation on the determination method, and the motion information of this 8 × 8 sub-region is determined according to the motion information of that peripheral matching block. When the target motion information prediction mode of the current block is a vertical right mode, 4 sub-regions of size 8 × 8 are divided. Then, for each 8 × 8 sub-region, a peripheral matching block (B3, B5 or B7) corresponding to this 8 × 8 sub-region can be determined, without limitation on the determination method, and the motion information of this 8 × 8 sub-region is determined according to the motion information of that peripheral matching block.
Application scenario 11: when the width W of the current block is greater than or equal to 8 and the height H is greater than or equal to 8, motion compensation is performed on each 8 × 8 sub-region in the current block, and for each sub-region, the motion information of any one of the several corresponding peripheral matching blocks is selected according to the corresponding angle, as shown in fig. 9N.
Based on the same application concept as the method, an embodiment of the present application provides an encoding and decoding apparatus applied to a decoding end or an encoding end, as shown in fig. 10A, which is a structural diagram of the apparatus, including:
a selecting module 111, configured to select, for any motion information angle prediction mode of a current block, a plurality of peripheral matching blocks pointed by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
a processing module 112, configured to, for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, add the motion information angle prediction mode to a motion information prediction mode candidate list of the current block when motion information of the first peripheral matching block and the second peripheral matching block is different.
The processing module 112 is further configured to: adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block if there is no available motion information in at least one of the first peripheral matching block and the second peripheral matching block.
The processing module 112 is further configured to: if there is no available motion information in at least one of the first and second peripheral matching blocks, prohibiting the motion information angular prediction mode from being added to the motion information prediction mode candidate list of the current block.
The processing module 112 is further configured to: and if the available motion information exists in the first peripheral matching block and the second peripheral matching block, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block when the motion information of the first peripheral matching block and the second peripheral matching block is the same.
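As an illustration of one of the admission rules described above for the two-block case (both peripheral matching blocks available and their motion information different), a minimal sketch follows; the function and variable names are assumptions, and None stands for "no available motion information".

def maybe_add_mode(candidate_list, mode, mi_first, mi_second):
    # Admit the angular mode only if both peripheral matching blocks have
    # available motion information and that information differs.
    if mi_first is None or mi_second is None:
        return
    if mi_first != mi_second:
        candidate_list.append(mode)

modes = []
maybe_add_mode(modes, "horizontal", (1, 0), (2, 0))  # different motion vectors -> admitted
maybe_add_mode(modes, "vertical", (1, 0), (1, 0))    # identical -> not admitted
print(modes)  # ['horizontal']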
The processing module 112 is further configured to: if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is the intra-frame block, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is the inter-frame coded block, determining that the available motion information exists in the peripheral matching block.
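A minimal sketch of this availability test is shown below; the dictionary keys are assumed field names standing in for the conditions listed above, and "coded" covers the un-encoded/un-decoded case on the respective side.

def has_available_motion_info(block):
    # block: dict with assumed keys describing the peripheral matching block.
    if block["out_of_picture"] or block["out_of_slice"]:
        return False
    if not block["coded"]:   # un-encoded (encoder side) / un-decoded (decoder side)
        return False
    if block["intra"]:
        return False
    return True              # inter-coded block: available motion information

print(has_available_motion_info(
    {"out_of_picture": False, "out_of_slice": False, "coded": True, "intra": False}))  # True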
The processing module 112 is further configured to: if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; or, if an intra block and/or an unencoded block exists in the first peripheral matching block and the second peripheral matching block, prohibiting the motion information angle prediction mode from being added to a motion information prediction mode candidate list of the current block; or alternatively,
if at least one of the first peripheral matching block and the second peripheral matching block is positioned outside the image of the current block or outside the image slice of the current block, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block; if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and for the first peripheral matching block and the second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block and the motion information of the first peripheral matching block and the second peripheral matching block is the same, continuing to judge whether available motion information exists in both the second peripheral matching block and the third peripheral matching block; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least include a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, and if available motion information exists in both the first peripheral matching block and the second peripheral matching block for the first peripheral matching block and the second peripheral matching block to be traversed and the motion information of the first peripheral matching block and the motion information of the second peripheral matching block are different, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block; alternatively, the motion information angular prediction mode is prohibited from being added to the motion information prediction mode candidate list of the current block.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block are different.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
The processing module 112 is further configured to: if the plurality of peripheral matching blocks at least comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, aiming at the first peripheral matching block and the second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuously judging whether the second peripheral matching block and the third peripheral matching block both have available motion information;
if there is no motion information available for at least one of the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of the current block, or prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block.
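The following illustrative sketch strings together one possible combination of the three-block rules listed above: when the first pair of peripheral matching blocks does not settle the decision (identical or partly unavailable motion information), the check falls back to the second and third peripheral matching blocks, and the mode is admitted only when that pair is available and differs. Names are assumptions, and None stands for "no available motion information".

def admit_with_fallback(candidate_list, mode, mi1, mi2, mi3):
    if mi1 is not None and mi2 is not None:
        if mi1 != mi2:
            candidate_list.append(mode)   # first pair differs: admit directly
            return
    # First pair identical or partly unavailable: fall back to the second/third pair.
    if mi2 is not None and mi3 is not None and mi2 != mi3:
        candidate_list.append(mode)
    # Otherwise the mode is not added (the prohibition alternative is chosen here).

modes = []
admit_with_fallback(modes, "horizontal", None, (1, 0), (2, 0))  # fallback pair differs -> admitted
admit_with_fallback(modes, "vertical", (1, 0), (1, 0), (1, 0))  # all identical -> not admitted
print(modes)  # ['horizontal']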
Based on the same application concept as the method, an embodiment of the present application provides a coding and decoding apparatus applied to a coding end, as shown in fig. 10B, which is a structural diagram of the apparatus, including:
a filling module 121, configured to construct a motion information prediction mode candidate list of a current block, and fill motion information of peripheral blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
a determining module 122, configured to determine, for each motion information angle prediction mode in the motion information prediction mode candidate list, motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode; and to determine the predicted value of the current block according to the motion information of the current block.
The filling module 121 is specifically configured to: traverse the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block until the first peripheral block having available motion information is traversed; if there is a first peripheral block without available motion information before this peripheral block, fill the motion information of this peripheral block into the first peripheral block; and continue traversing the peripheral blocks after this peripheral block, and if the peripheral blocks after this peripheral block include a second peripheral block without available motion information, fill the motion information of the last traversed peripheral block before the second peripheral block into the second peripheral block.
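A minimal sketch of this filling rule, assuming the peripheral blocks are listed in the stated traversal order (left side first, then upper side) and that None marks a block without available motion information, is shown below.

def fill_peripheral_motion_info(peripheral_mi):
    filled = list(peripheral_mi)
    # Index of the first peripheral block with available motion information.
    first = next((i for i, mi in enumerate(filled) if mi is not None), None)
    if first is None:
        return filled                        # nothing available to propagate
    for i in range(first):                   # blocks before it take its motion information
        filled[i] = filled[first]
    for i in range(first + 1, len(filled)):  # later gaps copy the previously traversed block
        if filled[i] is None:
            filled[i] = filled[i - 1]
    return filled

# Example: left-to-upper traversal order, 'mvA'/'mvB' stand for available motion information.
print(fill_peripheral_motion_info([None, "mvA", None, "mvB", None]))
# ['mvA', 'mvA', 'mvA', 'mvB', 'mvB']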
Based on the same application concept as the method, an embodiment of the present application provides an encoding and decoding apparatus applied to a decoding end, as shown in fig. 10C, which is a structural diagram of the apparatus, including:
a selecting module 131, configured to construct a motion information prediction mode candidate list of a current block, and select a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
a processing module 132, configured to fill the motion information of the neighboring blocks of the current block if the target motion information prediction mode is a target motion information angle prediction mode;
a determining module 133, configured to determine motion information of the current block according to motion information of a plurality of peripheral matching blocks pointed by preconfigured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The processing module 132 is specifically configured to: traverse the peripheral blocks of the current block in a traversal order from the left-side peripheral blocks to the upper-side peripheral blocks of the current block until the first peripheral block having available motion information is traversed; if there is a first peripheral block without available motion information before this peripheral block, fill the motion information of this peripheral block into the first peripheral block; and continue traversing the peripheral blocks after this peripheral block, and if the peripheral blocks after this peripheral block include a second peripheral block without available motion information, fill the motion information of the last traversed peripheral block before the second peripheral block into the second peripheral block.
As for the decoding-end device provided in the embodiment of the present application, at a hardware level, a schematic diagram of the hardware architecture of the decoding-end device may specifically refer to fig. 11A. The device comprises: a processor 141 and a machine-readable storage medium 142, the machine-readable storage medium 142 storing machine-executable instructions executable by the processor 141; the processor 141 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 141 is configured to execute machine-executable instructions to perform the following steps:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of a current block when motion information of the first peripheral matching block and the second peripheral matching block is different; or alternatively,
the processor 141 is configured to execute machine executable instructions to implement the steps of:
constructing a motion information prediction mode candidate list of a current block, and selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
In terms of hardware, the hardware architecture diagram of the encoding-end device provided in the embodiment of the present application may specifically refer to fig. 11B. The device comprises: a processor 151 and a machine-readable storage medium 152, the machine-readable storage medium 152 storing machine-executable instructions executable by the processor 151; the processor 151 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 151 is configured to execute machine-executable instructions to perform the following steps:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks at least comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if available motion information exists in both the first peripheral matching block and the second peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of a current block when motion information of the first peripheral matching block and the second peripheral matching block is different; or alternatively,
the processor 151 is configured to execute machine executable instructions to perform the following steps:
constructing a motion information prediction mode candidate list of a current block, and filling motion information of peripheral blocks of the current block if a motion information angle prediction mode exists in the motion information prediction mode candidate list;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining motion information of a current block according to motion information of a plurality of peripheral matching blocks pointed by a pre-configured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the encoding and decoding methods disclosed in the above examples of the present application can be implemented. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (27)

1. A decoding method, applied to a decoding side, the method comprising:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block;
if the plurality of peripheral matching blocks comprise a first peripheral matching block and a second peripheral matching block to be traversed:
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, prohibiting adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block;
if the plurality of peripheral matching blocks comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially:
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information;
if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angle prediction mode to a motion information prediction mode candidate list of a current block when the motion information of the second peripheral matching block and the third peripheral matching block is different; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
2. The method of claim 1,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an un-decoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
3. The method according to claim 1, wherein if the plurality of peripheral matching blocks includes a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially, the method further comprises:
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information;
if there is no available motion information in at least one of the second peripheral matched block and the third peripheral matched block, prohibiting the motion information angular prediction mode from being added to a motion information prediction mode candidate list of the current block.
4. The method according to any one of claims 1-3, further comprising:
selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of peripheral blocks of the current block;
determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed by the pre-configured angles of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
5. The method of claim 4,
the filling motion information of peripheral blocks of the current block includes:
for a peripheral block without available motion information, filling zero motion information as the motion information of the peripheral block.
6. The method of claim 4, wherein determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the pre-configured angle of the target motion information angle prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks;
and determining the motion information of the sub-area according to the motion information of the selected peripheral matching block.
7. An encoding method applied to an encoding end, the method comprising:
selecting a plurality of peripheral matching blocks pointed by a pre-configuration angle from peripheral blocks of a current block according to the pre-configuration angle of any motion information angle prediction mode of the current block; the plurality of peripheral matching blocks comprise a first peripheral matching block and a second peripheral matching block to be traversed;
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, prohibiting adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block;
if the plurality of peripheral matching blocks comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed in sequence:
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to judge whether both the second peripheral matching block and the third peripheral matching block have available motion information;
if there is available motion information for both the second peripheral matching block and the third peripheral matching block, adding the motion information angular prediction mode to a motion information prediction mode candidate list of a current block when motion information of the second peripheral matching block and the third peripheral matching block are different; if there is available motion information in both the second peripheral matching block and the third peripheral matching block, prohibiting adding the motion information angular prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
8. The method of claim 7,
the process of judging whether any peripheral matching block has available motion information includes:
if the peripheral matching block is positioned outside the image where the current block is positioned or the peripheral matching block is positioned outside the image slice where the current block is positioned, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is the intra-frame block, determining that no available motion information exists in the peripheral matching block.
9. The method of claim 7, wherein if the plurality of peripheral matching blocks includes a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, the method further comprises:
for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information;
and if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
10. The method according to any one of claims 7-9, further comprising:
if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling motion information of peripheral blocks of the current block;
for each motion information angle prediction mode in the motion information prediction mode candidate list, determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
11. The method of claim 10,
the filling motion information of peripheral blocks of the current block includes:
and for a peripheral block without available motion information, filling zero motion information as the motion information of the peripheral block.
12. The method of claim 10, wherein determining the motion information of the current block according to the motion information of the plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode comprises:
dividing the current block into at least one sub-region;
for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks;
and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block.
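
The sub-region step of claims 6 and 12 can be illustrated as follows. The sketch assumes a purely horizontal preconfigured angle, where every sub-region takes the motion information of the left peripheral matching block in its own row; the sub-region size and the mapping from sub-region to peripheral matching block for other angles are implementation choices not fixed by the claims.

    def split_into_subregions(block_w, block_h, sub_w, sub_h):
        # Divide the current block into sub-regions; each entry is (x, y, w, h)
        # in samples relative to the top-left corner of the current block.
        return [(x, y, sub_w, sub_h)
                for y in range(0, block_h, sub_h)
                for x in range(0, block_w, sub_w)]

    def subregion_motion_info_horizontal(block_w, block_h, sub_w, sub_h, left_neighbors):
        # left_neighbors[i]: (already filled) motion information of the left peripheral
        # matching block covering sub-region row i. Returns a mapping from the top-left
        # position of each sub-region to the motion information selected for it.
        result = {}
        for (x, y, w, h) in split_into_subregions(block_w, block_h, sub_w, sub_h):
            row = y // sub_h
            result[(x, y)] = left_neighbors[row]
        return result

    # Example: a 16x16 current block split into 8x8 sub-regions; the two sub-region
    # rows copy the motion information of their respective left peripheral blocks.
    print(subregion_motion_info_horizontal(16, 16, 8, 8,
                                           [(1, 0, 0), (-2, 3, 0)]))
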
13. A decoding apparatus, applied to a decoding side, the decoding apparatus comprising:
a selection module, configured to select, for any one motion information angle prediction mode of a current block, a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode;
a processing module, configured to, if the plurality of peripheral matching blocks comprise a first peripheral matching block and a second peripheral matching block to be traversed: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, prohibiting adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block;
if the plurality of peripheral matching blocks comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information;
if both the second peripheral matching block and the third peripheral matching block have available motion information, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is different, and prohibiting adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
14. The apparatus of claim 13,
the processing module is specifically configured to, when determining whether any peripheral matching block has available motion information:
if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an undecoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra block, determining that the peripheral matching block does not have available motion information.
15. The apparatus of claim 13,
if the plurality of peripheral matching blocks include a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, the processing module is further configured to: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information;
and if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
16. The apparatus of any one of claims 13-15, wherein the processing module is further configured to: selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode is a target motion information angle prediction mode, filling motion information of peripheral blocks of the current block; determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
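
The decoder-side flow of claim 16 can be sketched end to end as below. The candidate-list entry type, the zero motion value and the two callables derive_motion_info and motion_compensate are placeholders for steps that the claim leaves to the implementation (for example, the per-sub-region derivation sketched after claim 12).

    from dataclasses import dataclass
    from typing import List, Optional, Tuple, Callable, Any

    MotionInfo = Tuple[int, int, int]   # (mv_x, mv_y, ref_idx), a hypothetical layout

    @dataclass
    class PeripheralBlock:
        motion_info: Optional[MotionInfo] = None

    @dataclass
    class CandidateMode:
        is_angle_prediction_mode: bool
        preconfigured_angle: Optional[float] = None   # only meaningful for angle modes

    def predict_current_block(candidate_list: List[CandidateMode],
                              target_index: int,
                              peripheral_blocks: List[PeripheralBlock],
                              derive_motion_info: Callable[[CandidateMode, List[PeripheralBlock]], Any],
                              motion_compensate: Callable[[Any], Any]) -> Any:
        # 1. Select the target motion information prediction mode from the candidate list
        #    (target_index would be parsed from the bitstream).
        target_mode = candidate_list[target_index]
        # 2. If it is a motion information angle prediction mode, fill zero motion
        #    information into every peripheral block that has none.
        if target_mode.is_angle_prediction_mode:
            for block in peripheral_blocks:
                if block.motion_info is None:
                    block.motion_info = (0, 0, 0)
        # 3. Determine the motion information of the current block from the peripheral
        #    matching blocks pointed to by the preconfigured angle.
        motion_info = derive_motion_info(target_mode, peripheral_blocks)
        # 4. Determine the predicted value of the current block from its motion information.
        return motion_compensate(motion_info)
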
17. The apparatus of claim 16,
the processing module is specifically configured to, when filling the motion information of the peripheral blocks of the current block: for a peripheral block without available motion information, filling zero motion information as the motion information of the peripheral block.
18. The apparatus of claim 16,
the processing module is specifically configured to, when determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the target motion information angle prediction mode: dividing the current block into at least one sub-region; for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks; and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block.
19. An encoding apparatus applied to an encoding side, the encoding apparatus comprising:
a selection module, configured to select, for any motion information angle prediction mode of a current block, a plurality of peripheral matching blocks pointed to by a preconfigured angle from peripheral blocks of the current block based on the preconfigured angle of the motion information angle prediction mode;
a processing module, configured to, if the plurality of peripheral matching blocks include a first peripheral matching block and a second peripheral matching block to be traversed: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, prohibiting adding the motion information angle prediction mode to a motion information prediction mode candidate list of the current block;
if the plurality of peripheral matching blocks comprise a first peripheral matching block, a second peripheral matching block and a third peripheral matching block to be traversed sequentially: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information;
if both the second peripheral matching block and the third peripheral matching block have available motion information, adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is different, and prohibiting adding the motion information angle prediction mode to the motion information prediction mode candidate list of the current block when the motion information of the second peripheral matching block and the third peripheral matching block is the same.
20. The apparatus of claim 19,
the processing module is specifically configured to, when determining whether any peripheral matching block has available motion information:
if the peripheral matching block is located outside the image in which the current block is located, or outside the image slice in which the current block is located, determining that the peripheral matching block does not have available motion information;
if the peripheral matching block is an uncoded block, determining that the peripheral matching block does not have available motion information;
and if the peripheral matching block is an intra block, determining that the peripheral matching block does not have available motion information.
21. The apparatus of claim 19,
if the plurality of peripheral matching blocks include a first peripheral matching block, a second peripheral matching block, and a third peripheral matching block to be traversed sequentially, the processing module is further configured to: for a first peripheral matching block and a second peripheral matching block to be traversed, if at least one of the first peripheral matching block and the second peripheral matching block does not have available motion information, continuing to determine whether both the second peripheral matching block and the third peripheral matching block have available motion information;
and if at least one of the second peripheral matching block and the third peripheral matching block does not have available motion information, prohibiting the motion information angle prediction mode from being added to the motion information prediction mode candidate list of the current block.
22. The apparatus of any one of claims 19-21, wherein the processing module is further configured to: if the motion information angle prediction mode exists in the motion information prediction mode candidate list, filling the motion information of the peripheral blocks of the current block; for each motion information angle prediction mode in the motion information prediction mode candidate list, determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
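
On the encoder side (claim 22), every motion information angle prediction mode that made it into the candidate list is evaluated. A hedged sketch of that loop is given below; the cost function used to compare the resulting predictions, like the other callables, is a placeholder and is not part of the claims.

    def evaluate_angle_modes(candidate_list, peripheral_blocks,
                             fill_peripheral_motion_info,
                             derive_motion_info, motion_compensate, cost):
        # candidate_list entries are assumed to expose is_angle_prediction_mode;
        # fill_peripheral_motion_info, derive_motion_info, motion_compensate and cost
        # are caller-supplied implementation details (e.g. a rate-distortion measure).
        angle_modes = [m for m in candidate_list if m.is_angle_prediction_mode]
        if angle_modes:
            # A motion information angle prediction mode exists in the candidate list,
            # so fill the motion information of the peripheral blocks (claim 22).
            fill_peripheral_motion_info(peripheral_blocks)
        best_mode, best_cost = None, float("inf")
        for mode in angle_modes:
            motion_info = derive_motion_info(mode, peripheral_blocks)
            prediction = motion_compensate(motion_info)
            c = cost(prediction)
            if c < best_cost:
                best_mode, best_cost = mode, c
        return best_mode, best_cost
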
23. The apparatus of claim 22,
the processing module is specifically configured to, when filling the motion information of the peripheral blocks of the current block: for a peripheral block without available motion information, filling zero motion information as the motion information of the peripheral block.
24. The apparatus of claim 22,
the processing module is specifically configured to, when determining the motion information of the current block according to the motion information of a plurality of peripheral matching blocks pointed to by the preconfigured angle of the motion information angle prediction mode: dividing the current block into at least one sub-region; for each sub-region, selecting a peripheral matching block corresponding to the sub-region from the plurality of peripheral matching blocks; and determining the motion information of the sub-region according to the motion information of the selected peripheral matching block.
25. A decoding device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein
the processor is configured to execute the machine-executable instructions to implement the method of any of claims 1-6.
26. An encoding device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute the machine-executable instructions to implement the method of any of claims 7-12.
27. A machine-readable storage medium having stored thereon machine-executable instructions executable by a processor; wherein the processor is configured to execute the machine-executable instructions to implement the method of any of claims 1-6 or to implement the method of any of claims 7-12.
CN202111153196.4A 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment Active CN113794884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153196.4A CN113794884B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111153196.4A CN113794884B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN201910786742.4A CN112422971B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910786742.4A Division CN112422971B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment

Publications (2)

Publication Number Publication Date
CN113794884A CN113794884A (en) 2021-12-14
CN113794884B (en) 2022-12-23

Family

ID=74780152

Family Applications (4)

Application Number Title Priority Date Filing Date
CN202111153196.4A Active CN113794884B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN202111150945.8A Active CN113794883B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN201910786742.4A Active CN112422971B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN202111168217.XA Active CN113747166B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN202111150945.8A Active CN113794883B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN201910786742.4A Active CN112422971B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment
CN202111168217.XA Active CN113747166B (en) 2019-08-23 2019-08-23 Encoding and decoding method, device and equipment

Country Status (1)

Country Link
CN (4) CN113794884B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709502B (en) * 2021-03-19 2022-12-23 杭州海康威视数字技术股份有限公司 Decoding method, encoding method, device, equipment and machine readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107925759A (en) * 2015-06-05 2018-04-17 英迪股份有限公司 Method and apparatus for coding and decoding intra-frame prediction
CN109104609A (en) * 2018-09-12 2018-12-28 浙江工业大学 A shot boundary detection method combining the HEVC compressed domain and the pixel domain
CN110024402A (en) * 2016-11-29 2019-07-16 韩国电子通信研究院 Image coding/decoding method and device and the recording medium for being stored with bit stream

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101309421B (en) * 2008-06-23 2010-09-29 北京工业大学 Intra-frame prediction mode selection method
KR102182628B1 (en) * 2011-12-05 2020-11-24 엘지전자 주식회사 Method and device for intra prediction
KR20170115969A (en) * 2016-04-08 2017-10-18 한국전자통신연구원 Method and apparatus for derivation of motion prediction information
US11039130B2 (en) * 2016-10-28 2021-06-15 Electronics And Telecommunications Research Institute Video encoding/decoding method and apparatus, and recording medium in which bit stream is stored
CN109089119B (en) * 2017-06-13 2021-08-13 浙江大学 Method and equipment for predicting motion vector
CN109587479B (en) * 2017-09-29 2023-11-10 华为技术有限公司 Inter-frame prediction method and device for video image and coder-decoder

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107925759A (en) * 2015-06-05 2018-04-17 英迪股份有限公司 Method and apparatus for coding and decoding intra-frame prediction
CN110024402A (en) * 2016-11-29 2019-07-16 韩国电子通信研究院 Image coding/decoding method and device and the recording medium for being stored with bit stream
CN109104609A (en) * 2018-09-12 2018-12-28 浙江工业大学 A shot boundary detection method combining the HEVC compressed domain and the pixel domain

Also Published As

Publication number Publication date
CN113747166B (en) 2022-12-23
CN113794883A (en) 2021-12-14
CN112422971B (en) 2022-04-26
CN113794883B (en) 2022-12-23
CN113747166A (en) 2021-12-03
CN112422971A (en) 2021-02-26
CN113794884A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
US10764597B2 (en) Video encoding and decoding
CN111263144B (en) Motion information determination method and equipment
CN113873249B (en) Encoding and decoding method, device and equipment
US20150103899A1 (en) Scalable encoding and decoding
CN113794884B (en) Encoding and decoding method, device and equipment
CN113709457B (en) Decoding and encoding method, device and equipment
CN115834904A (en) Inter-frame prediction method and device
CN114079783B (en) Encoding and decoding method, device and equipment
CN112449181B (en) Encoding and decoding method, device and equipment
CN112449180B (en) Encoding and decoding method, device and equipment
CN113709486B (en) Encoding and decoding method, device and equipment
CN113766234B (en) Decoding and encoding method, device and equipment
CN111669592B (en) Encoding and decoding method, device and equipment
CN112055220B (en) Encoding and decoding method, device and equipment
CN112073734B (en) Encoding and decoding method, device and equipment
CN114598889B (en) Encoding and decoding method, device and equipment
US20160366434A1 (en) Motion estimation apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant