CN116016946A - Encoding and decoding method, device and equipment thereof

Info

Publication number
CN116016946A
Authority
CN
China
Prior art keywords
motion information
prediction mode
current block
sub
angle
Legal status
Pending
Application number
CN202211610564.8A
Other languages
Chinese (zh)
Inventor
方树清
孙煜程
陈方栋
王莉
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202211610564.8A
Publication of CN116016946A

Classifications

    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; selection of sub-band transforms of varying structure or type
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/503 Predictive coding involving temporal prediction
    • H04N19/513 Processing of motion vectors
    • H04N19/593 Predictive coding involving spatial prediction techniques

Abstract

The application provides an encoding and decoding method, and a device and equipment thereof. The method includes: if the target motion information prediction mode of the current block is a selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode, and determining the predicted value of the current block according to the motion information of the current block. The technical solution of the application improves coding performance and saves a large number of bits.

Description

Encoding and decoding method, device and equipment thereof
Technical Field
The present disclosure relates to the field of coding and decoding technologies, and in particular, to a coding and decoding method, device, and apparatus thereof.
Background
To save space, video images are encoded before transmission; a complete video encoding method may include prediction, transformation, quantization, entropy encoding, filtering, and other processes. Predictive coding includes intra-frame coding and inter-frame coding. Inter-frame coding exploits the temporal correlation of the video: pixels of adjacent encoded images are used to predict the pixels of the current image, so as to effectively remove temporal redundancy in the video.
In inter-frame coding, a Motion Vector (MV) may be used to represent the relative displacement between the current image block of the current-frame video image and a reference image block of a reference-frame video image. For example, when there is a strong temporal correlation between video image A of the current frame and video image B of the reference frame, and image block A1 (the current image block) of video image A needs to be transmitted, a motion search may be performed in video image B to find the image block B1 (i.e., the reference image block) that best matches image block A1, and the relative displacement between image block A1 and image block B1, that is, the motion vector of image block A1, is determined.
In the prior art, the current coding unit does not need to be divided into blocks; instead, only one piece of motion information can be determined for the current coding unit, by directly indicating a motion information index or difference information index.
Since all sub-blocks inside the current coding unit share one piece of motion information, for some small moving objects, the coding unit needs to be divided into blocks before the optimal motion information can be obtained. However, if the current coding unit is divided into a plurality of sub-blocks, additional bit overhead is generated.
Disclosure of Invention
The application provides a coding and decoding method and equipment thereof, which can improve coding performance and save bits.
The application provides a coding and decoding method applied to a decoding end or an encoding end, comprising the following steps:
acquiring motion information prediction modes of a current block, where the motion information prediction modes at least include motion information angle prediction modes; and performing duplicate checking on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplicate checking.
The application provides a coding and decoding method applied to a decoding end or an encoding end, comprising the following steps:
If the target motion information prediction mode of the current block is the selected motion information angle prediction mode, then:
determining the motion information of the current block according to the motion information angle prediction mode;
and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding and decoding device, which is applied to a decoding end or an encoding end, and comprises:
the first determining module is used for determining the motion information of the current block according to the motion information angle prediction mode if the target motion information prediction mode of the current block is the selected motion information angle prediction mode;
and the second determining module is used for determining the predicted value of the current block according to the motion information of the current block.
The application provides a decoding end device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
The application provides a coding end device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
As can be seen from the above technical solutions, in the embodiments of the present application the current block does not need to be divided, which effectively avoids the bit overhead caused by sub-block division. Motion information is provided for each sub-region of the current block without dividing it into sub-blocks, and different sub-regions of the current block can correspond to the same or different motion information; this improves coding performance, avoids transmitting a large amount of motion information, and saves a large number of bits. In addition, duplicate checking is performed on the motion information angle prediction modes, which reduces the number of motion information angle prediction modes and can further improve coding performance.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a schematic diagram of a video encoding framework in one embodiment of the present application;
FIGS. 2A and 2B are schematic diagrams of a partitioning approach in one embodiment of the present application;
FIGS. 3A-3F are application scenario diagrams in one embodiment of the present application;
FIG. 4 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
fig. 5A and 5B are schematic diagrams of motion information angle prediction modes in the embodiment of the present application;
FIG. 6 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 7 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 8 is a schematic diagram of a peripheral block of a current block in one embodiment of the present application;
FIGS. 9A-9N are schematic diagrams of peripheral matching blocks in one embodiment of the present application;
Fig. 10A and 10B are block diagrams of a codec device in an embodiment of the present application;
FIG. 11 is a hardware configuration diagram of a decoding side device in an embodiment of the present application;
fig. 12 is a hardware configuration diagram of an encoding end device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The embodiment of the application provides a coding and decoding method, which can relate to the following concepts:
motion Vector (MV): in inter-frame coding, a motion vector is used to represent a relative displacement between a current image block of a video image of a current frame and a reference image block of a video image of a reference frame, for example, there is a strong temporal correlation between a video image a of the current frame and a video image B of the reference frame, a motion search may be performed in the video image B when transmitting an image block A1 (current image block) of the video image a, an image block B1 (reference image block) that is the best match with the image block A1 is found, and a relative displacement between the image block A1 and the image block B1, that is, a motion vector of the image block A1 is determined. Each divided image block has a corresponding motion vector to be transmitted to the decoding end, and if the motion vector of each image block is independently encoded and transmitted, particularly divided into a large number of image blocks of small size, a considerable number of bits are consumed. In order to reduce the number of bits used for encoding a motion vector, spatial correlation between neighboring image blocks may be utilized, a motion vector of a current image block to be encoded may be predicted from a motion vector of a neighboring encoded image block, and then a prediction difference may be encoded, so that the number of bits representing the motion vector may be effectively reduced.
Illustratively, in the motion vector encoding process of the current image block, the motion vector of the current image block may first be predicted using the motion vectors of neighboring encoded image blocks, and then the difference (Motion Vector Difference, MVD) between the motion vector prediction (Motion Vector Prediction, MVP) and the actual motion vector is encoded, thereby effectively reducing the number of bits used to encode the motion vector.
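A minimal sketch of this MVP/MVD idea, using hypothetical integer motion vectors (the values are illustrative only, not taken from this application):

```python
# Minimal MVP/MVD sketch: only the small difference between the actual MV and
# its prediction is encoded and transmitted. All values are hypothetical.
def mvd(mv, mvp):
    """Motion vector difference, computed per component."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

mv = (14, -3)        # MV found by motion search (assumed)
mvp = (12, -2)       # predictor taken from a neighbouring encoded block (assumed)
print(mvd(mv, mvp))  # (2, -1): far cheaper to encode than the full MV
```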
Motion information (Motion Information): since the motion vector indicates the positional offset of the current image block from a certain reference image block, in order to accurately acquire information directed to the image block, index information of reference frame images is required in addition to the motion vector to indicate which reference frame image is used. In the video coding technology, for a current frame image, a reference frame image list may be generally established, and the reference frame image index information indicates what reference frame image in the reference frame image list is adopted by the current image block. Many coding techniques also support multiple reference picture lists, and therefore, an index value, which may be referred to as a reference direction, may also be used to indicate which reference picture list is used. In the video coding technology, information related to motion such as a motion vector, a reference frame index, a reference direction, and the like may be collectively referred to as motion information.
Rate-distortion optimization (Rate-Distortion Optimized): there are two major indicators for evaluating coding efficiency, the bit rate and the peak signal-to-noise ratio (PSNR). The smaller the bit stream, the larger the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a combined evaluation of the two. For example, the cost corresponding to a mode is J(mode) = D + λ × R, where D denotes Distortion, which can generally be measured by the SSE index, i.e., the sum of squared differences between the reconstructed image block and the source image; λ is the Lagrangian multiplier; and R is the actual number of bits required for encoding the image block in this mode, including the bits required for encoding the mode information, motion information, residual, and so on.
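A small sketch of this cost comparison; the SSE values, bit counts, and λ below are made-up numbers used only to illustrate how J(mode) = D + λ × R drives the mode decision:

```python
# Illustrative mode decision using J(mode) = D + lambda * R.
# Distortion (SSE), bit counts and lambda below are made-up numbers.
def rd_cost(sse, bits, lam):
    return sse + lam * bits

candidate_modes = {
    "merge, index 0": (1500.0, 6),    # (SSE, total bits) -- assumed
    "merge, index 3": (1420.0, 9),    # assumed
    "AMVP":           (1150.0, 24),   # assumed
}
lam = 30.0                            # Lagrangian multiplier (assumed)
best_mode = min(candidate_modes,
                key=lambda m: rd_cost(*candidate_modes[m], lam))
print(best_mode)                      # prints the mode with the lowest cost
```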
Intra-frame prediction and inter-frame prediction (intra prediction and inter prediction): intra prediction performs predictive coding using the reconstructed pixel values of spatially neighboring image blocks of the current image block (i.e., blocks in the same frame image as the current image block). Inter prediction performs predictive coding using the reconstructed pixel values of temporally neighboring image blocks of the current image block (blocks located in different frame images from the current image block); inter prediction exploits the temporal correlation of the video.
CTU (Coding Tree Unit) refers to a maximum Coding Unit supported by a Coding end and a maximum decoding Unit supported by a decoding end. For example, a frame of image may be divided into several disjoint CTUs, each of which may then determine whether to further divide into smaller blocks based on the actual situation.
Before introducing the technical solution of the embodiments of the present application, the following basic knowledge is briefly introduced:
referring to fig. 1, which is a schematic diagram of a video encoding framework, the encoding end processing flow of the embodiments of the present application may be implemented using the video encoding framework, and the schematic diagram of a video decoding framework is similar to fig. 1, which is not described herein again, and the decoding end processing flow of the embodiments of the present application may be implemented using the video decoding framework.
Illustratively, in the video encoding framework and video decoding framework, intra prediction, motion estimation/motion compensation, reference image buffers, in-loop filtering, reconstruction, transformation, quantization, inverse transformation, inverse quantization, entropy encoder, etc. modules may be included. At the encoding end, the processing flow of the encoding end can be realized through the coordination among the modules, and at the decoding end, the processing flow of the decoding end can be realized through the coordination among the modules.
In the image block partitioning technique, a CTU (Coding Tree Unit) may be recursively divided into CUs (Coding Units) using a quadtree, and a CU may be further divided into two or four PUs (Prediction Units). After prediction is completed and residual information is obtained, a CU may be further divided into a plurality of TUs (Transform Units) using a quadtree.
The partitioning of image blocks in VVC (Versatile Video Coding) changes considerably: a mixed binary tree/ternary tree/quadtree partitioning structure is used, the concepts of CU, PU and TU are no longer distinguished, and a more flexible CU partitioning manner is supported. A CU may be square or rectangular; the CTU is first partitioned by a quadtree, and the leaf nodes of the quadtree partitioning are then further partitioned by binary trees and ternary trees.
Referring to fig. 2A, there are five CU partition types: quadtree partitioning, horizontal binary tree partitioning, vertical binary tree partitioning, horizontal ternary tree partitioning, and vertical ternary tree partitioning. Referring to fig. 2B, the partitioning of CUs within a CTU may be any combination of the above five partition types.
Brief introduction to the Merge mode: in the inter-frame prediction module, because video has strong temporal correlation, i.e., two temporally adjacent frames contain many similar image blocks, an image block of the current frame often performs a motion search in an adjacent reference image to find the block that best matches the current image block, which serves as the reference image block. Since the similarity between the reference image block and the current image block is high, the difference between them is very small, so the bit-rate overhead of encoding the difference is usually far smaller than that of directly encoding the pixel values of the current image block.
In order to indicate the position of the reference image block that best matches the current image block, a lot of motion information needs to be encoded and transmitted to the decoding end so that the decoding end can determine the position of the reference image block, and this motion information, especially the motion vectors, consumes a very large bit rate to encode and transmit. In order to save this part of the bit-rate overhead, a coding mode that is economical with motion information has been designed, namely the Merge mode.
In the Merge mode, the motion information of the current image block completely multiplexes the motion information of a neighboring block in the temporal or spatial domain, i.e., one motion information is selected as the motion information of the current image block from the set of motion information of a plurality of surrounding image blocks. Therefore, in the Merge mode, only one index value needs to be encoded to indicate which motion information in the motion information set is used by the current image block, thereby saving encoding overhead.
Brief introduction to the AMVP (Advanced Motion Vector Prediction) mode: the AMVP mode is similar to the Merge mode in that it also uses the idea of spatial and temporal motion information prediction, establishing a candidate motion information list and selecting the optimal candidate as the motion information of the current image block based on the rate-distortion cost. The differences between the AMVP mode and the Merge mode are as follows: in Merge mode, the MV of the current unit is directly predicted from a spatially or temporally neighboring prediction unit, and there is no motion vector difference (Motion Vector Difference, MVD); AMVP can be regarded as an MV prediction technique in which the encoder only needs to encode the difference between the actual MV and the predicted MV, so an MVD exists. In addition, the lengths of the two candidate MV lists are different, and the ways in which the MV lists are constructed are also different.
In the Merge mode, a candidate list is established for the current prediction unit, and there are 5 candidate MVs (and the corresponding reference frame information) in the candidate list. The 5 candidate MVs are traversed, the rate-distortion cost is calculated, and the candidate MV with the minimum rate-distortion cost is finally selected as the optimal MV. If the encoding end and the decoding end construct the candidate list in the same way, the encoding end only needs to transmit the index of the optimal MV in the candidate list, which greatly saves the number of bits used to encode the motion information.
The candidate list established in Merge mode includes spatial and temporal candidates, and for B slices also a combined list. The spatial candidate list, the temporal candidate list, and the combined list are explained below.
Establishment of the spatial candidate list: referring to fig. 3A, A1 denotes the lowest prediction unit to the left of the current prediction unit, B1 denotes the rightmost prediction unit above the current prediction unit, B0 and A0 denote the prediction units closest to the upper-right and lower-left corners of the current prediction unit, respectively, and B2 denotes the prediction unit closest to the upper-left corner of the current prediction unit. In the HEVC standard, the spatial candidate list is set up in the order A1-B1-B0-A0-(B2), where B2 is a replacement: at most 4 candidate MVs are provided, i.e., the motion information of at most 4 of the above 5 candidate blocks is used. The motion information of B2 is used only when one or more of A1, B1, B0, A0 is absent; otherwise, the motion information of B2 is not used.
And establishing a time domain candidate list. Referring to fig. 3B, a temporal candidate list may be established using motion information of prediction units of a current prediction unit at corresponding positions in a neighboring encoded image. Unlike the spatial candidate list, the temporal candidate list cannot directly use motion information of candidate blocks, but needs to be scaled accordingly according to the positional relationship of the reference image. In the HEVC standard, it is specified that the temporal candidate list provides at most one candidate MV, which is derived from the MV of the co-located prediction unit at the H position in fig. 3B by scaling, and if the H position is not available, the co-located PU at the C3 position is used for replacement.
It should be noted that if the number of candidate MVs in the current candidate list is less than 5, padding is required to reach a specified number by using default motion information (e.g., motion information (0, 0)) and the like.
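The following sketch, as a rough illustration only, puts the list construction described above together (spatial order A1-B1-B0-A0 with B2 as a replacement, at most one temporal candidate, then padding with (0, 0)); the availability and motion values of the candidate blocks are hypothetical:

```python
# Sketch of HEVC-style merge list construction as described above.
# Availability and motion values of the candidate blocks are hypothetical.
def build_merge_list(spatial, temporal, list_size=5):
    """spatial: dict position -> MV or None; temporal: MV or None."""
    cands = []
    for pos in ("A1", "B1", "B0", "A0"):
        mv = spatial.get(pos)
        if mv is not None and mv not in cands:
            cands.append(mv)
    # B2 is only used as a replacement when one of A1/B1/B0/A0 is missing.
    if len(cands) < 4 and spatial.get("B2") is not None \
            and spatial["B2"] not in cands:
        cands.append(spatial["B2"])
    if temporal is not None and len(cands) < list_size:
        cands.append(temporal)               # at most one temporal candidate
    while len(cands) < list_size:
        cands.append((0, 0))                 # pad with default motion info
    return cands

spatial = {"A1": (4, 1), "B1": (4, 1), "B0": None, "A0": (0, -2), "B2": (3, 0)}
print(build_merge_list(spatial, temporal=(5, 2)))
# -> [(4, 1), (0, -2), (3, 0), (5, 2), (0, 0)]
```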
And (3) establishing a combined list. For the prediction unit in B Slice, since there are two MVs, its candidate list also needs to provide two prediction MVs. The HEVC standard specifies that combining the first 4 candidate MVs in the MV candidates two by two can produce a combined list of B Slice.
Establishment of the candidate list in AMVP mode: a candidate list is established for the current prediction unit by exploiting the correlation of motion vectors in the spatial and temporal domains. The encoding end selects the optimal MV from the candidate list and differentially encodes the MV; the decoding end, by establishing the same candidate list, only needs the motion vector difference (MVD) and the index of the predicted MV in the candidate list to compute the MV of the current prediction unit.
Establishment of the spatial candidate list: referring to FIG. 3C, one candidate MV is generated on the left side of the current prediction unit and one above it; the left side is selected in the order A0 - A1 - scaled A0 - scaled A1, and the top side in the order B0 - B1 - B2 (scaled B0 - scaled B1 - scaled B2). For the three PUs above, scaling of their MVs is performed only when neither of the two PUs on the left is available or when both are in intra prediction mode.
When the first "available" MV is detected to the left or above, the MV is used as a candidate MV for the current prediction unit, and the remaining steps are not performed. A0 At most one candidate among A1, scaled A0, scaled A1, and at most one candidate among B0, B1, B2, scaled B0, scaled B1, and scaled B2.
It should be noted that a candidate MV can be marked as "available" only if its reference picture is the same as that of the current prediction unit; otherwise, the candidate MV needs to be scaled accordingly.
Establishment of the temporal candidate list: the construction of the AMVP temporal candidate list is the same as that of Merge. When there are fewer than two spatial and temporal candidates, (0, 0) is used to fill the list.
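As a decoder-side sketch of the AMVP idea summarized above, the decoder rebuilds the same (at most two-entry) MVP list, padding with (0, 0) if necessary, and recovers the MV from the transmitted index and MVD; all values are hypothetical:

```python
# Decoder-side AMVP sketch: MV = MVP[index] + MVD. Values are hypothetical.
def pad_amvp_list(candidates, size=2):
    cands = list(candidates)
    while len(cands) < size:
        cands.append((0, 0))       # complement with (0, 0) as described above
    return cands[:size]

def reconstruct_mv(mvp_list, mvp_index, mvd):
    mvp = mvp_list[mvp_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mvp_list = pad_amvp_list([(12, -2)])          # only one spatial/temporal MVP found
print(mvp_list)                               # [(12, -2), (0, 0)]
print(reconstruct_mv(mvp_list, mvp_index=0, mvd=(2, -1)))   # -> (14, -3)
```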
Although the Merge mode can greatly save the coding overhead of motion information, and the AMVP mode can improve the prediction accuracy of motion information, both modes have only one piece of motion information for the current coding unit, i.e., all sub-blocks inside the current coding unit share one piece of motion information. For a small moving target, the optimal motion information can only be obtained after the coding unit is divided into blocks; if the current coding unit is not divided, it has only one piece of motion information, so the prediction accuracy is not very high.
For example, referring to fig. 3D, region C, region G, and region H are regions within the current coding unit, not sub-image blocks obtained by dividing the current coding unit. Assuming that the current coding unit uses the motion information of image block F, every region within the current coding unit uses the motion information of image block F. Obviously, since region H in the current coding unit is far from image block F, if region H also uses the motion information of image block F, the prediction accuracy of the motion information of region H will not be very high.
For example, if the current coding unit is divided in the division manner of fig. 2A or fig. 2B, a plurality of sub-image blocks may be obtained. For example, referring to fig. 3E, the sub-image block C, the sub-image block G, the sub-image block H, and the sub-image block I are sub-image blocks divided within the current coding unit, and each sub-image block within the current coding unit may use motion information separately since the current coding unit is divided into a plurality of sub-image blocks. However, since the current coding unit is divided by using the division scheme of fig. 2A or 2B, additional bits are consumed to transmit the division scheme, resulting in a certain bit overhead.
Based on the working principles of the Merge mode and the AMVP mode, some sub-image blocks inside the current coding unit cannot make use of the encoded motion information around the current coding unit, so the available motion information is reduced and the accuracy of the motion information is not high. For example, the sub-image block I inside the current coding unit can only use the motion information of sub-image block C, sub-image block G, and sub-image block H, and cannot use the motion information of image block A, image block B, image block F, image block D, or image block E.
In view of the above findings, the embodiments of the present application provide an encoding and decoding method that allows the current image block to correspond to multiple pieces of motion information without dividing the current image block, i.e., without the overhead caused by sub-block division, thereby improving the prediction accuracy of the motion information of the current image block. Since the current image block is not divided, no extra bits are consumed to transmit the division manner, which saves bit overhead. For each region of the current image block (note that this refers to any region within the current image block whose size is smaller than that of the current image block and which is not a sub-image block obtained by dividing the current image block), the motion information of the region may be obtained using the encoded motion information around the current image block. In other words, different regions of the current image block may correspond to the same or different motion information, and the current image block may have several different pieces of motion information, thereby providing more motion information for the regions inside the coding unit and improving the accuracy of the motion information.
Referring to fig. 3D, C is a sub-region inside a current image block (i.e., a current coding unit), A, B, D, E and F are coded blocks around the current image block, motion information of the current sub-region C can be directly obtained by using an angle prediction method, and other sub-regions inside the current coding unit (e.g., G, H and the like) are also obtained by using the same method. Thus, for the current coding unit, different motion information can be obtained without carrying out block division on the current coding unit, and bit overhead of a part of block division is saved.
The current image block (hereinafter simply referred to as the current block) in the embodiments of the present application is any image unit in the encoding and decoding process, and encoding and decoding are performed with the current block as the unit, such as the CU in the above embodiment. Referring to fig. 3F, the current block includes 9 regions (hereinafter referred to as sub-regions within the current block), such as sub-regions f1-f9, which are sub-regions within the current block, not sub-image blocks obtained by dividing the current block.
Different sub-regions among f1-f9 may correspond to the same or different motion information, so the current block can correspond to multiple pieces of motion information without being divided; for example, sub-region f1 corresponds to motion information 1, sub-region f2 corresponds to motion information 2, and so on.
For example, in determining the motion information of sub-region f5, the motion information of image block A1, image block A2, image block A3, image block E, image block B1, image block B2, and image block B3, that is, the motion information of the encoded blocks around the current block, may be utilized, thereby providing more motion information for sub-region f5. Of course, the motion information of image block A1, image block A2, image block A3, image block E, image block B1, image block B2, and image block B3 may also be used for the motion information of the other sub-regions of the current block.
The following describes the codec method in the embodiments of the present application in conjunction with several specific embodiments.
Example 1: referring to fig. 4, a flow chart of a coding and decoding method in an embodiment of the present application is shown, where the method may be applied to a decoding end or an encoding end, and the method may include the following steps:
Step 401: the motion information prediction modes of the current block are acquired, where the motion information prediction modes at least include motion information angle prediction modes. Illustratively, a motion information angle prediction mode is used to indicate a pre-configured angle; for a sub-region of the current block, a peripheral matching block is selected from the peripheral blocks of the current block according to the pre-configured angle, and one or more pieces of motion information of the current block are determined according to the motion information of the peripheral matching block. The peripheral matching block is a block at a specified position determined from the peripheral blocks according to the pre-configured angle.
For example, the peripheral block includes a block adjacent to the current block; alternatively, the peripheral blocks include blocks adjacent to the current block and non-adjacent blocks. Of course, the peripheral blocks may also include other blocks, without limitation.
In one example, the motion information angle prediction modes include, but are not limited to, one or any combination of the following: horizontal prediction mode, vertical prediction mode, horizontal upward prediction mode, horizontal downward prediction mode, vertical rightward prediction mode. Of course, the foregoing is only a few examples, and other types of motion information angle prediction modes are also possible, and the prediction modes are related to the preset angle, for example, the preset angle may be 10 degrees or 20 degrees. Referring to fig. 5A, a schematic diagram of a horizontal prediction mode, a vertical prediction mode, a horizontal upward prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode is shown, where different prediction modes correspond to different pre-configuration angles.
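To make the role of the pre-configured angle concrete, the sketch below shows one possible way a sub-region could step along the angle to the peripheral block it points to; the angle values, the 4-sample step, and the block layout are illustrative assumptions, not the exact derivation used in this application:

```python
# Sketch: a sub-region follows the pre-configured angle of the chosen mode to a
# peripheral block and copies that block's motion information. The angle set,
# step size and geometry below are illustrative assumptions.
import math

ANGLES_DEG = {
    "horizontal":      180.0,   # points left, towards the left peripheral column
    "vertical":         90.0,   # points up, towards the top peripheral row
    "horizontal_up":   135.0,
    "horizontal_down": 225.0,
    "vertical_right":   45.0,
}

def peripheral_position(sub_center, angle_deg, block_w, block_h):
    """Step from the sub-region centre along the angle until leaving the current block."""
    rad = math.radians(angle_deg)
    dx, dy = math.cos(rad), -math.sin(rad)   # image y axis grows downwards
    x, y = sub_center
    while 0 <= x < block_w and 0 <= y < block_h:
        x += dx * 4                          # 4-sample steps (assumed granularity)
        y += dy * 4
    return (round(x), round(y))              # first position outside the block

# Example: 16x16 current block, sub-region centred at (6, 10), horizontal mode:
# the walk leaves the block on the left, i.e. at a left-hand peripheral block.
print(peripheral_position((6, 10), ANGLES_DEG["horizontal"], 16, 16))
```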
Step 402, performing a duplication checking process on the motion information angle prediction mode of the current block to obtain a motion information angle prediction mode after duplication checking, that is, removing the repeated motion information angle prediction mode.
Illustratively, performing duplicate checking on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplicate checking may include, but is not limited to: determining the motion information angle prediction modes to be checked for duplication; and, for any first motion information angle prediction mode to be checked and any second motion information angle prediction mode to be checked, performing the following steps: determining first motion information according to the first motion information angle prediction mode to be checked; determining second motion information according to the second motion information angle prediction mode to be checked; then, if the first motion information and the second motion information are the same, it may be determined that the first motion information angle prediction mode and the second motion information angle prediction mode are duplicates.
In one example, determining the motion information angle prediction modes to be checked for duplication may include, but is not limited to: for any motion information angle prediction mode, selecting, from the peripheral blocks of the current block, the peripheral matching blocks pointed to by the pre-configured angle of that motion information angle prediction mode; selecting a plurality of peripheral matching blocks to be traversed from these peripheral matching blocks; and, if the motion information of the traversed peripheral matching blocks is the same, determining that the motion information angle prediction mode is a motion information angle prediction mode to be checked for duplication.
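A minimal sketch of this check, assuming the motion information of each traversed peripheral matching block is already known (the values below mirror the hypothetical example used later in this description):

```python
# Sketch: a mode only needs duplicate checking when all traversed peripheral
# matching blocks carry the same motion information. Values are hypothetical.
def needs_duplicate_check(traversed_motion_info):
    """traversed_motion_info: motion info of the traversed peripheral matching blocks."""
    return len(set(traversed_motion_info)) == 1

print(needs_duplicate_check(["A1", "A1", "A1", "A1", "A1"]))   # True  (e.g. horizontal mode)
print(needs_duplicate_check(["A1", "C2", "C3", "C4"]))         # False (e.g. horizontal-down mode)
```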
Illustratively, the traversed peripheral matching blocks may be some or all of the peripheral matching blocks, and the traversal step size may be the same or different for different image blocks.
In one example, after it is determined that the first motion information angle prediction mode and the second motion information angle prediction mode are duplicates, the duplicate first motion information angle prediction mode may be removed; alternatively, the duplicate second motion information angle prediction mode may be removed.
In one example, after duplicate checking is performed on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplicate checking, motion information angle prediction modes for secondary duplicate checking may further be determined from the motion information angle prediction modes after duplicate checking. The motion information angle prediction modes after duplicate checking may include a first part and a second part, where the first part may be the motion information angle prediction modes that did not require duplicate checking, and the second part may include the motion information angle prediction modes that remain after the modes with duplicate motion information are removed; the motion information angle prediction modes for secondary duplicate checking may be the second part. Duplicate checking is then performed between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block, so as to obtain the motion information prediction modes after duplicate checking.
In one example, the process of performing duplicate checking between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block may include, but is not limited to: for any motion information angle prediction mode for secondary duplicate checking, acquiring the motion information of that motion information angle prediction mode; if the motion information of the motion information angle prediction mode is the same as the motion information of a to-be-checked inter prediction mode, it may be determined that the motion information angle prediction mode and that to-be-checked inter prediction mode are duplicates.
In one example, the to-be-checked inter prediction mode may be any one of all the non-angle inter prediction modes corresponding to the current block, or any one of a subset of the non-angle inter prediction modes corresponding to the current block, which is not limited here.
In one example, after duplicate checking is performed on the motion information angle prediction modes of the current block and the motion information angle prediction modes after duplicate checking are obtained, the target motion information prediction mode of the current block may be determined. If the target motion information prediction mode of the current block is a motion information angle prediction mode, the motion information of the current block is determined according to the motion information angle prediction mode, and the predicted value of the current block is determined according to the motion information of the current block.
As can be seen from the above technical solution, in the embodiments of the present application the current block does not need to be divided: the partition information of the sub-regions of the current block can be determined based on the motion information angle prediction mode and the size of the current block, which effectively avoids the bit overhead caused by sub-block division. Motion information is provided for each sub-region of the current block without dividing it into sub-blocks, and different sub-regions of the current block can correspond to the same or different motion information, which saves a large number of coding bits and improves coding performance. Duplicate checking is performed on the motion information angle prediction modes, which reduces the number of motion information angle prediction modes and can further improve coding performance.
Referring to fig. 5B, a schematic diagram of the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode: it can be seen from fig. 5B that different motion information angle prediction modes may have the same effect on the current block; for example, the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode yield the same motion information for the current block.
If no duplicate checking is performed, then when the index of the horizontal downward prediction mode is encoded, since the horizontal prediction mode, the vertical prediction mode, and the horizontal upward prediction mode come before it (of course, the order of the angle prediction modes is not fixed; this is only an example), it may be necessary to encode 0001 to represent it. However, if the duplicate checking of the embodiments of the present application is performed, only the horizontal prediction mode remains before the horizontal downward prediction mode, and the vertical prediction mode and the horizontal upward prediction mode no longer exist (these two modes are removed by duplicate checking), so when the index of the horizontal downward prediction mode is encoded, only 01 may need to be encoded, and the vertical prediction mode and the horizontal upward prediction mode do not need to be taken into account. Therefore, by performing duplicate checking on the different motion information angle prediction modes, the bit overhead caused by encoding the motion information angle prediction mode index information is reduced. In addition, the duplicate checking of the motion information angle prediction modes can be performed quickly, which saves coding bits and reduces hardware complexity. Furthermore, when motion compensation is performed using a motion information angle prediction mode, bi-prediction of small blocks is prohibited for the sake of hardware friendliness.
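The bit saving mentioned above can be seen with a simple unary-style binarization of the mode index; the binarization scheme here is an assumption used only to illustrate the effect of removing duplicate modes:

```python
# Illustration of the bit saving: with a unary-style binarization (assumed),
# the index of the horizontal-down mode shrinks once duplicates are removed.
def unary_code(index):
    """index 0 -> '1', index 1 -> '01', index 2 -> '001', ..."""
    return "0" * index + "1"

before = ["horizontal", "vertical", "horizontal_up", "horizontal_down", "vertical_right"]
after  = ["horizontal", "horizontal_down", "vertical_right"]   # duplicates removed

print(unary_code(before.index("horizontal_down")))   # 0001 -> 4 bits
print(unary_code(after.index("horizontal_down")))    # 01   -> 2 bits
```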
Example 2: based on the same application concept as the above method, referring to fig. 6, a flowchart of a coding and decoding method according to an embodiment of the present application is shown, where the method may be applied to a coding end, and the method may include:
In step 601, the encoding end creates a motion information prediction mode candidate list corresponding to the current block, where the motion information prediction mode candidate list includes a plurality of motion information prediction modes, such as motion information angle prediction modes.
The motion information angle prediction mode is used for indicating a pre-configuration angle, selecting a peripheral matching block from peripheral blocks of the current block for a sub-region of the current block according to the pre-configuration angle, and determining one or more pieces of motion information of the current block according to the motion information of the peripheral matching block; the peripheral matching block is a block at a specified position determined from the peripheral blocks at a predetermined angle. The peripheral blocks include blocks adjacent to the current block; alternatively, the peripheral blocks include blocks adjacent to the current block and non-adjacent blocks. Of course, the peripheral blocks may also include other blocks, without limitation.
Motion information angle prediction modes include, but are not limited to, one or any combination of the following: horizontal prediction mode, vertical prediction mode, horizontal upward prediction mode, horizontal downward prediction mode, vertical rightward prediction mode. Of course, the above are just a few examples, and other types of motion information angle prediction modes are possible.
In one example, a motion information prediction mode candidate list corresponding to the current block needs to be created, i.e., both the encoding side and the decoding side need to create the motion information prediction mode candidate list corresponding to the current block.
The motion information prediction mode candidate list of the encoding end and the motion information prediction mode candidate list of the decoding end are identical according to protocol convention. The encoding side and decoding side may use the same strategy to create the same motion information prediction mode candidate list. Of course, the above manner is just a few examples, and the creation manner is not limited as long as the encoding side and the decoding side have the same motion information prediction mode candidate list.
For example, one motion information prediction mode candidate list may be created for the current block, i.e., all sub-regions within the current block may correspond to the same motion information prediction mode candidate list; alternatively, a plurality of motion information prediction mode candidate lists may be created for the current block. The same or different motion information prediction mode candidate list may be corresponding to different current blocks. For convenience of description, taking an example of creating one motion information prediction mode candidate list for each current block, for example, a motion information prediction mode candidate list 1 for the current block a, a motion information prediction mode candidate list 1 for the current block B, and so on.
For example, the motion information prediction mode candidate list sequentially includes: the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode. Of course, the above is just an example; the pre-configured angle may be any angle between 0 and 360 degrees. Taking the horizontal-right direction from the center point of the sub-region as 0 degrees, any angle rotated counterclockwise from 0 degrees may be a pre-configured angle, or another direction may be taken as 0 degrees from the center point of the sub-region. In practical applications, the pre-configured angle may also be a fractional angle, such as 22.5 degrees.
In one example, the motion information angle prediction mode in the embodiment of the present application may be an angle prediction mode for predicting motion information, that is, an angle prediction mode used for an inter-frame coding process, not an intra-frame coding process, and the motion information angle prediction mode is selected by matching blocks, not matching pixels.
Step 602, the encoding end performs duplication checking processing on a plurality of motion information angle prediction modes in the motion information prediction mode candidate list, so as to obtain a motion information angle prediction mode after duplication checking. For convenience of distinction, the motion information angle prediction mode after duplicate checking may be referred to as a candidate motion information angle prediction mode.
For example, referring to fig. 5B, it is assumed that the motion information prediction mode candidate list sequentially includes: the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode. First, duplicate checking is performed on the horizontal prediction mode and the vertical prediction mode; if the two modes are duplicates, the vertical prediction mode is removed from the motion information prediction mode candidate list. Then, duplicate checking is performed on the horizontal prediction mode and the horizontal upward prediction mode; if the two modes are duplicates, the horizontal upward prediction mode is removed from the motion information prediction mode candidate list. Then, duplicate checking is performed on the horizontal prediction mode and the horizontal downward prediction mode; if the two modes are not duplicates, both are retained, and so on. Illustratively, assuming that the motion information prediction mode candidate list finally retains the horizontal prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode, the remaining motion information angle prediction modes are the candidate motion information angle prediction modes.
In one example, performing duplicate checking on the plurality of motion information angle prediction modes in the motion information prediction mode candidate list to obtain the candidate motion information angle prediction modes may include: determining the motion information angle prediction modes to be checked for duplication; and, for any first motion information angle prediction mode to be checked and any second motion information angle prediction mode to be checked, performing the following steps: determining first motion information according to the first motion information angle prediction mode; determining second motion information according to the second motion information angle prediction mode; then, if the first motion information and the second motion information are the same, it may be determined that the first motion information angle prediction mode and the second motion information angle prediction mode are duplicates.
In one example, determining the motion information angle prediction modes to be checked for duplication may include, but is not limited to: for any motion information angle prediction mode, selecting, from the peripheral blocks of the current block, the peripheral matching blocks pointed to by the pre-configured angle of that motion information angle prediction mode; selecting a plurality of peripheral matching blocks to be traversed from these peripheral matching blocks; and, if the motion information of the traversed peripheral matching blocks is the same, determining that the motion information angle prediction mode is a motion information angle prediction mode to be checked for duplication.
For example, referring to fig. 5B, for the horizontal prediction mode, the peripheral matching blocks pointed to by its pre-configured angle, e.g., 10 peripheral matching blocks, are selected from all the peripheral blocks of the current block. Then, a plurality of peripheral matching blocks to be traversed are selected from the 10 peripheral matching blocks (e.g., all or some of them; 5 peripheral matching blocks are taken as an example below). If the motion information of these 5 peripheral matching blocks is the same, e.g., the motion information of all 5 peripheral matching blocks is A1, the horizontal prediction mode is determined to be a motion information angle prediction mode to be checked for duplication. Similarly, the vertical prediction mode and the horizontal upward prediction mode are motion information angle prediction modes to be checked for duplication.
For the horizontal downward prediction mode, a peripheral matching block pointed by the preset angle, such as 8 peripheral matching blocks, is selected from all peripheral blocks of the current block according to the preset angle of the horizontal downward prediction mode. Then, a plurality of peripheral matching blocks (e.g., all or part of the 8 peripheral matching blocks, followed by 6 peripheral matching blocks as an example) are selected from the 8 peripheral matching blocks. If the motion information of the 6 peripheral matching blocks is not identical, such as A1, C2, C3, C4, etc., the horizontal downward prediction mode is not the motion information angle prediction mode of the weight to be checked. Similarly, the vertical rightward prediction mode is not the motion information angle prediction mode of the weight to be checked.
Illustratively, the horizontal prediction mode, the vertical prediction mode and the horizontal upward prediction mode are all motion information angle prediction modes to be duplicate-checked. For the horizontal prediction mode and the vertical prediction mode, since the motion information of the horizontal prediction mode is A1 and the motion information of the vertical prediction mode is also A1, i.e., the two are the same, the horizontal prediction mode and the vertical prediction mode are repeated, and the vertical prediction mode is removed from the motion information prediction mode candidate list. For the horizontal prediction mode and the horizontal upward prediction mode, since the motion information of the horizontal prediction mode is A1 and the motion information of the horizontal upward prediction mode is also A1, i.e., the two are the same, the horizontal prediction mode and the horizontal upward prediction mode are repeated, and the horizontal upward prediction mode is removed from the motion information prediction mode candidate list. Through the above processing, the motion information prediction mode candidate list finally retains the horizontal prediction mode, the horizontal downward prediction mode and the vertical rightward prediction mode, namely the candidate motion information angle prediction modes.
It should be noted that if the motion information of the horizontal prediction mode is different from the motion information of the horizontal upward prediction mode, it is determined that the horizontal prediction mode and the horizontal upward prediction mode are not repeated, and neither the horizontal prediction mode nor the horizontal upward prediction mode is removed from the motion information prediction mode candidate list. Since the horizontal downward prediction mode and the vertical rightward prediction mode are not motion information angle prediction modes to be duplicate-checked, the horizontal downward prediction mode and the vertical rightward prediction mode are retained in the motion information prediction mode candidate list.
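For reference only, the pairwise duplicate check described above can be sketched in Python as follows. This sketch is an illustrative aid rather than a normative part of the embodiment; the function names, the list representation and the helper representative_motion_info are assumptions introduced here for illustration.

```python
# Illustrative sketch of the duplicate check among the modes to be
# duplicate-checked: the later of two modes with identical motion information
# is removed, and modes that are not to be duplicate-checked are retained.

def dedup_candidate_list(candidate_list, to_be_checked, representative_motion_info):
    kept = []
    seen_motion_info = []
    for mode in candidate_list:
        if mode not in to_be_checked:
            kept.append(mode)            # not to be duplicate-checked: always kept
            continue
        info = representative_motion_info(mode)
        if info in seen_motion_info:
            continue                     # same motion info as an earlier mode: removed
        seen_motion_info.append(info)
        kept.append(mode)
    return kept


if __name__ == "__main__":
    # Toy example mirroring fig. 5B: horizontal, vertical and horizontal-up
    # all resolve to motion information "A1", so only horizontal survives.
    motion = {"horizontal": "A1", "vertical": "A1", "horizontal_up": "A1",
              "horizontal_down": "C2", "vertical_right": "C4"}
    modes = ["horizontal", "vertical", "horizontal_up",
             "horizontal_down", "vertical_right"]
    checked = {"horizontal", "vertical", "horizontal_up"}
    print(dedup_candidate_list(modes, checked, motion.__getitem__))
    # -> ['horizontal', 'horizontal_down', 'vertical_right']
```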
In one example, for any motion information angle prediction mode, according to the preconfigured angle of the motion information angle prediction mode, the peripheral matching blocks pointed to by the preconfigured angle (such as the 10 peripheral matching blocks) are selected from the peripheral blocks of the current block, and a plurality of peripheral matching blocks to be traversed (such as the 5 peripheral matching blocks) are selected from the peripheral matching blocks. For example, the peripheral matching blocks to be traversed may be determined according to a traversal step. Assuming that the traversal step is 2, the first peripheral matching block to be traversed is the matching block 1, the second peripheral matching block to be traversed is the matching block 3, and so on; other manners may also be adopted, which is not limited.
For each peripheral matching block to be traversed, a peripheral matching block to be compared may be selected for the peripheral matching block from all peripheral matching blocks of the current block (e.g., the peripheral matching blocks may be selected as the peripheral matching blocks to be compared in the traversal order, for example, the peripheral matching blocks to be compared are determined in the traversal step, assuming the traversal step is 2, the first peripheral matching block to be traversed is the matching block 1, and for the matching block 1, the peripheral matching block to be compared is the matching block 3). And if the motion information of the peripheral matching blocks to be traversed is the same as the motion information of the peripheral matching blocks to be compared, setting the comparison value of the peripheral matching blocks to be traversed as a first identifier (such as 0). Or if the motion information of the peripheral matching block to be traversed is different from the motion information of the peripheral matching block to be compared, setting the comparison value of the peripheral matching block to be traversed as a second identifier (such as 1).
After the above processing is performed on each peripheral matching block to be traversed, a comparison value of each peripheral matching block to be traversed can be obtained. If the comparison value of each peripheral matching block to be traversed is the first identifier, the motion information of the traversed plurality of peripheral matching blocks is the same, and the motion information angle prediction mode is determined to be a motion information angle prediction mode to be duplicate-checked. If the comparison value of any peripheral matching block to be traversed is the second identifier, it is determined that the motion information angle prediction mode is not a motion information angle prediction mode to be duplicate-checked.
For example, referring to fig. 5A, assuming that the first peripheral matching block to be traversed is A3 and the traversal step is 2, the second peripheral matching block to be traversed is A1, the third peripheral matching block to be traversed is B1, and the fourth peripheral matching block to be traversed is B3. Aiming at a peripheral matching block A3 to be traversed, the peripheral matching block to be compared is A1; aiming at the peripheral matching block A1 to be traversed, the peripheral matching block to be compared is B1; for the peripheral matching block B1 to be traversed, the peripheral matching block to be compared is B3.
Based on the above, whether the motion information of the peripheral matching block A3 to be traversed is the same as the motion information of the peripheral matching block A1 to be compared is judged, if so, the comparison value of the peripheral matching block A3 to be traversed is stored as a first identifier, and if not, the comparison value of the peripheral matching block A3 to be traversed is stored as a second identifier.
Then, judging whether the motion information of the peripheral matching block A1 to be traversed is the same as the motion information of the peripheral matching block B1 to be compared, if so, storing the comparison value of the peripheral matching block A1 to be traversed as a first identifier, if not, storing the comparison value of the peripheral matching block A1 to be traversed as a second identifier, and so on.
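As an illustrative aid only, the classification of a single motion information angle prediction mode as to-be-duplicate-checked, using the traversal step and comparison values described above, can be sketched as follows; the function name, the dictionary representation of motion information and the default step value are assumptions made here for illustration.

```python
# Illustrative sketch: traverse the peripheral matching blocks pointed to by the
# mode's preconfigured angle with a traversal step and compare each traversed
# block with the next traversed block.

def is_to_be_duplicate_checked(motion_of, matching_blocks, step=2):
    """motion_of: dict mapping a peripheral matching block to its motion info.
    matching_blocks: the blocks pointed to by the preconfigured angle, in order.
    Returns True when every comparison value is the first identifier (0)."""
    traversed = matching_blocks[::step]            # blocks to be traversed
    diff = []
    for cur, nxt in zip(traversed, traversed[1:]):
        # first identifier (0): same motion info; second identifier (1): different
        diff.append(0 if motion_of[cur] == motion_of[nxt] else 1)
    return all(flag == 0 for flag in diff)


if __name__ == "__main__":
    blocks = ["M1", "M2", "M3", "M4", "M5"]
    motion = {b: "A1" for b in blocks}             # all identical -> True
    print(is_to_be_duplicate_checked(motion, blocks, step=2))
    motion["M5"] = "C2"                            # one traversed block differs -> False
    print(is_to_be_duplicate_checked(motion, blocks, step=2))
```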
In one example, when determining the peripheral matching block to be traversed from the peripheral matching blocks of the current block, the peripheral matching block may be selected according to different traversal steps. For example, two peripheral matching blocks are determined according to the first traversal step, and then the next peripheral matching block is determined according to the second traversal step, which is not limited.
In step 603, the encoding end selects a candidate motion information angle prediction mode from each candidate motion information angle prediction mode in the motion information prediction mode candidate list, and determines the selected candidate motion information angle prediction mode as a target motion information angle prediction mode corresponding to the current block.
Illustratively, for the process of step 603, the process may include the steps of:
step a1, determining a peripheral matching block corresponding to the current block according to each candidate motion information angle prediction mode of the motion information prediction mode candidate list.
For example, referring to fig. 5A and fig. 5B, the candidate motion information angle prediction modes include a horizontal prediction mode, a horizontal downward prediction mode, a vertical rightward prediction mode, and the like. Based on the preconfigured angle indicated by the horizontal prediction mode, the peripheral matching block 1 corresponding to the current block, such as a plurality of peripheral matching blocks 1, can be determined, and the determination process is not repeated herein. Based on the preconfigured angle indicated by the horizontal downward prediction mode, a peripheral matching block 2 corresponding to the current block, such as a plurality of peripheral matching blocks 2, may be determined. Based on the preconfigured angle indicated by the vertical rightward prediction mode, a peripheral matching block 3 corresponding to the current block, such as a plurality of peripheral matching blocks 3, may be determined.
Step a2, the encoding end determines the rate distortion cost corresponding to the candidate motion information angle prediction mode according to the motion information of the peripheral matching blocks (such as a plurality of motion information corresponding to a plurality of peripheral matching blocks respectively).
For example, the rate distortion cost 1 corresponding to the horizontal prediction mode is determined according to the motion information (i.e., the plurality of motion information) of the plurality of peripheral matching blocks 1. And determining the rate distortion cost 2 corresponding to the horizontal downward prediction mode according to the motion information (i.e. the plurality of motion information) of the plurality of peripheral matching blocks 2. A rate-distortion cost 3 corresponding to the vertical rightward prediction mode is determined from the motion information (i.e., the plurality of motion information) of the plurality of peripheral matching blocks 3.
In one example, based on the plurality of motion information of the plurality of peripheral matching blocks, the rate-distortion cost corresponding to the candidate motion information angle prediction mode may be determined using the rate-distortion principle. The rate-distortion cost may be determined by the following formula: J(mode) = D + λ×R. Illustratively, D denotes Distortion, which can generally be measured using the SSE index, i.e., the sum of squared differences between the reconstructed image block and the source image; λ is the Lagrange multiplier; and R is the actual number of bits required for coding the image block in this mode, including the sum of the bits required for coding the mode information, the motion information, the residual, and the like. The determination manner is not limited.
And a step a3, the encoding end determines a candidate motion information angle prediction mode with the minimum rate distortion cost according to the rate distortion cost corresponding to each candidate motion information angle prediction mode, and determines the candidate motion information angle prediction mode with the minimum rate distortion cost as a target motion information angle prediction mode corresponding to the current block.
For example, assuming that the rate-distortion cost 1 corresponding to the horizontal prediction mode is smaller than the rate-distortion cost 2 corresponding to the horizontal downward prediction mode, and the rate-distortion cost 2 corresponding to the horizontal downward prediction mode is smaller than the rate-distortion cost 3 corresponding to the vertical rightward prediction mode, the horizontal prediction mode may be the target motion information angle prediction mode.
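As a non-normative illustration of steps a2 and a3, the following sketch computes J(mode) = D + λ×R with an SSE distortion and selects the minimum-cost mode; predict(), estimate_bits() and lambda_ are placeholders assumed here for illustration and are not defined by this embodiment.

```python
# Illustrative sketch of rate-distortion based selection of the target mode.

def sse(block_a, block_b):
    """Sum of squared differences between two equally sized sample arrays."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

def rd_cost(distortion, bits, lambda_):
    """J(mode) = D + lambda * R."""
    return distortion + lambda_ * bits

def choose_target_mode(candidate_modes, source_block, predict, estimate_bits, lambda_):
    """Return the candidate mode with the minimum rate-distortion cost."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        reconstruction = predict(mode)        # motion compensation implied by `mode`
        cost = rd_cost(sse(source_block, reconstruction),
                       estimate_bits(mode), lambda_)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode


if __name__ == "__main__":
    src = [10, 12, 11, 13]
    recons = {"horizontal": [10, 12, 11, 13], "vertical": [9, 9, 9, 9]}
    pick = choose_target_mode(["horizontal", "vertical"], src,
                              predict=recons.__getitem__,
                              estimate_bits=lambda m: 2, lambda_=0.5)
    print(pick)   # -> "horizontal"
```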
In step 604, the encoding end encodes the current block according to the target motion information angle prediction mode.
For example, the encoding end may determine the motion information of each sub-region in the current block according to the target motion information angle prediction mode, and motion compensate the sub-region by using the motion information of each sub-region.
In one example, encoding the current block according to the target motion information angle prediction mode may include: determining motion information of the current block according to the target motion information angle prediction mode; the prediction value of the current block is determined according to the motion information of the current block, which is a motion compensation process.
Illustratively, determining motion information of the current block according to the target motion information angle prediction mode includes:
in step 6041, a selection condition for acquiring motion information of the current block is determined according to the target motion information angle prediction mode and the size of the current block. For example, the selection condition may be a first selection condition or a second selection condition. Under the first selection condition, the motion information selected from the motion information of the peripheral matching block is not allowed to be bi-directional motion information (i.e., it may be unidirectional motion information, the forward motion information of bi-directional motion information, or the backward motion information of bi-directional motion information). Under the second selection condition, the motion information selected from the motion information of the peripheral matching block is allowed to be bi-directional motion information (i.e., it may be unidirectional motion information, bi-directional motion information, the forward motion information of bi-directional motion information, or the backward motion information of bi-directional motion information).
For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (which can be configured according to experience, such as 8) and the height is greater than or equal to the preset size parameter, the selection condition is determined to be the second selection condition for any motion information angle prediction mode. If the size of the current block satisfies: the width is smaller than the preset size parameter and the height is greater than the preset size parameter, the selection condition is determined to be the second selection condition when the target motion information angle prediction mode is the vertical prediction mode, and the selection condition is determined to be the first selection condition when the target motion information angle prediction mode is a prediction mode other than the vertical prediction mode.
If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is greater than the preset size parameter, the selection condition is determined to be the second selection condition when the target motion information angle prediction mode is the horizontal prediction mode, and the selection condition is determined to be the first selection condition when the target motion information angle prediction mode is a prediction mode other than the horizontal prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is smaller than the preset size parameter, the selection condition is determined to be the first selection condition for any motion information angle prediction mode. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is equal to the preset size parameter, or the height is equal to the preset size parameter and the width is smaller than the preset size parameter, the selection condition is determined to be the first selection condition for any motion information angle prediction mode.
Referring to table 1 in the subsequent embodiment, taking the preset size parameter as 8 as an example, the "one-way" in table 1 indicates that the selection condition is the first selection condition, i.e., bidirectional motion information is not allowed, and the "two-way" in table 1 indicates that the selection condition is the second selection condition, i.e., bidirectional motion information is allowed.
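The selection-condition rule of step 6041 can be sketched as follows, assuming the preset size parameter is 8 as in Table 1; the function and the string labels "first"/"second" and the mode names are assumptions made here for illustration only.

```python
# Illustrative sketch of step 6041: decide whether bidirectional motion
# information is allowed for a given target mode and current block size.

def selection_condition(mode, width, height, preset=8):
    """'second' allows bidirectional motion information, 'first' does not."""
    if width >= preset and height >= preset:
        return "second"                                   # any angle prediction mode
    if width < preset and height > preset:
        return "second" if mode == "vertical" else "first"
    if height < preset and width > preset:
        return "second" if mode == "horizontal" else "first"
    # remaining small-block cases (both sides below the preset, or one side
    # below and the other equal to the preset): bidirectional is not allowed
    return "first"
```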
In step 6042, the sub-region division information of the current block is determined according to the target motion information angle prediction mode and the size of the current block, that is, the sub-region division information indicates a manner of dividing the current block into sub-regions.
For example, when the target motion information angle prediction mode is a horizontal upward prediction mode, a horizontal downward prediction mode, or a vertical rightward prediction mode, if the width of the current block is greater than or equal to a preset size parameter, and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter, or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4*4.
When the target motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is greater than the preset size parameter, the size of the sub-region is (the width of the current block) * 4, or the size of the sub-region is 4*4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8*8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4*4.
When the target motion information angle prediction mode is the vertical prediction mode, if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4 * (the height of the current block), or the size of the sub-region is 4*4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8*8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4*4.
The above correspondence is shown in Table 1 in the subsequent example, taking the preset size parameter of 8 as an example.
In one example, the size of the current block, the motion information angle prediction mode, the size of the sub-region, the direction of the sub-region (one-way indicates the first selection condition, i.e. not allowed to be bi-directional motion information, two-way indicates the second selection condition, i.e. allowed to be bi-directional motion information) may be as shown in table 1 below.
TABLE 1
(Table 1 is provided as an image in the original publication; for each size of the current block and each motion information angle prediction mode, it lists the size of the sub-region and the direction of the sub-region (one-way or two-way).)
In one example, when the target motion information angle prediction mode is the horizontal prediction mode, if the width of the current block is greater than 8, the size of the sub-region may also be 4*4.
In one example, when the target motion information angle prediction mode is the vertical prediction mode, if the height of the current block is greater than 8, the size of the sub-region may also be 4*4.
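The sub-region size rule of step 6042 can be sketched as follows, assuming the preset size parameter is 8. Reading "(width of the current block) * 4" and "4 * (height of the current block)" as (sub_width, sub_height) pairs, and the mode name strings, are assumptions made here for illustration; the alternative of using 4*4 sub-regions in those cases, as noted just above, is omitted from the sketch.

```python
# Illustrative sketch of step 6042: sub-region size per target mode and block size.

def sub_region_size(mode, width, height, preset=8):
    """Return (sub_width, sub_height) for one sub-region of the current block."""
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        if width >= preset and height >= preset:
            return (8, 8)
        return (4, 4)
    if mode == "horizontal":
        if width > preset:
            return (width, 4)    # each sub-region spans the full block width, height 4
        if width == preset and height >= preset:
            return (8, 8)
        return (4, 4)            # remaining cases, e.g. width below the preset
    if mode == "vertical":
        if height > preset:
            return (4, height)   # each sub-region spans the full block height, width 4
        if height == preset and width >= preset:
            return (8, 8)
        return (4, 4)            # remaining cases, e.g. height below the preset
    raise ValueError("unknown motion information angle prediction mode")
```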
Step 6043, selecting a peripheral matching block pointed by the preset angle from the peripheral blocks of the current block according to the preset angle corresponding to the target motion information angle prediction mode.
For example, for any one of the motion information angle prediction modes of the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, and the vertical rightward prediction mode, the preconfigured angle corresponding to the motion information angle prediction mode may be known. After knowing the preconfigured angle, the peripheral matching block pointed by the preconfigured angle can be selected from the peripheral blocks of the current block, which is not limited.
Step 6044, determining motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block. For example, dividing the current block into at least one sub-region according to the sub-region division information; for each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from peripheral matching blocks of the current block according to the target motion information angle prediction mode, and determining the motion information of the sub-region according to the motion information and the selection condition of the peripheral matching block corresponding to the sub-region. Then, the motion information of the at least one sub-region is determined as the motion information of the current block.
The selecting, from the peripheral matching blocks of the current block, a peripheral matching block corresponding to the sub-region according to the target motion information angle prediction mode may include, but is not limited to: and selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle corresponding to the target motion information angle prediction mode.
For example, referring to the above-described embodiment, it is assumed that the current block is divided into sub-region 1 and sub-region 2 according to the sub-region division information. For the sub-region 1, a peripheral matching block 1 corresponding to the sub-region 1 may be selected from the peripheral matching blocks of the current block according to the target motion information angle prediction mode (e.g., the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, the vertical rightward prediction mode, etc.). Assuming that the peripheral matching block 1 stores bi-directional motion information including forward motion information and backward motion information, if the selection condition of the sub-region 1 is the first selection condition, the forward motion information or the backward motion information corresponding to the peripheral matching block 1 may be used as the motion information of the sub-region 1; if the selection condition of the sub-region 1 is the second selection condition, the bi-directional motion information corresponding to the peripheral matching block 1, including the forward motion information and the backward motion information, may be used as the motion information of the sub-region 1.
For the sub-region 2, a peripheral matching block 2 corresponding to the sub-region 2 may be selected from among peripheral matching blocks of the current block according to the target motion information angle prediction mode. Assuming that the peripheral matching block 2 stores unidirectional motion information, the unidirectional motion information corresponding to the peripheral matching block 2 is taken as motion information of the sub-region 2. Then, both the motion information of the sub-region 1 and the motion information of the sub-region 2 may be determined as the motion information of the current block.
In one example, the motion information for sub-region 1, as well as the motion information for sub-region 2, may be stored in accordance with the 4*4 sub-block size.
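As a non-normative sketch of step 6044, the following shows how the motion information of each sub-region could be derived from its peripheral matching block under the first or second selection condition. All names and the ('uni'/'bi', mv) representation are assumptions made here for illustration, and keeping only the forward motion information under the first selection condition is just one of the allowed choices.

```python
# Illustrative sketch of step 6044: assign motion information per sub-region.

def motion_info_for_block(sub_regions, matching_block_of, motion_of, condition):
    """sub_regions: list of sub-region identifiers.
    matching_block_of: sub-region -> peripheral matching block selected along
    the target mode's preconfigured angle.
    motion_of: matching block -> ('uni', mv) or ('bi', (forward_mv, backward_mv)).
    condition: 'first' (bidirectional not allowed) or 'second' (allowed)."""
    result = {}
    for sub in sub_regions:
        kind, mv = motion_of[matching_block_of[sub]]
        if kind == "bi" and condition == "first":
            forward_mv, _backward_mv = mv
            result[sub] = ("uni", forward_mv)   # keep e.g. the forward part only
        else:
            result[sub] = (kind, mv)
        # In an encoder/decoder the motion information of each sub-region would
        # then be stored per 4*4 sub-block, as noted above.
    return result
```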
In another example, determining the motion information of the current block according to the target motion information angle prediction mode includes: determining, according to the size of the current block, a selection condition of the current block for acquiring motion information; the selection condition is the second selection condition, under which the motion information selected from the motion information of the peripheral matching blocks is allowed to be bi-directional motion information (i.e., it may be unidirectional motion information, bi-directional motion information, the forward motion information of bi-directional motion information, or the backward motion information of bi-directional motion information). For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (configured according to experience, such as 8) and the height is greater than or equal to the preset size parameter, the selection condition is directly determined to be the second selection condition, independently of the target motion information angle prediction mode.
Then, sub-region division information of the current block is determined according to the size of the current block. For example, if the size of the current block satisfies: the width is greater than or equal to a preset size parameter (configured according to experience, such as 8, etc.), and the height is greater than or equal to the preset size parameter, the size of the subarea is 8 x 8, and the subarea is irrelevant to the angle prediction mode of the target motion information.
And then, selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle corresponding to the target motion information angle prediction mode. For example, after knowing the preconfigured angle, the peripheral matching block pointed to by the preconfigured angle may be selected from the peripheral blocks of the current block, which is not limited.
Then, according to the selection condition, the sub-region division information and the motion information of the peripheral matching block, determining the motion information of the current block. For example, dividing the current block into at least one sub-region according to the sub-region division information; for each sub-region of the current block, selecting a peripheral matching block corresponding to the sub-region from peripheral matching blocks of the current block according to the target motion information angle prediction mode, and determining the motion information of the sub-region according to the motion information and the selection condition of the peripheral matching block corresponding to the sub-region. Then, the motion information of the at least one sub-region is determined as the motion information of the current block.
Example 3: based on the same application concept as the above method, referring to fig. 7, a flow chart of a coding and decoding method according to an embodiment of the present application is shown, where the method may be applied to a decoding end, and the method may include:
in step 701, the decoding end creates a motion information prediction mode candidate list corresponding to the current block, where the motion information prediction mode candidate list includes a plurality of motion information prediction modes, such as a motion information angle prediction mode.
Motion information angle prediction modes include, but are not limited to, one or any combination of the following: horizontal prediction mode, vertical prediction mode, horizontal upward prediction mode, horizontal downward prediction mode, vertical rightward prediction mode. Of course, the above are just a few examples, and other types of motion information angle prediction modes are possible.
The motion information prediction mode candidate list at the decoding end is the same as the motion information prediction mode candidate list at the encoding end, namely the sequence of the motion information angle prediction modes of the two is identical.
For example, the implementation process of step 701 may refer to step 601, which is not described herein.
Step 702, the decoding end performs a duplication checking process on a plurality of motion information angle prediction modes in the motion information prediction mode candidate list, so as to obtain a motion information angle prediction mode after duplication checking. For convenience of distinction, the motion information angle prediction mode after duplicate checking may be referred to as a candidate motion information angle prediction mode.
For example, the implementation process of step 702 may refer to step 602, which is not described herein.
In step 703, the decoding end selects a candidate motion information angle prediction mode from each candidate motion information angle prediction mode in the motion information prediction mode candidate list, and determines the selected candidate motion information angle prediction mode as a target motion information angle prediction mode corresponding to the current block.
For example, for the process of step 703, the process may include the steps of:
and b1, the decoding end acquires indication information from the coded bit stream, wherein the indication information is used for indicating index information of a target motion information angle prediction mode in a motion information prediction mode candidate list.
For example, when the encoding end transmits the encoded bitstream to the decoding end, the encoded bitstream may carry indication information for indicating index information of the target motion information angle prediction mode in the motion information prediction mode candidate list. For example, assuming that the motion information prediction mode candidate list sequentially includes a horizontal prediction mode, a horizontal downward prediction mode, and a vertical rightward prediction mode, and the horizontal prediction mode is a target motion information angle prediction mode, the indication information is used to indicate index information 1, and the index information 1 may represent a first candidate motion information angle prediction mode of the motion information prediction mode candidate list.
And b2, the decoding end selects a candidate motion information angle prediction mode corresponding to the index information from the motion information prediction mode candidate list, and determines the selected candidate motion information angle prediction mode as a target motion information angle prediction mode corresponding to the current block. For example, when the indication information is used to indicate index information 1, the decoding end may determine the 1 st candidate motion information angle prediction mode in the motion information prediction mode candidate list as the target motion information angle prediction mode corresponding to the current block.
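Steps b1 and b2 can be sketched as follows for reference; parse_index() stands in for the actual parsing of the indication information from the encoded bit stream, which is an assumption here, and a 0-based list index is used (the text's wording "index information 1" refers to the first candidate).

```python
# Illustrative sketch of the decoding-end selection of the target mode.

def select_target_mode(candidate_list, parse_index):
    """candidate_list: duplicate-checked motion information prediction mode
    candidate list, identical in order to the encoding end's list."""
    index = parse_index()          # indication information from the bit stream
    return candidate_list[index]


if __name__ == "__main__":
    candidates = ["horizontal", "horizontal_down", "vertical_right"]
    print(select_target_mode(candidates, lambda: 0))   # -> "horizontal"
```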
In step 704, the decoding end decodes the current block according to the target motion information angle prediction mode.
For example, the decoding end may determine the motion information of each sub-region in the current block according to the target motion information angle prediction mode, and motion compensate the sub-region by using the motion information of each sub-region.
For example, the implementation process of step 704 may refer to step 604, which is not described herein.
Example 4: based on the same application concept as the above method, another codec method is also provided in the embodiments of the present application, where the codec method may be applied to the encoding end, and the method may include:
In step c1, the encoding end creates a motion information prediction mode candidate list corresponding to the current block, where the motion information prediction mode candidate list includes a plurality of motion information angle prediction modes and a motion information non-angle prediction mode (i.e., a motion information prediction mode of a different type from the motion information angle prediction mode).
The implementation process of step c1 may refer to step 601. Unlike step 601, the motion information prediction mode candidate list further includes a motion information non-angle prediction mode. The motion information non-angle prediction mode is not limited; it may be any motion information prediction mode other than the motion information angle prediction modes such as the horizontal prediction mode, the vertical prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode and the vertical rightward prediction mode.
And c2, the coding end performs duplication checking processing on a plurality of motion information angle prediction modes in the motion information prediction mode candidate list to obtain a motion information angle prediction mode after duplication checking. For convenience of distinction, the motion information angle prediction mode after duplicate checking may be referred to as a candidate motion information angle prediction mode.
For example, the implementation process of step c2 may refer to step 602, and will not be repeated here.
And c3, the encoding end performs duplicate checking processing on the candidate motion information angle prediction modes to be duplicate-checked and the inter-frame prediction mode to be duplicate-checked (namely, an inter-frame prediction mode other than the motion information angle prediction modes, which may be the motion information non-angle prediction mode), so as to obtain the motion information prediction modes after duplicate checking.
For example, referring to step 602 above, after the duplicate checking processing is performed on the plurality of motion information angle prediction modes, the candidate motion information angle prediction modes may include: the horizontal prediction mode, the horizontal downward prediction mode and the vertical rightward prediction mode. Then, the candidate motion information angle prediction modes to be duplicate-checked are determined from the horizontal prediction mode, the horizontal downward prediction mode and the vertical rightward prediction mode; for the specific determination manner, refer to step 602. Obviously, the horizontal prediction mode is a motion information angle prediction mode to be duplicate-checked, while the horizontal downward prediction mode and the vertical rightward prediction mode are not motion information angle prediction modes to be duplicate-checked.
In summary, the candidate motion information angle prediction modes in the prediction mode candidate list may include first partial motion information angle prediction modes and second partial motion information angle prediction modes. The first partial motion information angle prediction modes are motion information angle prediction modes that are not repeated, for example, the horizontal downward prediction mode and the vertical rightward prediction mode. The second partial motion information angle prediction modes include the motion information angle prediction modes, among the duplicate-checked motion information angle prediction modes, that remain after repetition is removed, for example, the horizontal prediction mode. Obviously, the candidate motion information angle prediction modes to be duplicate-checked (namely, the motion information angle prediction modes to be subjected to the secondary duplicate checking) may be the second partial motion information angle prediction modes.
The motion information prediction mode candidate list includes motion information non-angle prediction modes (also referred to as non-angle inter prediction modes), the number of which is one or more, for example a plurality. The inter-frame prediction mode to be duplicate-checked may be any one of all non-angle inter prediction modes corresponding to the current block, or any one of a part of the non-angle inter prediction modes corresponding to the current block.
For any one of the candidate motion information angle prediction modes to be duplicate-checked, such as the horizontal prediction mode, the motion information of the candidate motion information angle prediction mode is acquired. If the motion information of the candidate motion information angle prediction mode is the same as the motion information of the inter-frame prediction mode to be duplicate-checked, it is determined that the candidate motion information angle prediction mode and the inter-frame prediction mode to be duplicate-checked are repeated, and the candidate motion information angle prediction mode or the inter-frame prediction mode to be duplicate-checked is removed from the motion information prediction mode candidate list. If the motion information of the candidate motion information angle prediction mode is different from the motion information of the inter-frame prediction mode to be duplicate-checked, it is determined that the candidate motion information angle prediction mode and the inter-frame prediction mode to be duplicate-checked are not repeated, and both the candidate motion information angle prediction mode and the inter-frame prediction mode to be duplicate-checked are retained in the motion information prediction mode candidate list.
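As an illustration of step c3 only, the following non-normative sketch removes a candidate motion information angle prediction mode whose motion information repeats that of the inter-frame prediction mode to be duplicate-checked (the embodiment equally allows removing the inter-frame prediction mode instead); the function and its arguments are assumptions made here for illustration.

```python
# Illustrative sketch of the duplicate check against a non-angle inter mode.

def dedup_against_inter_mode(candidate_list, checked_angle_modes, inter_mode, motion_of):
    """Drop any candidate angle mode in checked_angle_modes whose motion
    information equals that of inter_mode; everything else is retained."""
    kept = []
    for mode in candidate_list:
        if mode in checked_angle_modes and motion_of(mode) == motion_of(inter_mode):
            continue                 # repeated: remove the angle mode (one allowed choice)
        kept.append(mode)
    return kept
```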
And c4, the encoding end selects one motion information prediction mode from each motion information prediction mode (such as a motion information angle prediction mode and a motion information non-angle prediction mode) of the motion information prediction mode candidate list after the duplicate checking, and determines the selected motion information prediction mode as a target motion information prediction mode of the current block.
For example, the implementation process of step c4 may refer to step 603. Unlike step 603, in which a motion information angle prediction mode with the smallest cost value is selected as the target motion information prediction mode based on the cost value of each candidate motion information angle prediction mode, in step c4 a motion information prediction mode with the smallest cost value is selected as the target motion information prediction mode based on the cost values of all the motion information prediction modes after duplicate checking (such as the motion information angle prediction modes and the motion information non-angle prediction modes); that is, the target motion information prediction mode may be a motion information angle prediction mode or a motion information non-angle prediction mode, which is not repeated herein.
And c5, the encoding end performs motion compensation on the current block according to the target motion information prediction mode.
If the target motion information prediction mode is a motion information angle prediction mode, the coding end determines the motion information of each sub-region in the current block according to the motion information angle prediction mode, and performs motion compensation on the sub-region by utilizing the motion information of each sub-region. If the target motion information prediction mode is a motion information non-angle prediction mode, the coding end determines the motion information of each sub-region in the current block according to the motion information non-angle prediction mode, and performs motion compensation on the sub-region by utilizing the motion information of each sub-region.
For example, the implementation process of step c5 may refer to step 604, and will not be repeated here.
Example 5: based on the same application concept as the above method, another codec method is also provided in the embodiments of the present application, where the codec method may be applied to a decoding end, and the method may include:
in step d1, the decoding end creates a motion information prediction mode candidate list corresponding to the current block, wherein the motion information prediction mode candidate list comprises a plurality of motion information angle prediction modes and a motion information non-angle prediction mode (namely, motion information prediction modes of different types from the motion information angle prediction modes).
And d2, the decoding end performs duplication checking processing on a plurality of motion information angle prediction modes in the motion information prediction mode candidate list to obtain a motion information angle prediction mode after duplication checking. For convenience of distinction, the motion information angle prediction mode after duplicate checking may be referred to as a candidate motion information angle prediction mode.
And d3, the decoding end performs duplicate checking processing on the candidate motion information angle prediction modes to be duplicate-checked and the inter-frame prediction mode to be duplicate-checked (namely, an inter-frame prediction mode other than the motion information angle prediction modes, which may be the motion information non-angle prediction mode), so as to obtain the motion information prediction modes after duplicate checking.
Step d4, the decoding end selects one motion information prediction mode from each motion information prediction mode (such as a motion information angle prediction mode and a motion information non-angle prediction mode) of the motion information prediction mode candidate list, and determines the selected motion information prediction mode as a target motion information prediction mode of the current block.
And d5, the decoding end performs motion compensation on the current block according to the target motion information prediction mode.
For example, the implementation of the step d1 to the step d5 may refer to the step c1 to the step c5, which are not described herein.
Example 6: based on the same application concept as the above method, the embodiment of the present application proposes another encoding and decoding method, which can be applied to the encoding end or the decoding end. As shown in fig. 8, the peripheral blocks of the current block may include peripheral blocks A1, A2, ..., Am, Am+1, ..., Am+n, Am+n+1, Am+n+2, ..., A2m+n+1, A2m+n+2, ..., A2m+2n+1. In summary, the peripheral blocks of the current block may include, but are not limited to, blocks adjacent to the current block, blocks not adjacent to the current block, and even blocks in other adjacent frames.
Referring to fig. 8, the width and height of the current block are W and H, respectively, i.e., the width value of the current block is W and the height value of the current block is H, and the motion information of the peripheral blocks is stored in a minimum unit of 4×4.
In embodiment 6, the duplication checking procedure of the above embodiment is described through several specific application scenarios.
Based on the illustration of fig. 8, the following describes a codec method in connection with several specific application scenarios.
Application scenario 1: the sizes of m and n are W/4 and H/4 respectively. Let i be any integer in [1, m], and let j = i + step, where 1 <= step <= Max(m, n), step is the traversal step and is an integer, and Max(m, n) is the maximum value of m and n; let k be any integer in [2m+n+2, 2m+2n+1]. Based on this, the following comparison process is performed:
And e1, judging whether j is larger than k, if so, exiting the comparison process, otherwise, executing step e2.
Step e2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 1. After step e2, step e3 is performed.
Step e3, let i = j and j = j + step, where the value of step is any integer in [1, Max(m, n)] and may be the same or different in each iteration; then return to step e1.
After the above processing, after the comparison process is exited, the motion information angle prediction mode in the motion information prediction mode candidate list may be subjected to the duplication checking processing according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, the values of Diff[i] are judged for any j values of i belonging to the interval [m+1, m+n], where 1 <= j <= n. If these j Diff values are all 0, mode[0]=0 is recorded, meaning that the motion information is all the same, and the motion information M[0] is recorded as the motion information of Ak, where k is any integer in the interval [m+1, m+n]; otherwise, mode[0]=1 is recorded, meaning that the motion information is not all the same.
For the vertical prediction mode, the values of Diff[i] are judged for any j values of i belonging to the interval [m+n+2, 2m+n+1], where 1 <= j <= m. If these j Diff values are all 0, mode[1]=0 is recorded, and the motion information M[1] is recorded as the motion information of Ak, where k is any integer in the interval [m+n+2, 2m+n+1]; otherwise, mode[1]=1 is recorded.
For the horizontal upward prediction mode, the values of Diff[i] are judged for any j values of i belonging to the interval [m+1, 2m+n+1], where 1 <= j <= m+n+1. If these j Diff values are all 0, mode[2]=0 is recorded, and the motion information M[2] is recorded as the motion information of Ak, where k is any integer in the interval [m+1, 2m+n+1]; otherwise, mode[2]=1 is recorded.
For the horizontal downward prediction mode, the values of Diff[i] are judged for any j values of i belonging to the interval [1, m+n], where 1 <= j <= m+n. If these j Diff values are all 0, mode[3]=0 is recorded, and the motion information M[3] is recorded as the motion information of Ak, where k is any integer in the interval [1, m+n]; otherwise, mode[3]=1 is recorded.
For the vertical rightward prediction mode, the values of Diff[i] are judged for any j values of i belonging to the interval [m+n+2, 2m+2n+1], where 1 <= j <= m+n. If these j Diff values are all 0, mode[4]=0 is recorded, and the motion information M[4] is recorded as the motion information of Ak, where k is any integer in the interval [m+n+2, 2m+2n+1]; otherwise, mode[4]=1 is recorded.
Through the above processing, if the mode value of a motion information angle prediction mode is 0, the motion information angle prediction mode is a motion information angle prediction mode to be duplicate-checked. Based on this, all the motion information angle prediction modes whose mode value is 0 are found first, for example, mode[0]=0, mode[1]=0 and mode[2]=0, and then the horizontal prediction mode, the vertical prediction mode and the horizontal upward prediction mode are motion information angle prediction modes to be duplicate-checked. Then, whether the motion information M[0] and the motion information M[1] are the same is compared; if they are the same, the horizontal prediction mode and the vertical prediction mode are repeated, and if they are different, the horizontal prediction mode and the vertical prediction mode are not repeated. Similarly, whether the motion information M[0] and the motion information M[2] are the same is compared; if they are the same, the horizontal prediction mode and the horizontal upward prediction mode are repeated, and if they are different, the horizontal prediction mode and the horizontal upward prediction mode are not repeated. The rest can be deduced by analogy, and the determination of whether the modes are repeated is not described again.
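Application scenario 1 can be sketched, for illustration only, as follows. Which j indices inside each interval are actually checked is left open by the description above ("any j values"), so this sketch simply checks every Diff entry available inside each interval, and all names are assumptions introduced here for illustration.

```python
# Illustrative sketch of application scenario 1: compute Diff[] and the mode flags.

def compute_diff(motion, m, n, step=1):
    """motion: dict index -> motion info of peripheral blocks A1..A(2m+2n+1).
    Diff[i] is 0 when Ai and the block `step` positions further carry the same
    motion information, and 1 otherwise."""
    diff = {}
    i, j = 1, 1 + step
    while j <= 2 * m + 2 * n + 1:
        diff[i] = 0 if motion.get(i) == motion.get(j) else 1
        i, j = j, j + step
    return diff

def mode_flags(diff, m, n):
    """Return mode[0..4]: 0 when every available Diff value in the mode's
    index interval is 0 (motion information all the same), 1 otherwise."""
    intervals = {
        0: range(m + 1, m + n + 1),              # horizontal
        1: range(m + n + 2, 2 * m + n + 2),      # vertical
        2: range(m + 1, 2 * m + n + 2),          # horizontal upward
        3: range(1, m + n + 1),                  # horizontal downward
        4: range(m + n + 2, 2 * m + 2 * n + 2),  # vertical rightward
    }
    return {mode: 0 if all(diff.get(i, 0) == 0 for i in interval) else 1
            for mode, interval in intervals.items()}


if __name__ == "__main__":
    m = n = 2                                    # a toy 8x8 block
    motion = {i: "A1" for i in range(1, 2 * m + 2 * n + 2)}
    motion[1] = "C2"                             # one below-left block differs
    print(mode_flags(compute_diff(motion, m, n, step=1), m, n))
    # horizontal/vertical/up/right stay 0; horizontal-down (index 3) becomes 1
```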
Application scenario 2: the width W of the current block is greater than or equal to 32, the height H is greater than or equal to 32, and the sizes of m and n are W/4 and H/4. Let i = W/16, j = i + step, step = W/16, and the following comparison process is performed:
And f1, judging whether j is greater than 2m+2n+1, if yes, exiting the comparison process, and otherwise, executing the step f2.
Step f2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 1. After step f2, step f3 is performed.
Step f3, judging whether m <= j < m+n is true; if yes, step = H/16; otherwise, it is determined whether m+n <= j < m+n+2 is true; if yes, step = 1; otherwise, it is further determined whether m+n+2 <= j < 2m+n+1 is true; if yes, step = W/16; otherwise, it is determined whether 2m+n+1 <= j < 2m+2n+1 is true; if yes, step = H/16; otherwise, step remains unchanged. (A sketch of this step update is given after step f4 below.)
Step f4, let i=j, j=j+step, and then return to step f1 to perform the processing.
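The position-dependent step update of step f3 can be sketched as follows; the helper name and the use of integer division are assumptions made here for illustration (W and H are multiples of 16 for the typical block sizes covered by this scenario).

```python
# Illustrative sketch of step f3: traversal step after reaching index j.

def next_step(j, W, H, m, n, current_step):
    """Return the traversal step to use once index j has been reached."""
    if m <= j < m + n:
        return H // 16
    if m + n <= j < m + n + 2:
        return 1
    if m + n + 2 <= j < 2 * m + n + 1:
        return W // 16
    if 2 * m + n + 1 <= j < 2 * m + 2 * n + 1:
        return H // 16
    return current_step            # otherwise the step remains unchanged
```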
After the above processing, after the comparison process is exited, the motion information angle prediction mode in the motion information prediction mode candidate list may be subjected to the duplication checking processing according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take the values m+n-H/16, m+n-H/8 and m+n-H/16×3. If the values of these Diff are all 0, mode[0]=0 can be recorded and the motion information M[0] can be recorded as the motion information of Am+n; otherwise, mode[0]=1 is recorded.
For the vertical prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take the values m+n+2, m+n+2+W/16 and m+n+2+W/8. If the values of these Diff are all 0, mode[1]=0 can be recorded and the motion information M[1] can be recorded as the motion information of Am+n+2; otherwise, mode[1]=1 is recorded.
For the horizontal upward prediction mode, the values of a plurality of Diff[i] are judged. Illustratively, i may take the values m+n-H/8, m+n-H/16, m+n, m+n+1, m+n+2 and m+n+2+W/16. If the values of these Diff are all 0, mode[2]=0 is recorded, and the motion information M[2] is recorded as the motion information of Am+n+1; otherwise, mode[2]=1 is recorded.
For the horizontal downward prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take the values W/16, W/8, W/16×3, m, m+H/16 and m+H/8. If the values of these Diff are all 0, mode[3]=0 is recorded, and the motion information M[3] is recorded as the motion information of Am; otherwise, mode[3]=1 is recorded.
For the vertical rightward prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take the values m+n+2+W/16, m+n+2+W/8, m+n+2+W/16×3, 2m+n+2, 2m+n+2+H/16 and 2m+n+2+H/8. If the values of these Diff are all 0, mode[4]=0 can be recorded, and the motion information M[4] can be recorded as the motion information of A2m+n+2; otherwise, mode[4]=1 is recorded.
Based on this, all motion information angle prediction modes with mode value 0 are found first, such as mode [0] =0, mode [1] =0, mode [2] =0. Then, comparing whether the motion information M [0] and the motion information M [1] are the same, if so, explaining that the horizontal prediction mode and the vertical prediction mode are repeated, if not, explaining that the horizontal prediction mode and the vertical prediction mode are not repeated, and so on, and the determination of whether to repeat is not repeated.
Application scenario 3: the width W of the current block is greater than or equal to 32, the height H is less than 32, and the sizes of m and n are W/4 and H/4. Let i = W/16, j = i + step, step = W/16, and the following comparison process is performed:
and g1, judging whether j is greater than 2m+2n+1, if yes, exiting the comparison process, and otherwise, executing the step g2.
Step g2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 1. After step g2, step g3 is performed.
Step g3, judging whether m <= j < m+n+2 is true; if yes, step = 1; otherwise, it is determined whether m+n+2 <= j < 2m+n+1 is true; if yes, step = W/16; otherwise, it is determined whether 2m+n+1 <= j < 2m+2n+1 is true; if yes, step = 1; otherwise, step remains unchanged.
Step g4, i=j, j=j+step, and then, the process returns to step g 1.
After the above processing, after the comparison process is exited, the motion information angle prediction mode in the motion information prediction mode candidate list may be subjected to the duplication checking processing according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, the values of a plurality of Diff[i] are judged, where i may take each integer in [m+1, m+n-1]; values of i that do not satisfy the condition are not used. If the values of these Diff are all 0, mode[0]=0 is recorded, and the motion information M[0] is recorded as the motion information of Am+n; otherwise, mode[0]=1 is recorded.
For the vertical prediction mode, the values of a plurality of Diff[i] are judged, where, illustratively, i may take the values m+n+2, m+n+2+W/16 and m+n+2+W/8. If the values of these Diff are all 0, mode[1]=0 is recorded, and the motion information M[1] is recorded as the motion information of Am+n+2; otherwise, mode[1]=1 is recorded.
For the horizontal upward prediction mode, the values of a plurality of Diff[i] are judged, where i may take the values m+n+2 and m+n+2+W/16, and each integer in [m+2, m+n+1]. If the values of these Diff are all 0, mode[2]=0 is recorded, and the motion information M[2] is recorded as the motion information of Am+n+1; otherwise, mode[2]=1 is recorded.
For the horizontal downward prediction mode, the values of a plurality of Diff[i] are judged, where, illustratively, i may take the values W/16, W/8 and W/16×3, and each integer in [m, m+n-2]. If the values of these Diff are all 0, mode[3]=0 is recorded, and the motion information M[3] is recorded as the motion information of Am; otherwise, mode[3]=1 is recorded.
For the vertical rightward prediction mode, the values of a plurality of Diff[i] are judged, where, illustratively, i may take the values m+n+2+W/16, m+n+2+W/8 and m+n+2+W/16×3, and each integer in [2m+n+2, 2m+2n]. If the values of these Diff are all 0, mode[4]=0 can be recorded, and the motion information M[4] can be recorded as the motion information of A2m+n+2; otherwise, mode[4]=1 can be recorded.
Based on this, all motion information angle prediction modes with mode value 0 are found first, such as mode [0] =0, mode [1] =0, mode [2] =0. Then, comparing whether the motion information M [0] and the motion information M [1] are the same, if so, explaining that the horizontal prediction mode and the vertical prediction mode are repeated, if not, explaining that the horizontal prediction mode and the vertical prediction mode are not repeated, and so on, and the determination of whether to repeat is not repeated.
Application scenario 4: the width W of the current block is smaller than 32, the height H is larger than or equal to 32, and the sizes of m and n are W/4 and H/4. Let i=1, j=i+step, step=1, based on which the following comparison procedure can be performed:
and step h1, judging whether j is greater than 2m+2n+1, if yes, exiting the comparison process, and otherwise, executing step h2.
And step h2, comparing the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, diff [ i ] of the peripheral block Ai may be noted as 1. After step h2, step h3 is performed.
Step h3, judging whether m <= j < m+n is true; if yes, step = H/16; otherwise, it is determined whether m+n <= j < m+n+2 is true; if yes, step = 1; otherwise, it is determined whether m+n+2 <= j < 2m+n+1 is true; if yes, step = 1; otherwise, it is determined whether 2m+n+1 <= j < 2m+2n+1 is true; if yes, step = H/16; otherwise, step remains unchanged.
Step h4, let i=j, j=j+step, and then return to step h1 for processing.
After the above processing, after the comparison process is exited, the motion information angle prediction mode in the motion information prediction mode candidate list may be subjected to the duplication checking processing according to the comparison result (i.e., the value of Diff).
For the horizontal prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take the values m+n-H/16, m+n-H/8 and m+n-H/16×3. If the values of these Diff are all 0, mode[0]=0 can be recorded, and the motion information M[0] is recorded as the motion information of Am+n; otherwise, mode[0]=1 can be recorded.
For the vertical prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may take all integers in [m+n+2, 2m+n]. If the values of these Diff are all 0, mode[1]=0 is recorded, and the motion information M[1] can be recorded as the motion information of Am+n+2; otherwise, mode[1]=1 can be recorded.
For the horizontal upward prediction mode, the values of a plurality of Diff[i] are judged. The value of i may be m+n-H/8, m+n-H/16 and m+n, and all integers in [m+n+1, 2m+n-1]. If these Diff values are all 0, mode[2]=0 is recorded, and the motion information M[2] is recorded as the motion information of Am+n+1; otherwise, mode[2]=1 is recorded.
For the horizontal downward prediction mode, the values of a plurality of Diff[i] can be determined. Illustratively, i may be m, m+H/16 and m+H/8, and all integers in [1, m-1]. If the values of these Diff are all 0, mode[3]=0 can be recorded, and the motion information M[3] is recorded as the motion information of Am; otherwise, mode[3]=1 is recorded.
For the vertical rightward prediction mode, the values of a plurality of Diff[i] are judged, where i may be 2m+n+2, 2m+n+2+H/16 and 2m+n+2+H/8, and all integers in [m+n+3, 2m+n+1]. If the values of these Diff are all 0, mode[4]=0 is recorded, and the motion information M[4] is recorded as the motion information of A2m+n+2; otherwise, mode[4]=1 is recorded.
Based on this, all motion information angle prediction modes whose mode value is 0 are found first, such as mode[0] = 0, mode[1] = 0, mode[2] = 0. Then the motion information M[0] and the motion information M[1] are compared: if they are the same, the horizontal prediction mode and the vertical prediction mode are repeated; if they are different, the two modes are not repeated. The remaining pairs are checked in the same way and are not described again here.
Application scenario 5: the width W of the current block is smaller than 32, the height H is smaller than 32, and m and n are W/4 and H/4, respectively. Let i = 1, step = 1, and j = i + step; on this basis, the following comparison procedure can be performed:
Step s1: judge whether j is greater than 2m+2n+1; if yes, exit the comparison process; otherwise, execute step s2.
Step s2: compare the motion information of the peripheral block Ai with the motion information of the peripheral block Aj.
Illustratively, if the motion information of the peripheral block Ai is the same as that of the peripheral block Aj, Diff[i] of the peripheral block Ai may be noted as 0; if the motion information of the peripheral block Ai is different from that of the peripheral block Aj, Diff[i] of the peripheral block Ai may be noted as 1. After step s2, step s3 is performed.
Step s3: let i = j and j = j + step, and then return to step s1.
After the comparison process is exited, the motion information angle prediction modes in the motion information prediction mode candidate list may be subjected to duplication checking according to the comparison result (i.e., the values of Diff).
For the horizontal prediction mode, the values of a plurality of Diff[i] can be determined, where i takes every integer in [m+1, m+n-1]. If these Diff values are all 0, mode[0] = 0 can be noted and the motion information M[0] can be noted as Am+n; otherwise, mode[0] = 1.
For the vertical prediction mode, the values of a plurality of Diff[i] can be determined, where i takes every integer in [m+n+2, 2m+n]. If these Diff values are all 0, mode[1] = 0 can be noted and the motion information M[1] is noted as Am+n+2; otherwise, mode[1] = 1.
For the horizontal upward prediction mode, the values of a plurality of Diff[i] can be determined, where i takes every integer in [m+2, 2m+n-1]. If these Diff values are all 0, mode[2] = 0 can be noted and the motion information M[2] is noted as Am+n+1; otherwise, mode[2] = 1.
For the horizontal downward prediction mode, the values of a plurality of Diff[i] can be determined, where i takes every integer in [1, m+n-2]. If these Diff values are all 0, mode[3] = 0 can be noted and the motion information M[3] can be noted as Am; otherwise, mode[3] = 1.
For the vertical rightward prediction mode, the values of a plurality of Diff[i] can be determined, where i takes every integer in [m+n+3, 2m+2n]. If these Diff values are all 0, mode[4] = 0 can be noted and the motion information M[4] can be noted as A2m+n+2; otherwise, mode[4] = 1.
Based on this, all motion information angle prediction modes whose mode value is 0 are found first, such as mode[0] = 0, mode[1] = 0, mode[2] = 0. Then the motion information M[0] and the motion information M[1] are compared: if they are the same, the horizontal prediction mode and the vertical prediction mode are repeated; if they are different, the two modes are not repeated. The remaining pairs are checked in the same way and are not described again here.
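The per-mode checks in the application scenarios above all follow one pattern: each angle prediction mode k has a set of Diff indices that must all be 0 for the mode to be marked mode[k] = 0, in which case its representative motion information M[k] is taken from one designated peripheral block, and the modes marked 0 are then compared pairwise through their M values. A minimal sketch, with the index sets and representative block indices passed in as parameters rather than hard-coded (the parameter names are illustrative assumptions, not terms from the original description):

```python
def check_angle_modes(diff, A, index_sets, representative, same_motion):
    """diff           : Diff values produced by the comparison loop
       A              : mapping from index k to the motion information of peripheral block Ak
       index_sets     : index_sets[k] is the iterable of Diff indices that must all be 0 for mode k
       representative : representative[k] is the peripheral-block index whose motion information becomes M[k]
    Returns (mode, M, duplicates), where duplicates lists pairs of repeated modes."""
    mode, M = {}, {}
    for k, indices in index_sets.items():
        if all(diff.get(i, 1) == 0 for i in indices):
            mode[k] = 0
            M[k] = A[representative[k]]
        else:
            mode[k] = 1
    zero_modes = [k for k, v in mode.items() if v == 0]
    duplicates = []
    for x in range(len(zero_modes)):
        for y in range(x + 1, len(zero_modes)):
            a, b = zero_modes[x], zero_modes[y]
            if same_motion(M[a], M[b]):
                duplicates.append((a, b))   # e.g. the horizontal and vertical modes repeat
    return mode, M, duplicates
```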
Example 7: based on the same application concept as the above method, another encoding and decoding method is provided in the embodiments of the present application, and the encoding and decoding method can be applied to an encoding end or a decoding end. In one example, if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
Illustratively, determining the motion information of the current block according to the motion information angle prediction mode includes: determining a selection condition of the current block for acquiring motion information according to the motion information angle prediction mode and the size of the current block; the selection condition is a first selection condition or a second selection condition, wherein the first selection condition is that motion information selected from motion information of the peripheral matching block is not allowed to be bidirectional motion information, and the second selection condition is that motion information selected from motion information of the peripheral matching block is allowed to be bidirectional motion information. And determining the subarea division information of the current block according to the motion information angle prediction mode and the size of the current block. And selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle corresponding to the motion information angle prediction mode. And determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching blocks.
Not allowing bidirectional motion information means: if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, only the forward motion information or the backward motion information in the bidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region.
Allowing bidirectional motion information means: if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information of the peripheral matching block is allowed to be selected as the motion information of the current block or the sub-region.
Illustratively, determining the selection condition used by the current block for acquiring motion information according to the motion information angle prediction mode and the size of the current block may include, but is not limited to, the following. If the size of the current block satisfies: the width is greater than or equal to the preset size parameter and the height is greater than or equal to the preset size parameter, then for any motion information angle prediction mode the second selection condition is determined as the selection condition. If the size of the current block satisfies: the width is smaller than the preset size parameter and the height is greater than the preset size parameter, then when the motion information angle prediction mode is the vertical prediction mode, the selection condition is determined to be the second selection condition; when the motion information angle prediction mode is a prediction mode other than the vertical prediction mode, the selection condition is determined to be the first selection condition. If the size of the current block satisfies: the width is greater than the preset size parameter and the height is smaller than the preset size parameter, then when the motion information angle prediction mode is the horizontal prediction mode, the selection condition is determined to be the second selection condition; when the motion information angle prediction mode is a prediction mode other than the horizontal prediction mode, the selection condition is determined to be the first selection condition. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is smaller than the preset size parameter, then for any motion information angle prediction mode the first selection condition is determined as the selection condition. If the size of the current block satisfies: the height is smaller than the preset size parameter and the width is equal to the preset size parameter, or the height is equal to the preset size parameter and the width is smaller than the preset size parameter, then for any motion information angle prediction mode the first selection condition is determined as the selection condition.
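A minimal sketch of the size-dependent rule just described, assuming a preset size parameter S (8 in the later examples) and plain string names for the angle prediction modes; UNI denotes the first selection condition (bidirectional motion information not allowed) and BI the second (allowed). The function and mode names are illustrative only.

```python
UNI = "first_selection_condition"    # bidirectional motion information not allowed
BI = "second_selection_condition"    # bidirectional motion information allowed

def selection_condition(mode, W, H, S=8):
    """mode is one of 'horizontal', 'vertical', 'horizontal_up',
    'horizontal_down', 'vertical_right'."""
    if W >= S and H >= S:
        return BI                                   # any angle prediction mode
    if W < S and H > S:
        return BI if mode == "vertical" else UNI
    if W > S and H < S:
        return BI if mode == "horizontal" else UNI
    # remaining cases: both W and H below S, or one equals S while the other is below S
    return UNI
```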
In one example, determining the sub-region division information of the current block according to the motion information angle prediction mode and the size of the current block may include, but is not limited to:
when the motion information angle prediction mode is a horizontal upward prediction mode, a horizontal downward prediction mode or a vertical rightward prediction mode, if the width of the current block is greater than or equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4;
when the motion information angle prediction mode is a horizontal prediction mode, if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4 (the height of the current block may be larger than, equal to, or smaller than the preset size parameter); if the width of the current block is greater than the preset size parameter, the size of the sub-region is W×4 (the width of the current block by 4), or the size of the sub-region is 4×4 (the height of the current block may be larger than, equal to, or smaller than the preset size parameter); if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8.
When the motion information angle prediction mode is a vertical prediction mode, if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4 (the width of the current block may be larger than, equal to, or smaller than the preset size parameter); if the height of the current block is greater than the preset size parameter, the size of the sub-region is 4×H (4 by the height of the current block), or the size of the sub-region is 4×4 (the width of the current block may be larger than, equal to, or smaller than the preset size parameter); if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8.
In one example, the preset size parameter may be 8.
In one example, when the preset size parameter is 8, the determination of the sub-region division and selection condition of the current block may be described with reference to table 1.
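Under the same assumptions (S = 8, string mode names), the sub-region division rule can be sketched as follows; the function returns the sub-region dimensions as (width, height). The 4×4 fallback for the case where one dimension equals S while the other is below S is not spelled out above and is an assumption consistent with Example 8.

```python
def sub_region_size(mode, W, H, S=8):
    """Sketch of the sub-region division rule; returns (sub_width, sub_height)."""
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if (W >= S and H >= S) else (4, 4)
    if mode == "horizontal":
        if W < S:
            return (4, 4)
        if W > S:
            return (W, 4)            # one W x 4 strip (the text also allows 4x4 here)
        return (8, 8) if H >= S else (4, 4)   # W == S; the 4x4 branch is assumed
    if mode == "vertical":
        if H < S:
            return (4, 4)
        if H > S:
            return (4, H)            # one 4 x H strip (the text also allows 4x4 here)
        return (8, 8) if W >= S else (4, 4)   # H == S; the 4x4 branch is assumed
    raise ValueError("unknown motion information angle prediction mode")
```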
In one example, determining the motion information of the current block according to the selection condition, the sub-region division information, and the motion information of the surrounding matching block may include, but is not limited to:
dividing the current block into at least one sub-region according to the sub-region dividing information;
Selecting a peripheral matching block corresponding to the subarea from peripheral matching blocks of the current block according to the motion information angle prediction mode aiming at each subarea of the current block, and determining the motion information of the subarea according to the motion information of the peripheral matching block corresponding to the subarea and the selection condition;
and determining the motion information of the at least one sub-region as the motion information of the current block.
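Putting the previous two sketches together, the derivation of the motion information of the current block could look roughly as follows. find_matching_block is a placeholder for the angle-based lookup of the peripheral matching block described above, and the motion-information attributes (is_bidirectional, forward, backward) are assumed names, not terms from the original description.

```python
def derive_block_motion(current_block, mode, W, H, S=8):
    """Sketch: divide the current block into sub-regions, pick the peripheral matching
    block pointed to by the preset angle of `mode` for each sub-region, and apply the
    selection condition to its motion information."""
    cond = selection_condition(mode, W, H, S)
    sub_w, sub_h = sub_region_size(mode, W, H, S)
    motion = {}
    for y in range(0, H, sub_h):
        for x in range(0, W, sub_w):
            peer = find_matching_block(current_block, mode, x, y)   # placeholder lookup
            mi = peer.motion_info
            if cond == UNI and mi.is_bidirectional:
                # first selection condition: keep only the forward or the backward part
                mi = mi.forward if mi.forward is not None else mi.backward
            motion[(x, y)] = mi       # motion information of the sub-region at (x, y)
    return motion
```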
In one example, determining the motion information of the current block according to the motion information angle prediction mode includes: determining a selection condition used by the current block for acquiring motion information according to the size of the current block, where the selection condition is the second selection condition, i.e., the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information; determining the sub-region division information of the current block according to the size of the current block, where the sub-region division information of the current block includes: the size of the sub-region of the current block is 8×8; selecting, from the peripheral blocks of the current block, the peripheral matching block pointed to by the preset angle corresponding to the motion information angle prediction mode; and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block.
In one example, before determining the target motion information prediction mode of the current block, the motion information prediction modes of the current block may also be obtained, where the motion information prediction modes include at least motion information angle prediction modes. Duplication checking is then performed on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplication checking, that is, repeated motion information angle prediction modes are removed.
Illustratively, performing duplication checking on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplication checking may include, but is not limited to: determining the motion information angle prediction modes to be duplicate-checked; and, for any first motion information angle prediction mode to be duplicate-checked and any second motion information angle prediction mode to be duplicate-checked, performing the following steps: determining first motion information according to the first motion information angle prediction mode to be duplicate-checked; determining second motion information according to the second motion information angle prediction mode to be duplicate-checked; and then, if the first motion information and the second motion information are the same, determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated.
In one example, determining the motion information angle prediction modes to be duplicate-checked may include, but is not limited to: for any motion information angle prediction mode, selecting the peripheral matching blocks pointed to by the preset angle of that motion information angle prediction mode from the peripheral blocks of the current block; selecting a plurality of peripheral matching blocks to be traversed from these peripheral matching blocks; and, if the motion information of all traversed peripheral matching blocks is the same, determining that motion information angle prediction mode as a motion information angle prediction mode to be duplicate-checked.
In one example, after determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated, the repeated first motion information angle prediction mode may also be removed; alternatively, the repeated second motion information angle prediction mode is removed.
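A rough sketch of this duplication check, under the assumption that the motion information of the traversed peripheral matching blocks for every candidate mode is already available; the parameter names and the choice of removing the later of two repeated modes are illustrative only (the text allows removing either one).

```python
def dedup_angle_modes(candidate_modes, traversed_motion, same_motion):
    """candidate_modes  : motion information angle prediction modes of the current block
       traversed_motion : traversed_motion[mode] is the list of motion information of the
                          peripheral matching blocks traversed for that mode
    Returns the candidate modes with repeated angle prediction modes removed."""
    # a mode is duplicate-checked only if all its traversed blocks share the same motion information
    checkable, motion_of = [], {}
    for mode in candidate_modes:
        blocks = traversed_motion[mode]
        if blocks and all(same_motion(b, blocks[0]) for b in blocks[1:]):
            checkable.append(mode)
            motion_of[mode] = blocks[0]
    kept = list(candidate_modes)
    for i in range(len(checkable)):
        for j in range(i + 1, len(checkable)):
            a, b = checkable[i], checkable[j]
            if a in kept and b in kept and same_motion(motion_of[a], motion_of[b]):
                kept.remove(b)         # the two modes are repeated; drop one of them
    return kept
```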
In one example, after duplication checking is performed on the motion information angle prediction modes of the current block to obtain the motion information angle prediction modes after duplication checking, motion information angle prediction modes for a secondary duplication check may further be determined from the motion information angle prediction modes after duplication checking. The motion information angle prediction modes after duplication checking may include a first part and a second part, where the first part may be motion information angle prediction modes that have not undergone duplication checking, and the second part may include the motion information angle prediction modes that remain after repeated motion information angle prediction modes have been removed; the motion information angle prediction modes used for the secondary duplication check may be the second part. Duplication checking is then performed between the motion information angle prediction modes for the secondary duplication check and the to-be-checked inter prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block, so as to obtain the motion information prediction modes after duplication checking.
In one example, the process of performing duplication checking between the motion information angle prediction modes for the secondary duplication check and the to-be-checked inter prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block may include, but is not limited to: for any motion information angle prediction mode for the secondary duplication check, acquiring the motion information of that motion information angle prediction mode; and, for example, if the motion information of that motion information angle prediction mode is the same as the motion information of a to-be-checked inter prediction mode, determining that the motion information angle prediction mode is repeated with that to-be-checked inter prediction mode.
In an example, the to-be-checked inter prediction mode may be any one of all the non-angle inter prediction modes corresponding to the current block, or may be any one of a part of the non-angle inter prediction modes corresponding to the current block, which is not limited herein.
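A minimal sketch of this secondary duplication check; motion_of_angle_mode and motion_of_inter_mode are assumed accessors for the motion information of the respective modes, not names from the original description.

```python
def secondary_dedup(secondary_angle_modes, other_inter_modes,
                    motion_of_angle_mode, motion_of_inter_mode, same_motion):
    """Drop a motion information angle prediction mode when its motion information equals
    that of a to-be-checked non-angle inter prediction mode."""
    kept = []
    for angle_mode in secondary_angle_modes:
        mi = motion_of_angle_mode(angle_mode)
        if any(same_motion(mi, motion_of_inter_mode(m)) for m in other_inter_modes):
            continue                   # repeated with a to-be-checked inter prediction mode
        kept.append(angle_mode)
    return kept
```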
In one example, after duplication checking is performed on the motion information angle prediction modes of the current block and the motion information angle prediction modes after duplication checking are obtained, the target motion information prediction mode of the current block can be determined. If the target motion information prediction mode is a motion information angle prediction mode, the motion information of the current block is determined according to the motion information angle prediction mode, and the predicted value of the current block is determined according to the motion information of the current block.
The motion compensation process in the above embodiment is described below in connection with several specific embodiments.
Example 8: referring to fig. 9A, the width W (4) of the current block multiplied by the height H (8) of the current block is 32 or less, unidirectional motion compensation (Uni) is performed at an angle for each sub-region 4*4 within the current block, and bidirectional motion information is not allowed. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the peripheral matching block is bidirectional motion information, instead of determining bidirectional motion information as motion information of the sub-region, forward motion information or backward motion information in the bidirectional motion information is determined as motion information of the sub-region.
As shown in Table 1, embodiment 8 corresponds to the row of Table 1 in which the width multiplied by the height is less than or equal to 32: for any angle prediction mode, the sub-block division size is 4×4 and the selection condition is unidirectional.
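Using the earlier sketches of selection_condition and sub_region_size (and their assumptions), the 4×8 case of this embodiment can be exercised as follows; this is illustrative only and not part of the described method.

```python
# 4x8 block of Example 8: W * H = 32, so every angle prediction mode uses
# 4x4 sub-regions and the first (unidirectional) selection condition.
W, H = 4, 8
for mode in ("horizontal", "vertical", "horizontal_up", "horizontal_down", "vertical_right"):
    assert selection_condition(mode, W, H) == UNI
    assert sub_region_size(mode, W, H) == (4, 4)
```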
According to fig. 9A, when the size of the current block is 4×8 and the target motion information prediction mode of the current block is the horizontal mode, two sub-regions with identical sizes are divided. One 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1: if the motion information of the peripheral matching block A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block A1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. The other 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2: if the motion information of the peripheral matching block A2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block A2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 9A, when the size of the current block is 4×8 and the target motion information prediction mode of the current block is the vertical mode, two sub-regions with identical sizes are divided; both 4×4 sub-regions correspond to the peripheral matching block B1, and the motion information of each 4×4 sub-region is determined according to the motion information of B1: if the motion information of the peripheral matching block B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block B1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 9A, when the size of the current block is 4×8 and the target motion information prediction mode of the current block is the horizontal upward mode, two sub-regions with identical sizes are divided. One 4×4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E: if the motion information of the peripheral matching block E is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block E is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. The other 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1: if the motion information of the peripheral matching block A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block A1 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 9A, when the size of the current block is 4×8 and the target motion information prediction mode of the current block is the horizontal downward mode, two sub-regions with identical sizes are divided. One 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2: if the motion information of the peripheral matching block A2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block A2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. The other 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3: if the motion information of the peripheral matching block A3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block A3 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 9A, when the size of the current block is 4×8 and the target motion information prediction mode of the current block is the vertical rightward mode, two sub-regions with identical sizes are divided. One 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2: if the motion information of the peripheral matching block B2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block B2 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region. The other 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3: if the motion information of the peripheral matching block B3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block B3 is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
Example 9: referring to fig. 9B, if the width W of the current block is smaller than 8 and the height H of the current block is greater than 8, each sub-region within the current block may be motion-compensated in the following manner:
If the angle prediction mode is the vertical prediction mode, each 4×H sub-region is motion-compensated at the vertical angle, and bidirectional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
If the angle prediction mode is another angle prediction mode (e.g., the horizontal prediction mode, the horizontal upward prediction mode, the horizontal downward prediction mode, the vertical rightward prediction mode, etc.), unidirectional motion compensation may be performed at the corresponding angle for each 4×4 sub-region within the current block, and bidirectional motion information is not allowed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
As shown in Table 1, embodiment 9 corresponds to the row of Table 1 in which the width is smaller than 8 and the height is greater than 8: for the vertical prediction mode, the sub-block division size is 4×H (4 by the height) and the selection condition allows bidirectional motion information; for the other angle prediction modes, the sub-block division size is 4×4 and the selection condition is unidirectional.
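The 4×16 case can likewise be exercised with the earlier sketches and their assumptions (illustrative only):

```python
# 4x16 block of Example 9: the vertical mode uses one 4 x 16 strip with
# bidirectional motion information allowed; the other angle prediction modes
# use 4x4 sub-regions with the unidirectional selection condition.
W, H = 4, 16
assert selection_condition("vertical", W, H) == BI
assert sub_region_size("vertical", W, H) == (4, 16)
for mode in ("horizontal", "horizontal_up", "horizontal_down", "vertical_right"):
    assert selection_condition(mode, W, H) == UNI
    assert sub_region_size(mode, W, H) == (4, 4)
```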
According to fig. 9B, when the size of the current block is 4×16 and the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4×4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4. For any one of A1 to A4, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9B, when the size of the current block is 4×16 and the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions with a size of 4×4 may be divided; each 4×4 sub-region corresponds to the peripheral matching block B1, and the motion information of each 4×4 sub-region is determined according to the motion information of B1. If the motion information of the peripheral matching block B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region. Since the motion information of the four sub-regions is the same, in this embodiment the current block may instead not be divided into sub-regions: the current block, taken as one sub-region, corresponds to the peripheral matching block B1, and the motion information of the current block is determined according to the motion information of B1.
According to fig. 9B, when the size of the current block is 4×16 and the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one 4×4 sub-region corresponds to the peripheral matching block A1, and its motion information is determined according to the motion information of A1; one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3. For any one of E, A1, A2 and A3, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9B, when the size of the current block is 4×16 and the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4×4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4; one 4×4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5. For any one of A2 to A5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9B, when the size of the current block is 4×16 and the target motion information prediction mode of the current block is the vertical rightward mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4×4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4; one 4×4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5. For any one of B2 to B5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
Example 10: referring to fig. 9C, if the width W of the current block is greater than 8 and the height H of the current block is less than 8, each sub-region within the current block may be motion compensated in the following manner:
If the angle prediction mode is the horizontal prediction mode, each W×4 sub-region is motion-compensated at the horizontal angle, and bidirectional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
If the angle prediction mode is another angle prediction mode, unidirectional motion compensation may be performed at the corresponding angle for each 4×4 sub-region within the current block, and bidirectional motion information is not allowed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the sub-region.
As shown in Table 1, embodiment 10 corresponds to the row of Table 1 in which the width is greater than 8 and the height is smaller than 8: for the horizontal prediction mode, the sub-block division size is W×4 (the width by 4) and the selection condition allows bidirectional motion information; for the other angle prediction modes, the sub-block division size is 4×4 and the selection condition is unidirectional.
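As with the previous embodiments, the 16×4 case can be exercised with the earlier sketches and their assumptions (illustrative only):

```python
# 16x4 block of Example 10: the horizontal mode uses one 16 x 4 strip with
# bidirectional motion information allowed; the other angle prediction modes
# use 4x4 sub-regions with the unidirectional selection condition.
W, H = 16, 4
assert selection_condition("horizontal", W, H) == BI
assert sub_region_size("horizontal", W, H) == (16, 4)
for mode in ("vertical", "horizontal_up", "horizontal_down", "vertical_right"):
    assert selection_condition(mode, W, H) == UNI
    assert sub_region_size(mode, W, H) == (4, 4)
```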
According to fig. 9C, when the size of the current block is 16×4 and the target motion information prediction mode of the current block is the horizontal mode, 4 sub-regions of size 4×4 may be divided; each 4×4 sub-region corresponds to the peripheral matching block A1, and the motion information of each 4×4 sub-region is determined according to the motion information of A1. If the motion information of the peripheral matching block A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block A1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the corresponding sub-region. Since the motion information of the four sub-regions is the same, in this embodiment the current block may instead not be divided into sub-regions: the current block, taken as one sub-region, corresponds to the peripheral matching block A1, and the motion information of the current block is determined according to the motion information of A1.
According to fig. 9C, when the size of the current block is 16×4 and the target motion information prediction mode of the current block is the vertical mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4×4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4. For any one of B1 to B4, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9C, when the size of the current block is 16×4 and the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block E, and its motion information is determined according to the motion information of E; one 4×4 sub-region corresponds to the peripheral matching block B1, and its motion information is determined according to the motion information of B1; one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3. For any one of E, B1, B2 and B3, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9C, when the size of the current block is 16×4 and the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block A2, and its motion information is determined according to the motion information of A2; one 4×4 sub-region corresponds to the peripheral matching block A3, and its motion information is determined according to the motion information of A3; one 4×4 sub-region corresponds to the peripheral matching block A4, and its motion information is determined according to the motion information of A4; one 4×4 sub-region corresponds to the peripheral matching block A5, and its motion information is determined according to the motion information of A5. For any one of A2 to A5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
According to fig. 9C, when the size of the current block is 16×4 and the target motion information prediction mode of the current block is the vertical rightward mode, 4 sub-regions with a size of 4×4 are divided: one 4×4 sub-region corresponds to the peripheral matching block B2, and its motion information is determined according to the motion information of B2; one 4×4 sub-region corresponds to the peripheral matching block B3, and its motion information is determined according to the motion information of B3; one 4×4 sub-region corresponds to the peripheral matching block B4, and its motion information is determined according to the motion information of B4; one 4×4 sub-region corresponds to the peripheral matching block B5, and its motion information is determined according to the motion information of B5. For any one of B2 to B5, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the forward motion information or the backward motion information in the bidirectional motion information is determined as the motion information of the corresponding sub-region.
Example 11: the width W of the current block is equal to 8 and the height H of the current block is equal to 8; motion compensation is performed on each 8×8 sub-region (i.e., the sub-region is the current block itself) within the current block at the corresponding angle, and bidirectional motion information is allowed when motion compensation is performed. For example, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region; if the motion information of the peripheral matching block is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
If the sub-region corresponds to a plurality of peripheral matching blocks along the corresponding angle, the motion information of any one of these peripheral matching blocks may be selected as the motion information of the sub-region.
For example, referring to fig. 9D, for the horizontal prediction mode, either the motion information of the peripheral matching block A1 or the motion information of the peripheral matching block A2 may be selected. Referring to fig. 9E, for the vertical prediction mode, either the motion information of the peripheral matching block B1 or the motion information of the peripheral matching block B2 may be selected. Referring to fig. 9F, for the horizontal upward prediction mode, the motion information of the peripheral matching block E, the motion information of the peripheral matching block B1, or the motion information of the peripheral matching block A1 may be selected. Referring to fig. 9G, for the horizontal downward prediction mode, the motion information of the peripheral matching block A2, the motion information of the peripheral matching block A3, or the motion information of the peripheral matching block A4 may be selected. Referring to fig. 9H, for the vertical rightward prediction mode, the motion information of the peripheral matching block B2, the motion information of the peripheral matching block B3, or the motion information of the peripheral matching block B4 may be selected.
Referring to Table 1, embodiment 11 corresponds to the row of Table 1 in which the width is equal to 8 and the height is equal to 8: for any angle prediction mode, the sub-block division size is 8×8 and the selection condition allows bidirectional motion information.
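For the 8×8 case the sub-region is the current block itself and several peripheral matching blocks may lie on the prediction angle. The following sketch lists the candidate blocks as described for figs. 9D to 9H and simply picks the first one; the block labels are taken from the description above, while the dictionary, the function name and the choice of the first candidate are illustrative assumptions.

```python
# Candidate peripheral matching blocks for an 8x8 block, per the description of figs. 9D-9H.
CANDIDATES_8X8 = {
    "horizontal":      ["A1", "A2"],
    "vertical":        ["B1", "B2"],
    "horizontal_up":   ["E", "B1", "A1"],
    "horizontal_down": ["A2", "A3", "A4"],
    "vertical_right":  ["B2", "B3", "B4"],
}

def pick_matching_block_8x8(mode, motion_info_of):
    """Pick one peripheral matching block for the single 8x8 sub-region.
    motion_info_of: callable(label) -> motion information of that peripheral block
    (bidirectional motion information is allowed for 8x8 blocks)."""
    label = CANDIDATES_8X8[mode][0]    # any of the listed candidates may be chosen
    return motion_info_of(label)
```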
According to fig. 9D, when the size of the current block is 8×8 and the target motion information prediction mode of the current block is the horizontal mode, a sub-region with a size of 8×8 is divided, the sub-region corresponds to the peripheral matching block A1, the motion information of the sub-region is determined according to the motion information of A1, and if the motion information of A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the A1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block A2, the motion information of the sub-region is determined according to the motion information of the A2, and if the motion information of the A2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the A2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea.
According to fig. 9E, when the size of the current block is 8×8 and the target motion information prediction mode of the current block is the vertical mode, a sub-region with a size of 8×8 is divided, the sub-region corresponds to the peripheral matching block B1, the motion information of the sub-region is determined according to the motion information of B1, and if the motion information of B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block B2, the motion information of the sub-region is determined according to the motion information of the B2, and if the motion information of the B2 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B2 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the sub-region.
According to fig. 9F, when the size of the current block is 8×8 and the target motion information prediction mode of the current block is a horizontal up mode, dividing a sub-region with a size of 8×8, the sub-region corresponds to a peripheral matching block E, determining motion information of the sub-region according to the motion information of E, and if the motion information of E is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-region. And if the motion information of E is bidirectional motion information, determining the bidirectional motion information as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block B1, the motion information of the sub-region is determined according to the motion information of the B1, and if the motion information of the B1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B1 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block A1, the motion information of the sub-region is determined according to the motion information of the A1, and if the motion information of the A1 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the A1 is bidirectional motion information, determining the bidirectional motion information as the motion information of the subarea.
According to fig. 9G, when the size of the current block is 8×8 and the target motion information prediction mode of the current block is the horizontal down mode, dividing a sub-region with a size of 8×8, the sub-region corresponds to the peripheral matching block A2, determining motion information of the sub-region according to the motion information of A2, and if the motion information of A2 is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-region. And if the motion information of the A2 is bidirectional motion information, determining the bidirectional motion information as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block A3, the motion information of the sub-region is determined according to the motion information of the A3, and if the motion information of the A3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of the A3 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block A4, the motion information of the sub-region is determined according to the motion information of the A4, and if the motion information of the A4 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the A4 is bidirectional motion information, determining the bidirectional motion information as the motion information of the subarea.
According to fig. 9H, when the size of the current block is 8×8 and the target motion information prediction mode of the current block is the vertical rightward mode, dividing a sub-region with a size of 8×8, the sub-region corresponds to the peripheral matching block B2, determining motion information of the sub-region according to the motion information of B2, and if the motion information of B2 is unidirectional motion information, determining the unidirectional motion information as the motion information of the sub-region. And if the motion information of the B2 is bidirectional motion information, determining the bidirectional motion information as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block B3, the motion information of the sub-region is determined according to the motion information of the B3, and if the motion information of the B3 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B3 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea. Or the sub-region corresponds to the peripheral matching block B4, the motion information of the sub-region is determined according to the motion information of the B4, and if the motion information of the B4 is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. If the motion information of B4 is bidirectional motion information, the bidirectional motion information is determined as the motion information of the subarea.
Example 12: the width W of the current block may be equal to or greater than 16, and the height H of the current block may be equal to 8, based on which each sub-region within the current block may be motion compensated in the following manner:
and if the angle prediction mode is a horizontal prediction mode, performing motion compensation on each W4 subarea according to the horizontal angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas.
And if the angle prediction mode is other angle prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-region in the current block. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas. For each sub-region of 8 x 8, if the sub-region corresponds to a plurality of peripheral matching blocks, motion information of any peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks for the motion information of the sub-region.
For example, referring to fig. 9I, for the horizontal prediction mode, the motion information of the peripheral matching block A1 may be selected for the first W×4 sub-region, and the motion information of the peripheral matching block A2 may be selected for the second W×4 sub-region. Referring to fig. 9J, for the vertical prediction mode, the motion information of the peripheral matching block B1 or B2 may be selected for the first 8×8 sub-region, and the motion information of the peripheral matching block B3 or B4 may be selected for the second 8×8 sub-region. Other angle prediction modes are similar and will not be described in detail herein.
As shown in table 1, embodiment 12 corresponds to the case in table 1 where the width is 16 or more and the height is equal to 8: for the horizontal prediction mode, the sub-block division size is width×4 and the selection condition allows bidirectional motion information; for the other angle prediction modes, the sub-block division size is 8×8 and the selection condition allows bidirectional motion information.
According to fig. 9I, when the size of the current block is 16×8 and the target motion information prediction mode of the current block is a horizontal mode, dividing 2 sub-regions with a size of 16×4, wherein one of the 16×4 sub-regions corresponds to the peripheral matching block A1, and determining the motion information of the 16×4 sub-region according to the motion information of A1. The other 16 x 4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 16 x 4 sub-region is determined according to the motion information of A2. For the two sub-regions of 16×4, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9J, when the size of the current block is 16×8 and the target motion information prediction mode of the current block is the vertical mode, dividing 2 sub-regions with a size of 8×8, wherein one of the sub-regions with a size of 8×8 corresponds to the peripheral matching block B1 or B2, and determining the motion information of the sub-region with a size of 8×8 according to the motion information of B1 or B2. The other 8 x 8 sub-region corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 x 8 sub-region is determined according to the motion information of the B3 or B4. For the two sub-regions of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
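As a hedged illustration of how the W×4 strips of fig. 9I might be paired with the left peripheral matching blocks, the following sketch assumes that A1, A2, ... label the 4-sample left-neighbouring blocks from top to bottom; the function name and this labelling are assumptions, since the figure itself is not reproduced here.

```python
def horizontal_mode_strips(width: int, height: int):
    # Split a W x H block into W x 4 strips for the horizontal angle mode and
    # pair each strip with the left-neighbouring 4-sample block on the same
    # rows; "A1", "A2", ... is an assumed top-to-bottom labelling of fig. 9I.
    strips = []
    for y in range(0, height, 4):
        strips.append({"rect": (0, y, width, 4),
                       "matching_block": "A%d" % (y // 4 + 1)})
    return strips


# A 16x8 block yields two 16x4 strips paired with A1 and A2, as in the example.
print(horizontal_mode_strips(16, 8))
```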
Example 13: the width W of the current block may be equal to 8, and the height H of the current block may be equal to or greater than 16, based on which each sub-region within the current block may be motion compensated in the following manner:
if the angle prediction mode is a vertical prediction mode, each 4×H sub-region is motion compensated by a vertical angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas.
And if the angle prediction mode is other angle prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-region in the current block. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas. For each sub-region of 8 x 8, if the sub-region corresponds to a plurality of peripheral matching blocks, motion information of any peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks for the motion information of the sub-region.
For example, referring to fig. 9K, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4×H sub-region, and the motion information of the peripheral matching block B2 may be selected for the second 4×H sub-region. Referring to fig. 9L, for the horizontal prediction mode, the motion information of the peripheral matching block A1 or A2 may be selected for the first 8×8 sub-region, and the motion information of the peripheral matching block A1 or A2 may be selected for the second 8×8 sub-region. Other angle prediction modes are similar and will not be described in detail herein.
As shown in table 1, embodiment 13 corresponds to the case in table 1 where the height is 16 or more and the width is equal to 8: for the vertical prediction mode, the sub-block division size is 4×height and the selection condition allows bidirectional motion information; for the other angle prediction modes, the sub-block division size is 8×8 and the selection condition allows bidirectional motion information.
According to fig. 9K, when the size of the current block is 8×16 and the target motion information prediction mode of the current block is the vertical mode, dividing 2 sub-areas with a size of 4×16, wherein one of the sub-areas with a size of 4×16 corresponds to the peripheral matching block B1, and determining the motion information of the sub-area with a size of 4×16 according to the motion information of B1. The other sub-region of 4 x 16 corresponds to the peripheral matching block B2, and the motion information of the sub-region of 4 x 16 is determined according to the motion information of B2. For the two sub-regions of 4×16, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9L, when the size of the current block is 8×16 and the target motion information prediction mode of the current block is a horizontal mode, dividing 2 sub-regions with a size of 8×8, wherein one of the sub-regions with a size of 8×8 corresponds to the peripheral matching block A1 or A2, and determining the motion information of the sub-region with a size of 8×8 according to the motion information of the corresponding peripheral matching block. The other 8×8 sub-region corresponds to the peripheral matching block A1 or A2, and the motion information of the 8×8 sub-region is determined according to the motion information of the corresponding peripheral matching block. For the two sub-regions of 8×8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
Example 14: the width W of the current block may be equal to or greater than 16, and the height H of the current block may be equal to or greater than 16, based on which each sub-region within the current block may be motion compensated in the following manner:
if the angle prediction mode is a vertical prediction mode, each 4×H sub-region is motion compensated by a vertical angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas.
And if the angle prediction mode is a horizontal prediction mode, performing motion compensation on each W×4 sub-region according to the horizontal angle. Bi-directional motion information is allowed when motion compensation is performed. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas.
And if the angle prediction mode is other angle prediction modes, performing bidirectional motion compensation according to a certain angle for each 8 x 8 sub-region in the current block. For example, if the motion information of the surrounding matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as the motion information of the subareas. For each sub-region of 8 x 8, if the sub-region corresponds to a plurality of peripheral matching blocks, motion information of any peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks for the motion information of the sub-region.
Referring to fig. 9M, for the vertical prediction mode, the motion information of the peripheral matching block B1 may be selected for the first 4×H sub-region, the motion information of the peripheral matching block B2 may be selected for the second 4×H sub-region, the motion information of the peripheral matching block B3 may be selected for the third 4×H sub-region, and the motion information of the peripheral matching block B4 may be selected for the fourth 4×H sub-region. For the horizontal prediction mode, the motion information of the peripheral matching block A1 is selected for the first W×4 sub-region, the motion information of the peripheral matching block A2 is selected for the second W×4 sub-region, the motion information of the peripheral matching block A3 is selected for the third W×4 sub-region, and the motion information of the peripheral matching block A4 is selected for the fourth W×4 sub-region. Other angle prediction modes are similar and will not be described in detail herein.
As shown in table 1, embodiment 14 corresponds to the case in table 1 where the height is 16 or more and the width is 16 or more: for the vertical prediction mode, the sub-block division size is 4×height and the selection condition allows bidirectional motion information; for the horizontal prediction mode, the sub-block division size is width×4 and the selection condition allows bidirectional motion information; for the other angle prediction modes, the sub-block division size is 8×8 and the selection condition allows bidirectional motion information.
According to fig. 9M, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is the vertical mode, dividing 4 sub-areas with the size of 4×16, wherein one of the 4×16 sub-areas corresponds to the peripheral matching block B1, and determining the motion information of the 4×16 sub-area according to the motion information of the B1. One of the sub-regions of 4 x 16 corresponds to the peripheral matching block B2, and the motion information of the sub-region of 4 x 16 is determined according to the motion information of B2. One of the sub-regions of 4 x 16 corresponds to the peripheral matching block B3, and the motion information of the sub-region of 4 x 16 is determined according to the motion information of B3. One of the sub-regions of 4 x 16 corresponds to the peripheral matching block B4, and the motion information of the sub-region of 4 x 16 is determined according to the motion information of B4. For the four sub-regions of 4×16, if the motion information of the peripheral matching block is unidirectional motion information, the unidirectional motion information is determined as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9M, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is a horizontal mode, 4 sub-regions with a size of 16×4 are divided, wherein one of the 16×4 sub-regions corresponds to the peripheral matching block A1, and the motion information of the 16×4 sub-region is determined according to the motion information of A1. One 16×4 sub-region corresponds to the peripheral matching block A2, and the motion information of the 16×4 sub-region is determined according to the motion information of the A2. One 16×4 sub-region corresponds to the peripheral matching block A3, and motion information of the 16×4 sub-region is determined according to the motion information of the A3. One 16 x 4 sub-region corresponds to the peripheral matching block A4, and the motion information of the 16 x 4 sub-region is determined according to the motion information of the A4. For the four sub-regions of 16×4, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
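A compact sketch of the three splits used in embodiment 14 for a 16×16 block is given below. The labelling of B1–B4 (above the block, left to right) and A1–A4 (left of the block, top to bottom) is an assumed reading of fig. 9M, and the mode strings are illustrative only.

```python
def split_16x16(mode: str):
    # Sub-region split of a 16x16 block in embodiment 14.  "B1".."B4" label
    # the 4-sample blocks above the current block from left to right and
    # "A1".."A4" the 4-sample blocks to its left from top to bottom -- an
    # assumed reading of fig. 9M.
    if mode == "vertical":
        return [{"rect": (4 * i, 0, 4, 16), "matching_block": "B%d" % (i + 1)}
                for i in range(4)]
    if mode == "horizontal":
        return [{"rect": (0, 4 * i, 16, 4), "matching_block": "A%d" % (i + 1)}
                for i in range(4)]
    # other angle prediction modes: four 8x8 sub-regions
    return [{"rect": (8 * x, 8 * y, 8, 8)} for y in range(2) for x in range(2)]
```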
Example 15: the width W of the current block may be equal to or greater than 8, and the height H of the current block may be equal to or greater than 8, and then motion compensation is performed on each 8×8 sub-region within the current block. Referring to fig. 9N, for each sub-region of 8×8, if the sub-region corresponds to a plurality of peripheral matching blocks, motion information of any one peripheral matching block is selected from the motion information of the plurality of peripheral matching blocks for the motion information of the sub-region.
In embodiment 15, the sub-block division size is independent of the motion information angle prediction mode: as long as the width is 8 or more and the height is 8 or more, the sub-block division size is 8×8 regardless of which motion information angle prediction mode is used. The selection condition is likewise independent of the motion information angle prediction mode: as long as the width is 8 or more and the height is 8 or more, bidirectional motion information is allowed for any motion information angle prediction mode.
According to fig. 9N, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is a horizontal mode, 4 sub-regions with a size of 8×8 are divided, wherein one of the sub-regions with a size of 8×8 corresponds to the peripheral matching block A1 or A2, and the motion information of the sub-region with a size of 8×8 is determined according to the motion information of A1 or A2. One of the 8 x 8 sub-regions corresponds to the peripheral matching block A1 or A2, and the motion information of the 8 x 8 sub-region is determined according to the motion information of the A1 or A2. One of the 8 x 8 sub-regions corresponds to the peripheral matching block A3 or A4, and the motion information of the 8 x 8 sub-region is determined according to the motion information of A3 or A4. One of the 8 x 8 sub-regions corresponds to the peripheral matching block A3 or A4, and the motion information of the 8 x 8 sub-region is determined according to the motion information of A3 or A4. For the four sub-regions of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9N, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is a vertical mode, 4 sub-regions with a size of 8×8 are divided, wherein one of the sub-regions with a size of 8×8 corresponds to the peripheral matching block B1 or B2, and the motion information of the sub-region with a size of 8×8 is determined according to the motion information of B1 or B2. One of the 8 x 8 sub-regions corresponds to the peripheral matching block B1 or B2, and the motion information of the 8 x 8 sub-region is determined according to the motion information of the B1 or B2. One of the 8 x 8 sub-regions corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 x 8 sub-region is determined according to the motion information of the B3 or B4. One of the 8 x 8 sub-regions corresponds to the peripheral matching block B3 or B4, and the motion information of the 8 x 8 sub-region is determined according to the motion information of the B3 or B4. For the four sub-regions of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9N, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is the horizontal up mode, 4 sub-regions with a size of 8×8 are divided. Then, for each 8×8 sub-region, a peripheral matching block (E, B or A2) corresponding to the 8×8 sub-region may be determined, which is not limited, and motion information of the 8×8 sub-region is determined according to the motion information of the peripheral matching block. For each sub-region of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9N, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is the horizontal down mode, 4 sub-regions with a size of 8×8 are divided. Then, for each 8×8 sub-region, a peripheral matching block (A3, A5 or A7) corresponding to the 8×8 sub-region may be determined, which is not limited, and the motion information of the 8×8 sub-region is determined according to the motion information of the peripheral matching block. For each sub-region of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
According to fig. 9N, when the size of the current block is 16×16 and the target motion information prediction mode of the current block is the vertical right mode, 4 sub-regions with a size of 8×8 are divided. Then, for each 8×8 sub-region, a peripheral matching block (B3, B5 or B7) corresponding to the 8×8 sub-region may be determined, which is not limited, and the motion information of the 8×8 sub-region is determined according to the motion information of the peripheral matching block. For each sub-region of 8 x 8, if the motion information of the peripheral matching block is unidirectional motion information, determining the unidirectional motion information as the motion information of the corresponding sub-region. And if the motion information of the peripheral matching blocks is bidirectional motion information, determining the bidirectional motion information as motion information of the corresponding sub-region.
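Embodiment 15 can be summarized by the following sketch; the function names are illustrative, and returning the first candidate when a sub-region corresponds to several peripheral matching blocks is only one of the choices the embodiment permits.

```python
def example15_subregions(width: int, height: int):
    # As long as width >= 8 and height >= 8, the split is 8x8 sub-regions
    # regardless of the motion information angle prediction mode.
    assert width >= 8 and height >= 8
    return [(x, y, 8, 8)
            for y in range(0, height, 8)
            for x in range(0, width, 8)]


def pick_peripheral_motion(candidates):
    # When an 8x8 sub-region corresponds to several peripheral matching
    # blocks, any of them may be used; taking the first is an assumed choice.
    return candidates[0]
```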
Example 16: based on the same application concept as the above method, an embodiment of the present application proposes a codec device applied to a decoding end or an encoding end, as shown in fig. 10A, which is a structural diagram of the device, including:
an obtaining module 101, configured to obtain a motion information prediction mode of a current block, where the motion information prediction mode includes at least a motion information angle prediction mode; and the processing module 102 is configured to perform a duplication checking process on the motion information angle prediction mode of the current block, so as to obtain a duplication checked motion information angle prediction mode.
When performing the duplicate checking process on the motion information angle prediction mode of the current block, the processing module 102 is specifically configured to:
determine the motion information angle prediction modes to be checked for duplication; and for any two motion information angle prediction modes to be checked for duplication, namely a first motion information angle prediction mode and a second motion information angle prediction mode, perform the following steps:
determining first motion information according to the first motion information angle prediction mode to be checked for duplication;
determining second motion information according to the second motion information angle prediction mode to be checked for duplication;
and if the first motion information and the second motion information are the same, determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated.
When determining the motion information angle prediction modes to be checked for duplication, the processing module 102 is specifically configured to:
for any motion information angle prediction mode, selecting, from the peripheral blocks of the current block, the peripheral matching blocks pointed to by the preset angle of the motion information angle prediction mode;
selecting a plurality of peripheral matching blocks to be traversed from the peripheral matching blocks;
and if the motion information of the traversed peripheral matching blocks is the same, determining the motion information angle prediction mode as a motion information angle prediction mode to be checked for duplication.
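A minimal sketch of this determination, and of the pairwise comparison described above, is given below; the list of traversed motion information and the equality test on motion information are assumed representations, not definitions from the embodiment.

```python
def needs_duplicate_check(traversed_motion_infos) -> bool:
    # A mode is marked "to be checked for duplication" only when the motion
    # information of all traversed peripheral matching blocks is the same.
    if not traversed_motion_infos:
        return False
    first = traversed_motion_infos[0]
    return all(mi == first for mi in traversed_motion_infos)


def modes_are_duplicates(first_mode_motion_info, second_mode_motion_info) -> bool:
    # Two to-be-checked modes are repeated when the motion information
    # determined from them is identical.
    return first_mode_motion_info == second_mode_motion_info
```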
The processing module 102 is further configured to, after determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated: removing the repeated first motion information angle prediction mode; or, removing the repeated second motion information angle prediction mode.
When performing the duplicate checking processing on the motion information angle prediction mode of the current block, the processing module 102 is further configured to: determine, from the duplicate-checked motion information angle prediction modes, the motion information angle prediction modes for secondary duplicate checking; the duplicate-checked motion information angle prediction modes comprise a first part of motion information angle prediction modes and a second part of motion information angle prediction modes, wherein the first part of motion information angle prediction modes are motion information angle prediction modes that are not subjected to duplicate checking, the second part of motion information angle prediction modes comprise the motion information angle prediction modes remaining after the repeated ones are removed from the duplicate-checked motion information angle prediction modes, and the motion information angle prediction modes for secondary duplicate checking are the second part of motion information angle prediction modes; and perform duplicate checking processing between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter-frame prediction modes, other than the motion information angle prediction modes, of the current block, to obtain the duplicate-checked motion information prediction modes.
When performing the duplicate checking processing between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter-frame prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block, the processing module 102 is specifically configured to: for any motion information angle prediction mode for secondary duplicate checking, acquire the motion information of the motion information angle prediction mode;
if the motion information of the motion information angle prediction mode is the same as the motion information of the to-be-checked inter-frame prediction mode, determine that the motion information angle prediction mode and the to-be-checked inter-frame prediction mode are repeated.
In one example, the to-be-checked inter-frame prediction mode is any one of all the non-angle inter-frame prediction modes corresponding to the current block, or the to-be-checked inter-frame prediction mode is any one of some of the non-angle inter-frame prediction modes corresponding to the current block.
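Under the same assumptions as the previous sketch, the secondary duplicate checking reduces to a comparison of motion information, as sketched below; the function name is hypothetical.

```python
def is_secondary_duplicate(angle_mode_motion_info, non_angle_motion_infos) -> bool:
    # An angle mode kept after the first pass is repeated if its motion
    # information equals that of any to-be-checked non-angle inter
    # prediction mode of the current block.
    return any(angle_mode_motion_info == mi for mi in non_angle_motion_infos)
```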
The apparatus further comprises: a determining module, configured to determine a target motion information prediction mode of the current block; if the target motion information prediction mode of the current block is a motion information angle prediction mode, then:
determining the motion information of the current block according to the motion information angle prediction mode;
And determining the predicted value of the current block according to the motion information of the current block.
Referring to fig. 10B, there is shown a schematic diagram of another codec device, which includes:
a first determining module 103, configured to determine, if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, motion information of the current block according to the motion information angle prediction mode; the second determining module 104 is configured to determine a prediction value of the current block according to the motion information of the current block.
When determining the motion information of the current block according to the motion information angle prediction mode, the first determining module 103 is specifically configured to: determine a selection condition of the current block for acquiring motion information according to the motion information angle prediction mode and the size of the current block; the selection condition is a first selection condition or a second selection condition, wherein the first selection condition is that the motion information selected from the motion information of the peripheral matching blocks is not allowed to be bidirectional motion information, and the second selection condition is that the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information;
determining subarea division information of the current block according to the motion information angle prediction mode and the size of the current block; selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle corresponding to the motion information angle prediction mode; and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block.
The first determining module 103 is specifically configured to determine, according to the motion information angle prediction mode and the size of the current block, a selection condition of the current block for obtaining motion information when:
if the size of the current block satisfies: the width is larger than or equal to a preset size parameter, the height is larger than or equal to a preset size parameter, and the selection condition is determined to be a second selection condition according to any motion information angle prediction mode;
if the size of the current block satisfies: the width is smaller than the preset size parameter and the height is larger than the preset size parameter, then when the motion information angle prediction mode is a vertical prediction mode, the selection condition is determined to be the second selection condition; when the motion information angle prediction mode is a prediction mode other than the vertical prediction mode, the selection condition is determined to be the first selection condition;
if the size of the current block satisfies: the height is smaller than a preset size parameter, the width is larger than a preset size parameter, and when the motion information angle prediction mode is a horizontal prediction mode, the selection condition is determined to be a second selection condition; when the motion information angle prediction mode is other prediction modes except a horizontal prediction mode, determining the selection condition as a first selection condition;
If the size of the current block satisfies: the height is smaller than a preset size parameter, the width is smaller than a preset size parameter, and the selection condition is determined to be a first selection condition according to any motion information angle prediction mode;
if the size of the current block satisfies: the method comprises the steps that the height is smaller than a preset size parameter, the width is equal to the preset size parameter, or the height is equal to the preset size parameter, the width is smaller than the preset size parameter, and for any motion information angle prediction mode, the selection condition is determined to be a first selection condition.
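The size/mode rules just listed can be condensed into the following sketch, assuming the preset size parameter is 8 (the value used in the examples of this application); the mode strings and the returned condition names are illustrative only.

```python
def selection_condition(width: int, height: int, mode: str, s: int = 8) -> str:
    # Returns "second" when bidirectional motion information is allowed and
    # "first" when it is not, following the size/mode rules listed above.
    # The preset size parameter s defaults to 8, the value used in the examples.
    if width >= s and height >= s:
        return "second"
    if width < s and height > s:
        return "second" if mode == "vertical" else "first"
    if height < s and width > s:
        return "second" if mode == "horizontal" else "first"
    # both dimensions below s, or one below s and the other equal to s
    return "first"
```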
The first determining module 103 is specifically configured to determine the sub-region division information of the current block according to the motion information angle prediction mode and the size of the current block:
when the motion information angle prediction mode is a horizontal upward prediction mode, a horizontal downward prediction mode or a vertical rightward prediction mode, if the width of the current block is greater than or equal to a preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter or the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4;
when the motion information angle prediction mode is a horizontal prediction mode, if the width of the current block is larger than the preset size parameter, the size of the sub-region is the width of the current block × 4, or the size of the sub-region is 4×4; if the width of the current block is equal to the preset size parameter and the height of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the width of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4;
when the motion information angle prediction mode is a vertical prediction mode, if the height of the current block is larger than the preset size parameter, the size of the sub-region is 4 × the height of the current block, or the size of the sub-region is 4×4; if the height of the current block is equal to the preset size parameter and the width of the current block is greater than or equal to the preset size parameter, the size of the sub-region is 8×8; if the height of the current block is smaller than the preset size parameter, the size of the sub-region is 4×4.
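These division rules can likewise be condensed into a sketch; again the preset size parameter is assumed to be 8, the mode strings are illustrative, and where the text permits either the strip size or 4×4 the strip size is returned, while size combinations not spelled out in the text fall back to 4×4 here.

```python
def subregion_size(mode: str, width: int, height: int, s: int = 8):
    # Returns the (w, h) sub-region division size following the rules above;
    # where the text permits either the strip size or 4x4, the strip size is
    # returned, and combinations not spelled out fall back to 4x4 (assumptions).
    if mode in ("horizontal_up", "horizontal_down", "vertical_right"):
        return (8, 8) if width >= s and height >= s else (4, 4)
    if mode == "horizontal":
        if width > s:
            return (width, 4)
        return (8, 8) if width == s and height >= s else (4, 4)
    if mode == "vertical":
        if height > s:
            return (4, height)
        return (8, 8) if height == s and width >= s else (4, 4)
    raise ValueError("unknown motion information angle prediction mode")
```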
The first determining module 103 is specifically configured to determine the motion information of the current block according to the selection condition, the sub-region division information, and the motion information of the surrounding matching block:
dividing the current block into at least one sub-region according to the sub-region dividing information;
selecting a peripheral matching block corresponding to the subarea from peripheral matching blocks of the current block according to the motion information angle prediction mode aiming at each subarea of the current block, and determining the motion information of the subarea according to the motion information of the peripheral matching block corresponding to the subarea and the selection condition;
and determining the motion information of the at least one sub-region as the motion information of the current block.
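Building on the MotionInfo/assign_subregion_motion sketch given earlier, the following function shows one way the selection condition, the division size, and the peripheral matching blocks might be combined; match_motion_info is an assumed callable standing in for the angle-dependent lookup that the figures define, not an API of the embodiment.

```python
def derive_block_motion_info(width, height, sub_w, sub_h,
                             match_motion_info, allow_bidirectional):
    # Walk the sub-regions of the current block and give each one the motion
    # information of its peripheral matching block under the selection
    # condition.  match_motion_info(x, y) is an assumed callable standing in
    # for the angle-dependent lookup defined by the figures, and
    # assign_subregion_motion is the helper sketched earlier.
    block_motion = {}
    for y in range(0, height, sub_h):
        for x in range(0, width, sub_w):
            peripheral_mi = match_motion_info(x, y)
            block_motion[(x, y)] = assign_subregion_motion(
                peripheral_mi, allow_bidirectional)
    return block_motion
```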
When determining the motion information of the current block according to the motion information angle prediction mode, the first determining module 103 is specifically configured to: determine a selection condition of the current block for acquiring motion information according to the size of the current block; wherein the selection condition is a second selection condition, and the second selection condition is that the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information;
determining subarea division information of the current block according to the size of the current block; wherein, the size of the subarea of the current block is 8 x 8;
selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle corresponding to the motion information angle prediction mode;
and determining the motion information of the current block according to the selection condition, the sub-region division information and the motion information of the peripheral matching block.
In one example, the apparatus further comprises: a processing module, which is configured to acquire a motion information prediction mode of the current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; and perform duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode.
When performing the duplicate checking processing on the motion information angle prediction mode of the current block to obtain the duplicate-checked motion information angle prediction mode, the processing module is specifically configured to:
determine the motion information angle prediction modes to be checked for duplication; and for any two motion information angle prediction modes to be checked for duplication, namely a first motion information angle prediction mode and a second motion information angle prediction mode, perform the following steps:
determining first motion information according to the first motion information angle prediction mode to be checked for duplication;
determining second motion information according to the second motion information angle prediction mode to be checked for duplication;
and if the first motion information and the second motion information are the same, determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated.
When determining the motion information angle prediction modes to be checked for duplication, the processing module is specifically configured to:
for any motion information angle prediction mode, selecting, from the peripheral blocks of the current block, the peripheral matching blocks pointed to by the preset angle of the motion information angle prediction mode;
selecting a plurality of peripheral matching blocks to be traversed from the peripheral matching blocks;
and if the motion information of the traversed peripheral matching blocks is the same, determining the motion information angle prediction mode as a motion information angle prediction mode to be checked for duplication.
After determining that the first motion information angle prediction mode and the second motion information angle prediction mode are repeated, the processing module is further configured to:
remove the repeated first motion information angle prediction mode; or,
remove the repeated second motion information angle prediction mode.
When performing the duplicate checking processing on the motion information angle prediction mode of the current block, the processing module is further configured to:
determine, from the duplicate-checked motion information angle prediction modes, the motion information angle prediction modes for secondary duplicate checking; the duplicate-checked motion information angle prediction modes comprise a first part of motion information angle prediction modes and a second part of motion information angle prediction modes, wherein the first part of motion information angle prediction modes are motion information angle prediction modes that are not subjected to duplicate checking, the second part of motion information angle prediction modes comprise the motion information angle prediction modes remaining after the repeated ones are removed from the duplicate-checked motion information angle prediction modes, and the motion information angle prediction modes for secondary duplicate checking are the second part of motion information angle prediction modes;
and perform duplicate checking processing between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter-frame prediction modes, other than the motion information angle prediction modes, of the current block, to obtain the duplicate-checked motion information prediction modes.
When performing the duplicate checking processing between the motion information angle prediction modes for secondary duplicate checking and the to-be-checked inter-frame prediction modes, other than the motion information angle prediction modes, among the prediction modes of the current block, the processing module is specifically configured to: for any motion information angle prediction mode for secondary duplicate checking, acquire the motion information of the motion information angle prediction mode;
if the motion information of the motion information angle prediction mode is the same as the motion information of the to-be-checked inter-frame prediction mode, determine that the motion information angle prediction mode and the to-be-checked inter-frame prediction mode are repeated.
For the decoding side device provided in the embodiment of the present application, from a hardware level, a schematic diagram of its hardware architecture may be as shown in fig. 11. The device comprises: a processor 111 and a machine-readable storage medium 112, the machine-readable storage medium 112 storing machine-executable instructions executable by the processor 111; the processor 111 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring a motion information prediction mode of a current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; and performing duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode; or,
The processor is configured to execute machine-executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
For the encoding end device provided in the embodiment of the present application, from a hardware level, a schematic diagram of its hardware architecture may be as shown in fig. 12. The device comprises: a processor 121 and a machine-readable storage medium 122, the machine-readable storage medium 122 storing machine-executable instructions executable by the processor 121; the processor 121 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring a motion information prediction mode of a current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; and performing duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode; or,
The processor is configured to execute machine-executable instructions to perform the steps of:
if the target motion information prediction mode of the current block is the selected motion information angle prediction mode, determining the motion information of the current block according to the motion information angle prediction mode; and determining the predicted value of the current block according to the motion information of the current block.
Based on the same application concept as the above method, the embodiments of the present application further provide a machine-readable storage medium, where a number of computer instructions are stored, where the computer instructions can implement the codec method disclosed in the above example of the present application when executed by a processor.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A decoding method, applied to a decoding end, the method comprising:
acquiring a motion information prediction mode of a current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; performing duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode, and adding the duplicate-checked motion information angle prediction mode into a motion information prediction mode candidate list corresponding to the current block;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode of the current block is a motion information angle prediction mode, then:
selecting a peripheral matching block for the sub-region of the current block from peripheral blocks of the current block according to a preset angle indicated by the motion information angle prediction mode, and determining motion information of the sub-region of the current block according to the motion information of the peripheral matching block; the peripheral matching block is a block at a specified position determined from peripheral blocks of the current block according to the preset angle;
and determining the predicted value of the current block according to the motion information of the subarea of the current block.
2. The method according to claim 1, wherein,
the selecting a peripheral matching block from peripheral blocks of the current block for the sub-region of the current block according to the preconfigured angle indicated by the motion information angle prediction mode, and determining motion information of the sub-region of the current block according to the motion information of the peripheral matching block, including:
determining a selection condition for acquiring motion information of the current block according to the size of the current block, wherein the selection condition is that the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information;
Determining subarea division information of the current block according to the size of the current block; the subarea division information is used for indicating that the size of the subarea of the current block is 8 x 8;
selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle indicated by the motion information angle prediction mode;
and determining the motion information of the subarea of the current block according to the selection condition, the subarea division information and the motion information of the peripheral matching block.
3. The method of claim 1, wherein the motion information angle prediction modes in the motion information prediction mode candidate list comprise at least one of: a motion information horizontal angle prediction mode, a motion information vertical angle prediction mode, a motion information horizontal up angle prediction mode, a motion information horizontal down angle prediction mode, and a motion information vertical right angle prediction mode.
4. A coding method, applied to a coding end, the method comprising:
acquiring a motion information prediction mode of a current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; performing duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode, and adding the duplicate-checked motion information angle prediction mode into a motion information prediction mode candidate list corresponding to the current block;
Selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list; if the target motion information prediction mode of the current block is a motion information angle prediction mode, then:
selecting a peripheral matching block for the sub-region of the current block from peripheral blocks of the current block according to a preset angle indicated by the motion information angle prediction mode, and determining motion information of the sub-region of the current block according to the motion information of the peripheral matching block; the peripheral matching block is a block at a specified position determined from peripheral blocks of the current block according to the preset angle;
and determining the predicted value of the current block according to the motion information of the subarea of the current block.
5. The method according to claim 4, wherein,
the selecting a peripheral matching block from peripheral blocks of the current block for the sub-region of the current block according to the preconfigured angle indicated by the motion information angle prediction mode, and determining motion information of the sub-region of the current block according to the motion information of the peripheral matching block, including:
determining a selection condition for acquiring motion information of the current block according to the size of the current block, wherein the selection condition is that the motion information selected from the motion information of the peripheral matching blocks is allowed to be bidirectional motion information;
Determining subarea division information of the current block according to the size of the current block; the subarea division information is used for indicating that the size of the subarea of the current block is 8 x 8;
selecting a peripheral matching block pointed by the preset angle from peripheral blocks of the current block according to the preset angle indicated by the motion information angle prediction mode;
and determining the motion information of the subarea of the current block according to the selection condition, the subarea division information and the motion information of the peripheral matching block.
6. The method of claim 4, wherein the motion information angle prediction modes in the motion information prediction mode candidate list comprise at least one of: a motion information horizontal angle prediction mode, a motion information vertical angle prediction mode, a motion information horizontal up angle prediction mode, a motion information horizontal down angle prediction mode, and a motion information vertical right angle prediction mode.
7. A decoding device, for use at a decoding end, the device comprising:
the processing module is used for acquiring a motion information prediction mode of the current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; performing duplicate checking processing on the motion information angle prediction mode of the current block to obtain a duplicate-checked motion information angle prediction mode, and adding the duplicate-checked motion information angle prediction mode into a motion information prediction mode candidate list corresponding to the current block; selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
The first determining module is configured to, if the target motion information prediction mode of the current block is a motion information angle prediction mode,: selecting a peripheral matching block for the sub-region of the current block from peripheral blocks of the current block according to a preset angle indicated by the motion information angle prediction mode, and determining motion information of the sub-region of the current block according to the motion information of the peripheral matching block; the peripheral matching block is a block at a specified position determined from peripheral blocks of the current block according to the preset angle;
and the second determining module is used for determining the predicted value of the current block according to the motion information of the sub-region of the current block.
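One plausible reading of the processing module's list construction is sketched below: the "weight checking" step is treated as a pruning pass that drops any angle mode failing an availability test and skips modes already present, and only the survivors enter the candidate list. The claims do not spell out the exact check, so the passesCheck predicate is an assumption standing in for whatever test the encoder or decoder actually applies.

```cpp
// Sketch under the assumption that "weight checking" is a pruning pass; the
// passesCheck predicate stands in for the availability/duplication test that
// the claims leave unspecified.
#include <functional>
#include <vector>

enum class MotionInfoAngleMode { Horizontal, Vertical, HorizontalUp, HorizontalDown, VerticalRight };

std::vector<MotionInfoAngleMode> buildCandidateList(
        const std::vector<MotionInfoAngleMode>& angleModes,
        const std::function<bool(MotionInfoAngleMode)>& passesCheck) {
    std::vector<MotionInfoAngleMode> candidates;
    for (MotionInfoAngleMode mode : angleModes) {
        if (!passesCheck(mode))
            continue;                                   // pruned by the checking step
        bool alreadyListed = false;
        for (MotionInfoAngleMode kept : candidates)
            alreadyListed = alreadyListed || (kept == mode);
        if (!alreadyListed)
            candidates.push_back(mode);                 // only surviving modes enter the list
    }
    return candidates;
}
```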
8. An encoding apparatus, for use at an encoding end, the apparatus comprising:
the processing module is used for acquiring a motion information prediction mode of the current block, wherein the motion information prediction mode at least comprises a motion information angle prediction mode; performing weight checking processing on the motion information angle prediction mode of the current block to obtain a motion information angle prediction mode after weight checking, and adding the motion information angle prediction mode after weight checking into a motion information prediction mode candidate list corresponding to the current block; selecting a target motion information prediction mode of the current block from the motion information prediction mode candidate list;
the first determining module is configured to: if the target motion information prediction mode of the current block is a motion information angle prediction mode, select a peripheral matching block for the sub-region of the current block from peripheral blocks of the current block according to a preset angle indicated by the motion information angle prediction mode, and determine motion information of the sub-region of the current block according to the motion information of the peripheral matching block; wherein the peripheral matching block is a block at a specified position determined from peripheral blocks of the current block according to the preset angle;
and the second determining module is used for determining the predicted value of the current block according to the motion information of the sub-region of the current block.
9. A decoding end apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the method of any one of claims 1-3.
10. An encoding end apparatus, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the method of any one of claims 4-6.
CN202211610564.8A 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment thereof Pending CN116016946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211610564.8A CN116016946A (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211610564.8A CN116016946A (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment thereof
CN201910165085.1A CN111669592B (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910165085.1A Division CN111669592B (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment

Publications (1)

Publication Number Publication Date
CN116016946A true CN116016946A (en) 2023-04-25

Family

ID=72338403

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211610564.8A Pending CN116016946A (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment thereof
CN201910165085.1A Active CN111669592B (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910165085.1A Active CN111669592B (en) 2019-03-05 2019-03-05 Encoding and decoding method, device and equipment

Country Status (2)

Country Link
CN (2) CN116016946A (en)
WO (1) WO2020177747A1 (en)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8675736B2 (en) * 2009-05-14 2014-03-18 Qualcomm Incorporated Motion vector processing
CN102223528B (en) * 2010-04-15 2014-04-30 华为技术有限公司 Method for obtaining reference motion vector
KR101791078B1 (en) * 2010-04-16 2017-10-30 에스케이텔레콤 주식회사 Video Coding and Decoding Method and Apparatus
JP2012147331A (en) * 2011-01-13 2012-08-02 Sony Corp Image processing apparatus and method
CN102164278B (en) * 2011-02-15 2013-05-15 杭州海康威视数字技术股份有限公司 Video coding method and device for removing flicker of I frame
JP5678814B2 (en) * 2011-06-20 2015-03-04 株式会社Jvcケンウッド Image encoding device, image encoding method, image encoding program, transmission device, transmission method, and transmission program
JP2017537529A (en) * 2014-10-31 2017-12-14 サムスン エレクトロニクス カンパニー リミテッド Video encoding apparatus and video decoding apparatus using high-precision skip encoding, and method thereof
WO2017048008A1 (en) * 2015-09-17 2017-03-23 엘지전자 주식회사 Inter-prediction method and apparatus in video coding system
KR20170058837A (en) * 2015-11-19 2017-05-29 한국전자통신연구원 Method and apparatus for encoding/decoding of intra prediction mode signaling
US10448011B2 (en) * 2016-03-18 2019-10-15 Mediatek Inc. Method and apparatus of intra prediction in image and video processing
US10390021B2 (en) * 2016-03-18 2019-08-20 Mediatek Inc. Method and apparatus of video coding
US20170347094A1 (en) * 2016-05-31 2017-11-30 Google Inc. Block size adaptive directional intra prediction
CN106454378B (en) * 2016-09-07 2019-01-29 中山大学 Converting video coding method and system in a kind of frame per second based on amoeboid movement model
CN109089119B (en) * 2017-06-13 2021-08-13 浙江大学 Method and equipment for predicting motion vector

Also Published As

Publication number Publication date
CN111669592A (en) 2020-09-15
CN111669592B (en) 2022-11-25
WO2020177747A1 (en) 2020-09-10

Similar Documents

Publication Publication Date Title
CN111385569B (en) Coding and decoding method and equipment thereof
CN109997363B (en) Image encoding/decoding method and apparatus, and recording medium storing bit stream
KR102094436B1 (en) Method for inter prediction and apparatus thereof
CN111698515B (en) Method and related device for inter-frame prediction
CN111698500B (en) Encoding and decoding method, device and equipment
CN111263144B (en) Motion information determination method and equipment
CN111953995A (en) Method and device for inter-frame prediction
CN111107373A (en) Method and related device for inter-frame prediction based on affine prediction mode
CN112153389B (en) Method and device for inter-frame prediction
CN112135137A (en) Video encoder, video decoder and corresponding methods
CN113170176B (en) Video encoder, video decoder and corresponding methods
CN112422971B (en) Encoding and decoding method, device and equipment
CN112565747B (en) Decoding and encoding method, device and equipment
CN112449180B (en) Encoding and decoding method, device and equipment
CN111669592B (en) Encoding and decoding method, device and equipment
CN112449181B (en) Encoding and decoding method, device and equipment
CN111510726B (en) Coding and decoding method and equipment thereof
CN112055220B (en) Encoding and decoding method, device and equipment
CN111405277A (en) Inter-frame prediction method and device and corresponding encoder and decoder
CN112468817B (en) Encoding and decoding method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination