CN111050168A - Affine prediction method and related device thereof - Google Patents

Affine prediction method and related device thereof

Info

Publication number
CN111050168A
CN111050168A CN201911383635.3A CN201911383635A
Authority
CN
China
Prior art keywords
block, sub-block, motion vector, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911383635.3A
Other languages
Chinese (zh)
Other versions
CN111050168B (en)
Inventor
曾飞洋
江东
林聚财
殷俊
方诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201911383635.3A priority Critical patent/CN111050168B/en
Publication of CN111050168A publication Critical patent/CN111050168A/en
Priority to EP20905556.5A priority patent/EP4062638A4/en
Priority to PCT/CN2020/138402 priority patent/WO2021129627A1/en
Application granted granted Critical
Publication of CN111050168B publication Critical patent/CN111050168B/en
Priority to US17/739,185 priority patent/US20220272374A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides an affine prediction method and a related device. The affine prediction method comprises the following steps: dividing a current coding block into a plurality of sub-blocks, and determining the initial prediction values of all pixels in each sub-block; dividing each sub-block into a plurality of blocks, wherein at least one block comprises at least two integer pixels; determining a motion vector difference and a gradient for each block; calculating a pixel compensation value for each block based on the motion vector difference and the gradient; and taking the pixel compensation value of each block as the pixel compensation value of all pixels in that block, and calculating the final prediction value of each pixel in the current coding block according to the initial prediction value and the pixel compensation value of each pixel. The method can reduce the computational complexity of inter prediction.

Description

Affine prediction method and related device thereof
Technical Field
The present application relates to the field of video encoding and decoding technologies, and in particular, to an affine prediction method and a related apparatus thereof.
Background
Because video image data is voluminous, it usually needs to be encoded and compressed before it is transmitted or stored; the encoded data is called a video bitstream. Given hardware constraints such as limited storage space and limited transmission bandwidth, encoders aim to keep the bitstream as small as possible.
Video coding mainly comprises processes such as prediction, transform, quantization and coding, where prediction is divided into intra prediction and inter prediction, used to remove spatial redundancy and temporal redundancy respectively.
Inter prediction searches the reference frame of the current block for the block that best matches it and uses that block to predict the current block. Inter prediction modes fall into several major categories: the conventional Advanced Motion Vector Prediction (AMVP) mode, the conventional Merge mode, the triangle mode, the HASH mode and the affine prediction mode. In affine prediction, the prediction values of all pixels of the current coding block need to be calculated, and the prediction direction is then determined from the residuals between these prediction values and the actual pixel values of the current coding block. Calculating the prediction values of all pixels of the current coding block mainly involves calculating an initial prediction value, calculating a pixel compensation value, and calculating a final prediction value from the initial prediction value and the pixel compensation value.
Disclosure of Invention
The present application provides an affine prediction method and a related device, which can reduce the computational complexity of inter prediction.
In order to solve the technical problem, the present application provides an affine prediction method, including: dividing a current coding block into a plurality of sub-blocks, and determining the initial prediction values of all pixels in each sub-block; dividing each sub-block into a plurality of blocks, wherein at least one block contains at least two integer pixels; determining a motion vector difference and a gradient for each block; calculating a pixel compensation value for each block based on the motion vector difference and the gradient; and taking the pixel compensation value of each block as the pixel compensation value of all pixels in that block, and calculating the final prediction value of each pixel in the current coding block according to the initial prediction value and the pixel compensation value of each pixel.
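Expressed compactly, and using the notation introduced in the detailed description below, the refinement performed by the method can be summarized as follows: every pixel (i, j) belonging to a block B of a sub-block receives the same block-level compensation,

I'(i, j) = I(i, j) + ΔI(B),  with  ΔI(B) = gx(B) * Δvx(B) + gy(B) * Δvy(B)

where I(i, j) is the initial prediction value of the pixel, and gx, gy, Δvx and Δvy are the gradient and motion vector difference computed once per block (for its representative pixel) rather than once per pixel.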
To solve the technical problem, the present application provides a codec, which includes a processor and a memory; the memory has stored therein a computer program for execution by the processor to perform the steps of the method as described above.
To solve the technical problem, the present application provides a storage device in which a computer program is stored, and the computer program implements the steps in the affine prediction method when executed.
The beneficial effects of the present application are as follows: the current coding block is divided into a plurality of sub-blocks, the initial prediction values of all pixels in each sub-block are determined, each sub-block is further divided into a plurality of blocks, and one pixel compensation value is calculated per block. Because the pixel compensation values of adjacent pixels differ very little, using the pixel compensation value of each block as the pixel compensation value of all pixels in that block has little or no effect on the coding quality, while the pixel compensation values only need to be computed once per block instead of once per pixel, which reduces the computational complexity of inter prediction.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a first embodiment of the affine prediction method of the present application;
FIG. 2 is a schematic diagram of the locations of control points in the current coding block of the present application;
FIG. 3 is a schematic diagram of dividing all sub-blocks into a plurality of blocks by the same dividing method in the affine prediction method of the present application;
FIG. 4 is a schematic diagram of dividing sub-blocks into a plurality of blocks by different dividing methods in the affine prediction method of the present application;
FIG. 5 is a block diagram of an embodiment of a codec of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a memory device according to the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present application, the affine prediction method and the related apparatus provided in the present application are further described in detail below with reference to the accompanying drawings and the detailed description.
Referring to fig. 1 in detail, fig. 1 is a schematic flow chart of a first embodiment of the affine prediction method of the present application. The affine prediction method of the present embodiment includes the following steps.
S101: Dividing the current coding block into a plurality of sub-blocks, and determining the initial prediction values of all pixels in each sub-block.
The current coding block is the image block to be coded in the current frame; the current frame is coded block by block in a certain order, and the current coding block is the image block of the current frame to be coded at the next moment in that order. The current coding block may have various sizes, such as 16x16, 32x32 or 32x16, where the numbers indicate the numbers of rows and columns of pixels in the current coding block.
After the current coding block is obtained, it may be divided into a plurality of sub-blocks. A sub-block is a smaller set of pixels, usually of size 4x4, although sizes such as 8x4 or 8x8 are also possible. In addition, in this embodiment the sizes of all sub-blocks in the current coding block may be uniform, that is, the current coding block may be divided into a plurality of equally sized sub-blocks arranged in an array. In other embodiments, the sizes of the sub-blocks in the current coding block may differ.
After the current coding block is divided into a plurality of sub-blocks, the initial prediction values of all pixels in each sub-block can be determined in various ways, for example by an affine prediction method. It can be understood that "determining the initial prediction values of all pixels in each sub-block" here means "determining the initial prediction values of all integer pixels in each sub-block".
Specifically, determining the initial prediction values of all pixels in each sub-block by an affine prediction method may include: determining the motion vectors (CPMVs) of the control points (CPs) of the current coding block; weighting the control-point motion vectors according to the position of each sub-block in the current coding block to obtain the motion vector (MV) of each sub-block; and determining the initial prediction values of all pixels in each sub-block based on the motion vector of that sub-block. The control points of the current coding block (v0 and v1 in the 4-parameter affine model, or v0, v1 and v2 in the 6-parameter affine model) may be located as shown in fig. 2.
The motion vectors of the control points of the current coding block can be determined with the existing affine_AMVP or affine_MERGE methods. affine_AMVP and affine_MERGE are inter prediction techniques in the H.266 standard documents, and their detailed implementation is not repeated here.
Optionally, the motion vector of each sub-block may be calculated by combining the positional relationship between the first center point of each sub-block and each control point with the motion vectors of the control points; the calculation may follow formula (1) and formula (2), but is not limited to them.
For the 4-parameter affine model:
vx = (v1x - v0x)/w * i - (v1y - v0y)/w * j + v0x
vy = (v1y - v0y)/w * i + (v1x - v0x)/w * j + v0y    formula (1)
For the 6-parameter affine model:
vx = (v1x - v0x)/w * i + (v2x - v0x)/h * j + v0x
vy = (v1y - v0y)/w * i + (v2y - v0y)/h * j + v0y    formula (2)
where (v0x, v0y), (v1x, v1y) and (v2x, v2y) are the CPMVs of the top-left, top-right and bottom-left points of the current coding block respectively, w and h are the width and height of the current coding block, and i and j are the horizontal and vertical offsets of a pixel in the coding block relative to the top-left point of the current coding block. Specifically, when calculating the motion vector of each sub-block, the horizontal and vertical offsets of the first center point of that sub-block relative to the top-left point of the current coding block are substituted into formula (1) or formula (2).
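As an illustration of formulas (1) and (2), the sub-block motion vector can be computed as in the following Python sketch; the function name and the example numbers are illustrative only.

def sub_block_mv(v0, v1, v2, w, h, i, j):
    """Motion vector of a sub-block whose first center point is offset
    (i, j) from the top-left point of the current coding block.

    v0, v1, v2 are the CPMVs (vx, vy) of the top-left, top-right and
    bottom-left control points; pass v2=None for the 4-parameter model.
    """
    v0x, v0y = v0
    v1x, v1y = v1
    if v2 is None:  # 4-parameter affine model, formula (1)
        vx = (v1x - v0x) / w * i - (v1y - v0y) / w * j + v0x
        vy = (v1y - v0y) / w * i + (v1x - v0x) / w * j + v0y
    else:           # 6-parameter affine model, formula (2)
        v2x, v2y = v2
        vx = (v1x - v0x) / w * i + (v2x - v0x) / h * j + v0x
        vy = (v1y - v0y) / w * i + (v2y - v0y) / h * j + v0y
    return vx, vy

# Example: MV of the sub-block whose first center point is offset (2, 2)
# inside a 16x16 coding block, 4-parameter model with illustrative CPMVs.
print(sub_block_mv((1.0, 0.5), (2.0, 1.0), None, 16, 16, 2, 2))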
Similarly, the initial prediction values of all pixels in each sub-block can be determined from the motion vector of each sub-block based on the existing affine_AMVP or affine_MERGE methods.
S102: each sub-block is divided into a plurality of partitions.
Each sub-block may be further divided into a plurality of blocks, and at least one block contains at least two integer pixels. It can be understood that every block includes at least one integer pixel, so as to avoid calculating pixel compensation values for blocks that contain no integer pixel and thus avoid the unnecessary computation such blocks would introduce.
In addition, at least one of the blocks has a width or height greater than 1.
In the present embodiment, as shown in fig. 3, all sub-blocks may be divided into a plurality of blocks using the same division method. In other embodiments, as shown in fig. 4, each sub-block may be divided into a plurality of blocks by using different division methods.
Specifically, a dividing method of dividing one sub-block into a plurality of blocks may be as follows.
In an implementation, the sub-block may be divided horizontally, so that one sub-block is divided into a plurality of blocks arranged in a column direction.
In another implementation, the sub-block may be vertically divided such that one sub-block is divided into a plurality of blocks arranged in a row direction.
In yet another implementation, the sub-blocks may be divided horizontally and vertically in sequence, so that one sub-block is divided into a plurality of blocks arranged in an array.
It should be noted that a sub-block may be divided horizontally and/or vertically N times, where N is a positive integer, and the ratio of each division is not limited. For example, a sub-block may be divided into 4 blocks by first halving it horizontally into 2 blocks and then halving each of those 2 blocks vertically.
Further, all blocks may have the same width and all blocks may have the same height, i.e. all blocks may have the same size, for example 2x2. Of course, the widths of the blocks may differ and/or the heights of the blocks may differ; for example, one sub-block may contain both a 2x2 block and a 2x4 block.
Further, the width and height of each block may be integers, such as 1, 2, 3 or 4.
Further, the blocks may have the same shape or different shapes. For example, a sub-block may be divided into L-shaped blocks and rectangular blocks, in which case the plurality of blocks formed by L-shaped division of the sub-block includes at least one L-shaped block. The number and size of the L-shaped divisions are not limited.
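As an illustration of the division in S102, the following Python sketch enumerates the blocks obtained by splitting one sub-block with equal horizontal and vertical divisions; the function name and parameters are illustrative, and other patterns (including the L-shaped division mentioned above) are equally possible.

def split_sub_block(sub_x, sub_y, sub_w, sub_h, block_w, block_h):
    """Return the (x, y, w, h) of each block obtained by splitting the
    sub-block at (sub_x, sub_y) of size sub_w x sub_h into blocks of
    size block_w x block_h (both assumed to divide the sub-block evenly)."""
    blocks = []
    for by in range(0, sub_h, block_h):
        for bx in range(0, sub_w, block_w):
            blocks.append((sub_x + bx, sub_y + by, block_w, block_h))
    return blocks

# A 4x4 sub-block at (0, 0) split into four 2x2 blocks, as in fig. 3.
print(split_sub_block(0, 0, 4, 4, 2, 2))
# -> [(0, 0, 2, 2), (2, 0, 2, 2), (0, 2, 2, 2), (2, 2, 2, 2)]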
S103: the motion vector difference and gradient for each block are determined.
In one implementation, the motion vector differences of the blocks of one sub-block may be calculated first and then multiplexed to the other sub-blocks of the current coding block. For example, the motion vector differences of all blocks of the A sub-block shown in fig. 3, namely the A-1, A-2, A-3 and A-4 blocks, may be calculated, and the motion vector difference of each block of the A sub-block may then be used as the motion vector difference of the corresponding block of every other sub-block: the motion vector difference of A-1 is used for the B-1, C-1 and D-1 blocks, that of A-2 for the B-2, C-2 and D-2 blocks, that of A-3 for the B-3, C-3 and D-3 blocks, and that of A-4 for the B-4, C-4 and D-4 blocks.
Before this, all sub-blocks should be divided into blocks by the same dividing method, so that the motion vector difference of each block of one sub-block can be multiplexed to the other sub-blocks. It should be understood that the gradient of every block of every sub-block still needs to be calculated.
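As a small illustration of this reuse (assuming, as required above, that every sub-block is divided by the same method), the MV differences computed for the blocks of one sub-block can be indexed by block position and looked up for all other sub-blocks; the numeric values below are placeholders, and the correspondence with the A-1 to A-4 blocks of fig. 3 is assumed for illustration.

# MV differences computed once for the blocks of one sub-block, keyed by the
# block's position inside its sub-block, then reused for the corresponding
# blocks of every other sub-block.  The values are placeholders.
mv_diff_by_block_position = {
    (0, 0): (0.10, -0.05),   # e.g. computed for A-1, reused for B-1, C-1, D-1
    (1, 0): (0.12, -0.04),   # e.g. computed for A-2, reused for B-2, C-2, D-2
    (0, 1): (0.08, -0.06),   # e.g. computed for A-3, reused for B-3, C-3, D-3
    (1, 1): (0.11, -0.03),   # e.g. computed for A-4, reused for B-4, C-4, D-4
}

def multiplexed_mv_diff(block_position):
    """Look up the shared MV difference of a block from its position inside
    its sub-block (the same division is used for all sub-blocks)."""
    return mv_diff_by_block_position[block_position]

print(multiplexed_mv_diff((0, 1)))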
In another implementation, the motion vector difference and gradient of every block of every sub-block may be calculated.
Specifically, calculating the motion vector difference and gradient of a block may include: taking one pixel from all integer pixels and/or sub-pixels of the block as a representative pixel, calculating the motion vector difference and gradient of the representative pixel, and taking the motion vector difference and gradient of the representative pixel as the motion vector difference and gradient of the block to which it belongs. It can be understood that when all blocks have the same size and shape, the positions of the representative pixels within their blocks may be the same; for example, as shown in fig. 3, the representative pixel (black dot) of each block has the same horizontal and vertical offsets relative to the top-left point of the block to which it belongs. Of course, the positions of the representative pixels within the blocks may also differ; for example, the representative pixels of some blocks may be offset (1, 1) from the top-left points of their blocks, while those of other blocks may be offset (0.5, 0.5).
Here, an integer pixel is a pixel that is actually coded. A sub-pixel is a virtual point that does not physically exist; it lies between two integer pixels, and its prediction values (the initial prediction value and the final prediction value) can be obtained by interpolating the integer pixels.
Specifically, the method of calculating the motion vector difference of the representative pixel may include: determining the offsets (n, m) of the representative pixel in the horizontal and vertical directions relative to the second center point of the sub-block to which it belongs, and then calculating the motion vector difference (Δvx, Δvy) from (n, m).
Optionally, the motion vector difference (Δvx, Δvy) may be calculated from (n, m) as:
Δvx = c*n + d*m
Δvy = e*n + f*m    formula (3)
where c, d, e and f can be obtained from the CPMVs of the control points of the current coding block and the width and height of the current coding block, as given by formula (4) or formula (5).
For the 4-parameter affine model:
c = (v1x - v0x)/w,  d = -(v1y - v0y)/w,  e = (v1y - v0y)/w,  f = (v1x - v0x)/w    formula (4)
For the 6-parameter affine model:
c = (v1x - v0x)/w,  d = (v2x - v0x)/h,  e = (v1y - v0y)/w,  f = (v2y - v0y)/h    formula (5)
where (v0x, v0y), (v1x, v1y) and (v2x, v2y) are the CPMVs of the top-left, top-right and bottom-left points of the current coding block, and w and h are the width and height of the current coding block.
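Combining formulas (3) to (5), the motion vector difference of a representative pixel can be computed as in the following Python sketch; the function name and the example values are illustrative only.

def mv_difference(n, m, v0, v1, v2, w, h):
    """Motion vector difference (dvx, dvy) of a representative pixel whose
    offset from the second center point of its sub-block is (n, m),
    following formulas (3) to (5). v2 is None for the 4-parameter model."""
    v0x, v0y = v0
    v1x, v1y = v1
    if v2 is None:                      # formula (4), 4-parameter model
        c = f = (v1x - v0x) / w
        e = (v1y - v0y) / w
        d = -e
    else:                               # formula (5), 6-parameter model
        v2x, v2y = v2
        c = (v1x - v0x) / w
        d = (v2x - v0x) / h
        e = (v1y - v0y) / w
        f = (v2y - v0y) / h
    dvx = c * n + d * m                 # formula (3)
    dvy = e * n + f * m
    return dvx, dvy

# Representative pixel offset (0.5, 0.5) from the second center point,
# 16x16 coding block, 4-parameter model with illustrative CPMVs.
print(mv_difference(0.5, 0.5, (1.0, 0.5), (2.0, 1.0), None, 16, 16))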
It can be understood that the second center point of each sub-block may be located at the same position as the first center point used in the affine prediction, so that a better final prediction value is obtained by the affine prediction method of the present application and the pixel compensation accuracy is improved. Optionally, the first center point may be any one of the integer pixels and/or sub-pixels of the sub-block to which it belongs; that is, any pixel among all integer pixels and/or sub-pixels of a sub-block may be taken as its first center point and second center point. Preferably, the offsets of the first center point and the second center point of the same sub-block relative to the top-left point of that sub-block are both (2, 2) or both (1.5, 1.5), which makes the final prediction value more accurate and further improves the pixel compensation precision. Of course, the second center point and the first center point of a sub-block may also be located at different positions; for example, the first center point may be offset (2, 2) from the top-left point of the sub-block while the second center point is offset (1.5, 1.5).
Optionally, when the representative pixel is a sub-pixel, its gradient may be calculated from the initial prediction values of the adjacent sub-pixels; for example, when the representative pixel is a 1/2 sub-pixel, its gradient may be calculated from the initial prediction values of the adjacent 1/2 sub-pixels. When the representative pixel is an integer pixel, its gradient is calculated from the initial prediction values of the adjacent integer pixels.
In one implementation, the gradient of the representative pixel may be calculated with a three-tap filter. Specifically, the gradient [gx(i,j), gy(i,j)] of the representative pixel may be calculated from the initial prediction values determined in step S101, using the following formula:
gx(i,j) = I(i+1, j) - I(i-1, j)
gy(i,j) = I(i, j+1) - I(i, j-1)    formula (6)
where I(i+1, j), I(i-1, j), I(i, j+1) and I(i, j-1) are the initial prediction values of pixels (i+1, j), (i-1, j), (i, j+1) and (i, j-1), respectively.
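For illustration, the three-tap gradient of formula (6) can be computed as in the following sketch; the 3x3 patch of initial prediction values is made up, and the indexing convention (i horizontal, j vertical) matches the formulas above.

def gradient_3tap(I, i, j):
    """Gradient [gx, gy] of the representative pixel (i, j) computed with a
    three-tap filter from the initial prediction values I (formula (6)).
    I is indexed as I[i][j] with i horizontal and j vertical."""
    gx = I[i + 1][j] - I[i - 1][j]
    gy = I[i][j + 1] - I[i][j - 1]
    return gx, gy

# Tiny made-up 3x3 patch of initial prediction values around the pixel.
I = [[10, 12, 14],
     [11, 13, 15],
     [12, 14, 16]]
print(gradient_3tap(I, 1, 1))   # -> (2, 4)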
In another implementation, the gradient of the representative pixel may be calculated with the Sobel operator, which can improve the pixel compensation precision. Specifically, the gradient [gx(i,j), gy(i,j)] of the pixel may be obtained by convolving the 3x3 image patch centered on the representative pixel with the two kernels of the Sobel operator, as follows:
gx(i,j) = [I(i+1, j-1) + 2*I(i+1, j) + I(i+1, j+1)] - [I(i-1, j-1) + 2*I(i-1, j) + I(i-1, j+1)]    formula (7)
gy(i,j) = [I(i-1, j+1) + 2*I(i, j+1) + I(i+1, j+1)] - [I(i-1, j-1) + 2*I(i, j-1) + I(i+1, j-1)]    formula (8)
where I(i-1, j-1), I(i-1, j), I(i-1, j+1), I(i, j-1), I(i, j+1), I(i+1, j-1), I(i+1, j) and I(i+1, j+1) are the initial prediction values of pixels (i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j) and (i+1, j+1), respectively.
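Similarly, an illustrative sketch of the Sobel gradient of formulas (7) and (8), using the same made-up 3x3 patch and the same indexing convention (i horizontal, j vertical):

def gradient_sobel(I, i, j):
    """Gradient [gx, gy] of the representative pixel (i, j) obtained by
    convolving its 3x3 neighbourhood in the initial prediction I with the
    two Sobel kernels (formulas (7) and (8))."""
    gx = ((I[i + 1][j - 1] + 2 * I[i + 1][j] + I[i + 1][j + 1])
          - (I[i - 1][j - 1] + 2 * I[i - 1][j] + I[i - 1][j + 1]))
    gy = ((I[i - 1][j + 1] + 2 * I[i][j + 1] + I[i + 1][j + 1])
          - (I[i - 1][j - 1] + 2 * I[i][j - 1] + I[i + 1][j - 1]))
    return gx, gy

I = [[10, 12, 14],
     [11, 13, 15],
     [12, 14, 16]]
print(gradient_sobel(I, 1, 1))   # -> (8, 16)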
In yet another implementation, the gradient of the representative pixel may be calculated with other gradient operators, such as the Robert gradient operator or the Prewitt operator.
It is understood that the same or different gradient calculation methods may be used for the representative pixels of different blocks.
S104: a pixel compensation value for each block is calculated based on the motion vector difference value and the gradient.
The pixel compensation value ΔI(i,j) of each block can be calculated from the motion vector difference and the gradient of that block; a specific calculation formula is given as formula (9), but the calculation is not limited to it.
ΔI(i,j) = gx(i,j) * Δvx(i,j) + gy(i,j) * Δvy(i,j)    formula (9)
S105: Taking the pixel compensation value of each block as the pixel compensation value of all pixels in that block, and calculating the final prediction value of each pixel in the current coding block according to the initial prediction value and the pixel compensation value of each pixel.
The inventors found that the pixel compensation values of adjacent pixels differ very little, so the compensation value of one pixel can usually represent that of its neighbours. Based on this, the pixel compensation value of the representative pixel of a block is calculated first, and the pixel compensation values of all pixels in the block are then unified to that value; this does not degrade the coding effect, while the computational complexity of inter prediction is greatly reduced and the calculation efficiency is improved.
After the pixel compensation values of all pixels in the current coding block have been determined, the final prediction value of each pixel in the current coding block can be calculated from the initial prediction value and the pixel compensation value of that pixel, as shown in formula (10). It can be understood that "calculating the final prediction value of each pixel in the current coding block" here means "calculating the final prediction value of each integer pixel in the current coding block".
I'(i,j) = I(i,j) + ΔI(i,j)    formula (10)
where I(i,j) is the initial prediction value of pixel (i,j), ΔI(i,j) is the pixel compensation value of pixel (i,j), and I'(i,j) is the final prediction value of pixel (i,j).
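For illustration, formulas (9) and (10) can be combined as in the following sketch, which refines the four integer pixels of one 2x2 block with a single shared compensation value; all numbers are made up.

def refine_block(initial_values, gx, gy, dvx, dvy):
    """Apply formula (9) and formula (10) to one block: a single compensation
    value is computed from the block-level gradient and MV difference and
    added to the initial prediction value of every pixel in the block."""
    delta_i = gx * dvx + gy * dvy                            # formula (9), once per block
    return [value + delta_i for value in initial_values]     # formula (10), per pixel

# Initial prediction values of the four integer pixels of a 2x2 block,
# refined with one shared compensation value of 2*0.25 + 4*0.25 = 1.5.
print(refine_block([100, 101, 103, 104], gx=2, gy=4, dvx=0.25, dvy=0.25))
# -> [101.5, 102.5, 104.5, 105.5]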
Furthermore, a flag may be set in the coding result of the current coding block or of the slice to which the current coding block belongs, indicating whether the current coding block applies the affine prediction method of the present application, or which affine prediction method of the present application is used for inter prediction; different flag values indicate different prediction modes.
For example, a new flag PROF_advance may be added to indicate which affine prediction method the current coding block uses for inter prediction.
A flag value PROF_advance = 0 indicates that the current coding block is predicted according to the prior art, and the sub-blocks do not need to be divided into blocks.
A flag value PROF_advance = 1 indicates that the current coding block divides each 4x4 sub-block into four 2x2 blocks, and the motion vector difference, gradient and pixel compensation value of the pixels in each sub-block are calculated uniformly per 2x2 block. The offsets of the first center point and the second center point relative to the top-left point of the sub-block to which they belong are (1.5, 1.5), and the gradient calculation method is the Sobel operator.
A flag value PROF_advance = 2 indicates that the current coding block divides each 4x4 sub-block into two 2x4 blocks, and the motion vector difference, gradient and pixel compensation value of the pixels in each sub-block are calculated uniformly per 2x4 block. The offsets of the first center point and the second center point relative to the top-left point of the sub-block to which they belong are both (2, 2), and the gradient calculation method is the Prewitt operator.
It will be appreciated that the flag PROF_advance may be given further values representing other combinations of the positions of the first and second center points, the method of dividing a sub-block into blocks, and the gradient calculation method.
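One possible way to organize the signalled configurations on the decoding side is sketched below; the dictionary values and field names are illustrative assumptions based on the example above and do not prescribe any actual bitstream syntax.

# Hypothetical lookup of the prediction configuration signalled by the
# PROF_advance flag, matching the example values described above.
PROF_ADVANCE_MODES = {
    0: None,  # predict as in the prior art; do not split sub-blocks into blocks
    1: {"block_size": (2, 2), "center_offset": (1.5, 1.5), "gradient": "sobel"},
    2: {"block_size": (2, 4), "center_offset": (2.0, 2.0), "gradient": "prewitt"},
}

def mode_of(prof_advance_flag):
    """Return the (illustrative) configuration for a parsed PROF_advance value."""
    return PROF_ADVANCE_MODES[prof_advance_flag]

print(mode_of(1))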
It will be appreciated that the above affine prediction method can be used to predict not only the luminance component but also the chrominance components.
Different from the prior art, the affine prediction method and related device disclosed in the present application divide a current coding block into a plurality of sub-blocks, determine the initial prediction values of all pixels in each sub-block, divide each sub-block into a plurality of blocks, and calculate one pixel compensation value per block. Because the pixel compensation values of adjacent pixels differ very little, using the pixel compensation value of each block as the pixel compensation value of all pixels in that block has little or no effect on the coding quality.
The above affine prediction method is generally implemented by a codec, and thus the present application also proposes a codec. Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a codec of the present application. The present codec 10 includes a processor 12 and a memory 11; the memory 11 has stored therein a computer program for execution by the processor 12 to implement the steps in the affine prediction method as described above.
The logic of the above affine prediction method can be embodied in a computer program, and when such a program is sold or used as a stand-alone software product it can be stored in a storage device; the present application therefore also proposes a storage device. Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a storage device 20 of the present application, in which a computer program 21 is stored; when executed by a processor, the computer program implements the steps of the affine prediction method described above.
The storage device 20 may be a medium that can store a computer program, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or it may be a server that stores the computer program; the server may send the stored computer program to another device for execution or run it itself. Physically, the storage device 20 may also be a combination of multiple entities, for example multiple servers, a server plus a memory, or a memory plus a removable hard disk.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (11)

1. An affine prediction method, comprising:
dividing a current coding block into a plurality of sub-blocks, and determining initial prediction values of all pixel points in each sub-block;
dividing each sub-block into a plurality of blocks, wherein at least one block comprises at least two integer pixel points;
determining a motion vector difference value and a gradient of each block;
calculating a pixel compensation value for each of the blocks based on the motion vector difference value and the gradient;
and taking the pixel compensation value of each block as the pixel compensation value of all pixel points in each block, and calculating the final prediction value of each pixel point in the current coding block according to the initial prediction value and the pixel compensation value of each pixel point.
2. The affine prediction method as claimed in claim 1, wherein the dividing each sub-block into a plurality of blocks comprises: dividing all the sub-blocks into a plurality of blocks by using the same dividing method;
and the determining a motion vector difference of each block comprises: calculating the motion vector differences of all the blocks in any one sub-block, and taking the motion vector difference of each block in that sub-block as the motion vector difference of the corresponding block in each of the other sub-blocks.
3. The affine prediction method according to claim 1, wherein all the blocks have the same width and the same height, and the width and the height of each block are integers.
4. The affine prediction method according to claim 1, wherein the calculating the motion vector difference and the gradient of each block comprises:
taking a pixel point from all integer pixel points and/or all sub-pixel points of each block as a representative pixel point;
calculating the motion vector difference and gradient of the representative pixel point, and taking the motion vector difference and gradient of the representative pixel point as the motion vector difference and gradient of the block to which the representative pixel point belongs.
5. The affine prediction method according to claim 4, wherein:
the dividing the current coding block into a plurality of sub-blocks and determining an initial prediction value of each sub-block comprises: calculating a motion vector of each sub-block based on the offset of the first center point of each sub-block relative to the top-left point of the current coding block, and determining the initial prediction values of all pixel points in each sub-block according to the motion vector of each sub-block;
the calculating the motion vector difference and the gradient of the representative pixel point comprises: calculating a motion vector difference of the representative pixel point according to the offsets of the representative pixel point in the horizontal and vertical directions relative to a second center point of the sub-block to which the representative pixel point belongs;
wherein the first center point and the second center point of each sub-block are located at the same position.
6. The affine prediction method according to claim 5, wherein the first center point is any one of all integer pixel points and/or all sub-pixel points of the sub-block to which the first center point belongs.
7. The affine prediction method according to claim 5, wherein the offset of the first center point relative to the top-left point of the sub-block to which the first center point belongs is (2, 2) or (1.5, 1.5).
8. The affine prediction method according to claim 1, wherein the calculating the motion vector difference and the gradient of each block comprises:
calculating the gradient of each block based on a gradient calculation method, wherein the gradient calculation method comprises a three-tap filter, a Robert gradient operator, a Sobel operator and a Prewitt operator.
9. The affine prediction method according to claim 1, further comprising:
setting a flag in the coding result of the current coding block or the coding result of the slice to which the current coding block belongs, wherein different flags represent different prediction modes.
10. A codec, comprising: a memory and a processor coupled to each other, the memory for storing program instructions, the processor for executing the program instructions to implement the affine prediction method of any one of claims 1 to 9.
11. A storage device characterized by storing a program file capable of implementing the affine prediction method according to any one of claims 1 to 9.
CN201911383635.3A 2019-12-27 2019-12-27 Affine prediction method and related device thereof Active CN111050168B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201911383635.3A CN111050168B (en) 2019-12-27 2019-12-27 Affine prediction method and related device thereof
EP20905556.5A EP4062638A4 (en) 2019-12-27 2020-12-22 Affine prediction method and related devices
PCT/CN2020/138402 WO2021129627A1 (en) 2019-12-27 2020-12-22 Affine prediction method and related devices
US17/739,185 US20220272374A1 (en) 2019-12-27 2022-05-09 Affine prediction method and related devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911383635.3A CN111050168B (en) 2019-12-27 2019-12-27 Affine prediction method and related device thereof

Publications (2)

Publication Number Publication Date
CN111050168A true CN111050168A (en) 2020-04-21
CN111050168B CN111050168B (en) 2021-07-13

Family

ID=70240963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911383635.3A Active CN111050168B (en) 2019-12-27 2019-12-27 Affine prediction method and related device thereof

Country Status (1)

Country Link
CN (1) CN111050168B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021057578A1 (en) * 2019-09-23 2021-04-01 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and apparatus
CN112601081A (en) * 2020-12-04 2021-04-02 浙江大华技术股份有限公司 Adaptive partition multi-prediction method and device
WO2021129627A1 (en) * 2019-12-27 2021-07-01 Zhejiang Dahua Technology Co., Ltd. Affine prediction method and related devices
CN113630602A (en) * 2021-06-29 2021-11-09 杭州未名信科科技有限公司 Affine motion estimation method and device for coding unit, storage medium and terminal
CN113630601A (en) * 2021-06-29 2021-11-09 杭州未名信科科技有限公司 Affine motion estimation method, device, equipment and storage medium
WO2022022278A1 (en) * 2020-07-29 2022-02-03 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium
CN114125466A (en) * 2020-08-26 2022-03-01 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium
CN114342390A (en) * 2020-07-30 2022-04-12 北京达佳互联信息技术有限公司 Method and apparatus for prediction refinement for affine motion compensation
CN114979668A (en) * 2020-08-20 2022-08-30 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1879419A (en) * 2004-09-28 2006-12-13 华为技术有限公司 Video image coding method
CN102665061A (en) * 2012-04-27 2012-09-12 中山大学 Motion vector processing-based frame rate up-conversion method and device
CN103402045A (en) * 2013-08-20 2013-11-20 长沙超创电子科技有限公司 Image de-spin and stabilization method based on subarea matching and affine model
CN105306952A (en) * 2015-09-30 2016-02-03 南京邮电大学 Method for reducing computation complexity of side information generation
US20180184121A1 (en) * 2016-12-23 2018-06-28 Apple Inc. Sphere Projected Motion Estimation/Compensation and Mode Decision
CN109155855A (en) * 2016-05-16 2019-01-04 高通股份有限公司 Affine motion for video coding is predicted
CN110324623A (en) * 2018-03-30 2019-10-11 华为技术有限公司 A kind of bidirectional interframe predictive method and device
CN110446044A (en) * 2019-08-21 2019-11-12 浙江大华技术股份有限公司 Linear Model for Prediction method, apparatus, encoder and storage device
CN110557631A (en) * 2015-03-10 2019-12-10 华为技术有限公司 Image prediction method and related device
CN110602493A (en) * 2018-09-19 2019-12-20 北京达佳互联信息技术有限公司 Method and equipment for interlaced prediction of affine motion compensation
WO2019244809A1 (en) * 2018-06-21 2019-12-26 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Coding device, decoding device, coding method, and decoding method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1879419A (en) * 2004-09-28 2006-12-13 华为技术有限公司 Video image coding method
CN102665061A (en) * 2012-04-27 2012-09-12 中山大学 Motion vector processing-based frame rate up-conversion method and device
CN103402045A (en) * 2013-08-20 2013-11-20 长沙超创电子科技有限公司 Image de-spin and stabilization method based on subarea matching and affine model
CN110557631A (en) * 2015-03-10 2019-12-10 华为技术有限公司 Image prediction method and related device
CN105306952A (en) * 2015-09-30 2016-02-03 南京邮电大学 Method for reducing computation complexity of side information generation
CN109155855A (en) * 2016-05-16 2019-01-04 高通股份有限公司 Affine motion for video coding is predicted
US20180184121A1 (en) * 2016-12-23 2018-06-28 Apple Inc. Sphere Projected Motion Estimation/Compensation and Mode Decision
CN110324623A (en) * 2018-03-30 2019-10-11 华为技术有限公司 A kind of bidirectional interframe predictive method and device
WO2019244809A1 (en) * 2018-06-21 2019-12-26 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Coding device, decoding device, coding method, and decoding method
CN110602493A (en) * 2018-09-19 2019-12-20 北京达佳互联信息技术有限公司 Method and equipment for interlaced prediction of affine motion compensation
CN110446044A (en) * 2019-08-21 2019-11-12 浙江大华技术股份有限公司 Linear Model for Prediction method, apparatus, encoder and storage device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021057578A1 (en) * 2019-09-23 2021-04-01 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and apparatus
WO2021129627A1 (en) * 2019-12-27 2021-07-01 Zhejiang Dahua Technology Co., Ltd. Affine prediction method and related devices
WO2022022278A1 (en) * 2020-07-29 2022-02-03 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium
CN114342390A (en) * 2020-07-30 2022-04-12 北京达佳互联信息技术有限公司 Method and apparatus for prediction refinement for affine motion compensation
CN114979668A (en) * 2020-08-20 2022-08-30 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium
CN114125466A (en) * 2020-08-26 2022-03-01 Oppo广东移动通信有限公司 Inter-frame prediction method, encoder, decoder, and computer storage medium
CN112601081A (en) * 2020-12-04 2021-04-02 浙江大华技术股份有限公司 Adaptive partition multi-prediction method and device
CN112601081B (en) * 2020-12-04 2022-06-24 浙江大华技术股份有限公司 Adaptive partition multi-prediction method and device
CN113630602A (en) * 2021-06-29 2021-11-09 杭州未名信科科技有限公司 Affine motion estimation method and device for coding unit, storage medium and terminal
CN113630601A (en) * 2021-06-29 2021-11-09 杭州未名信科科技有限公司 Affine motion estimation method, device, equipment and storage medium
CN113630601B (en) * 2021-06-29 2024-04-02 杭州未名信科科技有限公司 Affine motion estimation method, affine motion estimation device, affine motion estimation equipment and storage medium

Also Published As

Publication number Publication date
CN111050168B (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN111050168B (en) Affine prediction method and related device thereof
US11979559B2 (en) Image prediction method and device
CN110290388B (en) Intra-frame prediction method, video encoding method, computer device and storage device
WO2022104498A1 (en) Intra-frame prediction method, encoder, decoder and computer storage medium
EP3198867A1 (en) Method of improved directional intra prediction for video coding
WO2020258010A1 (en) Image encoding method, image decoding method, encoder, decoder and storage medium
CN111031319B (en) Local illumination compensation prediction method, terminal equipment and computer storage medium
CN103583043B (en) Method and device for encoding video image, and method and device for decoding video image
CN107046645A (en) Image coding/decoding method and device
JP2010514300A (en) Method for decoding a block of a video image
CN104754362B (en) Image compression method using fine-divided block matching
US20230063062A1 (en) Hardware codec accelerators for high-performance video encoding
CN110971897B (en) Method, apparatus and system for encoding and decoding intra prediction mode of chrominance component
WO2022116113A1 (en) Intra-frame prediction method and device, decoder, and encoder
CN104935945A (en) Image compression method of extended reference pixel sample value set
CN110636301B (en) Affine prediction method, computer device, and computer-readable storage medium
CN116600134B (en) Parallel video compression method and device adapting to graphic engine
CN109729363A (en) A kind for the treatment of method and apparatus of video image
WO2021129627A1 (en) Affine prediction method and related devices
KR20140031974A (en) Image coding method, image decoding method, image coding device, image decoding device, image coding program, and image decoding program
TW202211690A (en) Inter-frame prediction method, encoder, decoder, and computer storage medium
CN112153385B (en) Encoding processing method, device, equipment and storage medium
CN101268623A (en) Variable shape motion estimation in video sequence
CN113473119B (en) Image/video encoding method, apparatus, system, and computer-readable storage medium
WO2022257674A1 (en) Encoding method and apparatus using inter-frame prediction, device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant