CN109510991B - Motion vector deriving method and device

Publication number: CN109510991B (application published as CN109510991A)
Application number: CN201710834419.0A
Authority: CN (China)
Legal status: Active (granted)
Inventors: 虞露, 孙煜程
Applicant/Assignee: Zhejiang University (ZJU)
Original language: Chinese (zh)


Classifications

    • H04N19/513 — Processing of motion vectors (under H04N19/00 methods or arrangements for coding, decoding, compressing or decompressing digital video signals; H04N19/50 predictive coding; H04N19/503 temporal prediction; H04N19/51 motion estimation or motion compensation)
    • H04N19/176 — Adaptive coding in which the coding unit is an image region that is a block, e.g. a macroblock (under H04N19/10 adaptive coding; H04N19/169 characterised by the coding unit; H04N19/17 the unit being an image region)

Abstract

The invention provides a motion vector derivation method and device, comprising the following steps: according to the similarity of triangles, the motion vector predictor, motion vector predictor candidate, or motion vector candidate of the current decoding unit, or of a sub-block inside the current decoding unit, is derived by using the motion information of the decoded area around the current decoding unit. The motion vector derivation method and device make full use of the motion information of the adjacent decoded areas of the current decoding unit, improve the accuracy of motion vector derivation in both the common and the special inter modes, and improve coding and decoding efficiency.

Description

Motion vector deriving method and device
Technical Field
The present invention relates to video processing technologies, and in particular, to a motion vector derivation method and apparatus.
Background
In video coding and decoding technology, the redundancy of video information in time and space is removed by prediction in the temporal domain and in the spatial domain. Inter-frame prediction is a technique commonly applied in this field: information of the current frame is predicted by motion compensation using information in already decoded frames. In the motion compensation process, a large amount of side information needs to be transmitted in the video code stream, and the decoding end reconstructs the pixel information of the current frame using this side information together with the transformed residual. When transmitting motion vector information, a reasonable motion vector prediction method can effectively compress the transmission bit rate of the motion information. Motion vector derivation mainly uses the spatial and temporal neighborhood of the current inter-frame prediction unit to obtain a motion vector predictor (MVP) for that unit. Therefore, in the actual code stream, only the difference (MVD) between the final motion vector and the MVP of the current inter-frame prediction unit needs to be transmitted, or the derived MVP is used directly as the motion vector MV of the current decoding unit without transmitting an MVD; the whole final motion vector does not need to be transmitted. In a special case, the reference frame information of the current inter prediction unit may also be kept consistent with the reference frame information corresponding to the MVP, i.e., the reference frame information of the current inter prediction unit does not need to be transmitted additionally.
The AMVP (advanced motion vector prediction) technique derives a motion vector or a motion vector predictor by constructing a motion vector candidate list and transmitting the index of the selected motion vector predictor or motion vector in the code stream; the decoding end recovers it from this index and a candidate list constructed according to the same rules. A special application of the AMVP technique is the Merge mode. When deriving motion vectors, the Merge mode uses not only the motion vector information of neighboring blocks but also their reference frame information. The candidate list constructed in Merge mode stores, in priority order, the motion information of the neighboring blocks, including both motion vector information and reference frame information. That is, Merge mode derives a motion vector together with the reference frame corresponding to that motion vector: in Merge mode, the current block is identical to the selected neighboring block not only in motion vector but also in reference frame. The idea of Merge mode is that the current block and the surrounding decoded blocks belong to the same translation model, i.e., their motion vectors MV are consistent.
A problem with AMVP and Merge is that the current block derives its motion information predictor from only one peripheral block. If the motion model of the current block and the peripheral blocks is not a translational motion model, AMVP and Merge cannot accurately predict the motion vector of the current block.
The STMVP (spatial-temporal motion vector prediction) technique scales the motion vector of the top block (if any), the motion vector of the left block (if any), and the TMVP (temporal motion vector predictor) of the current block to the first frame of the current block's reference frame list. These up to three motion vectors are then averaged to obtain a new motion vector predictor for the current block. The problem with the STMVP technique is that, although a new MVP is derived from multiple neighboring blocks, the direct averaging has no precise physical meaning, i.e., the motion correlation between neighboring blocks is not fully exploited.
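As a rough illustration of this scale-then-average step, the following sketch scales each available neighbor motion vector by the ratio of temporal distances and averages the results; the helper names and the simple distance-ratio scaling are assumptions for clarity, not the exact procedure of any particular reference software.

```python
# Illustrative sketch (assumed simplification) of STMVP-style averaging:
# scale each available MV to a common reference frame by the ratio of
# temporal distances, then average whatever candidates exist.

def scale_mv(mv, td_src, td_dst):
    """Scale mv (pointing over temporal distance td_src) to distance td_dst."""
    return (mv[0] * td_dst / td_src, mv[1] * td_dst / td_src)

def stmvp_average(candidates):
    """candidates: iterable of (mv, td_src, td_dst); mv may be None if the
    top/left/temporal neighbor is unavailable. Returns the average or None."""
    scaled = [scale_mv(mv, ts, td) for (mv, ts, td) in candidates if mv is not None]
    if not scaled:
        return None
    n = len(scaled)
    return (sum(v[0] for v in scaled) / n, sum(v[1] for v in scaled) / n)
```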
Under perspective projection, the motion of the rigid body in a three-dimensional space and the projection on an imaging plane are represented as translation, rotation and scaling. The affine motion model is a motion information expression model considering translation, rotation and scaling. A general affine motion model has six parameters:
x' = a·x + b·y + e
y' = c·x + d·y + f
In the above equations, (x, y) is a point of the current frame, (x', y') is the corresponding point of the reference frame, and a, b, c, d, e, f are the six parameters of the affine motion model. To derive these six parameters, 3 known pairs of (x, y) and (x', y') are required; the three resulting pairs of equations are solved jointly to obtain the values of the six parameters a, b, c, d, e, f.
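As a concrete illustration of this joint solve, a minimal sketch follows, assuming the parameterization x' = a·x + b·y + e, y' = c·x + d·y + f given above; the three point pairs must not be collinear.

```python
import numpy as np

def solve_affine6(src, dst):
    """Recover (a, b, c, d, e, f) from three correspondences (x, y) -> (x', y').
    src, dst: sequences of three (x, y) tuples; points must not be collinear."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 0, 0, 1, 0])  # equation x' = a*x + b*y + e
        rows.append([0, 0, x, y, 0, 1])  # equation y' = c*x + d*y + f
        rhs.extend([xp, yp])
    # six linear equations in the six unknowns a, b, c, d, e, f
    return tuple(np.linalg.solve(np.array(rows, float), np.array(rhs, float)))
```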
Under a simplified version of the affine motion model, the inter-frame prediction block is divided into a number of equally sized small regions, and the motion speed inside each small region (i.e., sub-block) is taken to be uniform (even though the motion speed of each pixel may actually differ; this is a reduced-precision way of regularizing a gradually changing motion field). The motion compensation model of each small region is still a planar translation model (the image block only translates in the image plane without changing shape or size, so the motion of a sub-block can still be parameterized as a single motion vector). When the rotation axis of the rotational motion of the object is perpendicular to the image plane, the scaling of any two points of the affine object is consistent (the angle formed by any two straight lines remains constant). Under these restrictions, the six-parameter affine motion with a, b, c, d, e, f degenerates to a four-parameter affine motion model: there is a fixed relationship among the four parameters a, b, c, and d, and, as shown below, only two parameters are needed to derive all four.
vx(x, y) = ((vx1 − vx0) / S)·x − ((vy1 − vy0) / S)·y + vx0
vy(x, y) = ((vy1 − vy0) / S)·x + ((vx1 − vx0) / S)·y + vy0
S is the side length of the current block, since the current block is square in the existing affine model, i.e., S×S. See FIG. 10: vx0 is the horizontal component of v0 and vy0 its vertical component; vx1 is the horizontal component of v1 and vy1 its vertical component.
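A minimal sketch of this four-parameter motion field, assuming, as in FIG. 10, control-point motion vectors v0 and v1 at the top-left and top-right corners of an S×S block:

```python
def affine4_mv(v0, v1, S, x, y):
    """Motion vector at (x, y), measured from the block's top-left corner,
    for the four-parameter affine model with control-point MVs v0 and v1."""
    ax = (v1[0] - v0[0]) / S  # combined rotation/scaling term (cosine-like)
    ay = (v1[1] - v0[1]) / S  # combined rotation/scaling term (sine-like)
    return (ax * x - ay * y + v0[0],
            ay * x + ax * y + v0[1])
```

As a sanity check, with v0 = v1 both terms vanish and the field degenerates to a pure translation by v0, as expected.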
This special affine motion requires an affine motion vector field derivation with two control points. The two control point positions are generally the position of the upper left pixel point of the current interframe prediction block and the position of the upper right pixel point of the current interframe prediction block.
The area governed by affine motion is typically confined to the current inter-prediction block, independent of the motion mode of the surrounding decoded blocks. The Merge mode mentioned above assumes that the current block and the peripheral decoded blocks belong to the same planar translation model and share one motion vector, i.e., the motion vector and the reference frame information are consistent. Analogously, the idea of the affine Merge mode is that the current block and the surrounding decoded blocks belong to the same affine motion model and share the affine parameters.
Disclosure of Invention
The present invention is directed to deriving a motion vector predictor or a motion vector predictor candidate or a motion vector candidate of a current decoding unit or an internal subblock of the current decoding unit using motion information of a decoded area surrounding the current decoding unit.
The invention is based on a four-parameter affine motion model, i.e., it supports translation and scaling of the object and rotational motion whose rotation axis is perpendicular to the image plane. The design idea of the invention is as follows: the prior art takes points inside the current decoding unit as control point positions, which to some extent limits the accuracy of deriving the control point motion vectors from the decoded area. The affine motion model control points of the present invention, by contrast, are not points inside the current decoding unit but points outside it, which facilitates deriving accurate motion vector values of the control points from the decoded area.
The invention provides a motion vector derivation method, which comprises the following steps:
according to the similarity between triangle ABC and triangle A'B'C', deriving the position of the corresponding point C' of C by using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit or of a sub-block inside the current decoding unit;
calculating a vector CC' from the position C and the position C', and outputting the vector CC' as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate of the current decoding unit or of a sub-block inside the current decoding unit.
Wherein, the sub-block is the further division of the current decoding unit, and the sub-block is not equal to the current decoding unit.
Preferably, the motion vector derivation method further includes a plausibility check step, the plausibility check step being based on the following criteria:
the ratio of the length of vector AB to the length of vector A'B' does not fall outside the range [k, l], where 0 ≤ k ≤ l ≤ 10;
the included angle between vector AB and vector A'B' does not exceed θ, where θ ≤ 90°;
if the above at least one criterion is met, the vector CC' output is determined to be valid.
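A sketch of this check follows; the concrete threshold values k, l, and θ are illustrative design choices, and, per the criteria above, the output is treated as valid when at least one criterion is met.

```python
import math

def plausibility_check(AB, A1B1, k=0.5, l=2.0, theta_deg=30.0):
    """AB, A1B1: the 2D vectors AB and A'B' as (x, y) tuples.
    Returns True when at least one of the two criteria is satisfied."""
    len_ab, len_a1b1 = math.hypot(*AB), math.hypot(*A1B1)
    if len_ab == 0.0 or len_a1b1 == 0.0:
        return False
    ratio_ok = k <= len_ab / len_a1b1 <= l          # scaling is reasonable
    cos_ang = (AB[0] * A1B1[0] + AB[1] * A1B1[1]) / (len_ab * len_a1b1)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_ang))))
    angle_ok = angle <= theta_deg                    # rotation is reasonable
    return ratio_ok or angle_ok                      # "at least one criterion"
```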
Preferably, the motion vector derivation method further includes one of:
obtaining said motion vector MV1 from the left decoded area of the current decoding unit and said motion vector MV2 from the top decoded area of the current decoding unit;
obtaining said motion vector MV2 from the left decoded area of the current decoding unit and said motion vector MV1 from the top decoded area of the current decoding unit.
Preferably, the motion vector derivation method further comprises deriving the motion vector MV1 and the motion vector MV2 by searching a decoded reference picture, using a peripheral decoded area outside the current decoding unit in the picture where the current decoding unit is located.
Preferably, the number of sub-blocks inside the current decoding unit is at least two, the representative position of each sub-block is independent, and the vector CC' calculated from the position C and the position C' of each sub-block is also independent.
Another object of the present invention is to provide a motion vector deriving device, which includes,
a position derivation module: according to the similarity between triangle ABC and triangle A'B'C', derive the position of the corresponding point C' of C by using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit or of a sub-block inside the current decoding unit;
a motion vector generation module: calculate a vector CC' from the position C and the position C', and output the vector CC' as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate of the current decoding unit or of a sub-block inside the current decoding unit.
Preferably, said motion vector derivation means includes adding a plausibility check module before said position derivation module, said plausibility check module being dependent on the following criteria:
the ratio of the length of vector AB to the length of vector A'B' does not fall outside the range [k, l], where 0 ≤ k ≤ l ≤ 10;
the included angle between vector AB and vector A'B' does not exceed θ, where θ ≤ 90°;
if the above at least one criterion is met, the outputs of the vectors CC' of the position derivation module and the motion vector generation module are determined to be valid.
Preferably, the motion vector deriving device further comprises a motion vector obtaining module added before the position deriving module, and the motion vector obtaining module comprises one of the following cases:
obtaining said motion vector MV1 from the left decoded area of the current decoding unit and said motion vector MV2 from the top decoded area of the current decoding unit;
obtaining said motion vector MV2 from the left decoded area of the current decoding unit and said motion vector MV1 from the top decoded area of the current decoding unit.
Preferably, the motion vector deriving device further includes a motion vector search derivation module added before the position derivation module, which derives the motion vector MV1 and the motion vector MV2 by searching a decoded reference picture, using a peripheral decoded area outside the current decoding unit in the picture where the current decoding unit is located.
Compared with the prior art, existing affine motion schemes must transmit, in the code stream of the current decoding unit, syntax indicating whether the affine model is used. The present method, by checking the physical plausibility of the affine model (e.g., whether the scaling is reasonable and whether the rotation angle is reasonable), can determine whether the motion vector predictor or motion vector predictor candidate or motion vector candidate of the current decoding unit is derived through the affine model using only code stream information other than the code stream of the current decoding unit itself.
In addition, the invention also constrains the control points of the affine motion model to a certain extent. On the one hand, the control points come from the upper side and the left side of the current decoding unit respectively, which to some degree guarantees the influence of the control point MVs on the current decoding unit or its internal sub-blocks; on the other hand, the accuracy of the control point motion vectors can be ensured by re-searching the motion vectors with the decoded region, thereby ensuring the accuracy of the affine motion vector derivation.
When the current decoding unit performs affine motion vector derivation at the sub-block level using the affine motion model, the sub-blocks should share the same set of control point parameters, i.e., share the control point location and the motion vectors MV1 and MV2 at the control point location, to ensure affine motion consistency of the entire current decoding block.
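As a sketch of this shared-control-point, sub-block-level derivation: the callable derive_cc below stands for the triangle-similarity derivation described in the embodiments that follow, and the sub-block size and the use of each sub-block's center as its representative position are illustrative assumptions.

```python
def subblock_mvs(derive_cc, A, B, MV1, MV2, unit_pos, W, H, sub):
    """Yield ((sx, sy), MV) for every sub x sub sub-block of the W x H current
    decoding unit at unit_pos, sharing one set of control-point parameters
    (A, B, MV1, MV2); derive_cc(A, B, C, MV1, MV2) returns the MV for the
    representative position C."""
    ux, uy = unit_pos
    for sy in range(uy, uy + H, sub):
        for sx in range(ux, ux + W, sub):
            C = (sx + sub / 2.0, sy + sub / 2.0)  # sub-block center
            yield (sx, sy), derive_cc(A, B, C, MV1, MV2)
```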
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram of an inter-frame current decoding unit and its neighboring blocks according to an embodiment of the present invention;
fig. 2 is a schematic diagram of deriving a motion vector using motion vectors MV1 and MV2 according to an embodiment of the present invention;
fig. 3 is a schematic diagram of deriving motion vectors of sub-blocks by using motion vectors MV1 and MV2 according to an embodiment of the present invention;
FIG. 4 is a block diagram of an embodiment of an inter-frame current decoding unit partitioning sub-blocks and their neighboring blocks;
fig. 5(a) is a schematic diagram of a motion vector deriving apparatus according to an embodiment of the present invention;
fig. 5(b) is a schematic diagram of a sub-block motion vector deriving apparatus according to an embodiment of the present invention;
fig. 6(a) is a schematic diagram of a motion vector deriving apparatus according to an embodiment of the present invention;
fig. 6(b) is a schematic diagram of a sub-block motion vector deriving apparatus according to an embodiment of the present invention;
fig. 7(a) is a schematic diagram of a motion vector deriving apparatus according to an embodiment of the present invention;
fig. 7(b) is a schematic diagram of a sub-block motion vector deriving apparatus according to an embodiment of the present invention;
fig. 8(a) is a schematic diagram of a motion vector deriving apparatus according to an embodiment of the present invention;
fig. 8(b) is a schematic diagram of a sub-block motion vector deriving apparatus according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a sub-block motion vector deriving apparatus according to an embodiment of the present invention;
FIG. 10 is a diagram of a four-parameter affine motion model in the background art of the present invention.
Detailed Description
In video coding, video data can be divided into M × N inter-prediction pixel blocks of different sizes (M and N are typically powers of 2). Such an M × N pixel block to be decoded is referred to as the current decoding unit. There may be a number of already decoded pixel blocks around the current decoding unit; these are called peripheral decoded blocks, and the area they occupy is the peripheral decoded area.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one available derivation method: as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter-predicted blocks. A is the center position of the inter prediction block where MV1 is located, B is the center position of the prediction block where MV2 is located, and C is the center position of the current decoding unit; the positions A, B, and C are not bound to these particular physical meanings.
As shown in FIG. 2, triangle ABC and triangle A'B'C' satisfy the following relationship:

ΔABC ∽ ΔA'B'C'

That is, in the similarity relationship between triangle ABC and triangle A'B'C', A corresponds to A', B corresponds to B', and C corresponds to C'. The vector AA' is the motion vector MV1, the vector BB' is the motion vector MV2, and the motion vector MV3 to be derived is the vector CC'. Let θ1 be the angle between vector AB and vector A'B'; θ'1 the angle between vector AC and vector A'C'; θ2 the angle between vector BA and vector AC; and θ'2 the angle between vector B'A' and vector A'C'. The similarity of triangle ABC to triangle A'B'C' can then be expressed by the following formula:

θ1 = θ'1, θ2 = θ'2, |AB| / |A'B'| = |AC| / |A'C'|

Let (a1, b1) be the vector AB, (a2, b2) the vector A'B', (c1, d1) the vector AC, (c2, d2) the vector A'C', and (e1, f1) the vector CB. The similarity of triangle ABC and triangle A'B'C' is rewritten as the condition that one rotation-scaling pair (α, β) maps AB onto A'B' and AC onto A'C':

(a2, b2) = (α·a1 − β·b1, β·a1 + α·b1), (c2, d2) = (α·c1 − β·d1, β·c1 + α·d1)

The value of the C' position (x', y') is derived from this similarity:

x' = x'1 + g(c1, d1)
y' = y'1 + h(c1, d1)

α and β are obtained from the vectors (a1, b1) and (a2, b2):

α = (a1·a2 + b1·b2) / (a1² + b1²)
β = (a1·b2 − a2·b1) / (a1² + b1²)

The functions g(k, l) and h(k, l) are given by:

g(k, l) = α·k − β·l
h(k, l) = β·k + α·l

where (x'1, y'1) are the coordinates of point A' and (x'2, y'2) are the coordinates of point B'.
The value of the C' position (x', y') can also be calculated by other equivalent or similar procedures.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
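A minimal sketch of the whole Example 1 computation, following the α, β, g, h formulation above (positions A, B, C and motion vectors MV1, MV2 in, MVX out; it assumes A ≠ B):

```python
def derive_mvx(A, B, C, MV1, MV2):
    """Map C to C' by the direct similarity (rotation + uniform scaling +
    translation) taking A to A' = A + MV1 and B to B' = B + MV2,
    then return MVX = CC'. Assumes A != B."""
    A1 = (A[0] + MV1[0], A[1] + MV1[1])       # A'
    B1 = (B[0] + MV2[0], B[1] + MV2[1])       # B'
    a1, b1 = B[0] - A[0], B[1] - A[1]         # vector AB
    a2, b2 = B1[0] - A1[0], B1[1] - A1[1]     # vector A'B'
    c1, d1 = C[0] - A[0], C[1] - A[1]         # vector AC
    n = a1 * a1 + b1 * b1
    alpha = (a1 * a2 + b1 * b2) / n           # scaled cosine of the rotation
    beta = (a1 * b2 - a2 * b1) / n            # scaled sine of the rotation
    xp = A1[0] + alpha * c1 - beta * d1       # x' = x'1 + g(c1, d1)
    yp = A1[1] + beta * c1 + alpha * d1       # y' = y'1 + h(c1, d1)
    return (xp - C[0], yp - C[1])             # MVX = CC'
```

As a sanity check, when MV1 = MV2 the similarity degenerates to a pure translation and the function returns that common motion vector (though the method itself requires MV1 ≠ MV2).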
example 2
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 3
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 4
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 5
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 6
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 7
The motion vector derivation method provided in this example specifically includes:
Step one: judge whether to perform step two according to whether the included angle between vector AB and vector A'B' exceeds θ (θ ≤ 90°).
Specifically, the degree of parallelism of line segment AB and line segment A'B' is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between line segment AB and line segment A'B' is smaller than a given threshold γ, the check is judged to pass.
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step three: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 8
The motion vector derivation method provided in this example specifically includes:
Step one: judge whether to carry out step two according to whether the ratio of the length of vector AB to the length of vector A'B' falls within the range [k, l] (0 ≤ k ≤ l ≤ 10).
Specifically, the ratio of the lengths of line segment AB and line segment A'B' is checked. This ratio characterizes the degree of scaling of the object, and the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the length ratio of line segment AB to line segment A'B' lies within a given range [1/δ, δ], δ ≥ 1, the check is judged to pass and step two is carried out.
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step three: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 9
The motion vector derivation method provided in this example specifically includes:
Step one: obtain information from the code stream of the current decoding unit, and determine whether to perform step two according to that information.
Specifically, one method of obtaining the information is to transmit a flag bit in the code stream of the current decoding unit. When the decoded flag bit is 1, the operation of step two is performed.
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of embodiment 1.
Step three: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 10
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. Here the motion vector MV1 and the motion vector MV2 both come from the left decoded region, or both come from the upper decoded region.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained either from the n inter-predicted blocks of the left decoded region or from the m inter-predicted blocks of the upper decoded region. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound.
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 11
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. Here the motion vector MV1 comes from the left decoded region and the motion vector MV2 comes from the upper decoded region.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV1 is obtained from the n inter-predicted blocks of the left decoded region and the motion vector MV2 is obtained from the m inter-predicted blocks of the upper decoded region, which increases the degree of influence of MV1 and MV2 on the whole current decoding unit. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound.
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 12
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. Here the motion vector MV2 comes from the left decoded region and the motion vector MV1 comes from the upper decoded region.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV2 is obtained from the n inter-predicted blocks of the left decoded region and the motion vector MV1 is obtained from the m inter-predicted blocks of the upper decoded region, which increases the degree of influence of MV1 and MV2 on the whole current decoding unit. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound.
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 13
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. The motion vector MV1 and the motion vector MV2 are obtained by searching a decoded reference picture, using a peripheral decoded area outside the current decoding unit in the picture where the current decoding unit is located.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter-predicted blocks. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound. The central s×w area of the peripheral decoded block where MV1 is located is taken and called its local area; the value of MV1 is then refined by a search that takes MV1 as the starting point, using the criterion of minimum SAD or SATD of this local area on the reference frame pointed to by MV1. Likewise, the central s×w area of the peripheral decoded block where MV2 is located is taken as its local area; the value of MV2 is refined by a search that takes the temporally scaled value of MV2 as the starting point, using the criterion of minimum SAD or SATD of this local area on the reference frame pointed to by MV1.
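A sketch of such a local refinement search at integer-pel precision follows; the search window, the s×w template placement, and the function names are illustrative assumptions (the embodiment leaves the concrete search strategy open):

```python
import numpy as np

def refine_mv(cur, ref, top_left, s, w, start_mv, search_range=2):
    """Refine an integer MV by minimizing the SAD of the s x w local area of
    the current picture against the reference picture ref.
    cur, ref: 2D numpy arrays (luma); top_left: (x, y) of the local area;
    start_mv: integer (mvx, mvy) search starting point."""
    x0, y0 = top_left
    template = cur[y0:y0 + s, x0:x0 + w].astype(np.int64)
    best, best_sad = start_mv, None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            mvx, mvy = start_mv[0] + dx, start_mv[1] + dy
            px, py = x0 + mvx, y0 + mvy
            if px < 0 or py < 0 or py + s > ref.shape[0] or px + w > ref.shape[1]:
                continue  # candidate block falls outside the reference picture
            cand = ref[py:py + s, px:px + w].astype(np.int64)
            sad = int(np.abs(template - cand).sum())
            if best_sad is None or sad < best_sad:
                best, best_sad = (mvx, mvy), sad
    return best
```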
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 14
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. The motion vector MV1 and the motion vector MV2 are obtained by searching a decoded reference picture, using a peripheral decoded area outside the current decoding unit in the picture where the current decoding unit is located.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter-predicted blocks. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound. The central s×w area of the peripheral decoded block where MV1 is located is taken and called its local area; the value of MV1 is derived by a search that takes the temporally scaled value of MV1 as the starting point, using the criterion of minimum SAD or SATD of this local area on the reference frame pointed to by MV2. Likewise, the central s×w area of the peripheral decoded block where MV2 is located is taken as its local area; the value of MV2 is derived by a search that takes MV2 as the starting point, using the criterion of minimum SAD or SATD of this local area on the reference frame pointed to by MV2.
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 15
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current decoding unit. The motion vector MV1 and the motion vector MV2 are obtained by searching a decoded reference picture, using a peripheral decoded area outside the current decoding unit in the picture where the current decoding unit is located.
Specifically, as shown in FIG. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter-predicted blocks. The positions of A, B, and C can take the positions used in embodiments 1, 2, 3, 4, 5, 6, or other representative positions, provided the positions of A and B are outside the current decoding unit; the physical meanings of the A, B, C positions are not bound. The central s×w area of the peripheral decoded block where MV1 is located is taken and called its local area; the value of MV1 is derived by a search that takes the temporally scaled value of MV1 as the starting point, using the criterion of minimum SAD or SATD of this local area on a certain reference frame Ref of the current frame. Likewise, the central s×w area of the peripheral decoded block where MV2 is located is taken as its local area; the value of MV2 is derived by a search that takes the temporally scaled value of MV2 as the starting point, using the criterion of minimum SAD or SATD of this local area on the reference frame Ref.
The further derivation method is shown as step one in example 1.
Step two: a vector CC' is calculated from the position C and the position C', and the vector CC' is output as a motion vector predictor or a motion vector predictor candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:

MVXx = x' - x
MVXy = y' - y
example 16
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A'B'C', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit, the end point of the motion vector MV1 with A as its starting point is determined to be A', the end point of the motion vector MV2 with B as its starting point is determined to be B', and C is the representative position of the current sub-block.
Specifically, one available derivation method: as shown in FIG. 3, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter-predicted blocks. A is the center position of the inter prediction block where MV1 is located, B is the center position of the prediction block where MV2 is located, and C is the center position of the current sub-block; the positions A, B, and C are not bound to these particular physical meanings.
As shown in FIG. 3, triangle ABC and triangle A′B′C′ satisfy the following relationship:
ΔABC∽ΔA′B′C′
That is, in the similarity relationship between triangle ABC and triangle A′B′C′, A corresponds to A′, B corresponds to B′, and C corresponds to C′. The vector AA′ is the motion vector MV1, the vector BB′ is the motion vector MV2, and the motion vector MV3 to be derived is the vector CC′. Let θ1 be the angle between vector AB and vector A′B′, θ′1 the angle between vector AC and vector A′C′, θ2 the angle between vector BA and vector AC, and θ′2 the angle between vector B′A′ and vector A′C′. The similarity of triangle ABC and triangle A′B′C′ can then be represented by the following equations:
θ1 = θ′1,  θ2 = θ′2,  |A′B′| / |AB| = |A′C′| / |AC|
If (a1, b1) is the vector AB, (a2, b2) is the vector A′B′, (c1, d1) is the vector AC, (c2, d2) is the vector A′C′, and (e1, f1) is the vector CB, the similarity of triangle ABC and triangle A′B′C′ can be rewritten as:
a2·c1 − b2·d1 = a1·c2 − b1·d2
a2·d1 + b2·c1 = a1·d2 + b1·c2
The value of the C′ position (x′, y′) is derived from the above similarity relationship between triangle ABC and triangle A′B′C′:
x′ = x′1 + g(c1, d1)
y′ = y′1 + h(c1, d1)
α and β are obtained from the following formulas:
α = (a1·a2 + b1·b2) / (a1² + b1²)
β = (a1·b2 − a2·b1) / (a1² + b1²)
The functions g(k, l) and h(k, l) are obtained from the following equations:
g(k, l) = α·k − β·l
h(k, l) = β·k + α·l
where (x′1, y′1) are the coordinates of point A′ and (x′2, y′2) are the coordinates of point B′.
The value of the C′ position (x′, y′) can also be calculated by other equivalent or similar procedures.
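The formulas above can be illustrated with a short Python sketch; it is an informal rendering, not the normative text. It computes α and β from the vectors AB and A′B′, applies g and h to the vector AC to obtain C′, and also performs the MVX computation of step two described next. The tuple-based point representation is an assumption for the example.

```python
def derive_c_prime(A, B, C, MV1, MV2):
    """Derive C' from the similarity of triangles ABC and A'B'C'.

    A' = A + MV1 and B' = B + MV2; the rotation-and-scaling (alpha, beta)
    that maps AB onto A'B' is applied to AC via g and h.
    """
    Ap = (A[0] + MV1[0], A[1] + MV1[1])    # A' = (x'1, y'1)
    Bp = (B[0] + MV2[0], B[1] + MV2[1])    # B' = (x'2, y'2)
    a1, b1 = B[0] - A[0], B[1] - A[1]      # vector AB
    a2, b2 = Bp[0] - Ap[0], Bp[1] - Ap[1]  # vector A'B'
    c1, d1 = C[0] - A[0], C[1] - A[1]      # vector AC
    denom = a1 * a1 + b1 * b1
    if denom == 0:
        raise ValueError("A and B must be distinct points")
    alpha = (a1 * a2 + b1 * b2) / denom
    beta = (a1 * b2 - a2 * b1) / denom
    x_p = Ap[0] + alpha * c1 - beta * d1   # x' = x'1 + g(c1, d1)
    y_p = Ap[1] + beta * c1 + alpha * d1   # y' = y'1 + h(c1, d1)
    return x_p, y_p

def derive_mvx(C, c_prime):
    # Step two (described next): MVX is the vector CC'.
    return c_prime[0] - C[0], c_prime[1] - C[1]
```

As a quick check, a 90° rotation about A, with A = (0, 0), B = (2, 0), C = (1, 1), MV1 = (0, 0) and MV2 = (−2, 2), yields C′ = (−1, 1) and MVX = (−2, 0), consistent with rotating C about A.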
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 17
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 18
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 19
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 20
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 21
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 22
The motion vector derivation method provided in this example specifically includes:
Step one: using only the bitstream other than that of the current sub-block, judge whether to perform step two according to the rationality of the physical meanings of the motion vector MV1, the motion vector MV2, and the positions of point A and point B.
Specifically, the degree of parallelism between line segment AB and line segment A′B′ is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between line segment AB and line segment A′B′ is smaller than a given threshold γ, the check is judged to be reasonable.
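A hedged Python sketch of this parallelism check follows; the degree-valued threshold and the plain-tuple vector representation are assumptions for illustration.

```python
import math

def passes_rotation_check(AB, ApBp, gamma_deg):
    """Return True when the angle between AB and A'B' is below gamma_deg.

    AB and ApBp are (x, y) vectors; gamma_deg is the threshold in degrees.
    """
    dot = AB[0] * ApBp[0] + AB[1] * ApBp[1]
    norm = math.hypot(AB[0], AB[1]) * math.hypot(ApBp[0], ApBp[1])
    if norm == 0:
        return False  # a degenerate vector carries no direction information
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle)) < gamma_deg
```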
Step two: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step three: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 23
The motion vector derivation method provided in this example specifically includes:
Step one: using only the bitstream other than that of the current sub-block, judge whether to perform step two according to the rationality of the physical meanings of the motion vector MV1, the motion vector MV2, and the positions of point A and point B.
Specifically, the ratio of the lengths of line segment AB and line segment A′B′ is checked. Because this ratio characterizes the degree of scaling of the object, the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the length ratio of line segment AB to line segment A′B′ lies within a given range [1/δ, δ], with δ ≥ 1, the check is judged to be reasonable and step two is performed.
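A minimal sketch of the [1/δ, δ] test follows, assuming plain (x, y) tuples for the segments' vectors; it is an illustration, not the normative procedure.

```python
import math

def passes_scaling_check(AB, ApBp, delta):
    """Return True when |AB| / |A'B'| lies within [1/delta, delta], delta >= 1."""
    len_ab = math.hypot(AB[0], AB[1])
    len_apbp = math.hypot(ApBp[0], ApBp[1])
    if len_ab == 0 or len_apbp == 0:
        return False  # degenerate segments cannot be compared
    ratio = len_ab / len_apbp
    return 1.0 / delta <= ratio <= delta
```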
Step two: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step three: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 24
The motion vector derivation method provided in this example specifically includes:
Step one: obtain information from the bitstream of the current sub-block, and determine whether to perform step two according to the information obtained from that bitstream.
Specifically, one method of obtaining information from the bitstream of the current sub-block is to transmit a flag bit in that bitstream; when the decoded flag bit is 1, the operation of step two is performed.
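As an illustration only, the gating could look like the sketch below; read_flag is a hypothetical one-bit parser standing in for the actual entropy decoder, and derive_c_prime and derive_mvx are the helpers from the sketch following example 16.

```python
def maybe_derive_mvx(bitstream_reader, A, B, C, MV1, MV2):
    # read_flag() is a hypothetical one-bit read from the sub-block bitstream.
    if bitstream_reader.read_flag() != 1:
        return None  # steps two and three are skipped unless the flag is 1
    c_prime = derive_c_prime(A, B, C, MV1, MV2)  # step two
    return derive_mvx(C, c_prime)                # step three: MVX = CC'
```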
Step two: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step three: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 25
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV1 and the motion vector MV2 both come from the left decoded region, or both come from the upper decoded region.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are both obtained from the n inter prediction blocks of the left decoded region, or both from the m inter prediction blocks of the upper decoded region. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted.
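This candidate selection can be sketched as below; the first-two-distinct picking rule is an assumption for illustration, while the requirement MV1 ≠ MV2 and the restriction to a single region come from the text.

```python
def pick_two_mvs(region_mvs):
    """Pick two distinct motion vectors from one decoded region.

    region_mvs holds the motion vectors of the inter prediction blocks of
    either the left region (A1..An) or the upper region (B1..Bm).
    """
    for i, mv1 in enumerate(region_mvs):
        for mv2 in region_mvs[i + 1:]:
            if mv1 != mv2:      # the derivation requires MV1 != MV2
                return mv1, mv2
    return None                 # no valid pair within this region

def pick_mv_pair(left_mvs, upper_mvs):
    # Example 25: both vectors come from the left region, or both from the upper.
    return pick_two_mvs(left_mvs) or pick_two_mvs(upper_mvs)
```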
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 26
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV1 comes from the left decoded region and the motion vector MV2 comes from the upper decoded region.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV1 is obtained from the n inter prediction blocks of the left decoded region, and the motion vector MV2 from the m inter prediction blocks of the upper decoded region, thereby increasing the influence of the motion vector MV1 and the motion vector MV2 on the entire current sub-block. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted.
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 27
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV2 comes from the left decoded region and the motion vector MV1 comes from the upper decoded region.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV2 is obtained from the n inter prediction blocks of the left decoded region, and the motion vector MV1 from the m inter prediction blocks of the upper decoded region, thereby increasing the influence of the motion vector MV1 and the motion vector MV2 on the entire current sub-block. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted.
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 28
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV1 and the motion vector MV2 are obtained by searching, on a decoded reference picture, a peripheral decoded area outside the current decoding unit within the picture containing the current decoding unit.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter prediction blocks. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted. The central s×w area of the peripheral decoded block in which MV1 is located is taken and called its local area; the value of MV1 is derived by minimizing the SAD or SATD of this local area on the reference frame pointed to by MV1, using MV1 as the search starting point. Likewise, the central s×w area of the peripheral decoded block in which MV2 is located is taken as its local area, and the value of MV2 is derived by minimizing its SAD or SATD on the reference frame pointed to by MV1, using the temporal extension value of MV2 as the search starting point.
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 29
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV1 and the motion vector MV2 are obtained by searching, on a decoded reference picture, a peripheral decoded area outside the current decoding unit within the picture containing the current decoding unit.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter prediction blocks. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted. The central s×w area of the peripheral decoded block in which MV1 is located is taken and called its local area; the value of MV1 is derived by minimizing the SAD or SATD of this local area on the reference frame pointed to by MV2, using the temporal extension value of MV1 as the search starting point. Likewise, the central s×w area of the peripheral decoded block in which MV2 is located is taken as its local area, and the value of MV2 is derived by minimizing its SAD or SATD on the reference frame pointed to by MV2, using MV2 as the search starting point.
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 30
The motion vector derivation method provided in this example specifically includes:
Step one: according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2. Here A and B are points outside the current decoding unit; taking A as the start point of MV1, the end point of MV1 is determined to be A′; taking B as the start point of MV2, the end point of MV2 is determined to be B′; and C is the representative position of the current sub-block. The motion vector MV1 and the motion vector MV2 are obtained by searching, on a decoded reference picture, a peripheral decoded area outside the current decoding unit within the picture containing the current decoding unit.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter prediction blocks. The positions of A, B and C may take the positions used in examples 16, 17, 18, 19, 20 and 21, or other representative positions for which A and B lie outside the current decoding unit; the physical meanings of the A, B and C positions are not restricted. The central s×w area of the peripheral decoded block in which MV1 is located is taken and called its local area; the value of MV1 is derived by minimizing the SAD or SATD of this local area on a reference frame Ref of the current frame, using the temporal extension value of MV1 as the search starting point. Likewise, the central s×w area of the peripheral decoded block in which MV2 is located is taken as its local area, and the value of MV2 is derived by minimizing its SAD or SATD on the same reference frame Ref, using the temporal extension value of MV2 as the search starting point.
A further derivation is shown as step one of example 16.
Step two: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 31
The present example provides a motion vector deriving apparatus, as shown in fig. 5 (a):
Module S101, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current decoding unit; its output includes: the position coordinates of point C′;
the function of the module is as follows: according to the similarity between the triangle ABC and the triangle A 'B' C ', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2, wherein MV1 is not equal to MV2, of the peripheral decoded area of the current decoding unit, the end point of the motion vector MV1 is determined to be A 'by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV 2.
Specifically, one available derivation method: as shown in FIG. 2, triangle ABC and triangle A ' B ' C ' satisfy the following relationship:
ΔABC∽ΔA′B′C′
That is, in the similarity relationship between triangle ABC and triangle A′B′C′, A corresponds to A′, B corresponds to B′, and C corresponds to C′. The vector AA′ is the motion vector MV1, the vector BB′ is the motion vector MV2, and the motion vector MV3 to be derived is the vector CC′. Let θ1 be the angle between vector AB and vector A′B′, θ′1 the angle between vector AC and vector A′C′, θ2 the angle between vector BA and vector AC, and θ′2 the angle between vector B′A′ and vector A′C′. The similarity of triangle ABC and triangle A′B′C′ can then be represented by the following equations:
θ1 = θ′1,  θ2 = θ′2,  |A′B′| / |AB| = |A′C′| / |AC|
If (a1, b1) is the vector AB, (a2, b2) is the vector A′B′, (c1, d1) is the vector AC, (c2, d2) is the vector A′C′, and (e1, f1) is the vector CB, the similarity of triangle ABC and triangle A′B′C′ can be rewritten as:
a2·c1 − b2·d1 = a1·c2 − b1·d2
a2·d1 + b2·c1 = a1·d2 + b1·c2
The value of the C′ position (x′, y′) is derived from the above similarity relationship between triangle ABC and triangle A′B′C′:
x′ = x′1 + g(c1, d1)
y′ = y′1 + h(c1, d1)
α and β are obtained from the following formulas:
α = (a1·a2 + b1·b2) / (a1² + b1²)
β = (a1·b2 − a2·b1) / (a1² + b1²)
The functions g(k, l) and h(k, l) are obtained from the following equations:
g(k, l) = α·k − β·l
h(k, l) = β·k + α·l
where (x′1, y′1) are the coordinates of point A′ and (x′2, y′2) are the coordinates of point B′.
The value of the C′ position (x′, y′) can also be calculated by other equivalent or similar procedures.
Module S102, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
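Viewed as code, the apparatus is simply the two modules chained. The sketch below is illustrative only and reuses derive_c_prime from the sketch following example 16.

```python
def module_s101(MV1, MV2, A, B, C):
    # Module S101: derive the C' position coordinates from the similarity
    # of triangles ABC and A'B'C' (derive_c_prime as sketched for example 16).
    return derive_c_prime(A, B, C, MV1, MV2)

def module_s102(C, c_prime):
    # Module S102: output the vector CC' as the motion vector (predictor)
    # candidate MVX of the current decoding unit.
    return c_prime[0] - C[0], c_prime[1] - C[1]

def motion_vector_deriving_apparatus(MV1, MV2, A, B, C):
    return module_s102(C, module_s101(MV1, MV2, A, B, C))
```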
example 32
The present example provides a motion vector deriving apparatus, as shown in fig. 5 (b):
Module S201, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current sub-block; its output includes: the position coordinates of point C′;
the function of the module is as follows: according to the similarity between the triangle ABC and the triangle A 'B' C ', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2, wherein MV1 is not equal to MV2, of the peripheral decoded area of the current decoding unit, the end point of the motion vector MV1 is determined to be A 'by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV 2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S202, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 33
The present example provides a motion vector deriving apparatus, as shown in fig. 6 (a):
Module S301, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, and the position coordinates of point B outside the current decoding unit; its output includes: the rationality check result;
the function of the module is as follows: and judging the rationality according to the fact that the size of an included angle between the vector AB and the vector A 'B' does not exceed the range of theta (theta is less than or equal to 90 degrees).
Specifically, the degree of parallelism between line segment AB and line segment A′B′ is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between line segment AB and line segment A′B′ is smaller than a given threshold γ, the result is judged to be "reasonable".
Module S302, the inputs of which include: the rationality check result, the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current decoding unit; its output includes: the position coordinates of point C′;
The function of this module is: if the rationality check result is "reasonable", then according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2; taking A as the start point of MV1, the end point of MV1 is determined to be A′, and taking B as the start point of MV2, the end point of MV2 is determined to be B′.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S303, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 34
The present example provides a motion vector deriving apparatus, as shown in fig. 6 (a):
Module S301, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, and the position coordinates of point B outside the current decoding unit; its output includes: the rationality check result;
the function of the module is as follows: the rationality is judged according to the fact that the ratio of the length of the vector AB to the length of the vector A 'B' does not exceed the range of [ k, l ] (k is more than or equal to 0 and less than or equal to l and less than or equal to 10).
Specifically, the ratio of the lengths of line segment AB and line segment A′B′ is checked. Because this ratio characterizes the degree of scaling of the object, the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the length ratio of line segment AB to line segment A′B′ lies within a given range [1/δ, δ], with δ ≥ 1, the result is judged to be "reasonable".
Module S302, the inputs of which include: the rationality check result, the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current decoding unit; its output includes: the position coordinates of point C′;
The function of this module is: if the rationality check result is "reasonable", then according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2; taking A as the start point of MV1, the end point of MV1 is determined to be A′, and taking B as the start point of MV2, the end point of MV2 is determined to be B′.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S303, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 35
The present example provides a motion vector deriving apparatus, as shown in fig. 6 (b):
Module S401, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, and the position coordinates of point B outside the current decoding unit; its output includes: the rationality check result;
the function of the module is as follows: the rationality is judged according to the fact that the included angle between the vector AB and the vector A 'B' does not exceed the range of [0 degrees, 90 degrees ].
Specifically, the degree of parallelism between line segment AB and line segment A′B′ is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between line segment AB and line segment A′B′ is smaller than a given threshold γ, the result is judged to be "reasonable".
Module S402, the inputs of which include: the rationality check result, the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current sub-block; its output includes: the position coordinates of point C′;
The function of this module is: if the rationality check result is "reasonable", then according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2; taking A as the start point of MV1, the end point of MV1 is determined to be A′, and taking B as the start point of MV2, the end point of MV2 is determined to be B′.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S403, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 36
The present example provides a motion vector deriving apparatus, as shown in fig. 6 (b):
Module S401, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, and the position coordinates of point B outside the current decoding unit; its output includes: the rationality check result;
the function of the module is as follows: the rationality is judged according to the fact that the ratio of the length of the vector AB to the length of the vector A 'B' does not exceed the range of [ k, l ] (k is more than or equal to 0 and less than or equal to l and less than or equal to 10).
Specifically, the ratio of the lengths of line segment AB and line segment A′B′ is checked. Because this ratio characterizes the degree of scaling of the object, the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the length ratio of line segment AB to line segment A′B′ lies within a given range [1/δ, δ], with δ ≥ 1, the result is judged to be "reasonable".
Module S402, the inputs of which include: the rationality check result, the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current sub-block; its output includes: the position coordinates of point C′;
The function of this module is: if the rationality check result is "reasonable", then according to the similarity between triangle ABC and triangle A′B′C′, the position of the point C′ corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where MV1 is not equal to MV2; taking A as the start point of MV1, the end point of MV1 is determined to be A′, and taking B as the start point of MV2, the end point of MV2 is determined to be B′.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S403, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 37
The present example provides a motion vector deriving apparatus, as shown in fig. 7 (a):
Module S501, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; its output includes: the motion vector MV1 and the motion vector MV2;
the function of the module is as follows: motion vector MV1 is obtained from the left decoded region and motion vector MV2 is obtained from the upper decoded region or motion vector MV2 is obtained from the left decoded region and motion vector MV1 is obtained from the upper decoded region.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV1 is obtained from the n inter prediction blocks of the left decoded region, and the motion vector MV2 from the m inter prediction blocks of the upper decoded region, thereby increasing the influence of the motion vector MV1 and the motion vector MV2 on the entire current decoding unit.
Module S502, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current decoding unit; its output includes: the position coordinates of point C′;
the function of the module is as follows: according to the similarity between the triangle ABC and the triangle A 'B' C ', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2, wherein MV1 is not equal to MV2, of the peripheral decoded area of the current decoding unit, the end point of the motion vector MV1 is determined to be A 'by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV 2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S503, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 38
The present example provides a motion vector deriving apparatus, as shown in fig. 7 (b):
Module S601, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; its output includes: the motion vector MV1 and the motion vector MV2;
the function of the module is as follows: motion vector MV1 is obtained from the left decoded region and motion vector MV2 is obtained from the upper decoded region or motion vector MV2 is obtained from the left decoded region and motion vector MV1 is obtained from the upper decoded region.
Specifically, as shown in fig. 4, the peripheral decoded blocks of the current decoding unit belonging to the left decoded region are A1, A2, …, An, and those belonging to the upper decoded region are B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vector MV1 is obtained from the n inter prediction blocks of the left decoded region, and the motion vector MV2 from the m inter prediction blocks of the upper decoded region, thereby increasing the influence of the motion vector MV1 and the motion vector MV2 on the entire current decoding unit.
Module S602, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current sub-block; its output includes: the position coordinates of point C′;
the function of the module is as follows: according to the similarity between the triangle ABC and the triangle A 'B' C ', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2, wherein MV1 is not equal to MV2, of the peripheral decoded area of the current decoding unit, the end point of the motion vector MV1 is determined to be A 'by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV 2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S603, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 39
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (a):
Module S701, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; its output includes: the motion vector MV1 and the motion vector MV2;
the function of the module is as follows: and searching a peripheral decoded area outside the current decoding unit in the image in which the current decoding unit is positioned on the decoded reference image to obtain a motion vector MV1 and a motion vector MV 2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, …, An, B1, B2, …, Bm, where n and m are natural numbers greater than 0, and k of these blocks are inter prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from the k inter prediction blocks. The central s×w area of the peripheral decoded block in which MV1 is located is taken and called its local area; the value of MV1 is derived by minimizing the SAD or SATD of this local area on the reference frame pointed to by MV1, using MV1 as the search starting point. Likewise, the central s×w area of the peripheral decoded block in which MV2 is located is taken as its local area, and the value of MV2 is derived by minimizing its SAD or SATD on the reference frame pointed to by MV1, using the temporal extension value of MV2 as the search starting point.
Module S702, the inputs of which include: the motion vector MV1, the motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C, the representative position of the current decoding unit; its output includes: the position coordinates of point C′;
the function of the module is as follows: according to the similarity between the triangle ABC and the triangle A 'B' C ', the position of the corresponding point C' of C is derived by using the motion vector MV1 and the motion vector MV2, wherein MV1 is not equal to MV2, of the peripheral decoded area of the current decoding unit, the end point of the motion vector MV1 is determined to be A 'by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV 2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S703, the inputs of which include: the position coordinates of point C′; its output includes: the motion vector MVX;
the function of the module is as follows: and calculating to obtain a vector CC ' according to the position C and the position C ', and outputting the vector CC ' as a motion vector predicted value or a motion vector predicted value candidate or a motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x′, y′) of C′, the horizontal component MVXx and the vertical component MVXy of MVX are derived as follows:

MVXx = x′ − x
MVXy = y′ − y
example 40
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (a):
Module S701, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; the output of which comprises: motion vector MV1 and motion vector MV2;
The function of this module is as follows: a peripheral decoded area outside the current decoding unit, in the image where the current decoding unit is located, is searched on the decoded reference image to obtain the motion vector MV1 and the motion vector MV2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, ..., An, B1, B2, ..., Bm, where n and m are natural numbers greater than 0. Among them, k blocks are inter-prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from these k inter-prediction blocks. The central s×w area of the peripheral decoded block where MV1 is located is taken and called a local area; the value of the motion vector MV1 is derived by searching with the temporal extension value of MV1 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV2 is minimized. The central s×w area of the peripheral decoded block where MV2 is located is likewise taken as a local area; the value of the motion vector MV2 is derived by searching with MV2 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV2 is minimized.
Module S702, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current decoding unit; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S703, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 41
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (a):
Module S701, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; the output of which comprises: motion vector MV1 and motion vector MV2;
The function of this module is as follows: a peripheral decoded area outside the current decoding unit, in the image where the current decoding unit is located, is searched on the decoded reference image to obtain the motion vector MV1 and the motion vector MV2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, ..., An, B1, B2, ..., Bm, where n and m are natural numbers greater than 0. Among them, k blocks are inter-prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from these k inter-prediction blocks. The central s×w area of the peripheral decoded block where MV1 is located is taken and called a local area; the value of the motion vector MV1 is derived by searching with the temporal extension value of MV1 as the starting point, under the criterion that the SAD or SATD of this area on a certain reference frame Ref of the current frame is minimized. The central s×w area of the peripheral decoded block where MV2 is located is likewise taken as a local area; the value of the motion vector MV2 is derived by searching with the temporal extension value of MV2 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame Ref is minimized.
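The "temporal extension value" of a motion vector is not spelled out in this example. A common reading, assumed here, is HEVC-style temporal scaling: the vector is rescaled onto the target reference Ref in proportion to the picture-order-count (POC) distances.

    // Hypothetical temporal extension of a motion vector: rescale it from the
    // reference it originally points to onto the target reference Ref. The
    // HEVC-style POC-distance rule is an assumption, not taken from this example.
    struct Mv { int x, y; };

    Mv scaleMvTemporal(Mv mv, int pocCur, int pocRefOrig, int pocRefTarget) {
        int tb = pocCur - pocRefTarget;   // distance to the target reference Ref
        int td = pocCur - pocRefOrig;     // distance to the original reference
        if (td == 0) return mv;           // degenerate case: keep the vector
        Mv out;
        out.x = mv.x * tb / td;           // plain integer scaling for illustration;
        out.y = mv.y * tb / td;           // codecs use rounded fixed-point scaling
        return out;
    }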
Module S702, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current decoding unit; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S703, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 42
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (b):
Module S801, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; the output of which comprises: motion vector MV1 and motion vector MV2;
The function of this module is as follows: a peripheral decoded area outside the current decoding unit, in the image where the current decoding unit is located, is searched on the decoded reference image to obtain the motion vector MV1 and the motion vector MV2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, ..., An, B1, B2, ..., Bm, where n and m are natural numbers greater than 0. Among them, k blocks are inter-prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from these k inter-prediction blocks. The central s×w area of the peripheral decoded block where MV1 is located is taken and called a local area; the value of the motion vector MV1 is derived by searching with MV1 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV1 is minimized. The central s×w area of the peripheral decoded block where MV2 is located is likewise taken as a local area; the value of the motion vector MV2 is derived by searching with the temporal extension value of MV2 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV1 is minimized.
Module S802, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current sub-block; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S803, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 43
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (b):
Module S801, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; the output of which comprises: motion vector MV1 and motion vector MV2;
The function of this module is as follows: a peripheral decoded area outside the current decoding unit, in the image where the current decoding unit is located, is searched on the decoded reference image to obtain the motion vector MV1 and the motion vector MV2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, ..., An, B1, B2, ..., Bm, where n and m are natural numbers greater than 0. Among them, k blocks are inter-prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from these k inter-prediction blocks. The central s×w area of the peripheral decoded block where MV1 is located is taken and called a local area; the value of the motion vector MV1 is derived by searching with the temporal extension value of MV1 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV2 is minimized. The central s×w area of the peripheral decoded block where MV2 is located is likewise taken as a local area; the value of the motion vector MV2 is derived by searching with MV2 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame pointed to by MV2 is minimized.
Module S802, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current sub-block; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S803, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 44
The present example provides a motion vector deriving apparatus, as shown in fig. 8 (b):
Module S801, the inputs of which include: the motion vectors of decoded blocks around the current decoding unit; the output of which comprises: motion vector MV1 and motion vector MV2;
The function of this module is as follows: a peripheral decoded area outside the current decoding unit, in the image where the current decoding unit is located, is searched on the decoded reference image to obtain the motion vector MV1 and the motion vector MV2.
Specifically, as shown in fig. 1, the peripheral decoded blocks of the current decoding unit are A1, A2, ..., An, B1, B2, ..., Bm, where n and m are natural numbers greater than 0. Among them, k blocks are inter-prediction blocks, 0 ≤ k ≤ n + m. The motion vectors MV1 and MV2 are obtained from these k inter-prediction blocks. The central s×w area of the peripheral decoded block where MV1 is located is taken and called a local area; the value of the motion vector MV1 is derived by searching with the temporal extension value of MV1 as the starting point, under the criterion that the SAD or SATD of this area on a certain reference frame Ref of the current frame is minimized. The central s×w area of the peripheral decoded block where MV2 is located is likewise taken as a local area; the value of the motion vector MV2 is derived by searching with the temporal extension value of MV2 as the starting point, under the criterion that the SAD or SATD of this area on the reference frame Ref is minimized.
Module S802, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current sub-block; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S803, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 45
The motion vector derivation method provided in this example specifically includes:
Take as an example a current decoding unit divided into four sub-blocks (1, 2, 3, 4 denote the four sub-blocks, laid out as shown below):
1 2
3 4
Step one: according to the similarity between triangle ABC2 and triangle A'B'C2', the position of the point C2' corresponding to C2 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C2 is the representative position of the 2nd sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Likewise, according to the similarity between triangle ABC3 and triangle A'B'C3', the position of the point C3' corresponding to C3 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C3 is the representative position of the 3rd sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: the vector C2C2' is calculated from the position C2 and the position C2', and the vector C2C2' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX2 of the 2nd sub-block.
Specifically, from the coordinates (x2, y2) of C2 and the coordinates (x2', y2') of C2', the horizontal component MVX2x of MVX2 and the vertical component MVX2y of MVX2 are derived as follows:
MVX2x = x2' - x2, MVX2y = y2' - y2
The vector C3C3' is calculated from the position C3 and the position C3', and the vector C3C3' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX3 of the 3rd sub-block.
Specifically, from the coordinates (x3, y3) of C3 and the coordinates (x3', y3') of C3', the horizontal component MVX3x of MVX3 and the vertical component MVX3y of MVX3 are derived as follows:
MVX3x = x3' - x3, MVX3y = y3' - y3
example 46
The motion vector derivation method provided in this example specifically includes:
Take as an example a current decoding unit divided into four sub-blocks (1, 2, 3, 4 denote the four sub-blocks, laid out as shown below):
1 2
3 4
Step one: according to the similarity between triangle ABC1 and triangle A'B'C1', the position of the point C1' corresponding to C1 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C1 is the representative position of the 1st sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Likewise, according to the similarity between triangle ABC2 and triangle A'B'C2', the position of the point C2' corresponding to C2 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C2 is the representative position of the 2nd sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Likewise, according to the similarity between triangle ABC3 and triangle A'B'C3', the position of the point C3' corresponding to C3 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C3 is the representative position of the 3rd sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Likewise, according to the similarity between triangle ABC4 and triangle A'B'C4', the position of the point C4' corresponding to C4 is derived using the motion vector MV1 and the motion vector MV2 of the decoded area around the current decoding unit, where A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C4 is the representative position of the 4th sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step two: the vector C1C1 ' is calculated from the position C1 and the position C1 ', and the vector C1C1 ' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX1 of the 1 st sub-block.
Specifically, according to the coordinate (x) of C11,y1) And coordinates (x) of C11’,y1') the lateral component MVX1x of MVX1 and the longitudinal component MVX1y of MVX1 are derived by the following method:
Figure RE-GDA0001564865950000242
The vector C2C2' is calculated from the position C2 and the position C2', and the vector C2C2' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX2 of the 2nd sub-block.
Specifically, from the coordinates (x2, y2) of C2 and the coordinates (x2', y2') of C2', the horizontal component MVX2x of MVX2 and the vertical component MVX2y of MVX2 are derived as follows:
MVX2x = x2' - x2, MVX2y = y2' - y2
The vector C3C3' is calculated from the position C3 and the position C3', and the vector C3C3' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX3 of the 3rd sub-block.
Specifically, from the coordinates (x3, y3) of C3 and the coordinates (x3', y3') of C3', the horizontal component MVX3x of MVX3 and the vertical component MVX3y of MVX3 are derived as follows:
MVX3x = x3' - x3, MVX3y = y3' - y3
The vector C4C4' is calculated from the position C4 and the position C4', and the vector C4C4' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX4 of the 4th sub-block.
Specifically, from the coordinates (x4, y4) of C4 and the coordinates (x4', y4') of C4', the horizontal component MVX4x of MVX4 and the vertical component MVX4y of MVX4 are derived as follows:
MVX4x = x4' - x4, MVX4y = y4' - y4
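A minimal sketch of the per-sub-block loop these examples describe: the same A, B, MV1, and MV2 are reused while C runs over each sub-block's representative position. The similarity construction is the same as in the earlier sketch; the 16×16 unit, the sub-block centers, and all numeric values are illustrative assumptions.

    #include <complex>
    #include <cstdio>
    #include <vector>

    using Pt = std::complex<double>;

    // Similarity mapping A -> A', B -> B' applied to C (assumes A != B).
    static Pt deriveCprime(Pt A, Pt B, Pt C, Pt MV1, Pt MV2) {
        Pt alpha = ((B + MV2) - (A + MV1)) / (B - A);
        return alpha * C + (A + MV1) - alpha * A;
    }

    int main() {
        // Hypothetical 16x16 decoding unit at the origin, split into four 8x8
        // sub-blocks; Ci is each sub-block's center.
        Pt A(-1.0, -1.0), B(15.0, -1.0);       // points outside the unit
        Pt MV1(2.0, 1.0), MV2(3.0, 1.0);       // surrounding decoded MVs, MV1 != MV2
        std::vector<Pt> centers = { Pt(4, 4), Pt(12, 4), Pt(4, 12), Pt(12, 12) };
        for (std::size_t i = 0; i < centers.size(); ++i) {
            Pt mvx = deriveCprime(A, B, centers[i], MV1, MV2) - centers[i];
            std::printf("sub-block %zu: MVX = (%.2f, %.2f)\n",
                        i + 1, mvx.real(), mvx.imag());
        }
        return 0;
    }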
example 47
The present example provides a motion vector deriving apparatus, as shown in fig. 5 (b):
Take as an example a current decoding unit divided into four sub-blocks (1, 2, 3, 4 denote the four sub-blocks, laid out as shown below):
The following modules are looped twice: with the same MV1 and MV2 as inputs but with the different point-C positions of the 1st and 4th sub-blocks, the respective motion vectors of the 1st sub-block and the 4th sub-block are obtained.
1 2
3 4
Module S201, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current sub-block; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S202, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 48
The present example provides a motion vector deriving apparatus, as shown in fig. 5 (b):
Take as an example a current decoding unit divided into four sub-blocks (1, 2, 3, 4 denote the four sub-blocks, laid out as shown below):
The following module is looped four times: with the same MV1 and MV2 as inputs but with the different point-C positions of the four sub-blocks, the respective motion vectors of all sub-blocks are obtained.
1 2
3 4
Module S201, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C at the representative position of the current sub-block; the output of which comprises: the position coordinates of point C';
The function of this module is as follows: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S202, the inputs of which include: the position coordinates of point C'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 49
The motion vector derivation method provided in this example specifically includes:
Step one: judge whether to perform step two according to whether the included angle between the vector AB and the vector A'B' does not exceed θ (θ ≤ 90°).
Specifically, the degree of parallelism between the line segment AB and the line segment A'B' is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between the line segment AB and the line segment A'B' is smaller than a given threshold γ, the check is judged reasonable. The value of γ varies with the size of the current decoding unit: the larger the current decoding unit, the smaller γ.
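A sketch of such a parallelism check, measuring the angle through the normalized dot product. The γ schedule below is an assumption: this example only requires that γ shrink as the current decoding unit grows.

    #include <algorithm>
    #include <cmath>

    // Returns true when the angle between AB = (abx, aby) and A'B' = (apbx, apby)
    // is below gamma(unitSize); the concrete gamma values are illustrative.
    bool parallelismCheck(double abx, double aby, double apbx, double apby,
                          int unitSize) {
        double n1 = std::hypot(abx, aby), n2 = std::hypot(apbx, apby);
        if (n1 == 0.0 || n2 == 0.0) return false;          // degenerate segment
        double c = (abx * apbx + aby * apby) / (n1 * n2);  // cosine of the angle
        c = std::max(-1.0, std::min(1.0, c));              // clamp for acos
        double gammaDeg = unitSize >= 64 ? 10.0            // larger unit,
                        : unitSize >= 32 ? 20.0            // smaller gamma
                        : 30.0;
        const double pi = std::acos(-1.0);
        return std::acos(c) <= gammaDeg * pi / 180.0;
    }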
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of example 1.
Step three: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 50
The motion vector derivation method provided in this example specifically includes:
Step one: judge whether to perform step two according to whether the ratio of the length of the vector AB to the length of the vector A'B' falls within the range [k, l] (0 ≤ k ≤ l ≤ 10).
Specifically, the ratio of the lengths of the line segment AB and the line segment A'B' is checked. Because this length ratio characterizes the degree of scaling of the object, the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the ratio of the lengths of the line segment AB and the line segment A'B' lies within a given range [1/δ, δ], δ ≥ 1, the check is judged reasonable and step two is performed. The value of δ varies with the size of the current decoding unit: the larger the current decoding unit, the smaller δ.
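A sketch of the corresponding length-ratio check. The δ values below are assumptions; this example only requires δ ≥ 1, with δ shrinking as the current decoding unit grows.

    // Returns true when |A'B'| / |AB| lies within [1/delta, delta]; the concrete
    // delta schedule is illustrative.
    bool lengthRatioCheck(double abLen, double apbLen, int unitSize) {
        if (abLen <= 0.0 || apbLen <= 0.0) return false;   // degenerate segment
        double delta = unitSize >= 64 ? 1.5
                     : unitSize >= 32 ? 2.0
                     : 4.0;                                // larger unit, smaller delta
        double ratio = apbLen / abLen;
        return ratio >= 1.0 / delta && ratio <= delta;
    }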
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C is the representative position of the current decoding unit.
Specifically, one useful derivation method is shown as step one of example 1.
Step three: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current decoding unit.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 51
The motion vector derivation method provided in this example specifically includes:
Step one: using only the bitstream other than that of the current sub-block, judge whether to perform step two according to whether the motion vector MV1, the motion vector MV2, and the positions of point A and point B are physically reasonable.
Specifically, the degree of parallelism between the line segment AB and the line segment A'B' is checked. This parallelism characterizes the degree of rotation of the object, and the block-level affine model is no longer applicable once the rotation between the reference frame and the current frame exceeds a certain degree. Therefore, when the included angle between the line segment AB and the line segment A'B' is smaller than a given threshold γ, the check is judged reasonable. The value of γ varies with the size of the current sub-block: the larger the current sub-block, the smaller γ.
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step three: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 52
The motion vector derivation method provided in this example specifically includes:
Step one: using only the bitstream other than that of the current sub-block, judge whether to perform step two according to whether the motion vector MV1, the motion vector MV2, and the positions of point A and point B are physically reasonable.
Specifically, the ratio of the lengths of the line segment AB and the line segment A'B' is checked. Because this length ratio characterizes the degree of scaling of the object, the block-level affine model is no longer applicable once the scaling between the reference frame and the current frame exceeds a certain degree. Therefore, when the ratio of the lengths of the line segment AB and the line segment A'B' lies within a given range [1/δ, δ], δ ≥ 1, the check is judged reasonable and step two is performed. The value of δ varies with the size of the current sub-block: the larger the current sub-block, the smaller δ.
Step two: according to the similarity between triangle ABC and triangle A'B'C', the position of the point C' corresponding to C is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2, A and B are points outside the current decoding unit; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2, and C is the representative position of the current sub-block.
Specifically, one useful derivation method is shown as step one of example 16.
Step three: the vector CC' is calculated from the position C and the position C', and the vector CC' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the current sub-block.
Specifically, from the coordinates (x, y) of C and the coordinates (x', y') of C', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 53
The present example provides a motion vector derivation apparatus, as shown in fig. 9:
Take as an example a current decoding unit divided into four sub-blocks (1, 2, 3, 4 denote the four sub-blocks, laid out as shown below):
1 2
3 4
Module S901, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C1 at the representative position of the 1st sub-block; the output of which comprises: the position coordinates of point C1';
The function of this module is as follows: according to the similarity between triangle ABC1 and triangle A'B'C1', the position of the point C1' corresponding to C1 is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S902, the inputs of which include: the position coordinates of point C1'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector C1C1' is calculated from the position C1 and the position C1', and the vector C1C1' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the 1st sub-block or the current decoding unit.
Specifically, from the coordinates (x, y) of C1 and the coordinates (x', y') of C1', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
Module S903, the inputs of which include: motion vector MV1, motion vector MV2, the position coordinates of point A outside the current decoding unit, the position coordinates of point B outside the current decoding unit, and the position coordinates of point C2 at the representative position of the 2nd sub-block; the output of which comprises: the position coordinates of point C2';
The function of this module is as follows: according to the similarity between triangle ABC2 and triangle A'B'C2', the position of the point C2' corresponding to C2 is derived using the motion vector MV1 and the motion vector MV2 of the peripheral decoded area of the current decoding unit, where MV1 is not equal to MV2; the end point of the motion vector MV1 is determined to be A' by taking A as the starting point of the motion vector MV1, and the end point of the motion vector MV2 is determined to be B' by taking B as the starting point of the motion vector MV2.
Specifically, one possible derivation method is shown as block S101 in example 31.
Module S904, the inputs of which include: the position coordinates of point C2'; the output of which comprises: the motion vector MVX;
The function of this module is as follows: the vector C2C2' is calculated from the position C2 and the position C2', and the vector C2C2' is output as the motion vector predictor or motion vector predictor candidate or motion vector candidate MVX of the 2nd sub-block.
Specifically, from the coordinates (x, y) of C2 and the coordinates (x', y') of C2', the horizontal component MVXx of MVX and the vertical component MVXy of MVX are derived as follows:
MVXx = x' - x, MVXy = y' - y
example 54
The motion vector derivation method provided in this example specifically includes:
For the case where the current decoding unit or the current sub-block is a bi-directional inter-prediction block, the motion vector derivation for each prediction direction can be performed in the same manner as in all the preceding examples.
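A sketch of the bi-directional case: the same derivation simply runs once per prediction direction, each with its own MV1/MV2 pair. The list-0/list-1 naming follows common codec convention and is an assumption, as is reusing the similarity construction from the earlier sketches.

    #include <complex>

    using Pt = std::complex<double>;

    // MVX = C' - C under the similarity mapping A -> A + mv1, B -> B + mv2
    // (same construction as the earlier sketches; assumes A != B).
    static Pt mvxFor(Pt A, Pt B, Pt C, Pt mv1, Pt mv2) {
        Pt alpha = ((B + mv2) - (A + mv1)) / (B - A);
        return alpha * C + (A + mv1) - alpha * A - C;
    }

    struct BiMvx { Pt list0, list1; };

    // One derivation per prediction direction of a bi-predicted unit.
    BiMvx deriveBi(Pt A, Pt B, Pt C,
                   Pt mv1L0, Pt mv2L0,     // surrounding MVs toward list 0
                   Pt mv1L1, Pt mv2L1) {   // surrounding MVs toward list 1
        return { mvxFor(A, B, C, mv1L0, mv2L0),
                 mvxFor(A, B, C, mv1L1, mv2L1) };
    }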

Claims (8)

1. A motion vector derivation method, comprising:
obtaining a motion vector MV1 and a motion vector MV2 of a peripheral decoded region of the current decoding unit, wherein MV1 is not equal to MV2; determining the position of the end point A' of the motion vector MV1 from the start point A of the motion vector MV1 and the motion vector MV1, where A is a point outside the current decoding unit, and determining the position of the end point B' of the motion vector MV2 from the start point B of the motion vector MV2 and the motion vector MV2, where B is a point outside the current decoding unit;
performing a rationality test according to the following criteria:
the ratio of the length of the vector AB to the length of the vector A'B' does not exceed the range [k, l], where 0 ≤ k ≤ l ≤ 10;
the included angle between the vector AB and the vector A'B' does not exceed θ, where θ ≤ 90°;
if at least one of the above criteria is met, then
for the representative position C of the current decoding unit or of a sub-block inside the current decoding unit, deriving the position of the point C' corresponding to C according to the similarity between triangle ABC and triangle A'B'C';
and calculating a vector CC' from the position C and the position C', and outputting the vector CC' as a motion vector predictor or motion vector predictor candidate or motion vector candidate of the current decoding unit or of a sub-block inside the current decoding unit.
2. The motion vector derivation method according to claim 1, comprising one of:
obtaining said motion vector MV1 from the left decoded area of the current decoding unit and said motion vector MV2 from the top decoded area of the current decoding unit;
the motion vector MV2 is obtained from the left decoded area of the current decoding unit and the motion vector MV1 is obtained from the top decoded area of the current decoding unit.
3. The motion vector derivation method according to claim 1, further comprising: searching the decoded reference image, for a peripheral decoded area outside the current decoding unit in the image where the current decoding unit is located, to obtain the motion vector MV1 and the motion vector MV2.
4. The motion vector derivation method according to claim 1, wherein: the number of sub-blocks inside the current decoding unit is at least two, the representative position of each sub-block inside the current decoding unit is independent, and the vector CC' calculated from the position C and the position C' of each sub-block inside the current decoding unit is likewise independent.
5. A motion vector deriving device characterized by comprising:
a rationality testing module:
obtaining a motion vector MV1 and a motion vector MV2 of a peripheral decoded region of the current decoding unit, wherein MV1 is not equal to MV2; determining the position of the end point A' of the motion vector MV1 from the start point A of the motion vector MV1 and the motion vector MV1, where A is a point outside the current decoding unit, and determining the position of the end point B' of the motion vector MV2 from the start point B of the motion vector MV2 and the motion vector MV2, where B is a point outside the current decoding unit;
the ratio of the length of the vector AB to the length of the vector A'B' does not exceed the range [k, l], where 0 ≤ k ≤ l ≤ 10;
the included angle between the vector AB and the vector A'B' does not exceed θ, where θ ≤ 90°;
if at least one of the above criteria is met, the output rationality test result is reasonable;
a position derivation module:
if the rationality test result is reasonable, for the representative position C of the current decoding unit or of a sub-block inside the current decoding unit, deriving the position of the point C' corresponding to C according to the similarity between triangle ABC and triangle A'B'C';
a motion vector generation module:
and calculating a vector CC' from the position C and the position C', and outputting the vector CC' as a motion vector predictor or motion vector predictor candidate or motion vector candidate of the current decoding unit or of a sub-block inside the current decoding unit.
6. The motion vector derivation apparatus as claimed in claim 5, further comprising a motion vector acquisition module, said motion vector acquisition module comprising one of:
obtaining said motion vector MV1 from the left decoded area of the current decoding unit and said motion vector MV2 from the top decoded area of the current decoding unit;
the motion vector MV2 is obtained from the left decoded area of the current decoding unit and the motion vector MV1 is obtained from the top decoded area of the current decoding unit.
7. The motion vector derivation apparatus according to claim 5, further comprising: a motion vector search derivation module for searching the decoded reference image, for a peripheral decoded area outside the current decoding unit in the image where the current decoding unit is located, to obtain the motion vector MV1 and the motion vector MV2.
8. The motion vector deriving device according to claim 5, wherein the number of sub-blocks inside the current decoding unit is at least two, the representative position of each sub-block inside the current decoding unit is independent, and the vector CC' calculated from the position C and the position C' of each sub-block inside the current decoding unit is likewise independent.
CN201710834419.0A 2017-09-15 2017-09-15 Motion vector deriving method and device Active CN109510991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710834419.0A CN109510991B (en) 2017-09-15 2017-09-15 Motion vector deriving method and device


Publications (2)

Publication Number Publication Date
CN109510991A CN109510991A (en) 2019-03-22
CN109510991B (en) 2021-02-19

Family

ID=65745089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710834419.0A Active CN109510991B (en) 2017-09-15 2017-09-15 Motion vector deriving method and device

Country Status (1)

Country Link
CN (1) CN109510991B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113055683B (en) * 2019-06-24 2022-11-01 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN113709486B (en) * 2019-09-06 2022-12-23 杭州海康威视数字技术股份有限公司 Encoding and decoding method, device and equipment
CN110691253B (en) * 2019-10-17 2022-03-01 北京大学深圳研究生院 Encoding and decoding method and device based on inter-frame prediction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187838A (en) * 2011-05-31 2015-12-23 Jvc建伍株式会社 Image decoding device, moving image decoding method, reception device and reception method
US9131239B2 (en) * 2011-06-20 2015-09-08 Qualcomm Incorporated Unified merge mode and adaptive motion vector prediction mode candidates selection
CN104935938B (en) * 2015-07-15 2018-03-30 哈尔滨工业大学 Inter-frame prediction method in a kind of hybrid video coding standard
CN108600749B (en) * 2015-08-29 2021-12-28 华为技术有限公司 Image prediction method and device
US20190028731A1 (en) * 2016-01-07 2019-01-24 Mediatek Inc. Method and apparatus for affine inter prediction for video coding system

Also Published As

Publication number Publication date
CN109510991A (en) 2019-03-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant