CN105681809B - Motion compensation method for dual forward prediction units - Google Patents

Motion compensation method for dual forward prediction units

Info

Publication number
CN105681809B
CN105681809B (application CN201610091950.9A)
Authority
CN
China
Prior art keywords
forward prediction
pixel
image block
pixel point
prediction image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610091950.9A
Other languages
Chinese (zh)
Other versions
CN105681809A (en)
Inventor
马思伟
赵磊
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201610091950.9A priority Critical patent/CN105681809B/en
Publication of CN105681809A publication Critical patent/CN105681809A/en
Application granted granted Critical
Publication of CN105681809B publication Critical patent/CN105681809B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/573 Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/615 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the invention provides a motion compensation method for dual forward prediction units. The method mainly includes: obtaining two initial forward-predicted image blocks for predicting a current image block, and selecting pixel points on the two initial forward-predicted image blocks; calculating the x- and y-direction derivatives of the pixel points on the forward-predicted image blocks using a gradient calculation formula, and calculating the offset values of those pixel points through a training window; and adjusting the positions of the pixel points on the forward-predicted image blocks according to their x- and y-direction derivatives and offset values. The proposed motion compensation method for dual forward prediction units further improves the prediction of the predicted image block without increasing the bit rate and improves its accuracy, thereby improving the prediction quality of the current image block and the coding efficiency of dual forward prediction units.

Description

Motion compensation method for a dual forward prediction unit
Technical Field
The present invention relates to the field of video coding technologies, and in particular, to a motion compensation method for a dual forward prediction unit.
Background
With the widespread use of multimedia technology and the rapid growth of multimedia data, video coding technology is becoming ever more important. Modern coding techniques employ a hybrid coding framework comprising prediction, transform, quantization, and entropy coding. Predictive coding includes intra-frame prediction, in which the image block currently being encoded is predicted from already encoded and reconstructed image blocks in the same frame, and inter-frame prediction, in which the current image block is predicted from already encoded and reconstructed images of other frames. Inter-frame predictive coding exploits the temporal correlation of video sequences to remove temporal redundancy and is a very important link in the current video coding framework.
In the recent video coding standard HEVC, a dual forward motion compensation method is introduced: in the Low-delay configuration, when a PU (prediction unit) is predicted, the encoder searches for two predicted image blocks and uses the weighted value of the two as the predicted value of the current PU. In the prior art, there is no method for performing pixel-level fine adjustment on the predicted image blocks to further improve prediction quality.
Disclosure of Invention
The embodiments of the present invention provide a motion compensation method for a dual forward prediction unit, so as to improve the accuracy of the predicted image block.
In order to achieve the purpose, the invention adopts the following technical scheme.
A method of motion compensation for a bi-forward prediction unit, comprising:
acquiring two forward prediction image blocks for predicting a current image block, and selecting pixel points on the two forward prediction image blocks;
calculating derivatives in the x direction and the y direction of pixel points on the forward prediction image block by using a gradient calculation formula, and calculating the offset value of the pixel points on the forward prediction image block through a training window;
and adjusting the positions of the pixel points on the forward prediction image blocks according to derivatives in the x direction and the y direction of the pixel points on the forward prediction image blocks and the offset value.
Further, the obtaining two forward prediction image blocks for predicting the current image block and selecting pixel points on the two forward prediction image blocks includes:
selecting pixel points p1[i,j] and p0[i,j] on the two forward prediction image blocks; letting the adjusted optimal predicted pixel point of p0[i,j] be p'0[i,j], whose position offset relative to p0[i,j] is (vx, vy); and letting the adjusted optimal predicted pixel point of p1[i,j] be p'1[i,j], whose position offset relative to p1[i,j] is (-vx, -vy);
According to the first-order Taylor expansion, the estimated values of p'0[i,j] and p'1[i,j] are calculated as follows:
p'0[i,j] ≈ p0[i,j] + vx·Ix0 + vy·Iy0
p'1[i,j] ≈ p1[i,j] - vx·Ix1 - vy·Iy1
where Ix0, Iy0 denote the x- and y-direction derivatives of pixel point p0[i,j], and Ix1, Iy1 denote the x- and y-direction derivatives of pixel point p1[i,j].
Further, the calculating the derivatives in the x direction and the y direction of the pixel points on the forward prediction image block by using the gradient calculation formula includes:
Ix0, Ix1, Iy0, Iy1 are calculated by the following gradient formulas:
Ix0 = (p0[i+Δ, j] - p0[i-Δ, j]) / 2
Ix1 = (p1[i+Δ, j] - p1[i-Δ, j]) / 2
Iy0 = (p0[i, j+Δ] - p0[i, j-Δ]) / 2
Iy1 = (p1[i, j+Δ] - p1[i, j-Δ]) / 2
where Δ denotes a predetermined sub-pixel interpolation precision, and the values p0[i+Δ, j], p0[i-Δ, j], p1[i+Δ, j], p1[i-Δ, j], p0[i, j+Δ], p0[i, j-Δ], p1[i, j+Δ], p1[i, j-Δ] are each obtained by interpolation with a DCT interpolation filter.
Further, the calculating the offset value of the pixel point on the forward prediction image block through the training window includes:
opening a training window in a neighborhood around each of the pixel points p1[i,j] and p0[i,j], and solving for the offset values vx, vy by the least squares method;
The windowed training computation is specifically:
Δ[i,j] = p'0[i,j] - p'1[i,j]
       = (p0[i,j] + vx[i,j]·Ix0[i,j] + vy[i,j]·Iy0[i,j]) - (p1[i,j] - vx[i,j]·Ix1[i,j] - vy[i,j]·Iy1[i,j])
Minimizing the sum of squared Δ[i,j] over the training window yields the optimal offset values vx = det1/det and vy = det2/det, where
det1 = s3·s5 - s2·s6, det2 = s1·s6 - s3·s4, det = s1·s5 - s2·s4,
s1 through s6 are the corresponding window sums of products of the gradient and pixel-difference terms, and Ω denotes the training window region selected for the least-squares operation.
Further, the adjusting the positions of the pixels on the forward prediction image block according to the derivatives in the x direction and the y direction of the pixels on the forward prediction image block and the offset value includes:
taking 2×2 pixel points as a unit, taking the mean of the position offsets of the four neighboring pixel points as the overall offset, and mean-filtering the position offset values of the 4 neighboring pixel points, where the mean filtering formula is:
vx_average = (vx1 + vx2 + vx3 + vx4) / 4
vy_average = (vy1 + vy2 + vy3 + vy4) / 4
where (vx1, vy1), (vx2, vy2), (vx3, vy3), (vx4, vy4) are the position offset values of the four neighboring pixel points.
According to the first-order Taylor expansion, after the position offsets are adjusted, the two initial pixel points p1[i,j] and p0[i,j] are adjusted as follows:
p'0[i,j] ≈ p0[i,j] + vx_average·Ix0 + vy_average·Iy0
p'1[i,j] ≈ p1[i,j] - vx_average·Ix1 - vy_average·Iy1
further, the method further comprises the following steps:
and adjusting the final predicted value of the corresponding pixel point on the current image block as follows:
Ppre[i,j] = (p'1[i,j] + p'0[i,j]) / 2
and, after encoding of the current image block using the motion compensation method for the dual forward prediction unit is completed, determining according to the RD cost whether the method is adopted.
As can be seen from the technical solutions above, the motion compensation method for a dual forward prediction unit provided by the embodiments of the present invention uses Taylor expansion and a high-precision gradient calculation to achieve pixel-level fine-tuning of the predicted values on top of the original prediction block. This further improves the prediction of the predicted image block without increasing the bit rate and improves its accuracy, thereby improving the prediction quality of the current image block and the coding efficiency of the dual forward prediction unit.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic diagram illustrating an implementation principle of a motion compensation method for dual forward PUs according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a motion compensation method for dual forward PUs according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating filtering performed by the position deviation values of 4 adjacent pixels according to the embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the convenience of understanding the embodiments of the present invention, the following description will be further explained by taking several specific embodiments as examples in conjunction with the drawings, and the embodiments are not to be construed as limiting the embodiments of the present invention.
The technical problem to be solved by the embodiment of the invention is to perform pixel-level fine adjustment on a dual forward prediction image block so as to further improve the prediction quality. Through Taylor expansion and a high-precision gradient calculation process, the embodiment of the invention further realizes the fine adjustment of the predicted value at the pixel level on the basis of the predicted block of the original image, further improves the prediction effect of the predicted block on the basis of not increasing the code rate, and improves the coding efficiency of the double-forward prediction unit.
A schematic diagram of an implementation principle of a motion compensation method for a dual forward PU according to an embodiment of the present invention is shown in fig. 1, and a specific processing flow is shown in fig. 2, where the implementation principle includes the following processing steps:
step S210, calculating an optimal prediction pixel point p 'according to a Taylor first-order expansion formula'1[i,j]And p'0[i,j]An estimate of (d).
For a dual forward PU, two initial forward prediction image blocks for predicting the current image block are obtained by searching with a block-based motion estimation algorithm; both blocks precede the current image block on the time axis.
For the pixel points p1[i,j] and p0[i,j] at corresponding positions on the two initial forward prediction image blocks, suppose the adjusted optimal predicted pixel point of p0[i,j] to be solved is p'0[i,j], located near p0[i,j] with a position offset (vx, vy) relative to the initial position p0[i,j]; and the adjusted optimal predicted pixel point of p1[i,j] to be solved is p'1[i,j], located near p1[i,j] with a position offset (-vx, -vy) relative to the initial position p1[i,j].
According to the first-order Taylor expansion, the estimates of p'0[i,j] and p'1[i,j] are:
p'0[i,j] ≈ p0[i,j] + vx·Ix0 + vy·Iy0
p'1[i,j] ≈ p1[i,j] - vx·Ix1 - vy·Iy1
where Ix0, Iy0 denote the x- and y-direction derivatives of pixel point p0[i,j], and Ix1, Iy1 denote the x- and y-direction derivatives of pixel point p1[i,j].
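As an illustrative sketch (not part of the original disclosure), the first-order Taylor adjustment above can be written directly in code; the pixel, gradient, and offset values used in the example are made up:

```python
import numpy as np

def taylor_adjust(p0, p1, Ix0, Iy0, Ix1, Iy1, vx, vy):
    """First-order Taylor adjustment of the two forward predictions:
    p'0 = p0 + vx*Ix0 + vy*Iy0   (offset (vx, vy))
    p'1 = p1 - vx*Ix1 - vy*Iy1   (offset (-vx, -vy))
    Arguments may be scalars or same-shaped numpy arrays."""
    p0_adj = p0 + vx * Ix0 + vy * Iy0
    p1_adj = p1 - vx * Ix1 - vy * Iy1
    return p0_adj, p1_adj

# Illustrative values: p0=100, p1=104, gradients (2, -1) on both blocks,
# offset (vx, vy) = (0.5, 0.25).
p0_adj, p1_adj = taylor_adjust(100.0, 104.0, 2.0, -1.0, 2.0, -1.0, 0.5, 0.25)
print(p0_adj, p1_adj)  # 100.75 103.25
```

Note that the two adjusted values (100.75 and 103.25) are closer together than the initial predictions (100 and 104), which is exactly what the training-window criterion in step S230 optimizes for.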
Step S220: calculate the derivative values Ix0, Ix1, Iy0, Iy1 by the gradient formulas.
To compute p'0[i,j] and p'1[i,j], it is necessary to obtain Ix0, Ix1, Iy0, Iy1 as well as vx, vy. To obtain high-precision gradient values, the embodiment of the invention uses a high-precision sub-pixel interpolation scheme and calculates Ix0, Ix1, Iy0, Iy1 by the following gradient formulas:
Ix0 = (p0[i+Δ, j] - p0[i-Δ, j]) / 2
Ix1 = (p1[i+Δ, j] - p1[i-Δ, j]) / 2
Iy0 = (p0[i, j+Δ] - p0[i, j-Δ]) / 2
Iy1 = (p1[i, j+Δ] - p1[i, j-Δ]) / 2
where Δ denotes a predetermined sub-pixel interpolation precision, and the values p0[i+Δ, j], p0[i-Δ, j], p1[i+Δ, j], p1[i-Δ, j], p0[i, j+Δ], p0[i, j-Δ], p1[i, j+Δ], p1[i, j-Δ] can be obtained by interpolation with a Discrete Cosine Transform (DCT) interpolation filter.
Pixel values at 1/12-pixel positions are obtained by image interpolation. This scheme uses an 8-tap interpolation filter; the gradient value at the position of an original predicted pixel point is computed from that position, with the following interpolation filter coefficients:
Integer-pixel position: {-8, 19, -40, 0, 43, -20, 12, -6}
1/4-pixel position: {-8, 12, -16, -32, 56, -16, 12, -8}
1/2-pixel position: {0, 4, 8, -52, 52, -8, -4, 0}
3/4-pixel position: {8, -12, 16, -56, 32, 16, -12, 8}
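As an illustrative sketch (not part of the original disclosure), the integer-position taps can be applied as a 1-D gradient filter. The tap alignment and any normalization factor are assumptions here, since the text does not specify them; however, because the taps sum to zero, the response to a linear ramp is proportional to the slope regardless of alignment:

```python
import numpy as np

# Integer-position gradient taps quoted in the text.
TAPS = [-8, 19, -40, 0, 43, -20, 12, -6]

def gradient_filter_1d(signal, i, taps=TAPS):
    """Apply the 8-tap gradient filter around sample i.
    Alignment (offsets -3..+4) and lack of normalization are assumptions."""
    window = signal[i - 3 : i + 5]
    return float(np.dot(taps, window))

ramp = np.arange(32, dtype=float) * 2.0  # linear ramp with slope 2
g = gradient_filter_1d(ramp, 10)
print(g)  # 82.0 — slope 2 times the filter's unit-slope response of 41
```

Since the taps sum to zero, shifting the assumed alignment changes nothing for a ramp: the output stays slope-proportional, which is the defining property of a derivative filter.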
Step S230: calculate the offset values vx, vy through a training window.
The smaller the difference between the adjusted predicted values p'1[i,j] and p'0[i,j], the better. On this basis, a training window is opened in a neighborhood (e.g., a 5×5 range) around each of the pixel points p1[i,j] and p0[i,j], and the offset values vx, vy are solved by the least squares method.
The windowed training computation is specifically:
Δ[i,j] = p'0[i,j] - p'1[i,j]
       = (p0[i,j] + vx[i,j]·Ix0[i,j] + vy[i,j]·Iy0[i,j]) - (p1[i,j] - vx[i,j]·Ix1[i,j] - vy[i,j]·Iy1[i,j])
Minimizing the sum of squared Δ[i,j] over the training window yields the optimal offset values vx = det1/det and vy = det2/det; using the overall optimal offset of the pixels within the window as the optimal offset of the current pixel gives a more robust result. Here,
det1 = s3·s5 - s2·s6, det2 = s1·s6 - s3·s4, det = s1·s5 - s2·s4,
where s1 through s6 are the corresponding window sums of products of the gradient and pixel-difference terms, and Ω denotes the training window region selected for the least-squares operation.
In practical application, the position offset of p'1[i,j] relative to p1[i,j] can also be expressed as (scale·vx, scale·vy), where scale denotes the ratio of the distance between the reference frame of p1[i,j] and the current frame to the distance between the reference frame of p0[i,j] and the current frame.
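As an illustrative sketch (not part of the original disclosure), the least-squares solve over a training window can be written as follows. The exact definitions of s1..s6 were given as images in the original and are reconstructed here as an assumption, chosen so that the solution matches the det1/det2/det formulas above via Cramer's rule:

```python
import numpy as np

def solve_offset(p0, p1, Ix0, Iy0, Ix1, Iy1):
    """Least-squares (vx, vy) minimizing the sum over the window of
    Delta^2, Delta = (p0 + vx*Ix0 + vy*Iy0) - (p1 - vx*Ix1 - vy*Iy1).
    The s1..s6 accumulations are an assumed reconstruction consistent
    with det1 = s3*s5 - s2*s6, det2 = s1*s6 - s3*s4, det = s1*s5 - s2*s4."""
    Gx = (Ix0 + Ix1).ravel()
    Gy = (Iy0 + Iy1).ravel()
    d = (p0 - p1).ravel()
    s1 = np.dot(Gx, Gx); s2 = s4 = np.dot(Gx, Gy); s5 = np.dot(Gy, Gy)
    s3 = -np.dot(d, Gx); s6 = -np.dot(d, Gy)
    det1 = s3 * s5 - s2 * s6
    det2 = s1 * s6 - s3 * s4
    det = s1 * s5 - s2 * s4
    if abs(det) < 1e-12:      # degenerate window: no reliable offset
        return 0.0, 0.0
    return det1 / det, det2 / det

# Synthetic 5x5 window where the model holds exactly for (vx, vy) = (0.25, -0.5):
rng = np.random.default_rng(0)
Ix0, Iy0, Ix1, Iy1 = (rng.standard_normal((5, 5)) for _ in range(4))
p1 = rng.standard_normal((5, 5))
p0 = p1 - 0.25 * (Ix0 + Ix1) + 0.5 * (Iy0 + Iy1)
vx, vy = solve_offset(p0, p1, Ix0, Iy0, Ix1, Iy1)
print(vx, vy)  # approximately 0.25 and -0.5
```

Solving the 2×2 normal equations by Cramer's rule is what makes the det1/det2/det structure quoted in the text appear.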
Step S240: filter the position offsets of four neighboring pixel points.
To avoid degrading the prediction of some pixels after applying the method disclosed herein, the offset values vx, vy of 4 neighboring pixel points are filtered, and the filtered result is used as the position fine-tuning value for the group as a whole.
Taking 2 × 2 pixel points as a unit, the mean of the position offsets of the four neighboring pixel points is taken as the overall offset, and the position offset values of the 4 neighboring pixel points are mean-filtered. Fig. 3 is a schematic diagram of filtering the position offset values of 4 neighboring pixel points according to an embodiment of the present invention. The mean filtering is specifically:
vx_average=(vx1+vx2+vx3+vx4)/4
vy_average=(vy1+vy2+vy3+vy4)/4
where (vx1, vy1), (vx2, vy2), (vx3, vy3), (vx4, vy4) are the position offset values of the four neighboring pixel points.
According to the first-order Taylor expansion, after the position offsets are adjusted, the two initial predicted pixel points p1[i,j] and p0[i,j] are adjusted as follows:
p'0[i,j]≈p0[i,j]+vx_average·Ix0+vy_average·Iy0
p'1[i,j]≈p1[i,j]-vx_average·Ix1-vy_average·Iy1
the final predicted value of the corresponding pixel point on the current image block is adjusted as follows:
Ppre[i,j] = (p'1[i,j] + p'0[i,j]) / 2
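As an illustrative sketch (not part of the original disclosure), the 2×2 mean filtering of the offset field and the final averaging of the two adjusted predictions can be combined as follows; the numeric blocks and gradients are made up, and even block-aligned dimensions are assumed:

```python
import numpy as np

def mean_filter_2x2(v):
    """Average an offset field over non-overlapping 2x2 blocks and
    broadcast each block mean back to its four pixels (even dims assumed)."""
    h, w = v.shape
    blocks = v.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(blocks, 2, axis=0), 2, axis=1)

def final_prediction(p0, p1, Ix0, Iy0, Ix1, Iy1, vx, vy):
    """Adjust both predictions with the filtered offsets, then average:
    Ppre = (p'0 + p'1) / 2."""
    vxa, vya = mean_filter_2x2(vx), mean_filter_2x2(vy)
    p0_adj = p0 + vxa * Ix0 + vya * Iy0
    p1_adj = p1 - vxa * Ix1 - vya * Iy1
    return (p0_adj + p1_adj) / 2

vx = np.array([[0.2, 0.4], [0.0, 0.2]])   # one 2x2 unit -> mean offset 0.2
vy = np.zeros((2, 2))
p0 = np.full((2, 2), 100.0); p1 = np.full((2, 2), 104.0)
Ix0 = np.full((2, 2), 2.0);  Ix1 = np.full((2, 2), 2.0)
Iy0 = np.zeros((2, 2));      Iy1 = np.zeros((2, 2))
Ppre = final_prediction(p0, p1, Ix0, Iy0, Ix1, Iy1, vx, vy)
print(Ppre)  # every entry: ((100 + 0.2*2) + (104 - 0.2*2)) / 2 = 102.0
```

Note that with equal gradients on both blocks the offset terms cancel in the average, so the filtered offsets mainly redistribute the prediction between the two blocks rather than shift the mean.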
Step S250: add coding flag bits at each level. After the predicted value of each pixel point is adjusted, whether the current coding unit uses the scheme of the embodiment of the present invention is decided using the rate-distortion (RD) cost as the criterion.
A flag is set at the CTU (coding tree unit) level and at the PU level to indicate whether the current CTU or PU uses the motion compensation method proposed by the present invention.
The method comprises the following specific steps:
For each CTU, the encoder performs motion compensation with both schemes: one is the motion compensation method proposed by the present invention, and the other is the encoder's original coding scheme. After encoding with both schemes, the optimal motion compensation scheme is selected using the RD cost as the index: if the RD cost of the proposed compensation scheme is lower, the CTU-level flag bit is set to 1; otherwise it is set to 0. The flag bit is transmitted to the decoder in the bitstream, and after decoding the flag bit the decoder selects the corresponding mode for decoder-side motion compensation. Here,
RD-cost = R + λ·D
where R denotes the bits required to encode the current CTU, D denotes the pixel-value deviation between the reconstructed CTU and the original CTU, and λ is a constant determined by the encoder.
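As an illustrative sketch (not part of the original disclosure), the CTU-level flag decision reduces to comparing two RD costs; the bit counts, distortions, and λ below are made-up values:

```python
def rd_cost(bits, distortion, lam):
    """RD-cost = R + lambda * D, as defined in the text."""
    return bits + lam * distortion

def choose_ctu_flag(bits_prop, dist_prop, bits_orig, dist_orig, lam):
    """Set the CTU-level flag to 1 iff the proposed compensation scheme
    has the lower RD cost than the encoder's original scheme."""
    proposed = rd_cost(bits_prop, dist_prop, lam)
    original = rd_cost(bits_orig, dist_orig, lam)
    return 1 if proposed < original else 0

# Proposed scheme spends more bits but reduces distortion enough to win:
flag = choose_ctu_flag(bits_prop=1200, dist_prop=350.0,
                       bits_orig=1150, dist_orig=420.0, lam=2.0)
print(flag)  # 1  (1200 + 2*350 = 1900 < 1150 + 2*420 = 1990)
```

The same comparison, applied per PU when the CTU-level flag is 1, yields the PU-level flag described below.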
In addition, when the CTU-level flag is 1, a flag is added to each PU in the CTU that uses AMVP mode for prediction, to indicate whether that PU uses the motion compensation scheme proposed by the present invention.
The invention is integrated into the HEVC reference software HM12.0. For the HEVC common test sequences, the performance achieved under the Low-delay configuration with a test time of 2 s is shown in Table 1.
Experimental data show that on the HM12.0 platform the scheme achieves an average performance gain of 1.3% on the Y component. The improvement is more pronounced for sequences with rich texture and fine motion; for example, on BQSquare the algorithm's gain reaches 3.9%.
It should be noted that although the present invention is integrated in HEVC reference software HM12.0, it may be equally applicable to other codec platforms, such as h.264/AVC, AVS2, etc.
Table 1 performance data of the algorithm under different test sequences
In summary, the motion compensation method for a dual forward prediction unit according to the embodiments of the present invention uses Taylor expansion and a high-precision gradient calculation to achieve pixel-level fine-tuning of the predicted values on top of the original prediction block. This further improves the prediction of the predicted image block and its accuracy without increasing the bit rate, thereby improving the prediction quality of the current image block and the coding efficiency of the dual forward prediction unit.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The embodiments in this specification are described in a progressive manner; for the same or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the apparatus and system embodiments are substantially similar to the method embodiments, they are described relatively briefly, and reference may be made to the partial descriptions of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement them without inventive effort.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method for motion compensation of a bi-forward prediction unit, comprising:
acquiring two forward prediction image blocks for predicting a current image block, and selecting pixel points on the two forward prediction image blocks;
calculating derivatives in the x direction and the y direction of pixel points on the forward prediction image block by using a gradient calculation formula, and calculating the offset value of the pixel points on the forward prediction image block through a training window;
adjusting the positions of the pixels on the forward prediction image blocks according to derivatives in the x direction and the y direction of the pixels on the forward prediction image blocks and offset values;
and adding coding flag bits at each level; after the predicted value of each pixel point is adjusted, determining whether the current coding unit uses the motion compensation method for the bi-forward prediction unit, using the rate-distortion (RD) cost as the criterion.
2. The method of claim 1, wherein the obtaining two forward prediction image blocks for predicting a current image block and selecting pixel points on the two forward prediction image blocks comprises:
selecting pixel points p1[i,j] and p0[i,j] on the two forward prediction image blocks; letting the adjusted optimal predicted pixel point of p0[i,j] be p'0[i,j], whose position offset relative to p0[i,j] is (vx, vy); and letting the adjusted optimal predicted pixel point of p1[i,j] be p'1[i,j], whose position offset relative to p1[i,j] is (-vx, -vy);
According to the first-order Taylor expansion, the estimated values of p'0[i,j] and p'1[i,j] are calculated as follows:
p'0[i,j] ≈ p0[i,j] + vx·Ix0 + vy·Iy0
p'1[i,j] ≈ p1[i,j] - vx·Ix1 - vy·Iy1
where Ix0, Iy0 denote the x- and y-direction derivatives of pixel point p0[i,j], and Ix1, Iy1 denote the x- and y-direction derivatives of pixel point p1[i,j].
3. The method as claimed in claim 2, wherein the calculating the x-direction and y-direction derivatives of the pixel points on the forward prediction image block by using a gradient calculation formula comprises:
Ix0, Ix1, Iy0, Iy1 are calculated by the following gradient formulas:
Ix0 = (p0[i+Δ, j] - p0[i-Δ, j]) / 2
Ix1 = (p1[i+Δ, j] - p1[i-Δ, j]) / 2
Iy0 = (p0[i, j+Δ] - p0[i, j-Δ]) / 2
Iy1 = (p1[i, j+Δ] - p1[i, j-Δ]) / 2
where Δ denotes a predetermined sub-pixel interpolation precision, and the values p0[i+Δ, j], p0[i-Δ, j], p1[i+Δ, j], p1[i-Δ, j], p0[i, j+Δ], p0[i, j-Δ], p1[i, j+Δ], p1[i, j-Δ] are each obtained by interpolation with a DCT interpolation filter.
4. The method of claim 3, wherein the calculating the offset value of the pixel point on the forward prediction image block through the training window comprises:
opening a training window in a neighborhood around each of the pixel points p1[i,j] and p0[i,j], and solving for the offset values vx, vy by the least squares method;
The windowed training computation is specifically:
Δ[i,j] = p'0[i,j] - p'1[i,j]
       ≈ (p0[i,j] + vx[i,j]·Ix0[i,j] + vy[i,j]·Iy0[i,j]) - (p1[i,j] - vx[i,j]·Ix1[i,j] - vy[i,j]·Iy1[i,j])
Minimizing the sum of squared Δ[i,j] over the training window yields the optimal offset values vx = det1/det and vy = det2/det, where
det1 = s3·s5 - s2·s6, det2 = s1·s6 - s3·s4, det = s1·s5 - s2·s4,
s1 through s6 are the corresponding window sums of products of the gradient and pixel-difference terms, and Ω denotes the training window region selected for the least-squares operation.
5. The method as claimed in claim 4, wherein the adjusting the positions of the pixels on the forward prediction image block according to the x-direction, y-direction derivatives and the offset values of the pixels on the forward prediction image block comprises:
taking 2×2 pixel points as a unit, taking the mean of the position offsets of the four neighboring pixel points as the overall offset, and mean-filtering the position offset values of the 4 neighboring pixel points, where the mean filtering formula is:
vx_average = (vx1 + vx2 + vx3 + vx4) / 4
vy_average = (vy1 + vy2 + vy3 + vy4) / 4
where (vx1, vy1), (vx2, vy2), (vx3, vy3), (vx4, vy4) are the position offset values of the four neighboring pixel points.
According to the first-order Taylor expansion, after the position offsets are adjusted, the two initial pixel points p1[i,j] and p0[i,j] are adjusted as follows:
p'0[i,j] ≈ p0[i,j] + vx_average·Ix0 + vy_average·Iy0
p'1[i,j] ≈ p1[i,j] - vx_average·Ix1 - vy_average·Iy1
6. the method of claim 5, further comprising:
and adjusting the final predicted value of the corresponding pixel point on the current image block as follows:
Ppre[i,j]=(p'1[i,j]+p'0[i,j])/2
and, after encoding of the current image block using the motion compensation method for the bi-forward prediction unit is completed, determining according to the RD cost whether the method is adopted.
CN201610091950.9A 2016-02-18 2016-02-18 Motion compensation method for dual forward prediction units Active CN105681809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610091950.9A CN105681809B (en) 2016-02-18 2016-02-18 For the motion compensation process of double forward prediction units

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610091950.9A CN105681809B (en) 2016-02-18 2016-02-18 For the motion compensation process of double forward prediction units

Publications (2)

Publication Number Publication Date
CN105681809A CN105681809A (en) 2016-06-15
CN105681809B true CN105681809B (en) 2019-05-21

Family

ID=56304681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610091950.9A Active CN105681809B (en) 2016-02-18 2016-02-18 For the motion compensation process of double forward prediction units

Country Status (1)

Country Link
CN (1) CN105681809B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337513B (en) * 2017-01-20 2021-07-23 浙江大学 Intra-frame prediction pixel generation method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101766030A (en) * 2007-07-31 2010-06-30 三星电子株式会社 Use video coding and the coding/decoding method and the equipment of weight estimation
CN102742272A (en) * 2010-01-18 2012-10-17 索尼公司 Image processing device, method, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9215475B2 (en) * 2006-02-02 2015-12-15 Thomson Licensing Method and apparatus for motion estimation using combined reference bi-prediction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101766030A (en) * 2007-07-31 2010-06-30 三星电子株式会社 Use video coding and the coding/decoding method and the equipment of weight estimation
CN102742272A (en) * 2010-01-18 2012-10-17 索尼公司 Image processing device, method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Overview of HEVC High-Level Syntax and Reference Picture Management; Rickard Sjöberg, Ying Chen; IEEE Transactions on Circuits and Systems for Video Technology; Dec. 2012; Vol. 22, No. 12; entire document

Also Published As

Publication number Publication date
CN105681809A (en) 2016-06-15

Similar Documents

Publication Publication Date Title
CN111385569B (en) Coding and decoding method and equipment thereof
CN112954340B (en) Encoding and decoding method, device and equipment
CN105681809B (en) For the motion compensation process of double forward prediction units
CN105306952B (en) A method of it reducing side information and generates computation complexity
US20120300848A1 (en) Apparatus and method for generating an inter-prediction frame, and apparatus and method for interpolating a reference frame used therein
CN105611299A (en) Motion estimation method based on HEVC

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant