CN107483960B - Motion compensation frame rate up-conversion method based on spatial prediction - Google Patents

Motion compensation frame rate up-conversion method based on spatial prediction Download PDF

Info

Publication number
CN107483960B
CN107483960B CN201710831783.1A CN201710831783A CN107483960B CN 107483960 B CN107483960 B CN 107483960B CN 201710831783 A CN201710831783 A CN 201710831783A CN 107483960 B CN107483960 B CN 107483960B
Authority
CN
China
Prior art keywords
frame
block
motion
blocks
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710831783.1A
Other languages
Chinese (zh)
Other versions
CN107483960A (en)
Inventor
李然
吉秉彧
沈克琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinyang Normal University
Original Assignee
Xinyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinyang Normal University filed Critical Xinyang Normal University
Priority to CN201710831783.1A priority Critical patent/CN107483960B/en
Publication of CN107483960A publication Critical patent/CN107483960A/en
Application granted granted Critical
Publication of CN107483960B publication Critical patent/CN107483960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence

Abstract

The invention discloses a motion-compensated frame rate up-conversion method based on spatial prediction, relating to the technical field of video processing. The method comprises the following steps: dividing the frame to be interpolated into A-type blocks and B-type blocks according to a template; performing full-search motion estimation on the A-type blocks to obtain their motion vectors, with a successive elimination method used to reduce the computational complexity; computing the motion vectors of the B-type blocks from the motion vector information of the A-type blocks according to spatial correlation and the minimum-error matching principle; and combining the motion vectors of the A-type and B-type blocks into the motion vector field of the frame to be interpolated, from which the frame f_{t+0.5} is interpolated from the reference frames f_t and f_{t+1} using an overlapped block motion compensation technique. The invention effectively exploits the advantages of full-search motion estimation and of spatial correlation, ensuring the accuracy of motion estimation while effectively reducing the computational complexity and saving computational cost.

Description

Motion compensation frame rate up-conversion method based on spatial prediction
Technical Field
The invention relates to the technical field of video processing, in particular to a motion compensation frame rate up-conversion method based on spatial prediction.
Background
With the development of multimedia technology and the upgrading of hardware devices, and in pursuit of a better visual experience, users place ever higher demands on the resolution and frame rate of videos. However, owing to bandwidth limitations, a frame-skipping strategy is adopted before video transmission to achieve fast delivery. At the receiving end, the discarded frames must therefore be recovered from the available frame information, restoring the quality of the original video as far as possible. Under this demand, Frame Rate Up-Conversion (FRUC) has attracted the attention of researchers in the field of video processing, because FRUC, as a post-processing technique, can up-convert video from a lower frame rate to a higher one by inserting intermediate frames between two decoded frames.
Frame rate up-conversion methods can be classified, according to whether object motion is considered, into non-motion-compensated and motion-compensated frame interpolation. The former is simple and mainly comprises frame repetition and frame averaging; these two modes ignore the motion of objects between frames and directly use the information of two adjacent frames for compensation interpolation by copying or averaging. In contrast, motion-compensated frame interpolation must account for the motion of objects in the image: the motion vector of the object is computed, the position of the object in the frame to be interpolated is derived from its motion trajectory, and the pixel values are then compensated. Comparing the two, when a video sequence contains little motion, frame repetition or frame averaging is fast and effective; but in sequences with substantial object motion, non-motion-compensated interpolation causes judder and blur in the image. In that case, motion-compensated frame interpolation is required, and considering the motion between frames effectively reduces motion blur. Since most real-world video sequences contain substantial motion or a mixture of dynamic and static content, the study and application of motion-compensated frame interpolation is of great importance.
The motion-compensated frame rate up-conversion method consists mainly of two steps: motion estimation and motion-compensated interpolation. Motion estimation computes the motion vector field between adjacent frames, and motion-compensated interpolation interpolates the intermediate frame according to that field. The accuracy of motion estimation therefore directly affects the quality of the recovered video, and research on FRUC technology focuses on efficient motion estimation methods. To suppress the blurring effect, some classical methods employ overlapped block compensation techniques in the motion-compensated interpolation process. The existing literature mostly adopts bidirectional motion estimation, computing the motion vector fields toward the two adjacent frames with the frame to be interpolated as the starting point. For example, to address the "hole" and "overlapped block" problems produced by unidirectional motion estimation, the literature "Dual Motion Estimation for Frame Rate Up-Conversion" (Suk-Ju Kang, Sungjoo Yoo and Young Hwan Kim, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1909-1914, 2010) proposes directly computing the motion vector field of the frame to be interpolated, so that pixels having no motion vector, or several motion vectors, no longer occur in the frame to be interpolated, which improves the efficiency and reliability of motion estimation. However, this method does not exploit spatial correlation and thus greatly increases the computational complexity. To further improve the accuracy of motion vectors, some researchers have proposed hybrid motion estimation methods based on bidirectional motion estimation; for example, the literature "Direction-Select Motion Estimation for Motion-Compensated Frame Rate Up-Conversion" (Yoo Dong-Gon, Kang Suk-Ju, and Kim Young Hwan, Journal of Display Technology, vol. 9, no. 10, pp. 840-850, 2013) first computes a unidirectional motion vector field of a reference frame and then estimates the bidirectional motion vector field of the frame to be interpolated, which increases the accuracy of motion estimation. However, this method uses only the information of neighboring blocks to compute the motion vector of the block to be interpolated, so incorrect motion vector information is propagated block by block, and this propagation of errors reduces the accuracy of motion estimation. To reduce the computational load caused by an excessive number of candidate matching blocks, the literature "A Multilevel Successive Elimination Algorithm for Block Matching Motion Estimation" (X.Q. Gao, C.J. Duanmu and C.R. Zuo, IEEE Transactions on Image Processing, vol. 9, no. 3, pp. 501-504, 2000) proposes a successive elimination method: by computing block luminance sums and setting a threshold, large numbers of candidate matching blocks are pruned, which effectively reduces the computation time. However, that work does not consider spatial correlation, so the computational load cannot be reduced further.
Existing frame rate up-conversion techniques must strike a balance between calculation precision and computational complexity. Some methods improve the estimation precision of the motion vectors with a full-search strategy, but consume a large amount of running time; others use the information of neighboring blocks to compute the motion vector of the block to be interpolated, but inaccuracy of the initial motion vector then propagates errors, lowering the estimation precision of the motion vectors.
In summary, the motion-compensated frame rate up-conversion methods of the prior art cannot balance calculation precision against computational complexity.
Disclosure of Invention
The embodiment of the invention provides a motion-compensated frame rate up-conversion method based on spatial prediction, which solves the prior-art problem of being unable to balance calculation precision and computational complexity.
The embodiment of the invention provides a motion compensation frame rate up-conversion method based on spatial prediction, which comprises the following steps:
step a: partitioning the frame to be interpolated into blocks, and classifying the blocks into A-type blocks and B-type blocks;
step b: performing full-search motion estimation based on a successive elimination method on the A-type blocks, and determining the motion vectors of the A-type blocks;
step c: computing the motion vectors of the B-type blocks from the motion vector information of the A-type blocks according to spatial correlation and the minimum-error matching principle;
step d: combining the motion vectors of the A-type blocks and the motion vectors of the B-type blocks into the motion vector field of the frame to be interpolated, and interpolating the frame f_{t+0.5} from the reference frames f_t and f_{t+1} using an overlapped block motion compensation method; wherein f_t, f_{t+1} and f_{t+0.5} are the luminance values of the t-th frame, the (t+1)-th frame and the (t+0.5)-th frame, respectively.
Preferably, step a specifically comprises:
letting the frame to be interpolated f_{t+0.5} have spatial resolution M×N and block size s, so that each frame to be interpolated contains M×N/s² standard blocks; blocks at the crossings of odd rows with odd columns and of even rows with even columns are classified as A-type, the rest are B-type, and M and N are divisible by s.
Preferably, step b specifically comprises:
taking the top-left coordinate (i, j) of a block as reference, the luminance accumulated sums of the block at (i, j) in the t-th frame and the (t+1)-th frame are computed as:
P_t(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_t(i + m, j + n)
P_{t+1}(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_{t+1}(i + m, j + n)
where f_t(i + m, j + n) and f_{t+1}(i + m, j + n) are the luminance values of the t-th frame and the (t+1)-th frame at coordinate (i + m, j + n), respectively, and (m, n) is the coordinate of a pixel within the block;
the top-left pixel coordinate of the current A-type block is set as p = (i, j), and the candidate blocks within a search window are traversed: the offset of the n-th candidate block from (i, j) in the reference frame f_{t+1} is v'_n = (x, y), and the offset of the n-th candidate block from (i, j) in the reference frame f_t is -v'_n; the search window radius is r, so that x, y ∈ [-r, r]; with the initial offset v'_0 = (r, r), the initial difference D_0 of the current A-type block is computed as:
D_0 = ‖B_t(p - v'_0) - B_{t+1}(p + v'_0)‖_1
where B_t(p - v'_0) = B_t(i - r, j - r) is the vector formed by arranging in a row all pixels of the block whose top-left corner in the t-th frame is at coordinate (i - r, j - r), and ‖·‖_1 is the vector ℓ1 norm; v'_n is then updated to the coordinate offset of the next candidate block in the search window, and if the following inequality is satisfied
|P_t(i - x, j - y) - P_{t+1}(i + x, j + y)| < D_0
the difference D_n of the n-th candidate block is computed as:
D_n = ‖B_t(p - v'_n) - B_{t+1}(p + v'_n)‖_1
and the motion vector v_s of the current A-type block is updated as:
v_s = v'_n = (x, y)
together with D_0 = min{D_n, D_0}; otherwise, v_s remains unchanged; this process continues until all candidate blocks in the search window have been traversed.
Preferably, step c specifically comprises:
after the motion vectors of the A-type blocks have been computed by the full-search motion estimation, the motion vectors v_a1, v_a2, v_a3 and v_a4 of the four A-type blocks adjacent to the B-type block are selected as candidate vectors, forming the candidate vector set V_c:
V_c = {v_a1, v_a2, v_a3, v_a4}
letting the top-left pixel coordinate of the current B-type block be p, the motion vector v_p of the B-type block is computed according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ‖B_t(p - v) - B_{t+1}(p + v)‖_1
where B_t(p - v) is the vector formed by arranging in a row all pixels of the block whose top-left corner in the t-th frame is at coordinate p - v, ‖·‖_1 is the vector ℓ1 norm, and v is a candidate vector.
Preferably, step d specifically comprises:
integrating the motion vectors of all A-type blocks and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5}, and computing the value of the frame to be interpolated f_{t+0.5} at pixel position p = (i, j) using the following formula:
f_{t+0.5}(p) = \sum \omega^k [ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) ]
where the sum runs over the blocks covering pixel p, v_{i,j} is the motion vector of V_{t+0.5} at p, k denotes the type of region, taking 1 for a non-overlapping part, 2 for the overlapping part of two blocks and 3 for the overlapping part of four blocks, and the coefficient ω takes the corresponding value according to k.
In the embodiment of the present invention, a motion-compensated frame rate up-conversion method based on spatial prediction is provided. Compared with the prior art, it has the following beneficial effects: it is a low-complexity motion compensation method based on spatial prediction, which performs full-search motion estimation on the designated blocks to be interpolated according to a set template and obtains the motion vectors of the remaining blocks by spatial prediction; that is, to strike a balance between precision and complexity, the advantages of the full-search strategy and of spatial correlation are exploited simultaneously. The invention thus ensures the accuracy of motion estimation while markedly reducing the operational complexity and the computational cost. Experiments on 9 groups of CIF-format video sequences, with interpolation quality measured by Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM), confirm these advantages.
Drawings
Fig. 1 is a flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention;
fig. 2 illustrates a dividing manner of two types of blocks in a motion compensation frame rate up-conversion method based on spatial prediction according to an embodiment of the present invention;
fig. 3 is a simplified flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention; fig. 3 is a simplified flowchart of a method for motion-compensated frame rate up-conversion based on spatial prediction according to an embodiment of the present invention. As shown in fig. 1 and 3, the method of the embodiment of the present invention includes:
step a, inputting a video sequence, extracting two adjacent frames, and performing block partitioning and block classification on the frame to be interpolated.
Let the frame to be interpolated f_{t+0.5} have spatial resolution M×N and block size s (both M and N must be divisible by s); the frame to be interpolated then contains M×N/s² standard blocks. As shown in fig. 2, blocks at the crossings of odd rows with odd columns and of even rows with even columns are classified as A-type, and the rest as B-type.
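As a concrete illustration of this template, the following sketch labels the blocks of one frame. It is illustrative only: it assumes that the 1-based odd/even crossing pattern translates to equal row/column parity in 0-based indexing, and the function and variable names are not from the patent.

```python
import numpy as np

def classify_blocks(M, N, s):
    """Checkerboard template: 'A' where block-row and block-column
    parity agree (odd/odd or even/even crossings), 'B' elsewhere."""
    rows, cols = M // s, N // s          # number of blocks per dimension
    labels = np.empty((rows, cols), dtype='<U1')
    for br in range(rows):
        for bc in range(cols):
            labels[br, bc] = 'A' if (br + bc) % 2 == 0 else 'B'
    return labels

# e.g. CIF resolution 288x352 with 16x16 blocks -> an 18x22 label grid
print(classify_blocks(288, 352, 16))
```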
Step b, obtaining the motion vectors of the A-type blocks by full-search motion estimation based on a successive elimination method, which specifically comprises the following steps:
Step b1: taking the top-left coordinate (i, j) of a block as reference, compute the luminance accumulated sums of the block at (i, j) in the t-th frame and the (t+1)-th frame:
P_t(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_t(i + m, j + n)    (1)
P_{t+1}(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_{t+1}(i + m, j + n)    (2)
where f_t(i + m, j + n) and f_{t+1}(i + m, j + n) are the luminance values at coordinate (i + m, j + n) of the t-th frame and the (t+1)-th frame, respectively, and (m, n) is the coordinate of a pixel within the block.
Step b2: let the top-left pixel coordinate of the current A-type block be p = (i, j), and traverse the candidate blocks within a search window: the offset of the n-th candidate block from (i, j) in the reference frame f_{t+1} is v'_n = (x, y), and the offset of the n-th candidate block from (i, j) in the reference frame f_t is -v'_n. Let the search window radius be r, so that x, y ∈ [-r, r]. Setting the initial offset v'_0 = (r, r), compute the initial difference D_0 of the current A-type block as:
D_0 = ‖B_t(p - v'_0) - B_{t+1}(p + v'_0)‖_1    (3)
where B_t(p - v'_0) = B_t(i - r, j - r) is the vector formed by arranging in a row all pixels of the block whose top-left corner in the t-th frame is at coordinate (i - r, j - r), and ‖·‖_1 is the vector ℓ1 norm.
Step b3: update v'_n to the coordinate offset of the next candidate block in the search window. If the inequality
|P_t(i - x, j - y) - P_{t+1}(i + x, j + y)| < D_0    (4)
is satisfied, compute the difference D_n of the n-th candidate block as:
D_n = ‖B_t(p - v'_n) - B_{t+1}(p + v'_n)‖_1    (5)
Next, update the motion vector v_s of the current A-type block as:
v_s = v'_n = (x, y)    (6)
and update D_0 = min{D_n, D_0}. If inequality (4) does not hold, v_s remains unchanged.
Step b4: return to step b3 until all candidate blocks within the search window have been traversed.
Step c, calculating the motion vectors of the B-type blocks; the specific process comprises the following steps:
Step c1: let the top-left pixel coordinate of the current B-type block be p, and select the motion vectors v_a1, v_a2, v_a3 and v_a4 of the four A-type blocks adjacent to the B-type block as candidate vectors, forming the candidate vector set V_c:
V_c = {v_a1, v_a2, v_a3, v_a4}    (7)
Step c2: compute the motion vector v_p of the B-type block according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ‖B_t(p - v) - B_{t+1}(p + v)‖_1    (8)
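Steps c1-c2 reduce, per B-type block, to a four-way minimum-error test. A minimal sketch under the same in-frame assumption as above, where `candidates` holds the four neighbouring A-type vectors of V_c (names are illustrative):

```python
import numpy as np

def predict_b_block(ft, ft1, p, s, candidates):
    """Minimum-error matching over the candidate set V_c: the B-type
    block inherits whichever neighbouring A-type vector gives the
    smallest l1 matching error between f_t and f_{t+1}."""
    i, j = p
    ft = ft.astype(np.int64)
    ft1 = ft1.astype(np.int64)

    def error(v):
        x, y = v
        return np.abs(ft[i - x:i - x + s, j - y:j - y + s]
                      - ft1[i + x:i + x + s, j + y:j + y + s]).sum()

    return min(candidates, key=error)
```

Because each B-type block tests only four candidates instead of the (2r+1)² offsets of the full search, roughly half of the blocks skip the full search entirely, which is the source of the method's complexity reduction.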
Step d, interpolating the intermediate frame using the overlapped block technique.
Step d1: integrate the motion vectors of all A-type blocks and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5};
Step d2: compute the value of the frame to be interpolated f_{t+0.5} at pixel position (i, j) using the following formula:
f_{t+0.5}(p) = \sum \omega^k [ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) ]    (9)
where p = (i, j), v_{i,j} is the motion vector of V_{t+0.5} at p, the sum runs over the blocks covering pixel p, and the superscript k denotes the type of region: 1 for a non-overlapping part, 2 for the overlapping part of two blocks, and 3 for the overlapping part of four blocks. The coefficient ω takes the corresponding value according to k.
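A sketch of steps d1-d2 under assumptions the text above leaves open: each block is enlarged by `overlap` pixels on every side, the bidirectional average 0.5·[f_t(p - v) + f_{t+1}(p + v)] of every block covering a pixel is accumulated, ω is taken as 1/k with k the number of covering blocks (one plausible reading of "ω takes a corresponding value according to k"), and out-of-frame accesses are simply dropped.

```python
import numpy as np

def interpolate_obmc(ft, ft1, mv_field, s, overlap=4):
    """Overlapped-block motion-compensated interpolation of f_{t+0.5}.

    mv_field[br][bc] is the (x, y) motion vector of block (br, bc) in
    the merged field V_{t+0.5}; `overlap` is the block enlargement.
    """
    H, W = ft.shape
    ft = ft.astype(np.float64)
    ft1 = ft1.astype(np.float64)
    acc = np.zeros((H, W))               # accumulated compensations
    cnt = np.zeros((H, W))               # k: number of covering blocks
    for br in range(H // s):
        for bc in range(W // s):
            x, y = mv_field[br][bc]
            for a in range(max(br * s - overlap, 0), min((br + 1) * s + overlap, H)):
                for b in range(max(bc * s - overlap, 0), min((bc + 1) * s + overlap, W)):
                    if 0 <= a - x < H and 0 <= b - y < W and 0 <= a + x < H and 0 <= b + y < W:
                        # bidirectional average along the motion trajectory
                        acc[a, b] += 0.5 * (ft[a - x, b - y] + ft1[a + x, b + y])
                        cnt[a, b] += 1.0
    return acc / np.maximum(cnt, 1.0)    # omega = 1/k weighting
```

Normalising by the per-pixel count keeps the interpolated luminance unbiased regardless of how many enlarged blocks overlap at a pixel.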
Simulation results
The proposed method is evaluated on 9 groups of test video sequences in CIF format. The comparison methods are as follows:
1) the bidirectional motion estimation frame rate up-conversion technique proposed in "Dual Motion Estimation for Frame Rate Up-Conversion" (Suk-Ju Kang, Sungjoo Yoo and Young Hwan Kim, IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1909-1914, 2010), abbreviated the Dual_ME method; 2) the hybrid motion estimation frame rate up-conversion technique proposed in "Direction-Select Motion Estimation for Motion-Compensated Frame Rate Up-Conversion" (Dong-Gon Yoo, Suk-Ju Kang, and Young Hwan Kim, Journal of Display Technology, vol. 9, no. 10, pp. 840-850, 2013), abbreviated the DS_ME method. The evaluation indices are the peak signal-to-noise ratio and the structural similarity, which reflect objective quality, and the average per-frame processing time. The hardware platform is a Core i7 CPU computer with a main frequency of 3.60 GHz and 8 GB of memory; the software platform is the Windows 7 64-bit operating system with Matlab R2014b simulation software.
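For reference, the PSNR figures reported below follow the standard definition. This is a generic sketch rather than the authors' evaluation script, comparing each interpolated frame against the original frame that was skipped:

```python
import numpy as np

def psnr(original, interpolated, peak=255.0):
    """Peak signal-to-noise ratio in dB between a dropped original
    frame and its motion-compensated reconstruction."""
    diff = original.astype(np.float64) - interpolated.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```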
TABLE 1 PSNR value comparison for different frame rate up-conversion techniques (table reproduced as an image in the original publication)
TABLE 2 SSIM value comparison for different frame rate up-conversion techniques (table reproduced as an image in the original publication)
TABLE 3 Comparison of the time (in s/frame) required to interpolate a frame for different frame rate up-conversion techniques (table reproduced as an image in the original publication)
Table 1 lists the PSNR values of the different frame rate up-conversion techniques. Across the 9 test sequences, the proposed method clearly improves the PSNR value compared with the Dual_ME method, by up to 2.72 dB, improving the quality of the recovered video. Compared with the DS_ME method, the latter is relatively better on sequences that are largely static or contain little motion, such as foreman and mother; but for videos containing substantial motion, such as bus, city, football, mobile and stefan, the proposed method estimates the motion trajectories of objects more accurately and improves the PSNR value by up to 3.21 dB. Table 2 lists the SSIM values of the different techniques; by comparison, the proposed method is significantly better than the two reference methods, and only slightly lower than the DS_ME method on videos with little motion. As shown in Table 3, the running time of the proposed method is lower than that of both the DS_ME and Dual_ME methods, representing lower computational complexity. Compared with the reference techniques, the computing resources are therefore allocated more effectively: by dividing the block types and combining the advantages of full-search motion estimation and spatial correlation, computation time is saved while accuracy is maintained.
The above disclosure is only a few specific embodiments of the present invention, and those skilled in the art can make various modifications and variations of the present invention without departing from the spirit and scope of the present invention, and it is intended that the present invention encompass these modifications and variations as well as others within the scope of the appended claims and their equivalents.

Claims (1)

1. A method for motion compensated frame rate up-conversion based on spatial prediction, comprising:
step a: partitioning a frame to be interpolated into blocks and classifying the blocks into A-type blocks and B-type blocks;
step b: performing full-search motion estimation based on a successive elimination method on the A-type blocks, and determining the motion vectors of the A-type blocks;
step c: computing the motion vectors of the B-type blocks from the motion vector information of the A-type blocks according to spatial correlation and the minimum-error matching principle;
step d: combining the motion vectors of the A-type blocks and the motion vectors of the B-type blocks into the motion vector field of the frame to be interpolated, and interpolating the frame f_{t+0.5} from the reference frames f_t and f_{t+1} using an overlapped block motion compensation method; wherein f_t, f_{t+1} and f_{t+0.5} are the luminance values of the t-th frame, the (t+1)-th frame and the (t+0.5)-th frame, respectively;
step a specifically comprises:
letting the frame to be interpolated f_{t+0.5} have spatial resolution M×N and block size s, so that each frame to be interpolated contains M×N/s² standard blocks; wherein blocks at the crossings of odd rows with odd columns and of even rows with even columns are classified as A-type blocks, the rest are B-type blocks, and M and N are divisible by s;
step b specifically comprises:
taking the top-left coordinate (i, j) of a block as reference, computing the luminance accumulated sums of the block at (i, j) in the t-th frame and the (t+1)-th frame as:
P_t(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_t(i + m, j + n)
P_{t+1}(i, j) = \sum_{m=0}^{s-1} \sum_{n=0}^{s-1} f_{t+1}(i + m, j + n)
wherein f_t(i + m, j + n) and f_{t+1}(i + m, j + n) are the luminance values of the t-th frame and the (t+1)-th frame at coordinate (i + m, j + n), respectively, and (m, n) is the coordinate of a pixel within the block;
setting the top-left pixel coordinate of the current A-type block as p = (i, j), and traversing the candidate blocks within a search window, wherein the offset of the n-th candidate block from (i, j) in the reference frame f_{t+1} is v'_n = (x, y), the offset of the n-th candidate block from (i, j) in the reference frame f_t is -v'_n, the search window radius is r, and x, y ∈ [-r, r]; with the initial offset v'_0 = (r, r), computing the initial difference D_0 of the current A-type block as:
D_0 = ‖B_t(p - v'_0) - B_{t+1}(p + v'_0)‖_1
wherein B_t(p - v'_0) = B_t(i - r, j - r) is the vector formed by arranging in a row all pixels of the block whose top-left corner in the t-th frame is at coordinate (i - r, j - r), and ‖·‖_1 is the vector ℓ1 norm; updating v'_n to the coordinate offset of the next candidate block in the search window, and if the following inequality is satisfied
|P_t(i - x, j - y) - P_{t+1}(i + x, j + y)| < D_0
computing the difference D_n of the n-th candidate block as:
D_n = ‖B_t(p - v'_n) - B_{t+1}(p + v'_n)‖_1
and updating the motion vector v_s of the current A-type block as:
v_s = v'_n = (x, y)
together with D_0 = min{D_n, D_0}; otherwise, v_s remains unchanged; continuing this process until all candidate blocks in the search window have been traversed;
step c specifically comprises:
after the motion vectors of the A-type blocks have been computed by the full-search motion estimation, selecting the motion vectors v_a1, v_a2, v_a3 and v_a4 of the four A-type blocks adjacent to the B-type block as candidate vectors, forming the candidate vector set V_c:
V_c = {v_a1, v_a2, v_a3, v_a4}
and, with the top-left pixel coordinate of the current B-type block set as p, computing the motion vector v_p of the B-type block according to the minimum-error matching principle:
v_p = \arg\min_{v \in V_c} ‖B_t(p - v) - B_{t+1}(p + v)‖_1
wherein B_t(p - v) is the vector formed by arranging in a row all pixels of the block whose top-left corner in the t-th frame is at coordinate p - v, ‖·‖_1 is the vector ℓ1 norm, and v is a candidate vector;
step d specifically comprises:
integrating the motion vectors of all A-type blocks and B-type blocks into the motion vector field V_{t+0.5} of the frame to be interpolated f_{t+0.5}, and computing the value of the frame to be interpolated f_{t+0.5} at pixel position p = (i, j) using the following formula:
f_{t+0.5}(p) = \sum \omega^k [ f_t(p - v_{i,j}) + f_{t+1}(p + v_{i,j}) ]
wherein the sum runs over the blocks covering pixel p, v_{i,j} is the motion vector of V_{t+0.5} at p, k denotes the type of region, taking 1 for a non-overlapping part, 2 for the overlapping part of two blocks and 3 for the overlapping part of four blocks, and the coefficient ω takes the corresponding value according to k.
CN201710831783.1A 2017-09-15 2017-09-15 Motion compensation frame rate up-conversion method based on spatial prediction Active CN107483960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710831783.1A CN107483960B (en) 2017-09-15 2017-09-15 Motion compensation frame rate up-conversion method based on spatial prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710831783.1A CN107483960B (en) 2017-09-15 2017-09-15 Motion compensation frame rate up-conversion method based on spatial prediction

Publications (2)

Publication Number Publication Date
CN107483960A CN107483960A (en) 2017-12-15
CN107483960B true CN107483960B (en) 2020-06-02

Family

ID=60584535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710831783.1A Active CN107483960B (en) 2017-09-15 2017-09-15 Motion compensation frame rate up-conversion method based on spatial prediction

Country Status (1)

Country Link
CN (1) CN107483960B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7096373B2 (en) 2018-06-07 2022-07-05 北京字節跳動網絡技術有限公司 Partial cost calculation
TWI719519B (en) 2018-07-02 2021-02-21 大陸商北京字節跳動網絡技術有限公司 Block size restrictions for dmvr
CN109756778B (en) * 2018-12-06 2021-09-14 中国人民解放军陆军工程大学 Frame rate conversion method based on self-adaptive motion compensation
CN113630621B (en) * 2020-05-08 2022-07-19 腾讯科技(深圳)有限公司 Video processing method, related device and storage medium
CN112995677B (en) * 2021-02-08 2022-05-31 信阳师范学院 Video frame rate up-conversion method based on pixel semantic matching

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103702128A (en) * 2013-12-24 2014-04-02 浙江工商大学 Interpolation frame generating method applied to up-conversion of video frame rate
CN104718756A (en) * 2013-01-30 2015-06-17 英特尔公司 Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
CN105872559A (en) * 2016-03-20 2016-08-17 信阳师范学院 Frame rate up-conversion method based on mixed matching of chromaticity

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104718756A (en) * 2013-01-30 2015-06-17 英特尔公司 Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
CN103702128A (en) * 2013-12-24 2014-04-02 浙江工商大学 Interpolation frame generating method applied to up-conversion of video frame rate
CN105872559A (en) * 2016-03-20 2016-08-17 信阳师范学院 Frame rate up-conversion method based on mixed matching of chromaticity

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-Channel Mixed-Pattern Based Frame Rate Up-Conversion Using Spatio-Temporal Motion Vector Refinement and Dual-Weighted Overlapped Block Motion; Ran Li et al.; Journal of Display Technology; 2014-12-31; vol. 10, no. 12; full text *
Research and Implementation of Key Technologies for Frame Rate Up-Conversion; Li Zhenzhen; China Masters' Theses Full-text Database, Information Science and Technology, no. 6, 2016; 2016-06-15; full text *

Also Published As

Publication number Publication date
CN107483960A (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN107483960B (en) Motion compensation frame rate up-conversion method based on spatial prediction
Kang et al. Motion compensated frame rate up-conversion using extended bilateral motion estimation
US8736767B2 (en) Efficient motion vector field estimation
CN106254885B (en) Data processing system, method of performing motion estimation
US8571114B2 (en) Sparse geometry for super resolution video processing
US8711938B2 (en) Methods and systems for motion estimation with nonlinear motion-field smoothing
CN108574844B (en) Multi-strategy video frame rate improving method for space-time significant perception
JP2009527173A (en) Method and apparatus for determining motion between video images
Veselov et al. Iterative hierarchical true motion estimation for temporal frame interpolation
Kim et al. An efficient motion-compensated frame interpolation method using temporal information for high-resolution videos
Shimano et al. Video temporal super-resolution based on self-similarity
JP2001520781A (en) Motion or depth estimation
US20210407105A1 (en) Motion estimation method, chip, electronic device, and storage medium
Huang et al. Algorithm and architecture design of multirate frame rate up-conversion for ultra-HD LCD systems
Guo et al. Frame rate up-conversion using linear quadratic motion estimation and trilateral filtering motion smoothing
Guo et al. Motion-compensated frame interpolation with weighted motion estimation and hierarchical vector refinement
KR101544158B1 (en) Method for searching bidirectional motion using multiple frame and image apparatus with the same technique
CN112532907A (en) Video frame frequency improving method, device, equipment and medium
CN109788297B (en) Video frame rate up-conversion method based on cellular automaton
Lee et al. Motion vector correction based on the pattern-like image analysis
EP2237559A1 (en) Background motion estimate based halo reduction
KR101359351B1 (en) Fast method for matching stereo images according to operation skip
Lu et al. An artifact information based motion vector processing method for motion compensated frame interpolation
Basher Two minimum three step search algorithm for motion estimation of images from moving IR camera
CN112995677B (en) Video frame rate up-conversion method based on pixel semantic matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant