CN103888767B - A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation - Google Patents

A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation Download PDF

Info

Publication number
CN103888767B
CN103888767B (application CN201410125926.3A)
Authority
CN
China
Prior art keywords
motion
block
motion estimation
frame rate
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410125926.3A
Other languages
Chinese (zh)
Other versions
CN103888767A (en)
Inventor
孙国霞
赵悦
刘琚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201410125926.3A priority Critical patent/CN103888767B/en
Publication of CN103888767A publication Critical patent/CN103888767A/en
Application granted granted Critical
Publication of CN103888767B publication Critical patent/CN103888767B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a frame rate up-conversion method. The method comprises four main stages: image segmentation to obtain the foreground, the background, and object edges; variable-size block-matching motion estimation for the foreground and background, and optical-flow-field motion estimation for the object edges; post-processing of the motion vectors to obtain reliable vectors; and motion compensation with overlapped block motion compensation and bilinear interpolation to synthesize the interpolated frames. The proposed frame rate up-conversion method resolves the halo effect and jagged-edge artifacts of traditional frame rate up-conversion methods and is widely applicable in the frame rate up-conversion field.

Description

A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation

Technical Field

The invention relates to a video frame rate up-conversion method, belonging to the field of video data processing.

Background Art

Video frame rate up-conversion improves the visual quality of low-frame-rate video by inserting predicted frames into the original sequence, yielding a higher-frame-rate video. Owing to its diverse applications, frame rate up-conversion is increasingly important in consumer electronics. HDTV and multimedia PC systems can play video at frame rates higher than broadcast streams, so frame rate up-conversion can be applied to raise the original frame rate and improve the end user's viewing experience.

At present the most widely used approach to frame rate up-conversion is motion-compensated interpolation based on motion estimation, and most methods rely on block-matching motion estimation. At the boundary between foreground and background, however, block matching on the edges of moving objects is inaccurate and produces incorrect motion vectors; halo effects and jagged edges then appear in occluded regions, degrading the up-converted video quality and the end user's visual experience.

Summary of the Invention

To address the halo-effect and jagged-edge problems of frame rate up-conversion, this application provides a method that separates the foreground and background via moving-object detection and combines block-matching motion estimation with optical-flow-field motion estimation, so as to improve the quality of the up-converted video.

The technical solution of the present invention is as follows:

A video frame rate up-conversion method based on the combination of UMHexagonS block-matching motion estimation and optical flow field motion estimation, characterized in that the method comprises the following steps:

Step 1: Process the original video, separate the foreground from the background with the inter-frame difference method, and mark the edge pixels;

Step 2: Obtain the motion vectors of the foreground and background with variable-size UMHexagonS block-matching motion estimation;

Step 3: Obtain the motion vectors of the moving object's edge pixels with optical-flow-field motion estimation;

Step 4: Post-process the obtained motion vectors;

Step 5: Apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to the object edges to obtain the interpolated frame;

Step 6: Combine the interpolated frames and the original frames into a high-frame-rate video.

Preferably, in steps 2 and 3 adaptive motion estimation is applied separately to the foreground, the background, and the object edges, improving the accuracy of the edge-pixel motion vectors.

Preferably, in step 4 the obtained motion vectors are tested for reliability and unreliable vectors are median-filtered, improving the accuracy of the motion vectors.

Preferably, in step 5 overlapped block motion compensation is performed to reduce blocking artifacts and improve video quality.

Brief Description of the Drawings

Figure 1: Overall processing block diagram of the invention.

Figure 2: Schematic diagram of the UMHexagonS block-matching motion estimation method.

Figure 3: Simulation results.

Detailed Description

According to the motion of objects in the image, the invention divides each frame into foreground, background, and edge regions; applies variable-size block-matching motion estimation and optical-flow-field motion estimation to them respectively; post-processes the motion vectors; and then constructs the interpolated frame with overlapped block motion compensation. This reduces the halo effect and edge aliasing and achieves the goal of reconstructing high-quality, high-frame-rate video.

The invention is further described below in conjunction with a specific embodiment (but not limited to this example) and the accompanying drawings.

(I) Processing of the original digital video:

(1) Read in the video;

(2) Set a counter t = 1; in each iteration take frame t as the current frame and frame t+2 as the next frame, reserving frame t+1 as the frame to be interpolated;

(3) Detect moving objects with a dynamic adaptive-threshold inter-frame difference method to separate the foreground from the background. Motion detection by inter-frame differencing (the frame-difference method) detects moving objects from the magnitude of the luminance change between adjacent frames or frames one apart. It comprises the following steps:

a. Compute the difference between frame t and frame t+2 by formula 1, denoted D(x, y):

D(x, y) = |F_{t+2}(x, y) − F_t(x, y)|    (formula 1)

where F_t(x, y) and F_{t+2}(x, y) denote the images at times t and t+2, respectively;

b. Select the dynamic adaptive threshold TH as half the difference between the highest and lowest grey values of the difference image: with D_max and D_min the maximum and minimum of D(x, y) recorded in step a, TH = (D_max − D_min) / 2;

c. Binarize the difference image D(x, y) and segment it according to formula 2:

R(x, y) = 1 if D(x, y) > TH, else R(x, y) = 0    (formula 2)

where R(x, y) is the binarized difference image and TH is the segmentation threshold.
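Steps a–c can be sketched in a few lines of Python/NumPy (a minimal illustration of the adaptive-threshold frame difference; the function name and the 8-bit grey-scale input convention are assumptions of the example, not part of the patent):

```python
import numpy as np

def segment_foreground(frame_t, frame_t2):
    """Adaptive-threshold inter-frame difference (steps a-c).

    Returns the binarized difference image R(x, y) and the threshold TH.
    """
    # Formula 1: D(x, y) = |F_{t+2}(x, y) - F_t(x, y)|
    d = np.abs(frame_t2.astype(np.int32) - frame_t.astype(np.int32))
    # Step b: TH = (D_max - D_min) / 2
    th = (int(d.max()) - int(d.min())) / 2.0
    # Formula 2: R(x, y) = 1 where D(x, y) > TH, else 0
    r = (d > th).astype(np.uint8)
    return r, th
```

Pixels where R(x, y) = 1 form the moving foreground; the edge pixels later handled by optical flow would then be marked on the boundary of this mask.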

(II) Motion vector estimation stage:

(1) As shown in Figure 2, the UMHexagonS bidirectional motion estimation method with a 4×4 block size is used for moving objects. The specific steps are as follows:

a. For the virtual frame t+1 to be interpolated, partition the foreground region given by the segmentation result of stage (I) into 4×4 rectangular blocks and perform bidirectional motion estimation on each block in turn;

b. The block-matching criterion is as follows: compute the sum of absolute differences (SAD) between the corresponding blocks of frame t and frame t+2 according to formula 3, with SAD defined as

SAD(B_{i,j}, v) = Σ_{s ∈ B_{i,j}} | f_t[s − v] − f_{t+2}[s + v] |    (formula 3)

where B_{i,j} is the block to be estimated in frame t+1, v is the candidate motion vector, s is a pixel to be interpolated in frame t+1, f_t[s − v] is the backward projection of s onto frame t, and f_{t+2}[s + v] is its forward projection onto frame t+2;

c. Based on this matching criterion, the UMHexagonS (Unsymmetrical-cross Multi-Hexagon-grid Search) method performs a hybrid multi-level search, as follows:

Step 1: Predict the search centre: apply median prediction to the initial centre, then upper-layer prediction, and finally prediction from the motion vector of the corresponding block in the previous frame;

Step 2: Hybrid multi-level motion search:

Step 2.1: Perform an unsymmetrical cross search, then a grid search centred on the best point found so far, then a large-hexagon search centred on the current best point;

Step 2.2: Perform an extended hexagon search, stopping when the best point is at the pattern centre or the maximum number of searches is reached;

Step 2.3: Narrow the search range and perform a diamond search, stopping when the best point is at the pattern centre or the maximum number of searches is reached.
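The final refinement of Step 2.3 can be sketched with a small-diamond search (a simplified stand-in for the full UMHexagonS pattern sequence; the cost callback and the unit-step pattern are assumptions of this example):

```python
SMALL_DIAMOND = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def diamond_search(cost, start=(0, 0), max_iter=32):
    """Step 2.3 sketch: move a small diamond pattern until the best
    point is its centre, or the iteration budget is exhausted.
    `cost` maps a candidate (dy, dx) to an SAD-like value."""
    centre = start
    for _ in range(max_iter):
        candidates = [(centre[0] + dy, centre[1] + dx)
                      for dy, dx in SMALL_DIAMOND]
        best = min(candidates, key=cost)
        if best == centre:      # optimum at the pattern centre -> converged
            return centre
        centre = best
    return centre
```

In the real UMHexagonS flow this refinement runs last, after the cross, grid, and hexagon stages have placed the centre near the global optimum.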

(2) For the relatively static background, use the UMHexagonS bidirectional motion estimation method with an 8×8 block size (the specific steps are as in (1));

(3) For the edge contours between foreground and background, use motion estimation based on the optical flow field. The basic model of optical flow computation assumes that pixel brightness is constant over a small spatial neighbourhood; the local optical flow field of the moving image can then be computed quickly by least-squares optimization. The specific steps are as follows:

a. Let f(x, y, t) denote the continuous spatio-temporal brightness distribution. If the brightness stays constant along the motion trajectory, we obtain

f(x(t), y(t), t) = constant    (formula 4)

where x and y vary with time t along the trajectory. Applying the chain rule of differentiation to formula 4 gives

f_x v_1 + f_y v_2 + f_t = 0    (formula 5)

where v_1 and v_2 are the components of the motion vector along the spatial coordinates and f_x, f_y, f_t are the partial derivatives of f. Formula 5 is called the optical flow equation (or optical flow constraint); it can also be written in vector inner-product form:

⟨∇f, v⟩ + f_t = 0    (formula 6)

b. The flow field should satisfy the optical flow equation at every pixel. Let

ε_of(v(x, y, t)) = ⟨∇f, v⟩ + f_t    (formula 7)

denote the error in the optical flow equation; when ε_of(v(x, y, t)) = 0, the equation is satisfied exactly. In the presence of occlusion and noise, the square of ε_of(v(x, y, t)) is minimized instead. The optical flow can be obtained by regularization, minimizing

∫∫ [ ε_of²(v(x, y, t)) + λ ( ‖∇v_1‖² + ‖∇v_2‖² ) ] dx dy    (formula 8)

where λ is a Lagrange multiplier: if the derivatives ∇f and f_t can be computed accurately, the larger value of the parameter may be chosen; otherwise the smaller.
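The local least-squares solution mentioned above can be sketched as a per-pixel Lucas–Kanade-style estimate (an illustration of solving the constraint of formula 5 over a small window; the window size and the function name are assumptions of the example):

```python
import numpy as np

def flow_at_pixel(f0, f1, y, x, win=2):
    """Solve fx*v1 + fy*v2 + ft = 0 (formula 5) in the least-squares
    sense over a (2*win+1)^2 window centred on (y, x)."""
    fy, fx = np.gradient(f0.astype(np.float64))         # spatial derivatives
    ft = f1.astype(np.float64) - f0.astype(np.float64)  # temporal derivative
    sl = np.s_[y - win: y + win + 1, x - win: x + win + 1]
    A = np.stack([fx[sl].ravel(), fy[sl].ravel()], axis=1)
    b = -ft[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                # (v1, v2): flow along x and y
```

The regularized formulation of formula 8 would additionally smooth these per-pixel estimates across neighbouring pixels rather than solving each window independently.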

(III) Motion vector post-processing stage:

a. Reliability test for motion vectors:

Step 1: Compute the mean of the motion vectors of the block under test (denoted block B) and its eight surrounding blocks:

v_m = (1/9) Σ_{i=1}^{9} v_i    (formula 9)

where v_m is the mean and v_1, …, v_9 are the motion vectors of block B and its eight neighbours, v_1 being the vector of block B.

Step 2: Compute the mean deviation:

D_n = (1/8) Σ_{i=2}^{9} | v_m − v_i |    (formula 10)

Step 3: Compute the difference:

D_c = | v_m − v_1 |    (formula 11)

Step 4: If D_c > D_n, then v_1 is an unreliable motion vector and requires median filtering.

b. Apply a median filter to the unreliable motion vector:

v_{1smooth} = median[v_1, v_2, v_3, …, v_9]    (formula 12)

(IV) Motion compensation stage: apply the overlapped block motion compensation (OBMC) method to foreground and background objects, and bilinear-interpolation motion compensation to the foreground/background edges. When the motion vector estimate is inaccurate, when an object's motion is not a simple translation, or when several differently moving objects fall within one block, OBMC alleviates blocking artifacts: with OBMC, each pixel is predicted not only from the estimated motion vector of its own block but also from the motion vectors of the neighbouring blocks.
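A minimal sketch of the compensation stage follows (bidirectional averaging per pixel, plus OBMC-style blending over the block's own and neighbouring vectors; the function names and the explicit weight list are assumptions, not the patent's exact window function):

```python
import numpy as np

def interp_pixel(prev, nxt, s, v):
    """Predict pixel s of frame t+1 as the average of f_t[s - v]
    and f_{t+2}[s + v] along motion vector v = (dy, dx)."""
    (y, x), (dy, dx) = s, v
    return 0.5 * (float(prev[y - dy, x - dx]) + float(nxt[y + dy, x + dx]))

def obmc_pixel(prev, nxt, s, vectors, weights):
    """OBMC idea: blend the bidirectional predictions produced by the
    block's own vector and its neighbours' vectors (weights sum to 1)."""
    return sum(w * interp_pixel(prev, nxt, s, v)
               for v, w in zip(vectors, weights))
```

Because each pixel blends several overlapping predictions, abrupt vector changes at block borders are smoothed out, which is what suppresses the blocking artifacts mentioned above.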

In traditional video frame rate up-conversion methods, block-based motion estimation at the edges of a moving object uses background pixels, which makes the edge estimates inaccurate. Pixel-based motion estimation at the edges of the moving object avoids estimating object pixels from background pixel information, so the correct motion vectors are obtained and the edge halo effect and jagged blocks are effectively eliminated.

As shown in Figure 3, from left to right and top to bottom the panels show, in turn, direct motion compensation with UMHexagonS block-matching motion estimation, overlapped block motion compensation, bilinear-interpolation compensation with optical-flow-field motion estimation, and the motion compensation result of this patent.

The invention was simulated on the standard YUV test sequence foreman and compared with the UMHexagonS motion-estimation compensation method, the optical-flow-field motion-estimation interpolation method, and the overlapped block motion compensation method. The results show that the proposed method effectively eliminates the edge halo effect and jagged edge blocks.

Claims (3)

1. A video frame rate up-conversion method based on the combination of UMHexagonS block-matching motion estimation and optical flow field motion estimation, characterized in that the method comprises the following steps:

Step 1: Process the original video, separate the foreground from the background with the inter-frame difference method, and mark the edge pixels;

Step 2: Obtain the motion vectors of the foreground and background with variable-size UMHexagonS block-matching motion estimation;

Step 3: Obtain the motion vectors of the moving object's edge pixels with optical-flow-field motion estimation;

Step 4: Post-process the obtained motion vectors, specifically:

a. Reliability test for motion vectors:

(1) Compute the mean of the motion vectors of the block under test, that is, block B, and its eight surrounding blocks:

v_m = (1/9) Σ_{i=1}^{9} v_i

where v_m is the mean and v_1, …, v_9 are the motion vectors of block B and its eight neighbours, v_1 being the vector of block B;

(2) Compute the mean deviation:

D_n = (1/8) Σ_{i=2}^{9} | v_m − v_i |;

(3) Compute the difference:

D_C = | v_m − v_1 |;

(4) If D_C > D_n, then v_1 is an unreliable motion vector and requires median filtering;

b. Apply a median filter to the unreliable motion vector:

v_{1smooth} = median[v_1, v_2, v_3, …, v_9];

Step 5: Apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to the object edges to obtain the interpolated frame;

Step 6: Combine the interpolated frames and the original frames into a high-frame-rate video.

2. The video frame rate up-conversion method based on the combination of UMHexagonS block-matching motion estimation and optical flow field motion estimation according to claim 1, characterized in that in steps 2 and 3 adaptive motion estimation is applied separately to the foreground, the background, and the object edges, so as to improve the accuracy of the edge-pixel motion vectors.

3. The video frame rate up-conversion method based on the combination of UMHexagonS block-matching motion estimation and optical flow field motion estimation according to claim 1, characterized in that in step 5 overlapped block motion compensation is performed to reduce blocking artifacts and improve video quality.
CN201410125926.3A 2014-03-31 2014-03-31 A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation Expired - Fee Related CN103888767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410125926.3A CN103888767B (en) 2014-03-31 2014-03-31 A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation


Publications (2)

Publication Number Publication Date
CN103888767A CN103888767A (en) 2014-06-25
CN103888767B true CN103888767B (en) 2017-07-28

Family

ID=50957457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410125926.3A Expired - Fee Related CN103888767B (en) 2014-03-31 2014-03-31 A frame rate up-conversion method combining UMH block-matching motion estimation with optical flow field motion estimation

Country Status (1)

Country Link
CN (1) CN103888767B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065975B (en) * 2014-06-30 2017-03-29 山东大学 Based on the frame per second method for improving that adaptive motion is estimated
WO2016187776A1 (en) * 2015-05-25 2016-12-01 北京大学深圳研究生院 Video frame interpolation method and system based on optical flow method
CN105915881B (en) * 2016-05-06 2017-12-01 电子科技大学 A kind of three-dimensional video-frequency frame per second method for improving based on conspicuousness detection
CN106303546B (en) * 2016-08-31 2019-05-14 四川长虹通信科技有限公司 Conversion method and system in a kind of frame rate
CN116847086A (en) * 2017-05-17 2023-10-03 株式会社Kt Method for decoding image and device for storing compressed video data
CN108040217B (en) * 2017-12-20 2020-01-24 深圳岚锋创视网络科技有限公司 A method, device and camera for video decoding
CN108280444B (en) * 2018-02-26 2021-11-16 江苏裕兰信息科技有限公司 Method for detecting rapid moving object based on vehicle ring view
CN110392282B (en) * 2018-04-18 2022-01-07 阿里巴巴(中国)有限公司 Video frame insertion method, computer storage medium and server
CN109889849B (en) * 2019-01-30 2022-02-25 北京市商汤科技开发有限公司 Video generation method, device, medium and equipment
CN110163892B (en) * 2019-05-07 2023-06-20 国网江西省电力有限公司检修分公司 Learning rate progressive updating method based on motion estimation interpolation and dynamic modeling system
CN113873095B (en) * 2020-06-30 2024-10-01 晶晨半导体(上海)股份有限公司 Motion compensation method and module, chip, electronic device and storage medium
CN112203095B (en) * 2020-12-04 2021-03-09 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101489031A (en) * 2009-01-16 2009-07-22 西安电子科技大学 Adaptive frame rate up-conversion method based on motion classification
JP2010517415A (en) * 2007-01-26 2010-05-20 テレフオンアクチーボラゲット エル エム エリクソン(パブル) Image block classification
CN102595089A (en) * 2011-12-29 2012-07-18 香港应用科技研究院有限公司 Frame rate conversion using blended bidirectional motion vectors for halo reduction
CN103167304A (en) * 2013-03-07 2013-06-19 海信集团有限公司 Method and device for improving a stereoscopic video frame rates
CN103313059A (en) * 2013-06-14 2013-09-18 珠海全志科技股份有限公司 Method for judging occlusion area in process of frame rate up-conversion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206567A1 (en) * 2010-09-13 2012-08-16 Trident Microsystems (Far East) Ltd. Subtitle detection system and method to television video


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Backward Adaptive Pixel-based Fast; Xiaolin Chen et al.; IEEE Signal Processing Letters; 2009-05-31; 370-373 *
Motion Compensated Frame Rate Up-conversion Using Soft-decision Motion Estimation and Adaptive-weighted Motion Compensated Interpolation; Yuanzhouhan Cao et al.; Journal of Computational Information Systems; 2013-07-15; 5789-5797 *
New Frame Rate Up-conversion Using Bi-directional Motion Estimation; Byung-Tae Choi et al.; IEEE Transactions on Consumer Electronics; 2000-08-31; 603-609 *
A fast frame rate up-conversion algorithm based on adaptive compensation; Yang Yue et al.; Acta Optica Sinica (光学学报); 2008-11-30; 2336-2341 *
Frame rate up-conversion based on image occlusion analysis; Lin Chuan; China Master's Theses Full-text Database; 2011-01-15; I136-410 *

Also Published As

Publication number Publication date
CN103888767A (en) 2014-06-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170728

Termination date: 20190331

CF01 Termination of patent right due to non-payment of annual fee