CN103888767B - A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation - Google Patents

A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation

Info

Publication number
CN103888767B
Authority
CN
China
Prior art keywords
block
motion vector
motion
frame
optical flow
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410125926.3A
Other languages
Chinese (zh)
Other versions
CN103888767A (en)
Inventor
孙国霞
赵悦
刘琚
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201410125926.3A
Publication of CN103888767A
Application granted
Publication of CN103888767B

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a frame rate up-conversion method. The method comprises four main stages: image segmentation to obtain the foreground, the background and the object edges; motion estimation, using variable-size block matching for the foreground and background and optical-flow-field estimation for the object edges; post-processing of the motion vectors to obtain reliable motion vectors; and motion compensation, using overlapped block motion compensation and bilinear interpolation to synthesize the interpolated frame. The proposed frame rate up-conversion method resolves problems such as the halo effect and the edge-block sawtooth effect that occur in traditional frame rate up-conversion methods, and can be widely applied in the field of frame rate up-conversion.

Description

A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation
Technical field
The present invention relates to a method for video frame rate up-conversion and belongs to the field of video data processing.
Background art
Video frame rate up-conversion improves the visual quality of a low-frame-rate video by inserting predicted frames into the original sequence, thereby producing a high-frame-rate video. With its increasingly diverse applications, frame rate up-conversion is becoming more and more important in the consumer electronics field. HDTV sets and multimedia PC systems can play video at frame rates higher than that of the broadcast video stream, so frame rate up-conversion can be applied to raise the frame rate of the original video and improve the viewing experience of the end user.
Most current video frame rate up-conversion methods perform motion-compensated interpolation based on motion estimation, and most of them rely on block matching motion estimation. At the boundary between foreground and background, however, block matching on the edges of moving objects is inaccurate, so inaccurate motion vectors are estimated and problems such as the halo effect and jagged edges appear in occluded regions. This degrades the video quality after frame rate up-conversion and impairs the visual experience of the end user.
Summary of the invention
To address the halo effect and jagged edges that occur in frame rate up-conversion, this application provides a video frame rate up-conversion method that separates the foreground and background by moving-object detection and combines block-matching-based motion estimation with optical flow field estimation, so as to improve the video quality after frame rate up-conversion.
The technical solution of the present invention is as follows:
A video frame rate up-conversion method combining UMHexagonS block matching motion estimation with optical flow field estimation, characterized in that the method comprises the following steps:
Step 1: Process the original video, separating the foreground and background with the inter-frame difference method and marking edge pixels;
Step 2: Obtain the motion vectors of the foreground and background using variable-size UMHexagonS block matching motion estimation;
Step 3: Obtain the motion vectors of moving-object edge pixels using optical flow field estimation;
Step 4: Post-process the obtained motion vectors;
Step 5: Apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to object edges to obtain the interpolated frame;
Step 6: Combine the interpolated frames with the original frames to synthesize the high-frame-rate video.
Preferably, in step 2 and step 3, adaptive motion estimation methods are applied to the foreground, the background and the object edges respectively, so as to improve the accuracy of the motion vectors of object edge pixels.
Preferably, in step 4, the reliability of the obtained motion vectors is judged and unreliable motion vectors are median-filtered, so as to improve the accuracy of the motion vectors.
Preferably, in step 5, overlapped block motion compensation is applied to reduce blocking artifacts and improve the video quality.
Brief description of the drawings
Fig. 1: Overall processing block diagram of the present invention.
Fig. 2: Schematic diagram of the UMHexagonS block matching motion estimation method.
Fig. 3: Simulation results.
Embodiment
According to the motion of the objects in the images, the present invention divides each frame into foreground, background and edge regions, applies variable-size block matching motion estimation to the former and optical-flow-field-based estimation to the edges, post-processes the motion vectors, and then constructs the interpolated frame by overlapped block motion compensation. This reduces the halo effect and edge sawtooth artifacts and achieves the goal of reconstructing high-quality, high-frame-rate video.
The present invention is described in further detail below with reference to a specific embodiment (to which it is not limited) and the accompanying drawings.
(1) Processing of the original images:
(1) Read in the video;
(2) Set a counter t = 1; successively take frame t as the current frame and frame t+2 as the next frame, reserving frame t+1 as the frame to be interpolated;
(3) Perform moving-object detection with a dynamic adaptive-threshold inter-frame difference method to separate the foreground from the background. Motion detection based on the inter-frame difference (the frame difference method) detects moving targets from the magnitude of the brightness change between adjacent frames. It specifically includes the following steps (a short code sketch is given after step C):
A. Calculate the difference between the frame-t image and the frame-(t+2) image according to formula 1 and denote it D(x, y):
D(x, y) = | F_{t+2}(x, y) - F_t(x, y) |   (formula 1)
where F_t(x, y) and F_{t+2}(x, y) denote the images at times t and t+2, respectively;
B. Select the dynamic adaptive threshold TH: TH is taken as 1/2 of the difference between the highest and the lowest gray value of the difference image. With the maximum of D(x, y) recorded in step A as D_max and the minimum as D_min, TH = (D_max - D_min)/2;
C. Binarize the difference image D(x, y) and segment the image according to formula 2:
R(x, y) = 1 if D(x, y) > TH, otherwise R(x, y) = 0   (formula 2)
where R(x, y) is the binarized difference image and TH is the image segmentation threshold.
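For illustration only, the following minimal Python/NumPy sketch (not part of the original patent text) computes the difference image of formula 1, the adaptive threshold TH of step B, and the binarized mask R(x, y) of formula 2. The function name and array conventions are assumptions made for this sketch.

```python
import numpy as np

def segment_frame_difference(frame_t, frame_t2):
    """Adaptive-threshold frame differencing (formulas 1 and 2).

    frame_t, frame_t2 -- grayscale frames at times t and t+2 as 2-D arrays.
    Returns R(x, y): 1 for moving (foreground) pixels, 0 for background.
    """
    # Formula 1: absolute difference image D(x, y)
    diff = np.abs(frame_t2.astype(np.float64) - frame_t.astype(np.float64))
    # Step B: dynamic adaptive threshold, half the dynamic range of D(x, y)
    th = (diff.max() - diff.min()) / 2.0
    # Formula 2: binarization of the difference image
    return (diff > th).astype(np.uint8)
```

How the edge pixels of step 1 are marked is not spelled out in the text; one plausible choice, not claimed here to be the patent's, is to take the boundary of this foreground mask (for example with a morphological gradient).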
(2) Motion vector estimation stage:
(1) As shown in Fig. 2, for moving objects the UMHexagonS bidirectional motion estimation method with a block size of 4x4 is used. The specific steps are as follows:
A. Virtually insert frame t+1; according to the segmentation result of stage (1), divide the region occupied by the foreground object into 4x4 rectangular blocks and perform bidirectional motion estimation on each small block in turn;
B. The block matching criterion is as follows. Calculate the minimum sum of absolute differences (SAD) between the corresponding blocks of frame t and frame t+2 according to formula 3, where SAD is defined as
SAD(B_{i,j}, v) = Σ_{s ∈ B_{i,j}} | f_t[s - v] - f_{t+2}[s + v] |   (formula 3)
where B_{i,j} is the block to be estimated in frame t+1, v is the candidate motion vector, s is a pixel to be interpolated in frame t+1, f_t[s - v] is the corresponding pixel obtained by mapping s forward into frame t, and f_{t+2}[s + v] is the corresponding pixel obtained by mapping s backward into frame t+2 (a code illustration of this criterion follows step C below);
C. Based on the matching criterion, the unsymmetrical cross multi-hexagon-grid search (UMHexagonS) method uses a hybrid multi-level search strategy; the specific steps are as follows (a simplified sketch of the refinement stages also follows this list):
Step 1: Starting-point prediction: first apply median prediction to the initial center point, then upper-layer prediction, and finally prediction from the motion vector of the corresponding block in the previous frame;
Step 2: Hybrid multi-level motion search;
Step 2.1: Perform the unsymmetrical cross search, then a grid search centered on the best point found so far, and then a large hexagon search centered on the current best point;
Step 2.2: Perform the extended hexagon search, stopping when the best point is at the center or the maximum number of search iterations is reached;
Step 2.3: Shrink the search range and perform a diamond search, stopping when the best point is at the center or the maximum number of search iterations is reached.
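As a hedged illustration of the matching criterion of formula 3 (not part of the patent text), the sketch below computes the bilateral SAD of one candidate vector for a 4x4 block of the virtual frame t+1. Integer motion vectors and in-bounds matching windows are assumed, and the function name is illustrative.

```python
import numpy as np

def bilateral_sad(frame_t, frame_t2, y, x, v, block=4):
    """Formula 3: SAD between the forward match f_t[s - v] and the backward
    match f_{t+2}[s + v] for the block of frame t+1 whose top-left pixel is
    (y, x).  v = (vy, vx) is the candidate vector; both matching windows are
    assumed to lie inside the frames."""
    vy, vx = v
    fwd = frame_t[y - vy:y - vy + block, x - vx:x - vx + block]
    bwd = frame_t2[y + vy:y + vy + block, x + vx:x + vx + block]
    return int(np.abs(fwd.astype(np.int32) - bwd.astype(np.int32)).sum())
```

The search of step C is described above only at the pattern level. The following simplified sketch, again an illustration rather than the exact UMHexagonS procedure, performs the large-hexagon refinement followed by a final diamond step around a predicted start vector, with the cost function (for example the bilateral SAD above) passed in as a callback; the asymmetric cross and grid stages and the early-termination thresholds of the full algorithm are omitted.

```python
# Offsets of the large-hexagon and small-diamond refinement patterns.
HEXAGON = [(0, 0), (-2, 0), (2, 0), (-1, 2), (1, 2), (-1, -2), (1, -2)]
DIAMOND = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def refine_vector(cost, start, max_iters=16):
    """Hexagon search until the centre point is best (or max_iters is hit),
    then one small-diamond refinement pass.  cost(v) returns the matching
    error of candidate vector v = (vy, vx); start is the predicted vector."""
    best = start
    for _ in range(max_iters):
        candidates = [(best[0] + dy, best[1] + dx) for dy, dx in HEXAGON]
        new_best = min(candidates, key=cost)
        if new_best == best:          # centre of the hexagon is already best
            break
        best = new_best
    candidates = [(best[0] + dy, best[1] + dx) for dy, dx in DIAMOND]
    return min(candidates, key=cost)
```

A call such as refine_vector(lambda v: bilateral_sad(ft, ft2, 16, 16, v), start=(0, 0)) would, for instance, refine the vector of the 4x4 block whose top-left corner is (16, 16).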
(2) For the relatively static background, the UMHexagonS bidirectional motion estimation search with a block size of 8x8 is used (the specific steps are the same as in (1));
(3) For the edge contours between foreground and background, motion estimation based on the optical flow field method is used. The basic model of optical flow computation assumes that the brightness of a pixel remains constant over a small spatial neighborhood; the local optical flow field of the moving image can then be computed quickly by least-squares optimization (a least-squares sketch is given after the steps below). The specific steps are as follows:
A. Let f(x, y, t) denote the continuous space-time brightness distribution. If the brightness remains constant along the motion trajectory, we obtain
df(x, y, t)/dt = 0   (formula 4)
where x and y vary with time t along the motion trajectory. Applying the chain rule of differentiation to formula 4 gives
(∂f/∂x) v_x + (∂f/∂y) v_y + ∂f/∂t = 0   (formula 5)
where v_x = dx/dt and v_y = dy/dt denote the components of the motion vector along the spatial coordinates. Formula 5 is called the optical flow equation or the optical flow constraint; it can also be written in inner-product form:
∇f · v + ∂f/∂t = 0   (formula 6)
B. The flow vector field, varying pixel by pixel, should satisfy the optical flow equation. Let
ε_of(v(x, y, t)) = ∇f · v + ∂f/∂t   (formula 7)
Formula 7 represents the error in the optical flow equation; when ε_of(v(x, y, t)) equals 0, the optical flow equation is satisfied. In the presence of occlusion and noise, the minimum of the square of ε_of(v(x, y, t)) is sought instead. The optical flow can then be obtained by a regularization method, minimizing
∫∫ [ |∇v_x|^2 + |∇v_y|^2 + λ ε_of(v(x, y, t))^2 ] dx dy   (formula 8)
In formula 8, λ is the Lagrange multiplier; if the derivatives ∇f and ∂f/∂t can be obtained accurately, a larger value of the parameter can be used, otherwise a smaller one.
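For concreteness, a minimal sketch of the local least-squares optical flow solution mentioned above is given below (a Lucas-Kanade-style window fit). It is an illustration under stated assumptions, not the patent's exact procedure, and it does not implement the regularized functional of formula 8; the window size, the finite-difference derivatives and the function name are assumptions.

```python
import numpy as np

def local_optical_flow(f_t, f_t2, y, x, win=2):
    """Least-squares solution of the optical flow constraint (formula 5) over
    the (2*win+1)^2 window centred on pixel (y, x): solve [fx fy] v = -ft.

    f_t, f_t2 -- grayscale frames at times t and t+2 as 2-D arrays.
    Returns the flow (vx, vy) at the centre pixel."""
    f_t = f_t.astype(np.float64)
    f_t2 = f_t2.astype(np.float64)
    # central spatial derivatives and forward temporal derivative
    fx = (np.roll(f_t, -1, axis=1) - np.roll(f_t, 1, axis=1)) / 2.0
    fy = (np.roll(f_t, -1, axis=0) - np.roll(f_t, 1, axis=0)) / 2.0
    ft = f_t2 - f_t
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([fx[sl].ravel(), fy[sl].ravel()], axis=1)   # N x 2 system
    b = -ft[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)                # (vx, vy)
    return v
```

np.roll wraps around at the image border, so the pixel (y, x) is assumed to lie in the interior of the frame; the flow returned here is the displacement over the two-frame interval from t to t+2.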
(3) Motion vector post-processing stage:
A. Motion vector reliability judgment (a code sketch is given after step B):
Step 1: Compute the mean of the motion vectors of the block to be judged (denoted block B) and its eight surrounding blocks:
v_m = (1/9) Σ_{i=1}^{9} v_i   (formula 9)
where v_m is the mean and v_i denotes the motion vectors of block B and its eight surrounding blocks, v_1 being the motion vector of block B.
Step 2: Compute the mean deviation of the surrounding blocks:
D_n = (1/8) Σ_{i=2}^{9} | v_m - v_i |   (formula 10)
Step 3: Compute the deviation of block B:
D_c = | v_m - v_1 |   (formula 11)
Step 4: Judge: if D_c > D_n, then v_1 is an unreliable motion vector and requires median filtering.
B. Apply median filtering to the unreliable motion vector:
v_1,smooth = median[ v_1, v_2, v_3, ..., v_9 ]   (formula 12)
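The reliability test and the median filter of formulas 9-12 can be illustrated with the following NumPy sketch (an illustration, not the patent's code). Since the text does not say how |·| and the median extend to two-dimensional vectors, an L1 norm and a component-wise median are used here as assumptions, and the block vectors are assumed to be stored as an H x W x 2 array.

```python
import numpy as np

def postprocess_vectors(mv_field):
    """Formulas 9-12 over an H x W x 2 field of block motion vectors: a block's
    vector is median-filtered when it deviates from the 3x3 neighbourhood mean
    by more than the neighbours' own mean deviation."""
    h, w, _ = mv_field.shape
    out = mv_field.astype(np.float64)                        # copy, in float
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = mv_field[i - 1:i + 2, j - 1:j + 2].reshape(9, 2)
            v1 = mv_field[i, j]                              # vector of block B
            vm = window.mean(axis=0)                         # formula 9
            neighbours = np.delete(window, 4, axis=0)        # the 8 around B
            dn = np.abs(vm - neighbours).sum(axis=1).mean()  # formula 10 (L1)
            dc = np.abs(vm - v1).sum()                       # formula 11 (L1)
            if dc > dn:                                      # unreliable vector
                out[i, j] = np.median(window, axis=0)        # formula 12
    return out
```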
(4) Motion compensation stage: The overlapped block motion compensation (OBMC) method is applied to the foreground and background objects, and bilinear-interpolation motion compensation is applied to the edges between foreground and background. When the motion vector estimate is inaccurate, or the object motion is not a simple translation, or several differently moving objects fall within one block, overlapped block motion compensation can suppress blocking artifacts. With OBMC, the prediction of a pixel is based not only on the motion vector estimated for its own block but also on the motion vectors estimated for the neighboring blocks.
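A simplified sketch of overlapped block motion compensation for one interpolated block follows, for illustration only. The 4:1 weighting between the block's own vector and its four neighbours, the bilateral averaging of the forward and backward predictions, and integer motion vectors are assumptions of this sketch; the patent does not specify the OBMC window or weights.

```python
import numpy as np

def obmc_block(frame_t, frame_t2, y, x, mv_field, bi, bj, block=8):
    """Overlapped block motion compensation of the block of frame t+1 whose
    top-left pixel is (y, x).  mv_field[bi, bj] holds the integer (vy, vx)
    vector of block (bi, bj); the result is a weighted average of bilateral
    predictions made with the block's own vector and those of its neighbours."""
    h, w = mv_field.shape[:2]
    weighted = [(bi, bj, 4.0)] + [                 # own vector weighted 4:1
        (bi + di, bj + dj, 1.0)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
        if 0 <= bi + di < h and 0 <= bj + dj < w
    ]
    acc = np.zeros((block, block))
    total = 0.0
    for ni, nj, wgt in weighted:
        vy, vx = mv_field[ni, nj]
        fwd = frame_t[y - vy:y - vy + block, x - vx:x - vx + block]
        bwd = frame_t2[y + vy:y + vy + block, x + vx:x + vx + block]
        acc += wgt * 0.5 * (fwd.astype(np.float64) + bwd.astype(np.float64))
        total += wgt
    return acc / total
```

All matching windows are assumed to lie inside the frames; a production implementation would clip the vectors or pad the frames at the borders.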
In traditional video frame rate up-conversion methods, block-based motion estimation at the edge of a moving object also takes background pixels into account, which makes the edge estimation inaccurate. Using pixel-based motion estimation at moving-object edges avoids estimating moving-object pixel information from background pixels, so correct motion vectors are obtained and the edge halo effect and sawtooth-block problems are effectively resolved.
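To make the bilinear-interpolation compensation of the edge pixels concrete, here is a small hedged sketch: the per-pixel optical-flow vector is generally fractional, so the forward and backward samples are fetched by bilinear interpolation and averaged. Splitting the vector symmetrically between frames t and t+2, and the function names, are assumptions of this sketch rather than details given in the patent.

```python
import numpy as np

def bilinear_sample(frame, y, x):
    """Bilinearly interpolate a grayscale frame at the fractional position
    (y, x), assumed to lie at least one pixel inside the frame borders."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    f = frame.astype(np.float64)
    return ((1 - dy) * (1 - dx) * f[y0, x0] + (1 - dy) * dx * f[y0, x0 + 1]
            + dy * (1 - dx) * f[y0 + 1, x0] + dy * dx * f[y0 + 1, x0 + 1])

def compensate_edge_pixel(frame_t, frame_t2, y, x, v):
    """Bilateral compensation of one edge pixel of the virtual frame t+1 along
    its (possibly fractional) flow vector v = (vy, vx)."""
    vy, vx = v
    fwd = bilinear_sample(frame_t, y - vy, x - vx)
    bwd = bilinear_sample(frame_t2, y + vy, x + vx)
    return 0.5 * (fwd + bwd)
```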
As shown in Fig. 3, from left to right and top to bottom the sub-figures show, in order, the simulation results of motion compensation based on UMHexagonS block matching motion estimation, overlapped block motion compensation, optical-flow-field estimation with bilinear-interpolation motion compensation, and the motion compensation of this patent.
The present invention was simulated on the standard YUV video test sequence foreman and compared with the UMHexagonS motion estimation and compensation method, the interpolation method based on optical flow field estimation, and the overlapped block motion compensation method. The results show that the method of the invention effectively resolves the edge halo effect and the edge sawtooth-block problems.

Claims (3)

1. A video frame rate up-conversion method combining UMHexagonS block matching motion estimation with optical flow field estimation, characterized in that the method comprises the following steps:
Step 1: Process the original video, separating the foreground and background with the inter-frame difference method and marking edge pixels;
Step 2: Obtain the motion vectors of the foreground and background using variable-size UMHexagonS block matching motion estimation;
Step 3: Obtain the motion vectors of moving-object edge pixels using optical flow field estimation;
Step 4: Post-process the obtained motion vectors, specifically as follows:
A. Motion vector reliability judgment:
(1) Compute the mean of the motion vectors of the block to be judged (block B) and its eight surrounding blocks:
v_m = (1/9) Σ_{i=1}^{9} v_i
where v_m is the mean and v_i denotes the motion vectors of block B and its eight surrounding blocks, v_1 being the motion vector of block B;
(2) Compute the mean deviation:
D_n = (1/8) Σ_{i=2}^{9} | v_m - v_i |;
(3) Compute the deviation:
D_c = | v_m - v_1 |;
(4) Judge: if D_c > D_n, then v_1 is an unreliable motion vector and requires median filtering;
B. Apply median filtering to the unreliable motion vector:
v_1,smooth = median[ v_1, v_2, v_3, ..., v_9 ];
Step 5: Apply overlapped block motion compensation to the foreground and background and bilinear-interpolation motion compensation to object edges to obtain the interpolated frame;
Step 6: Combine the interpolated frames with the original frames to synthesize the high-frame-rate video.
2. The video frame rate up-conversion method combining UMHexagonS block matching motion estimation with optical flow field estimation according to claim 1, characterized in that in step 2 and step 3 adaptive motion estimation methods are applied to the foreground, the background and the object edges respectively, so as to improve the accuracy of the motion vectors of object edge pixels.
3. The video frame rate up-conversion method combining UMHexagonS block matching motion estimation with optical flow field estimation according to claim 1, characterized in that in step 5 overlapped block motion compensation is applied to reduce blocking artifacts and improve the video quality.
CN201410125926.3A 2014-03-31 2014-03-31 A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation Expired - Fee Related CN103888767B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410125926.3A CN103888767B (en) 2014-03-31 2014-03-31 A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation

Publications (2)

Publication Number Publication Date
CN103888767A CN103888767A (en) 2014-06-25
CN103888767B true CN103888767B (en) 2017-07-28

Family

ID=50957457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410125926.3A Expired - Fee Related CN103888767B (en) A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation

Country Status (1)

Country Link
CN (1) CN103888767B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065975B (en) * 2014-06-30 2017-03-29 山东大学 Based on the frame per second method for improving that adaptive motion is estimated
CN105517671B (en) * 2015-05-25 2020-08-14 北京大学深圳研究生院 Video frame interpolation method and system based on optical flow method
CN105915881B (en) * 2016-05-06 2017-12-01 电子科技大学 A kind of three-dimensional video-frequency frame per second method for improving based on conspicuousness detection
CN106303546B (en) * 2016-08-31 2019-05-14 四川长虹通信科技有限公司 Conversion method and system in a kind of frame rate
CN116866584A (en) * 2017-05-17 2023-10-10 株式会社Kt Method for decoding and encoding video and apparatus for storing compressed video data
CN108040217B (en) * 2017-12-20 2020-01-24 深圳岚锋创视网络科技有限公司 Video decoding method and device and camera
CN108280444B (en) * 2018-02-26 2021-11-16 江苏裕兰信息科技有限公司 Method for detecting rapid moving object based on vehicle ring view
CN110392282B (en) * 2018-04-18 2022-01-07 阿里巴巴(中国)有限公司 Video frame insertion method, computer storage medium and server
CN109889849B (en) * 2019-01-30 2022-02-25 北京市商汤科技开发有限公司 Video generation method, device, medium and equipment
CN110163892B (en) * 2019-05-07 2023-06-20 国网江西省电力有限公司检修分公司 Learning rate progressive updating method based on motion estimation interpolation and dynamic modeling system
CN113873095B (en) * 2020-06-30 2024-10-01 晶晨半导体(上海)股份有限公司 Motion compensation method and module, chip, electronic device and storage medium
CN112203095B (en) * 2020-12-04 2021-03-09 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120206567A1 (en) * 2010-09-13 2012-08-16 Trident Microsystems (Far East) Ltd. Subtitle detection system and method to television video

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010517415A (en) * 2007-01-26 2010-05-20 テレフオンアクチーボラゲット エル エム エリクソン(パブル) Image block classification
CN101489031A (en) * 2009-01-16 2009-07-22 西安电子科技大学 Adaptive frame rate up-conversion method based on motion classification
CN102595089A (en) * 2011-12-29 2012-07-18 香港应用科技研究院有限公司 Frame-rate conversion using mixed bidirectional motion vector for reducing corona influence
CN103167304A (en) * 2013-03-07 2013-06-19 海信集团有限公司 Method and device for improving a stereoscopic video frame rates
CN103313059A (en) * 2013-06-14 2013-09-18 珠海全志科技股份有限公司 Method for judging occlusion area in process of frame rate up-conversion

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Backward Adaptive Pixel-based Fast; Xiaolin Chen et al.; IEEE Signal Processing Letters; 2009-05-31; 370-373 *
Motion Compensated Frame Rate Up-conversion Using Soft-decision Motion Estimation and Adaptive-weighted Motion Compensated Interpolation; Yuanzhouhan Cao et al.; Journal of Computational Information Systems; 2013-07-15; 5789-5797 *
New Frame Rate Up-conversion Using Bi-directional Motion Estimation; Byung-Tae Choi et al.; IEEE Transactions on Consumer Electronics; 2000-08-31; 603-609 *
A fast frame rate up-conversion algorithm based on adaptive compensation (一种基于自适应补偿的快速帧速率上转换算法); 杨越 et al.; 《光了学报》; 2008-11-30; 2336-2341 *
Frame rate up-conversion based on image occlusion analysis (基于图像遮挡分析的帧率上变换); 林川; 《中国优秀博硕士学位论文全文数据库(硕士)》; 2011-01-15; I136-410 *

Also Published As

Publication number Publication date
CN103888767A (en) 2014-06-25

Similar Documents

Publication Publication Date Title
CN103888767B (en) A frame rate up-conversion method combining UMH block matching motion estimation with optical flow field estimation
US8736767B2 (en) Efficient motion vector field estimation
CN104219533B (en) A kind of bi-directional motion estimation method and up-conversion method of video frame rate and system
US8446524B2 (en) Apparatus and method for frame rate conversion
Kang et al. Motion compensated frame rate up-conversion using extended bilateral motion estimation
JP4198608B2 (en) Interpolated image generation method and apparatus
US8724022B2 (en) Frame rate conversion using motion estimation and compensation
CN105100807B (en) A kind of frame per second method for improving based on motion vector post-processing
US20090208123A1 (en) Enhanced video processing using motion vector data
CN101621693B (en) Frame frequency lifting method for combining target partition and irregular block compensation
US9148622B2 (en) Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection
CN107968946B (en) Video frame rate improving method and device
US20110261264A1 (en) Image Processing
CN103402098A (en) Video frame interpolation method based on image interpolation
CN107483960B (en) Motion compensation frame rate up-conversion method based on spatial prediction
CN102457724B (en) Image motion detecting system and method
CN102868879A (en) Method and system for converting video frame rate
CN104717402B (en) A kind of Space-time domain combines noise estimating system
KR100565066B1 (en) Method for interpolating frame with motion compensation by overlapped block motion estimation and frame-rate converter using thereof
CN104811726B (en) The candidate motion vector selection method of estimation in frame per second conversion
US20120008689A1 (en) Frame interpolation device and method
CN103051857A (en) Motion compensation-based 1/4 pixel precision video image deinterlacing method
US7110453B1 (en) Motion or depth estimation by prioritizing candidate motion vectors according to more reliable texture information
CN104065975B (en) Based on the frame per second method for improving that adaptive motion is estimated
Kim et al. An efficient motion-compensated frame interpolation method using temporal information for high-resolution videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170728

Termination date: 20190331